Re: [ovirt-users] Snapshot issue

2015-03-25 Thread Omer Frenkel


- Original Message -
 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: users@ovirt.org
 Sent: Wednesday, March 25, 2015 8:09:17 AM
 Subject: Re: [ovirt-users] Snapshot issue
 
 I'm still not able to start my VM's:
 Cannot run VM. The VM is performing an operation on a Snapshot. Please wait
 for the operation to finish, and try again...
 
 I already restarted the vdsm daemons on the hypervisors and restarted the
 engine too... Does anybody have any clue how I can solve this state?
 
 Kind regards,
 
 Koen
 
 2015-03-24 7:45 GMT+01:00 Koen Vanoppen  vanoppen.k...@gmail.com  :
 
 
 
 This is in the logs:
 2015-03-24 07:41:50,436 WARN [org.ovirt.engine.core.bll.RunVmCommand]
 (ajp--127.0.0.1-8702-12) [686c18ce] CanDoAction of action RunVm failed for
 user Reasons:
 VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IS_DURING_SNAPSHOT
 
 Can't I clear all this? Because I still have several machines down since they
 are still stuck in the cannot run VM... state...
 

this happens because you have a snapshot in status locked,
probably you did some operation on a snapshot and something went wrong (it might 
have failed without clearing the state..)
there is no easy and safe way to fix this (you can change the status of the 
snapshot in the db, but I am not sure what will happen)
what action did you do with snapshots on this vm?
what was the result?
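For reference, a rough sketch of what that "change the status in the db" option
looks like (illustrative only, assuming the standard engine schema; take a backup
with engine-backup and stop ovirt-engine first):

    # list snapshots stuck in LOCKED state (run on the engine machine)
    sudo -u postgres psql engine -c \
      "SELECT snapshot_id, vm_id, status, description FROM snapshots WHERE status = 'LOCKED';"
    # forcing the status back is the unsupported part; the outcome is not guaranteed
    sudo -u postgres psql engine -c \
      "UPDATE snapshots SET status = 'OK' WHERE snapshot_id = '<stuck-snapshot-uuid>';"

oVirt also ships an unlock helper (unlock_entity.sh, typically under
/usr/share/ovirt-engine/setup/dbutils/) that is meant for this kind of cleanup;
prefer it over raw SQL where available.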

 
 
 2015-03-23 8:40 GMT+01:00 Koen Vanoppen  vanoppen.k...@gmail.com  :
 
 
 
 Dear all,
 
 I have the following problem:
 
 Cannot run VM. The VM is performing an operation on a Snapshot. Please wait
 for the operation to finish, and try again.
 
 It is like this since Friday... How can I resolve this? I really need this vm
 to be up again...
 
 Kind regards,
 
 Koen
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot issue

2015-03-25 Thread Koen Vanoppen
I'm still not able to start my VM's:
Cannot run VM. The VM is performing an operation on a Snapshot. Please wait
for the operation to finish, and try again...

I already restarted the vdsm daemons on the hypervisors and restarted the
engine too... Does anybody have any clue how I can solve this state?

Kind regards,

Koen

2015-03-24 7:45 GMT+01:00 Koen Vanoppen vanoppen.k...@gmail.com:

 This is in the logs:
 2015-03-24 07:41:50,436 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
 (ajp--127.0.0.1-8702-12) [686c18ce] CanDoAction of action RunVm failed for
 user  Reasons:
 VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IS_DURING_SNAPSHOT

 Can't I clear all this? Because I still have several machines down since they
 are still stuck in the cannot run VM... state...



 2015-03-23 8:40 GMT+01:00 Koen Vanoppen vanoppen.k...@gmail.com:

 Dear all,

 I have the following problem:

 Cannot run VM. The VM is performing an operation on a Snapshot. Please
 wait for the operation to finish, and try again.

 It is like this since Friday... How can I resolve this? I really need
 this vm to be up again...

 Kind regards,

 Koen



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [QE] oVirt 3.6.0 status

2015-03-25 Thread Sandro Bonazzola
Hi, here's an update on 3.6 status on integration / rel-eng side
The tracker bug for 3.6.0 [1] currently shows no blockers.

Repository closure is currently broken due to a missing required dependency on 
python-blivet; a patch fixing this issue is currently under review [4].
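For anyone who wants to reproduce the dependency closure check locally, a minimal
sketch (it checks all repositories enabled on the machine; add -r/--repoid to
restrict the set):

    yum install -y yum-utils
    repoclosure --newest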

There are 577 bugs [2] targeted to 3.6.0.

Whiteboard   NEW  ASSIGNED  POST  Total
docs          11         0     0     11
external       1         0     0      1
gluster       49         2     2     53
i18n           2         0     0      2
infra         77         6     9     92
integration   59         6     5     70
network       41         1     9     51
node          27         3     2     32
ppc            0         0     1      1
sla           51         3     2     56
spice          1         0     0      1
storage       72         5     8     85
ux            31         0     2     33
virt          74         5    10     89
Total        496        31    50    577


Feature submission is still open until 2015-04-22 as per the current release 
schedule.
Maintainers: be sure to have your features tracked in the google doc [3].

[1] https://bugzilla.redhat.com/1155425
[2] 
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_release%3A3.6.0%20Product%3AoVirt%20status%3Anew%2Cassigned%2Cpost
[3] http://goo.gl/9X3G49
[4] https://gerrit.ovirt.org/38942

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] suggestions of improvement

2015-03-25 Thread Yaniv Dary



On 03/23/2015 12:27 PM, Nathanaël Blanchet wrote:

Hi all,

Two suggestions:

  * We use AD for authenticating users into the webadmin portal. But
the preferences are common to everyone, especially bookmarks. Is
it possible to add this kind of feature so that everyone has a
personal session with their own preferences?

Webadmin doesn't have a personal view. Use the user portal or power user 
portal for this.



  * Some colleagues log on with the French translation of the webadmin,
and they all make the same error when booting a vm: run and run
once are both translated into French as the same word, exécuter. So
everybody runs once instead of doing a final run of the instance. I guess
this needs to be opened in a BZ ticket.


Yes, please open a ticket on this one.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Yaniv Dary
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine on oVirt Node Hypervisor

2015-03-25 Thread Fabian Deutsch
Hey Jason,

- Original Message -
 I'm setting up some new oVirt infrastructure, and wanted to give hosted
 engine a try.  I downloaded and installed the oVirt Node Hypervisor ISO
 (3.5-0.999.201502231653.el7.centos) on one of 3 nodes.  One of the
 options in the hypervisor menu is Hosted Engine.  This requires an
 Engine ISO/OVA URL for download.  The thing is - as far as I can tell,
 there is no download link for this ISO/OVA on the ovirt release web
 site.  I also can't find anything in the documentation that refers to it
 (or even this menu in the hypervisor). I did find this after some searching:

Yes, you are right, the documentation of Hosted Engine is currently ... sparse.

 http://jenkins.ovirt.org/user/fabiand/my-views/view/Node/job/ovirt-appliance_engine-3.5_master_merged/oVirt-Engine-Appliance-CentOS-x86_64-7-20150319.424.ova

 (Now replaced with a build from 0322).  I asked on the ovirt IRC channel
 and was told that this might work, but because of new functionality
 introduced recently that it also might not. If the feature is available
 in the node ISO, shouldn't there be an appropriate release of the hosted
 engine ISO/OVA that works hand in hand with the node that I've
 downloaded?   If it's not there because it isn't ready, isn't this
 functionality something that should be added to maybe a beta node
 release and tested before being released into the stable node hypervisor
 release?

Yes, the problem here is with the ovirt-appliance.
We did not start building images based on the 3.5 status of the appliance.

I'll see if we can get a builder for the ovirt-3.5 branch up quickly.

 I asked on the IRC channel whether it might be possible for me to
 kickstart my own engine from the node.  I ran into trouble with that as
 well.   On the installed node, I can only configure one network
 interface.  This is, of course, intended to enable ovirtmgmt for
 communication with engine which would take over and configure everything
 else for you.  Of course, when you don't yet have engine installed and
 need to get it, this leads to a chicken and egg problem.  To kickstart
 engine on node, I need an IP (from mgmt), an image (I guess it could
 come from the mgmt network), but then I also need access to the external
 network (on another NIC) to be able to install the appropriate ovirt yum
 repository, and download the engine!  If I installed my own node
 manually instead of using the ISO, I guess I could configure the network,
 and make it work, but I'm trying to take advantage of the work that has
 already been put into node to make this all possible.

I believe I saw a corresponding bug for what you described above.

Does it help if you temporarily add a route to your management network's
router, which enables the hosted-engine host to access the relevant networks
(external, storage, …)?
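For example, a temporary route of that kind might look roughly like this
(addresses and interface name are placeholders):

    ip route add 10.20.0.0/16 via 192.168.1.254 dev ovirtmgmt
    # ... fetch the appliance / repositories, finish the setup ...
    ip route del 10.20.0.0/16 via 192.168.1.254 dev ovirtmgmt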


Greetings
fabian

 Anyway, I'm certainly interested in any feedback from users who have
 been able to make this work.  I guess I could kickstart one node as an
 engine, create the virtual image there, suck the ova down to the mgmt
 server, install node, then use node to re-suck down the hosted engine
 image, but it just seems like a lot of extra work.  Somehow I think it's
 intended to be a little more straightforward than that.
 
 Jason.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot issue

2015-03-25 Thread Koen Vanoppen
Hi Omer,

Thanks for your reply.
I was cleaning up our snapshots on oVirt, so I started deleting snapshots one
by one. In total I deleted 10 snapshots. 5 VMs came back up without the
error message; the rest of them are still down with this error message.

2015-03-25 7:36 GMT+01:00 Omer Frenkel ofren...@redhat.com:



 - Original Message -
  From: Koen Vanoppen vanoppen.k...@gmail.com
  To: users@ovirt.org
  Sent: Wednesday, March 25, 2015 8:09:17 AM
  Subject: Re: [ovirt-users] Snapshot issue
 
  I'm still not able to start my VM's:
  Cannot run VM. The VM is performing an operation on a Snapshot. Please
 wait
  for the operation to finish, and try again...
 
  I already restarted the vdsm daemons on the hypervisors and restarted the
  engine too... Does anybody have any clue how I can solve this state?
 
  Kind regards,
 
  Koen
 
  2015-03-24 7:45 GMT+01:00 Koen Vanoppen  vanoppen.k...@gmail.com  :
 
 
 
  This is in the logs:
  2015-03-24 07:41:50,436 WARN [org.ovirt.engine.core.bll.RunVmCommand]
  (ajp--127.0.0.1-8702-12) [686c18ce] CanDoAction of action RunVm failed
 for
  user Reasons:
  VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IS_DURING_SNAPSHOT
 
  Can't I clear all this? Because I still have several machines down since they
  are still stuck in the cannot run VM... state...
 

 this happens because you have a snapshot in status locked,
 probably you did some operation on snapshot and something went wrong (it
 might have failed without clearing the state..)
 there is no easy and safe way to fix this (you can change the status of
 the snapshot in the db but i am not sure what will happen)
 what action did you do with snapshots on this vm?
 what was the result?

 
 
  2015-03-23 8:40 GMT+01:00 Koen Vanoppen  vanoppen.k...@gmail.com  :
 
 
 
  Dear all,
 
  I have the following problem:
 
  Cannot run VM. The VM is performing an operation on a Snapshot. Please
 wait
  for the operation to finish, and try again.
 
  It is like this since Friday... How can I resolve this? I really need
 this vm
  to be up again...
 
  Kind regards,
 
  Koen
 
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Repeated inability to complete setup of hosted engine

2015-03-25 Thread Simone Tiraboschi


- Original Message -
 From: Cale Bouscal lunix.capt...@gmail.com
 To: users@ovirt.org
 Sent: Saturday, March 21, 2015 9:15:35 PM
 Subject: [ovirt-users] Repeated inability to complete setup of hosted engine
 
 Hi,
 
 I have tried setting this up several times and am getting stuck post-engine
 setup, at the host setup stage. I receive an error stating that we cannot
 connect to the host (I assume that is the bare metal system hosting the
 engine guest in this context, would be nice if it could echo the command
 it's issuing but cannot see it in the logs). DNS and ssh are working fine, I
 can ssh between the host and guest without issue. SElinux is disabled, as is
 iptables. I can't figure out how to move forward. I'm running CentOS 7 and
 ovirt 3.5. Here is the last of the log file before I get booted out of the
 installer:
 
 http://pastebin.com/LDqFHSZw
 
 So my questions are:
 
 1) What am I missing here?

The engine VM will contact your host via SSH in order to deploy it within 
oVirt; it seems that it is failing there.
Could you please try to connect from the engine VM to the host via SSH?
Could you please attach your host-deploy logs from the engine VM (they should be 
under /var/log/ovirt-engine/host-deploy/)?
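Something like the following, run on the engine VM, covers both points (the host
name is a placeholder):

    ssh root@host1.example.com 'hostname'        # can the engine VM reach the host over SSH?
    ls -ltr /var/log/ovirt-engine/host-deploy/   # the newest log here is the one to attach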

 2) how do I pick up from where I left off and use the engine I've
 successfully created?

You can try to run
hosted-engine --deploy --config-append=/etc/ovirt-hosted-engine/answer.conf
but its success really depends on how and where it stopped on the first attempt (a 
partially failed setup is by definition a dirty environment).
If you get any warnings or errors it would be better to redeploy the engine VM, 
cleaning the shared storage.

 Thank you.
 
 Cale
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ManageIQ and Ovirt

2015-03-25 Thread Yaniv Dary

Open a bug on ManageIQ then?

On 03/14/2015 05:48 PM, Christian Rebel wrote:


Hi,

can someone from R&D please check the below issue/RCA.

When performing a Smart State Analysis, ManageIQ tries to access an OVF 
file on oVirt 3.5.x for a VM.


I think oVirt changed its behavior in 3.5 by introducing the OVF_STORE disk, 
so maybe our issue below is related to it..


more Infos under 
http://talk.manageiq.org/t/no-results-from-smartstate-analysis-in-ovirt-environment/585/15


thx,

Christian



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Yaniv Dary
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Virtualbox Image to Ovirt

2015-03-25 Thread Yaniv Dary

You will need a v2v tool to convert the image.
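virt-v2v is the usual choice for a full guest conversion; if only the disk is
needed, a sketch of converting a VirtualBox VDI into a format oVirt can import
(file names are placeholders):

    qemu-img convert -f vdi -O qcow2 vbox-disk.vdi ovirt-disk.qcow2
    qemu-img info ovirt-disk.qcow2   # sanity-check the result before importing it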

On 03/25/2015 04:57 AM, Sandvik Agustin wrote:

Hi users,


Good day, Is it possible to run a virtualbox image inside the 
hypervisor? I'm using Ovirt 3.5, or viceversa, ovirt image run inside 
virtualbox?



Thanks in Advance


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Yaniv Dary
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Punit Dambiwal
Hi All,

With the help of gluster community and ovirt-china community...my issue got
resolved...

The main root cause was the following :-

1. the glob operation takes quite a long time, longer than the ioprocess
default of 60s..
2. python-ioprocess was updated so that a single configuration-file change
no longer works properly; because of this we had to hack the code
manually...

 Solution (Need to do on all the hosts) :-

 1. Add the the ioprocess timeout value in the /etc/vdsm/vdsm.conf file as
 :-


[irs]
process_pool_timeout = 180
-

2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71, and see whether
there is still IOProcess(DEFAULT_TIMEOUT) in it. If yes, then changing
the configuration file takes no effect, because timeout is now the third
parameter of IOProcess.__init__(), not the second.

3. Change IOProcess(DEFAULT_TIMEOUT) to
IOProcess(timeout=DEFAULT_TIMEOUT), remove the
/usr/share/vdsm/storage/outOfProcess.pyc file, and restart the vdsm and
supervdsm services on all hosts (a shell sketch of steps 2-3 follows below).
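Roughly, steps 2-3 above as shell commands (paths as in the post; sed keeps a
backup copy of the edited file):

    grep -n 'IOProcess(DEFAULT_TIMEOUT)' /usr/share/vdsm/storage/outOfProcess.py
    sed -i.bak 's/IOProcess(DEFAULT_TIMEOUT)/IOProcess(timeout=DEFAULT_TIMEOUT)/' \
        /usr/share/vdsm/storage/outOfProcess.py
    rm -f /usr/share/vdsm/storage/outOfProcess.pyc
    systemctl restart vdsmd supervdsmd   # or the equivalent 'service' commands on EL6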

Thanks,
Punit Dambiwal


On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi All,

 Still i am facing the same issue...please help me to overcome this issue...

 Thanks,
 punit

 On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink 
 thomas.holkenbr...@fibercloud.com wrote:

  I’ve seen this before. The system thinks the storage system is up and
 running and then attempts to utilize it.

 The way I got around it was to put a delay in the startup of the gluster
 Node on the interface that the clients use to communicate.



 I use a bonded link, I then add a LINKDELAY to the interface to get the
 underlying system up and running before the network comes up. This then
 causes Network dependent features to wait for the network to finish.

 It adds about 10 seconds to the startup time; in our environment it works
 well, you may not need as long of a delay.



 CentOS

 root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0

 DEVICE=bond0
 ONBOOT=yes
 BOOTPROTO=static
 USERCTL=no
 NETMASK=255.255.248.0
 IPADDR=10.10.1.17
 MTU=9000
 IPV6INIT=no
 IPV6_AUTOCONF=no
 NETWORKING_IPV6=no
 NM_CONTROLLED=no
 LINKDELAY=10
 NAME=System Storage Bond0









 Hi Michal,



 The Storage domain is up and running and mounted on all the host
 nodes...as i updated before that it was working perfectly before but just
 after reboot can not make the VM poweron...



 [image: Inline image 1]



 [image: Inline image 2]



 [root@cpu01 log]# gluster volume info



 Volume Name: ds01

 Type: Distributed-Replicate

 Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6

 Status: Started

 Number of Bricks: 48 x 2 = 96

 Transport-type: tcp

 Bricks:

 Brick1: cpu01:/bricks/1/vol1

 Brick2: cpu02:/bricks/1/vol1

 Brick3: cpu03:/bricks/1/vol1

 Brick4: cpu04:/bricks/1/vol1

 Brick5: cpu01:/bricks/2/vol1

 Brick6: cpu02:/bricks/2/vol1

 Brick7: cpu03:/bricks/2/vol1

 Brick8: cpu04:/bricks/2/vol1

 Brick9: cpu01:/bricks/3/vol1

 Brick10: cpu02:/bricks/3/vol1

 Brick11: cpu03:/bricks/3/vol1

 Brick12: cpu04:/bricks/3/vol1

 Brick13: cpu01:/bricks/4/vol1

 Brick14: cpu02:/bricks/4/vol1

 Brick15: cpu03:/bricks/4/vol1

 Brick16: cpu04:/bricks/4/vol1

 Brick17: cpu01:/bricks/5/vol1

 Brick18: cpu02:/bricks/5/vol1

 Brick19: cpu03:/bricks/5/vol1

 Brick20: cpu04:/bricks/5/vol1

 Brick21: cpu01:/bricks/6/vol1

 Brick22: cpu02:/bricks/6/vol1

 Brick23: cpu03:/bricks/6/vol1

 Brick24: cpu04:/bricks/6/vol1

 Brick25: cpu01:/bricks/7/vol1

 Brick26: cpu02:/bricks/7/vol1

 Brick27: cpu03:/bricks/7/vol1

 Brick28: cpu04:/bricks/7/vol1

 Brick29: cpu01:/bricks/8/vol1

 Brick30: cpu02:/bricks/8/vol1

 Brick31: cpu03:/bricks/8/vol1

 Brick32: cpu04:/bricks/8/vol1

 Brick33: cpu01:/bricks/9/vol1

 Brick34: cpu02:/bricks/9/vol1

 Brick35: cpu03:/bricks/9/vol1

 Brick36: cpu04:/bricks/9/vol1

 Brick37: cpu01:/bricks/10/vol1

 Brick38: cpu02:/bricks/10/vol1

 Brick39: cpu03:/bricks/10/vol1

 Brick40: cpu04:/bricks/10/vol1

 Brick41: cpu01:/bricks/11/vol1

 Brick42: cpu02:/bricks/11/vol1

 Brick43: cpu03:/bricks/11/vol1

 Brick44: cpu04:/bricks/11/vol1

 Brick45: cpu01:/bricks/12/vol1

 Brick46: cpu02:/bricks/12/vol1

 Brick47: cpu03:/bricks/12/vol1

 Brick48: cpu04:/bricks/12/vol1

 Brick49: cpu01:/bricks/13/vol1

 Brick50: cpu02:/bricks/13/vol1

 Brick51: cpu03:/bricks/13/vol1

 Brick52: cpu04:/bricks/13/vol1

 Brick53: cpu01:/bricks/14/vol1

 Brick54: cpu02:/bricks/14/vol1

 Brick55: cpu03:/bricks/14/vol1

 Brick56: cpu04:/bricks/14/vol1

 Brick57: cpu01:/bricks/15/vol1

 Brick58: cpu02:/bricks/15/vol1

 Brick59: cpu03:/bricks/15/vol1

 Brick60: cpu04:/bricks/15/vol1

 Brick61: cpu01:/bricks/16/vol1

 Brick62: cpu02:/bricks/16/vol1

 Brick63: cpu03:/bricks/16/vol1

 Brick64: cpu04:/bricks/16/vol1

 Brick65: cpu01:/bricks/17/vol1

 Brick66: cpu02:/bricks/17/vol1

 Brick67: cpu03:/bricks/17/vol1

 Brick68: cpu04:/bricks/17/vol1

 Brick69: cpu01:/bricks/18/vol1

 Brick70: cpu02:/bricks/18/vol1

 Brick71: 

[ovirt-users] [QE][ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status

2015-03-25 Thread Sandro Bonazzola
Hi,
we still have 5 open blockers for 3.5.2[1]:

Whiteboard  Bug ID   Status    Summary
network     1187244  POST      [RHEL 7.0 + 7.1] Host configure with DHCP is losing connectivity after some time - dhclient is not running
storage     1176581  ASSIGNED  Storage Tab - import Domain - help button is missing
storage     1176582  ASSIGNED  Templates tab - export template - help leads to exporting VM
storage     1176583  ASSIGNED  Storage tab - ISO Domain - Data Center - Attach - help button is missing
storage     1177220  ASSIGNED  [BLOCKED] Failed to Delete First snapshot with live merge


And 2 dependencies on libvirt not yet fixed:
Bug ID   Status  Summary
1199182  POST    2nd active commit after snapshot triggers qemu failure
1199036  POST    Libvirtd was restarted when do active blockcommit while there is a blockpull job running

ACTION: Assignee to provide ETA for the blocker bug.

We're going to build RC3 hopefully next week, once all remaining blockers are 
fixed.

We still have 5 bugs in MODIFIED and 13 on QA[3]:

Whiteboard   MODIFIED  ON_QA  Total
infra               0      8      8
integration         1      0      1
network             1      1      2
node                0      1      1
sla                 1      1      2
storage             1      1      2
virt                1      1      2
Total               5     13     18


ACTION: Testers: you're welcome to verify bugs currently ON_QA.

All remaining bugs not marked as blockers have been moved to 3.5.3.
A release management entry has been added for tracking the schedule of 3.5.3[4]
A bug tracker [5] has been created for 3.5.3 and currently shows no blockers.
If you're going to test nightly snapshot on CentOS please enable CR repo[2] for 
CentOS 7.1 testing.
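On a CentOS 7 test host that usually amounts to (the repo id in CentOS-CR.repo is
normally 'cr'):

    yum install -y yum-utils
    yum-config-manager --enable cr
    yum clean all && yum update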

We have 29 bugs currently targeted to 3.5.3[6]:

Whiteboard   NEW  ASSIGNED  POST  Total
docs           2         0     0      2
external       1         0     0      1
gluster        1         0     0      1
infra          1         3     0      4
node           2         0     1      3
ppc            0         0     1      1
sla            4         0     0      4
storage        9         0     0      9
ux             1         0     1      2
virt           1         0     1      2
Total         22         3     4     29


ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 ensuring 
they're correctly targeted.
ACTION: Maintainers: to fill release notes for 3.5.2, the page has been created 
and updated here [7]
ACTION: Testers: please add yourself to the test page [8]


[1] https://bugzilla.redhat.com/1186161
[2] http://mirror.centos.org/centos/7/cr/x86_64/
[3] http://goo.gl/UEVTCf
[4] http://www.ovirt.org/OVirt_3.5.z_Release_Management#oVirt_3.5.3
[5] https://bugzilla.redhat.com/1198142
[6] 
https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt%20target_release%3A3.5.3
[7] http://www.ovirt.org/OVirt_3.5.2_Release_Notes
[8] http://www.ovirt.org/Testing/oVirt_3.5.2_Testing


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Change network of ovirt-nodes of a self-hosted engine cluster

2015-03-25 Thread Kostyrev Aleksandr

Good day!

I'm in a position where I have to change the network settings of the three 
ovirt-nodes that comprise my cluster with hosted engine.
Am I correct that changing the network is not a big deal and that it is 
possible?

My plan was:
1. shutdown all vms (except engine itself) on all nodes.
2. enable maintenance mode on all nodes, except the one with running 
engine

3. change network settings in engine node
4. shut engine down with hosted-engine --vm-stop
5. change network settings on all nodes, verify that nodes can ping each 
other

6. start engine with hosted-engine --vm-start
7. change VLAN tag at Logical Network for VMs
8. activate nodes
9. start vms

Does it make sense?
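The hosted-engine-specific pieces of such a plan usually map to something like
this (a sketch only, run on a hosted-engine host):

    hosted-engine --set-maintenance --mode=global   # keep the HA agents from restarting the engine VM
    hosted-engine --vm-shutdown                     # clean shutdown (or --vm-stop to force it off)
    # ... change the network settings on the nodes ...
    hosted-engine --vm-start
    hosted-engine --set-maintenance --mode=none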

--
Best regards,
Aleksandr Kostyrev,
system administrator
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New VMs don't have network connection

2015-03-25 Thread Dan Kenigsberg
On Fri, Mar 13, 2015 at 02:56:36PM +0100, m...@nepu.moe wrote:
 Hello,
 
 New VMs can't access the network anymore, there's no ethernet device. This 
 happens both when generating a new VM from a saved template and when 
 installing a new VM with the blank template.
 It still works fine with all older VMs, they work properly even after a 
 restart. This happens on both of my nodes.
 
 What could be causing this?

Which Engine and vdsm versions are you using?
When did it work earlier? What has changed since?

Can you share the vmCreate line from vdsm.log, as well as the output of
virsh -r dumpxml <name of your vm>?
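On the host running the VM, something like this gathers both (the VM name is a
placeholder):

    grep -i vmCreate /var/log/vdsm/vdsm.log | tail   # the vmCreate call and its parameters
    virsh -r dumpxml myvm                            # read-only connection, no credentials needed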

Which guest OS are you running?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine multipath iscsi

2015-03-25 Thread wodel youchi
Hi,

I am planning an ovirt hosted-engine installation with two nodes and an
iSCSI SAN for the engine and VMs and NFS for ISO and Export domains, on
another NAS.

The iSCSI SAN have two controllers, each one have two NICs at 1Gb/s, and I
was planning to use a bond manually configured (two NICs) on
hypervisors/nodes, so I'll end up with 4 paths.

- Is creating a bond a good idea, or do I have to use the node NICs
separately as in the link mentioned earlier?

- If I use a bond, will it survive the engine's installation?

- After the engine's installation, the link(s)/path(s) used to connect to
the SAN storage will also be used to connect to the NAS; this will
constitute a storage network. This network will be mandatory, so I can't
create an iSCSI bond on top of it. Will the multipath be configured
automatically by the engine's installation on all hypervisors/nodes?

thanks in advance.

2015-03-24 10:30 GMT+01:00 Simone Tiraboschi stira...@redhat.com:



 - Original Message -
  From: Baptiste Agasse baptiste.aga...@lyra-network.com
  To: users users@ovirt.org
  Sent: Tuesday, March 24, 2015 10:12:36 AM
  Subject: [ovirt-users] Hosted engine multipath iscsi
 
  Hi all,
 
  I'm currently testing ovirt 3.5 on centos 7, with hosted engine on iscsi
  equallogic SAN.. It's running fine, i've configured multipath through the
  engine (http://www.ovirt.org/Feature/iSCSI-Multipath) for VMs storage
 but i
  didn't find any documentation on hosted engine iscsi multipath. There is
 a
  way to configure iscsi multipath for the engine storage ?

 In 3.5 is not directly supported from assisted setup.
 Basically you have to manually pre-connect the storage on each involved
 host and manually persist that configuration.

 You can find some hint there:
 https://bugzilla.redhat.com/show_bug.cgi?id=1193961#c2

 Adding Nir to double check it.


  Have a nice day.
 
  Regards.
 
  --
  Baptiste
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine multipath iscsi

2015-03-25 Thread Simone Tiraboschi


- Original Message -
 From: wodel youchi wodel.you...@gmail.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: Baptiste Agasse baptiste.aga...@lyra-network.com, Nir Soffer 
 nsof...@redhat.com, users
 users@ovirt.org
 Sent: Wednesday, March 25, 2015 2:59:58 PM
 Subject: Re: [ovirt-users] Hosted engine multipath iscsi
 
 Hi,
 
 I am planning an ovirt hosted-engine installation with two nodes and an
 iSCSI SAN for the engine and VMs and NFS for ISO and Export domains, on
 another NAS.
 
 The iSCSI SAN have two controllers, each one have two NICs at 1Gb/s, and I
 was planning to use a bond manually configured (two NICs) on
 hypervisors/nodes, so I'll end up with 4 paths.
 
 - Creating a bond is it a good idea? or do I have to use the node NICs
 separately as in the link mentioned earlier?

Creating an iSCSI bond is indeed the recommended way to use iSCSI multipathing 
with oVirt:
http://www.ovirt.org/Feature/iSCSI-Multipath#iSCSI_Bond_behaviour
But it relies on oVirt logical networks, which are still not available when you 
are deploying hosted-engine, and with oVirt 3.5 you cannot then edit the 
hosted-engine storage domain.
A manually configured bond should work, but then it is not directly managed by the 
engine for additional hosts.

 - If I'll use a bond, will it survive the engine's installation?

hosted-engine generally works with bonded interfaces and they generally 
survive, but honestly I never directly tried deploying hosted-engine on iSCSI 
over a bonded interface.
Please report back to us if you find any issue with that.

 - After the engine's installation, the link(s)/path(s) used to connect to
 the SAN storage will also be used to connect to the NAS, this will
 constitute a storage network, this network will be mandatory so I can't
 create an iSCSI bond on top of it. Will the multipath be configured
 automatically by the engine's installation on all hypervisors/nodes?

I think not: currently you cannot edit the hosted-engine storage domain from the 
engine to configure an iSCSI bond involving another logical network.
We are working on that for 3.6.
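For completeness, manually pre-connecting an extra path on each host (the 3.5
workaround mentioned above) looks roughly like this; the target IQN and portal
are placeholders:

    iscsiadm -m discovery -t sendtargets -p 10.10.2.1:3260
    iscsiadm -m node -T iqn.2001-05.com.example:he-lun -p 10.10.2.1:3260 -l
    # make the login persistent across reboots
    iscsiadm -m node -T iqn.2001-05.com.example:he-lun -p 10.10.2.1:3260 \
             --op update -n node.startup -v automatic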

 thanks in advance.
 
 2015-03-24 10:30 GMT+01:00 Simone Tiraboschi stira...@redhat.com:
 
 
 
  - Original Message -
   From: Baptiste Agasse baptiste.aga...@lyra-network.com
   To: users users@ovirt.org
   Sent: Tuesday, March 24, 2015 10:12:36 AM
   Subject: [ovirt-users] Hosted engine multipath iscsi
  
   Hi all,
  
   I'm currently testing ovirt 3.5 on centos 7, with hosted engine on iscsi
   equallogic SAN.. It's running fine, i've configured multipath through the
   engine (http://www.ovirt.org/Feature/iSCSI-Multipath) for VMs storage
  but i
   didn't find any documentation on hosted engine iscsi multipath. There is
  a
   way to configure iscsi multipath for the engine storage ?
 
  In 3.5 is not directly supported from assisted setup.
  Basically you have to manually pre-connect the storage on each involved
  host and manually persist that configuration.
 
  You can find some hint there:
  https://bugzilla.redhat.com/show_bug.cgi?id=1193961#c2
 
  Adding Nir to double check it.
 
 
   Have a nice day.
  
   Regards.
  
   --
   Baptiste
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM memory consumption

2015-03-25 Thread Darrell Budic

 On Mar 25, 2015, at 5:34 AM, Dan Kenigsberg dan...@redhat.com wrote:
 
 On Tue, Mar 24, 2015 at 02:01:40PM -0500, Darrell Budic wrote:
 
 On Mar 24, 2015, at 4:33 AM, Dan Kenigsberg dan...@redhat.com wrote:
 
 On Mon, Mar 23, 2015 at 04:00:14PM -0400, John Taylor wrote:
 Chris Adams c...@cmadams.net writes:
 
 Once upon a time, Sven Kieske s.kie...@mittwald.de said:
 On 13/03/15 12:29, Kapetanakis Giannis wrote:
 We also face this problem since 3.5 in two different installations...
 Hope it's fixed soon
 
 Nothing will get fixed if no one bothers to
 open BZs and send relevants log files to help
 track down the problems.
 
 There's already an open BZ:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=1158108
 
 I'm not sure if that is exactly the same problem I'm seeing or not; my
 vdsm process seems to be growing faster (RSS grew 952K in a 5 minute
 period just now; VSZ didn't change).
 
 For those following this I've added a comment on the bz [1], although in
 my case the memory leak is, like Chris Adams, a lot more than the 300KiB/h
 in the original bug report by Daniel Helgenberger .
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1158108
 
 That's interesting (and worrying).
 Could you check your suggestion by editing sampling.py so that
 _get_interfaces_and_samples() returns the empty dict immediately?
 Would this make the leak disappear?
 
 Looks like you’ve got something there. Just a quick test for now, watching 
 RSS in top. I’ll let it go this way for a while and see what it looks like in 
 a few hours.
 
 System 1: 13 VMs w/ 24 interfaces between them
 
 11:47 killed a vdsm @ 9.116G RSS (after maybe a week and a half running)
 
 11:47: 97xxx
 11:57 135544 and climbing
 12:00 136400
 
 restarted with sampling.py modified to just return empty set:
 
 def _get_interfaces_and_samples():
links_and_samples = {}
return links_and_samples
 
 Thanks for the input. Just to be a little more certain that the culprit
 is _get_interfaces_and_samples() per se, would you please decorate it
 with memoized, and add a log line in the end
 
 @utils.memoized   # add this line
 def _get_interfaces_and_samples():
...
logging.debug('LINKS %s', links_and_samples)  ## and this line
return links_and_samples
 
 I'd like to see what happens when the function is run only once, and
 returns a non-empty reasonable dictionary of links and samples.

Looks similar, I modified my second server for this test:

12:25, still growing from yesterday: 544512

restarted with mods for logging and memoize:
stabilized @ 12:32: 114284
1:23: 115300

Thread-12::DEBUG::2015-03-25 
12:28:08,080::sampling::243::root::(_get_interfaces_and_samples) LINKS 
{'vnet18': virt.sampling.InterfaceSample instance at 0x7f38c03e85f0, 
'vnet19': virt.sampling.InterfaceSample instance at 0x7f38b42cbcf8, 'bond0': 
virt.sampling.InterfaceSample instance at 0x7f38b429afc8, 'vnet13': 
virt.sampling.InterfaceSample instance at 0x7f38b42c8680, 'vnet16': 
virt.sampling.InterfaceSample instance at 0x7f38b42cb368, 'private': 
virt.sampling.InterfaceSample instance at 0x7f38b42b8bd8, 'bond0.100': 
virt.sampling.InterfaceSample instance at 0x7f38b42bdd88, 'vnet0': 
virt.sampling.InterfaceSample instance at 0x7f38b42c1f80, 'enp3s0': 
virt.sampling.InterfaceSample instance at 0x7f38b429cef0, 'vnet2': 
virt.sampling.InterfaceSample instance at 0x7f38b42bbbd8, 'vnet3': 
virt.sampling.InterfaceSample instance at 0x7f38b42c37e8, 'vnet4': 
virt.sampling.InterfaceSample instance at 0x7f38b42c5518, 'vnet5': 
virt.sampling.InterfaceSample instance at 0x7f38b42c6ab8, 'vnet6': 
virt.sampling.InterfaceSample instance at 0x7f38b42c7248, 'vnet7': 
virt.sampling.InterfaceSample instance at 0x7f38c03e7a28, 'vnet8': 
virt.sampling.InterfaceSample instance at 0x7f38b42c7c20, 'bond0.1100': 
virt.sampling.InterfaceSample instance at 0x7f38b42be710, 'bond0.1103': 
virt.sampling.InterfaceSample instance at 0x7f38b429dc68, 'ovirtmgmt': 
virt.sampling.InterfaceSample instance at 0x7f38b42b16c8, 'lo': 
virt.sampling.InterfaceSample instance at 0x7f38b429a8c0, 'vnet22': 
virt.sampling.InterfaceSample instance at 0x7f38c03e7128, 'vnet21': 
virt.sampling.InterfaceSample instance at 0x7f38b42cd368, 'vnet20': 
virt.sampling.InterfaceSample instance at 0x7f38b42cc7a0, 'internet': 
virt.sampling.InterfaceSample instance at 0x7f38b42aa098, 'bond0.1203': 
virt.sampling.InterfaceSample instance at 0x7f38b42aa8c0, 'bond0.1223': 
virt.sampling.InterfaceSample instance at 0x7f38b42bb128, ‘XXX': 
virt.sampling.InterfaceSample instance at 0x7f38b42bee60, ‘XXX': 
virt.sampling.InterfaceSample instance at 0x7f38b42beef0, ';vdsmdummy;': 
virt.sampling.InterfaceSample instance at 0x7f38b42bdc20, 'vnet14': 
virt.sampling.InterfaceSample instance at 0x7f38b42ca050, 'mgmt': 
virt.sampling.InterfaceSample instance at 0x7f38b42be248, 'vnet15': 
virt.sampling.InterfaceSample instance at 0x7f38b42cab00, 'enp2s0': 
virt.sampling.InterfaceSample instance at 0x7f38b429c200, 'bond0.1110': 

Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Punit Dambiwal
Hi Kaushal,

I am really thankful to you and the guy from ovirt-china, huntxu, for helping
me resolve this issue... once again thanks to all...

Punit

On Wed, Mar 25, 2015 at 6:52 PM, Kaushal M kshlms...@gmail.com wrote:

 Awesome Punit! I'm happy to have been a part of the debugging process.

 ~kaushal

 On Wed, Mar 25, 2015 at 3:09 PM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi All,

 With the help of gluster community and ovirt-china community...my issue
 got resolved...

 The main root cause was the following :-

 1. the glob operation takes quite a long time, longer than the ioprocess
 default 60s..
 2. python-ioprocess updated which makes a single change of configuration
 file doesn't work properly, only because this we should hack the code
 manually...

  Solution (Need to do on all the hosts) :-

  1. Add the the ioprocess timeout value in the /etc/vdsm/vdsm.conf file
 as  :-

 
 [irs]
 process_pool_timeout = 180
 -

 2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71 and see whether
 there is  still IOProcess(DEFAULT_TIMEOUT) in it,if yes...then changing
 the configuration file takes no effect because now timeout is the third
 parameter not the second of IOProcess.__init__().

 3. Change IOProcess(DEFAULT_TIMEOUT) to
 IOProcess(timeout=DEFAULT_TIMEOUT) and remove the
  /usr/share/vdsm/storage/outOfProcess.pyc file and restart vdsm and
 supervdsm service on all hosts

 Thanks,
 Punit Dambiwal


 On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal hypu...@gmail.com
 wrote:

 Hi All,

 Still i am facing the same issue...please help me to overcome this
 issue...

 Thanks,
 punit

 On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink 
 thomas.holkenbr...@fibercloud.com wrote:

  I’ve seen this before. The system thinks the storage system us up and
 running and then attempts to utilize it.

 The way I got around it was to put a delay in the startup of the
 gluster Node on the interface that the clients use to communicate.



 I use a bonded link, I then add a LINKDELAY to the interface to get the
 underlying system up and running before the network comes up. This then
 causes Network dependent features to wait for the network to finish.

 It adds about 10seconds to the startup time, in our environment it
 works well, you may not need as long of a delay.



 CentOS

 root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0



 DEVICE=bond0

 ONBOOT=yes

 BOOTPROTO=static

 USERCTL=no

 NETMASK=255.255.248.0

 IPADDR=10.10.1.17

 MTU=9000

 IPV6INIT=no

 IPV6_AUTOCONF=no

 NETWORKING_IPV6=no

 NM_CONTROLLED=no

 LINKDELAY=10

 NAME=System Storage Bond0









 Hi Michal,



 The Storage domain is up and running and mounted on all the host
 nodes...as i updated before that it was working perfectly before but just
 after reboot can not make the VM poweron...



 [image: Inline image 1]



 [image: Inline image 2]



 [root@cpu01 log]# gluster volume info



 Volume Name: ds01

 Type: Distributed-Replicate

 Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6

 Status: Started

 Number of Bricks: 48 x 2 = 96

 Transport-type: tcp

 Bricks:

 Brick1: cpu01:/bricks/1/vol1

 Brick2: cpu02:/bricks/1/vol1

 Brick3: cpu03:/bricks/1/vol1

 Brick4: cpu04:/bricks/1/vol1

 Brick5: cpu01:/bricks/2/vol1

 Brick6: cpu02:/bricks/2/vol1

 Brick7: cpu03:/bricks/2/vol1

 Brick8: cpu04:/bricks/2/vol1

 Brick9: cpu01:/bricks/3/vol1

 Brick10: cpu02:/bricks/3/vol1

 Brick11: cpu03:/bricks/3/vol1

 Brick12: cpu04:/bricks/3/vol1

 Brick13: cpu01:/bricks/4/vol1

 Brick14: cpu02:/bricks/4/vol1

 Brick15: cpu03:/bricks/4/vol1

 Brick16: cpu04:/bricks/4/vol1

 Brick17: cpu01:/bricks/5/vol1

 Brick18: cpu02:/bricks/5/vol1

 Brick19: cpu03:/bricks/5/vol1

 Brick20: cpu04:/bricks/5/vol1

 Brick21: cpu01:/bricks/6/vol1

 Brick22: cpu02:/bricks/6/vol1

 Brick23: cpu03:/bricks/6/vol1

 Brick24: cpu04:/bricks/6/vol1

 Brick25: cpu01:/bricks/7/vol1

 Brick26: cpu02:/bricks/7/vol1

 Brick27: cpu03:/bricks/7/vol1

 Brick28: cpu04:/bricks/7/vol1

 Brick29: cpu01:/bricks/8/vol1

 Brick30: cpu02:/bricks/8/vol1

 Brick31: cpu03:/bricks/8/vol1

 Brick32: cpu04:/bricks/8/vol1

 Brick33: cpu01:/bricks/9/vol1

 Brick34: cpu02:/bricks/9/vol1

 Brick35: cpu03:/bricks/9/vol1

 Brick36: cpu04:/bricks/9/vol1

 Brick37: cpu01:/bricks/10/vol1

 Brick38: cpu02:/bricks/10/vol1

 Brick39: cpu03:/bricks/10/vol1

 Brick40: cpu04:/bricks/10/vol1

 Brick41: cpu01:/bricks/11/vol1

 Brick42: cpu02:/bricks/11/vol1

 Brick43: cpu03:/bricks/11/vol1

 Brick44: cpu04:/bricks/11/vol1

 Brick45: cpu01:/bricks/12/vol1

 Brick46: cpu02:/bricks/12/vol1

 Brick47: cpu03:/bricks/12/vol1

 Brick48: cpu04:/bricks/12/vol1

 Brick49: cpu01:/bricks/13/vol1

 Brick50: cpu02:/bricks/13/vol1

 Brick51: cpu03:/bricks/13/vol1

 Brick52: cpu04:/bricks/13/vol1

 Brick53: cpu01:/bricks/14/vol1

 Brick54: cpu02:/bricks/14/vol1

 Brick55: cpu03:/bricks/14/vol1

 Brick56: cpu04:/bricks/14/vol1

 Brick57: cpu01:/bricks/15/vol1

 Brick58: cpu02:/bricks/15/vol1

 

Re: [ovirt-users] Virtualbox Image to Ovirt

2015-03-25 Thread Sandvik Agustin
Hi Yaniv,

Thanks for the reply, I'll try this v2v tool, then I'll update you
guys on what happens.

Thanks Again

On Wed, Mar 25, 2015 at 4:06 PM, Yaniv Dary yd...@redhat.com wrote:

  You will need a v2v tool to convert the image.


 On 03/25/2015 04:57 AM, Sandvik Agustin wrote:

 Hi users,


  Good day, Is it possible to run a virtualbox image inside the
 hypervisor? I'm using Ovirt 3.5, or viceversa, ovirt image run inside
 virtualbox?


  Thanks in Advance


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 --
 Yaniv Dary
 Red Hat Israel Ltd.
 34 Jerusalem Road
 Building A, 4th floor
 Ra'anana, Israel 4350109

 Tel : +972 (9) 7692306
 8272306
 Email: yd...@redhat.com
 IRC : ydary


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine config in our hardware environment

2015-03-25 Thread Simone Tiraboschi


- Original Message -
 From: Eric Wong eric.w...@solvians.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, March 25, 2015 12:12:47 PM
 Subject: RE: [ovirt-users] Hosted Engine config in our hardware environment
 
 Simone:
 
 I discussed with our SAN guy.  We currently is not able to configure the SAN
 in D1  X1 to be seen by blades in both sides.  So we are pretty much stuck
 with our current ovirt mgmt. vms configuration.

That is not a hosted-engine issue, it's a design issue.
If you want to provide HA or disaster-recovery capabilities between your two 
racks, both of them should be able to access your shared storage; otherwise you 
simply cannot directly migrate a VM from D1 to X1 or vice versa.
If you have two racks with two different and isolated SANs that cannot 
access each other, you have two distinct isolated systems and you can only 
manage them as such; you could use a single oVirt engine for that, but the 
engine VM itself could not migrate to the other system by design.

 I guess the main concern is how to control all the virtual machines with
 oVirt infrastructure if both our ovirt-mgmt vms went down, how can we start
 them up again. That also applies to any virtual machines.  If the oVirt
 mgmt infrastructure is not able, I think there is no way to start a VM,
 correct?

At a low level you could use virsh directly on the host to start a VM 
over libvirtd, but it's really an extreme solution.
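As an emergency-only sketch (the engine will not know about anything done this
way, and starting a VM requires the libvirt credentials that vdsm configures on
the host):

    virsh -r list --all          # read-only queries need no credentials
    virsh -r dumpxml <vm-name>
    # virsh start <vm-name>      # needs authentication against libvirtd first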
 
 Again, Many Thanks,
 Eric
 
 
 -Original Message-
 From: Simone Tiraboschi [mailto:stira...@redhat.com]
 Sent: Tuesday, March 24, 2015 5:11 PM
 To: Eric Wong
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] Hosted Engine config in our hardware environment
 
 
 
 - Original Message -
  From: Eric Wong eric.w...@solvians.com
  To: Simone Tiraboschi stira...@redhat.com
  Cc: users@ovirt.org
  Sent: Tuesday, March 24, 2015 9:36:11 AM
  Subject: RE: [ovirt-users] Hosted Engine config in our hardware environment
  
  Simone:
  
  Thanks for your quick respond.  It clarified some of the confusion I have
  with the hosted-engine.
  
  However, I still have one question.  Sorry.  It is my bad that I did not
  explain clearer on our hardware configuration, mainly on the SAN.  We also
  have 2 SAN, one on D1  another one on X1.  The blades on D1 can only see
  the storage in SAN D1.  X1 blades can only see SAN X1.
  
  Does it mean that we will need to manage our oVirt environment separately
  when we switch to use hosted-engine?  Because from your explanation, the
  hosted-engine in selected nodes will use separate iSCSI hosted-engine
  storage to store the DB.
 
 With hosted engine not only the DB but all the engine VM is going to be
 stored on a dedicated iSCSI LUN so each host involved (not managed by) in
 hosted-engine should be able to access that device in order to be able to
 start the engine VM after other host failures.
 
 So, if you want to have it restarting the engine VM on a X1 host after a
 complete failure of D1 rack, you need to store the engine VM on a location
 that is accessible both by D1 and X1.
 NFS could be an option too.
 Otherwise you should check your storage path design in order to have a cross
 rack path; you could also evaluate iSCSI multipathing.
 
 
  Since there is no cross D1  X1 storage in our
  current config, that means we need separate hosted-engine setup on each
  side.
  
  Thanks,
  Eric
  
  
  -Original Message-
  From: Simone Tiraboschi [mailto:stira...@redhat.com]
  Sent: Friday, March 20, 2015 7:06 PM
  To: Eric Wong
  Cc: users@ovirt.org
  Subject: Re: [ovirt-users] Hosted Engine config in our hardware environment
  
  
  
  - Original Message -
   From: Eric Wong eric.w...@solvians.com
   To: users@ovirt.org
   Sent: Thursday, March 19, 2015 7:04:04 PM
   Subject: [ovirt-users] Hosted Engine config in our hardware environment
   
   Hello oVirt guru out there:
   
   I want to seek some advice on upgrade path for our oVirt management vm
   configuration. We have been using oVirt for over 3 years. When we first
   setup oVirt environment, Hosted Engine componment did not exist. Our
   question is should we migrate our current configuration to use Hosted
   Engine?
   
   First let me give an overview of our configuration. We have blade servers
   in
   2 separate racks. D1  X1. Each side has 10 blades. Storage is iSCSI SAN.
   
   Inside our oVirt 3.5.0.1-1.el6 installation, it is configured with 2 data
   centers. D1  X1. Each datacenter has the 10 blades for that side. The
   management function of oVirt (oVirt web console) is running off 2 VMs,
   ovirt-mgmt-1 on D1, and ovirt-mgmt-2 on X1. We have keepalived to
   maintain
   a
   flowing IP for the oVirt management console. The keepalived script makes
   sure only one copy of ovirt-engine is running at any time. It can be on
   D1
   or X1. The mgmt VMs have Postgresql setup in replication mode. In case
   one
   of 

Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Kaushal M
Awesome Punit! I'm happy to have been a part of the debugging process.

~kaushal

On Wed, Mar 25, 2015 at 3:09 PM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi All,

 With the help of gluster community and ovirt-china community...my issue
 got resolved...

 The main root cause was the following :-

 1. the glob operation takes quite a long time, longer than the ioprocess
 default 60s..
 2. python-ioprocess updated which makes a single change of configuration
 file doesn't work properly, only because this we should hack the code
 manually...

  Solution (Need to do on all the hosts) :-

  1. Add the the ioprocess timeout value in the /etc/vdsm/vdsm.conf file as
  :-

 
 [irs]
 process_pool_timeout = 180
 -

 2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71 and see whether
 there is  still IOProcess(DEFAULT_TIMEOUT) in it,if yes...then changing
 the configuration file takes no effect because now timeout is the third
 parameter not the second of IOProcess.__init__().

 3. Change IOProcess(DEFAULT_TIMEOUT) to
 IOProcess(timeout=DEFAULT_TIMEOUT) and remove the
  /usr/share/vdsm/storage/outOfProcess.pyc file and restart vdsm and
 supervdsm service on all hosts

 Thanks,
 Punit Dambiwal


 On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi All,

 Still i am facing the same issue...please help me to overcome this
 issue...

 Thanks,
 punit

 On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink 
 thomas.holkenbr...@fibercloud.com wrote:

  I’ve seen this before. The system thinks the storage system us up and
 running and then attempts to utilize it.

 The way I got around it was to put a delay in the startup of the gluster
 Node on the interface that the clients use to communicate.



 I use a bonded link, I then add a LINKDELAY to the interface to get the
 underlying system up and running before the network comes up. This then
 causes Network dependent features to wait for the network to finish.

 It adds about 10seconds to the startup time, in our environment it works
 well, you may not need as long of a delay.



 CentOS

 root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0



 DEVICE=bond0

 ONBOOT=yes

 BOOTPROTO=static

 USERCTL=no

 NETMASK=255.255.248.0

 IPADDR=10.10.1.17

 MTU=9000

 IPV6INIT=no

 IPV6_AUTOCONF=no

 NETWORKING_IPV6=no

 NM_CONTROLLED=no

 LINKDELAY=10

 NAME=System Storage Bond0









 Hi Michal,



 The Storage domain is up and running and mounted on all the host
 nodes...as i updated before that it was working perfectly before but just
 after reboot can not make the VM poweron...



 [image: Inline image 1]



 [image: Inline image 2]



 [root@cpu01 log]# gluster volume info



 Volume Name: ds01

 Type: Distributed-Replicate

 Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6

 Status: Started

 Number of Bricks: 48 x 2 = 96

 Transport-type: tcp

 Bricks:

 Brick1: cpu01:/bricks/1/vol1

 Brick2: cpu02:/bricks/1/vol1

 Brick3: cpu03:/bricks/1/vol1

 Brick4: cpu04:/bricks/1/vol1

 Brick5: cpu01:/bricks/2/vol1

 Brick6: cpu02:/bricks/2/vol1

 Brick7: cpu03:/bricks/2/vol1

 Brick8: cpu04:/bricks/2/vol1

 Brick9: cpu01:/bricks/3/vol1

 Brick10: cpu02:/bricks/3/vol1

 Brick11: cpu03:/bricks/3/vol1

 Brick12: cpu04:/bricks/3/vol1

 Brick13: cpu01:/bricks/4/vol1

 Brick14: cpu02:/bricks/4/vol1

 Brick15: cpu03:/bricks/4/vol1

 Brick16: cpu04:/bricks/4/vol1

 Brick17: cpu01:/bricks/5/vol1

 Brick18: cpu02:/bricks/5/vol1

 Brick19: cpu03:/bricks/5/vol1

 Brick20: cpu04:/bricks/5/vol1

 Brick21: cpu01:/bricks/6/vol1

 Brick22: cpu02:/bricks/6/vol1

 Brick23: cpu03:/bricks/6/vol1

 Brick24: cpu04:/bricks/6/vol1

 Brick25: cpu01:/bricks/7/vol1

 Brick26: cpu02:/bricks/7/vol1

 Brick27: cpu03:/bricks/7/vol1

 Brick28: cpu04:/bricks/7/vol1

 Brick29: cpu01:/bricks/8/vol1

 Brick30: cpu02:/bricks/8/vol1

 Brick31: cpu03:/bricks/8/vol1

 Brick32: cpu04:/bricks/8/vol1

 Brick33: cpu01:/bricks/9/vol1

 Brick34: cpu02:/bricks/9/vol1

 Brick35: cpu03:/bricks/9/vol1

 Brick36: cpu04:/bricks/9/vol1

 Brick37: cpu01:/bricks/10/vol1

 Brick38: cpu02:/bricks/10/vol1

 Brick39: cpu03:/bricks/10/vol1

 Brick40: cpu04:/bricks/10/vol1

 Brick41: cpu01:/bricks/11/vol1

 Brick42: cpu02:/bricks/11/vol1

 Brick43: cpu03:/bricks/11/vol1

 Brick44: cpu04:/bricks/11/vol1

 Brick45: cpu01:/bricks/12/vol1

 Brick46: cpu02:/bricks/12/vol1

 Brick47: cpu03:/bricks/12/vol1

 Brick48: cpu04:/bricks/12/vol1

 Brick49: cpu01:/bricks/13/vol1

 Brick50: cpu02:/bricks/13/vol1

 Brick51: cpu03:/bricks/13/vol1

 Brick52: cpu04:/bricks/13/vol1

 Brick53: cpu01:/bricks/14/vol1

 Brick54: cpu02:/bricks/14/vol1

 Brick55: cpu03:/bricks/14/vol1

 Brick56: cpu04:/bricks/14/vol1

 Brick57: cpu01:/bricks/15/vol1

 Brick58: cpu02:/bricks/15/vol1

 Brick59: cpu03:/bricks/15/vol1

 Brick60: cpu04:/bricks/15/vol1

 Brick61: cpu01:/bricks/16/vol1

 Brick62: cpu02:/bricks/16/vol1

 Brick63: cpu03:/bricks/16/vol1

 Brick64: cpu04:/bricks/16/vol1

 Brick65: 

Re: [ovirt-users] running vm when its configured memory is bigger than host memory

2015-03-25 Thread Artyom Lukianov
As far as I know, in 3.6 we will have the possibility of memory 
hotplug (http://www.ovirt.org/Features/Memory_Hotplug), so you can increase vm 
memory without stopping the vm, so I believe your friend can try to implement some 
external load balancing 
module (http://www.ovirt.org/Features/oVirt_External_Scheduler):
1) Run the vms
2) If one of the tasks cannot run because of memory limitations on its host, 
migrate it to a host with enough memory
3) Hotplug the vm memory to the new size

I hope it will help you.
Thanks

- Original Message -
From: Jiří Sléžka jiri.sle...@slu.cz
To: users@ovirt.org
Sent: Tuesday, March 24, 2015 6:04:16 PM
Subject: [ovirt-users] running vm when its configured memory is bigger than 
host memory

Hello,

my colleague uses oVirt 3.5 for scientific purposes and has a question. 
As I understand it, he needs to run a virtual machine with a huge amount of 
over-allocated memory (more than one host has) and, when it is really needed, 
then migrate it to a host which has much more memory.

It looks to me like nice use case for oVirt.

But here is his own question.

 Currently, the virtual machine of given memory size S cannot be run
 on the host with less than S physical memory.

 We need to run several virtual machines (with unpredictable memory
 requirements) on the cluster consisting of several different hosts
 (with different amount of physical memory) in such the way that any
 virtual machine can be run on any host.

 Due to Ovirt limitation, virtual machines memory sizes has to be set
 to the MINIMUM of host physical memory sizes (in order to be able to
 run any virtual machine on any host). As far as I know, this rule has
 no connection to cluster's Memory optimization 'Max Memory Over
 Commitment' parameter.

 But we cannot predict memory needs of our virtual machines so we need
 to set the memory size of all of them to the MAXIMUM of host's
 physical memory sizes.

 Explanation:

 We are running several computational tasks (every one on single
 independent virtual machine). We have several (and different) host
 machines (see Figure 1).

 1. At the beginning, every task consumes a decent amount of memory.

 2. After a while, some task(s) allocate a huge amount of memory
 (Figure 2). At this moment, some of them cannot continue (due to
 unavailable memory on its current host) without migration to the host
 with higher memory available.

 3. After migration (Figure 3), all tasks may continue.

 4. Some tasks finally consumes a LOT of memory (Figure 4).

 The algorithm above cannot be realized, since every virtual machine
 (i.e. task) has predefined (and fixed  when running) its memory size
 set to the MINIMUM of hosts physical memory sizes.


Thanks in advance

Jiri Slezka


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine config in our hardware environment

2015-03-25 Thread Eric Wong
Simone:

I discussed with our SAN guy.  We currently are not able to configure the SANs in 
D1 & X1 to be seen by blades on both sides.  So we are pretty much stuck with 
our current ovirt mgmt. vms configuration.

I guess the main concern is how to control all the virtual machines with the oVirt 
infrastructure if both our ovirt-mgmt vms go down: how can we start them up 
again?  That also applies to any virtual machine.  If the oVirt mgmt 
infrastructure is not available, I think there is no way to start a VM, correct?

Again, Many Thanks,
Eric


-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com] 
Sent: Tuesday, March 24, 2015 5:11 PM
To: Eric Wong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Hosted Engine config in our hardware environment



- Original Message -
 From: Eric Wong eric.w...@solvians.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, March 24, 2015 9:36:11 AM
 Subject: RE: [ovirt-users] Hosted Engine config in our hardware environment
 
 Simone:
 
 Thanks for your quick respond.  It clarified some of the confusion I have
 with the hosted-engine.
 
 However, I still have one question.  Sorry.  It is my bad that I did not
 explain clearer on our hardware configuration, mainly on the SAN.  We also
 have 2 SAN, one on D1  another one on X1.  The blades on D1 can only see
 the storage in SAN D1.  X1 blades can only see SAN X1.
 
 Does it mean that we will need to manage our oVirt environment separately
 when we switch to use hosted-engine?  Because from your explanation, the
 hosted-engine in selected nodes will use separate iSCSI hosted-engine
 storage to store the DB. 

With hosted engine not only the DB but all the engine VM is going to be stored 
on a dedicated iSCSI LUN so each host involved (not managed by) in 
hosted-engine should be able to access that device in order to be able to start 
the engine VM after other host failures.

So, if you want to have it restarting the engine VM on a X1 host after a 
complete failure of D1 rack, you need to store the engine VM on a location that 
is accessible both by D1 and X1.
NFS could be an option too.
Otherwise you should check your storage path design in order to have a cross 
rack path; you could also evaluate iSCSI multipathing.


 Since there is no cross D1  X1 storage in our
 current config, that means we need separate hosted-engine setup on each
 side.
 
 Thanks,
 Eric
 
 
 -Original Message-
 From: Simone Tiraboschi [mailto:stira...@redhat.com]
 Sent: Friday, March 20, 2015 7:06 PM
 To: Eric Wong
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] Hosted Engine config in our hardware environment
 
 
 
 - Original Message -
  From: Eric Wong eric.w...@solvians.com
  To: users@ovirt.org
  Sent: Thursday, March 19, 2015 7:04:04 PM
  Subject: [ovirt-users] Hosted Engine config in our hardware environment
  
  Hello oVirt guru out there:
  
  I want to seek some advice on upgrade path for our oVirt management vm
  configuration. We have been using oVirt for over 3 years. When we first
  setup oVirt environment, Hosted Engine componment did not exist. Our
  question is should we migrate our current configuration to use Hosted
  Engine?
  
  First let me give an overview of our configuration. We have blade servers
  in
  2 separate racks. D1  X1. Each side has 10 blades. Storage is iSCSI SAN.
  
  Inside our oVirt 3.5.0.1-1.el6 installation, it is configured with 2 data
  centers. D1  X1. Each datacenter has the 10 blades for that side. The
  management function of oVirt (oVirt web console) is running off 2 VMs,
  ovirt-mgmt-1 on D1, and ovirt-mgmt-2 on X1. We have keepalived to maintain
  a
  flowing IP for the oVirt management console. The keepalived script makes
  sure only one copy of ovirt-engine is running at any time. It can be on D1
  or X1. The mgmt VMs have Postgresql setup in replication mode. In case one
  of the mgmt vm failed, the other mgmt vm on the other rack can pick up the
  mgmt role. Both mgmt VMs can see all blades and SAN resources on D1  X1.
  
  This configuration has been working well for us. The drawback is if both
  ovirt mgmt vm crashed, we will not be able to start them or make any change
  to the ovirt environment. It is because the mgmt VMs are running within the
  oVirt domain.
  
  We tried to upgrade our configuration to Hosted Engine configuration. From
  what I understand, the Hosted Engine will run in a separate storage domain.
  In both times we tried to upgrade to Hosted Engine, they both failed during
  export and import of current configuration.
 
 Here you can find some hint about how to migrate to hosted-engine:
 http://www.ovirt.org/Migrate_to_Hosted_Engine
 
  I think my questions are:
  - will the Hosted Engine model works in our hardware configuration. With
  hardware in 2 racks, D1  X1. Can a single Hosted Engine manage hardware on
  both sides?
  - How can we achieve redundancy when running Hosted Engine? We need to have

Re: [ovirt-users] Hosted Engine config in our hardware environment

2015-03-25 Thread Yedidyah Bar David
- Original Message -
 From: Eric Wong eric.w...@solvians.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, March 25, 2015 1:12:47 PM
 Subject: Re: [ovirt-users] Hosted Engine config in our hardware environment
 
 Simone:
 
 I discussed with our SAN guy.  We currently is not able to configure the SAN
 in D1  X1 to be seen by blades in both sides.  So we are pretty much stuck
 with our current ovirt mgmt. vms configuration.
 
 I guess the main concern is how to control all the virtual machines with
 oVirt infrastructure if both our ovirt-mgmt vms went down, how can we start
 them up again.  That also applies to any virtual machines.  If the oVirt
 mgmt infrastructure is not able, I think there is no way to start a VM,
 correct?

Not really - ovirt-hosted-engine-ha does that :-)

In principle you can check how it does that and do so too.

IIRC we have an RFE to make it support more VMs (in addition to the
engine's one). IIUC this will not help you directly, because HA still
relies on shared storage.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users