Re: [ovirt-users] Guest VM Console Creation/Access using REST API and noVNC

2014-07-17 Thread Punit Dambiwal
Hi All,

We are also struggling with the same problem. Would anybody mind posting the
resolution here, or suggesting a way to get rid of this "Failed to
connect to server (code: 1006)" error?

Thanks,
Punit


On Thu, Jul 17, 2014 at 5:20 PM, Shanil S  wrote:

> Hi,
>
> We are still waiting for an update; it would be great if anyone could share
> the helpful details. :)
>
> --
> Regards
> Shanil
>
>
> On Thu, Jul 17, 2014 at 10:23 AM, Shanil S  wrote:
>
>> Hi,
>>
>> We have allowed our portal IP address on the engine and host firewalls,
>> but the connection still fails, so there should be no firewall issues.
>>
>> --
>> Regards
>> Shanil
>>
>>
>> On Wed, Jul 16, 2014 at 3:26 PM, Shanil S  wrote:
>>
>>> Hi Sven,
>>>
>>> Regarding the ticket "path": is it the direct combination of host and
>>> port? Suppose the host is 1.2.3.4 and the port is 5100; what should the
>>> "path" value be? Is any encryption needed here?
>>>
>>>
>>> >>so you have access from the browser to the websocket-proxy, network
>>> wise? can you ping the proxy?
>>> and the websocket proxy can reach the host where the vm runs?
>>>
>>>  Yes; there should be no firewall issue, as we can access the console
>>> from the oVirt engine portal.
>>>
>>>  Do we also need to allow our own portal IP address on the oVirt engine
>>> and the hypervisors?
>>>
>>>
>>> --
>>> Regards
>>> Shanil
>>>
>>>
>>> On Wed, Jul 16, 2014 at 3:13 PM, Sven Kieske 
>>> wrote:
>>>


 Am 16.07.2014 11:30, schrieb Shanil S:
 > We will get the ticket details like host, port and password from the
 > ticket API function call, but didn't get the "path" value. Will we get it
 > from the ticket details? I couldn't find it anywhere in the ticket details.

 the "path" is the combination of host and port.

 so you have access from the browser to the websocket-proxy, network
 wise? can you ping the proxy?
 and the websocket proxy can reach the host where the vm runs?
 are you sure there are no firewalls in between?
 also, you should pay attention to how long your ticket
 is valid; you can specify the duration in minutes in your API call.
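
For reference, a minimal sketch (in Python, using the requests library) of
fetching the console details and a ticket from the REST API and building the
host:port "path"; the engine URL, credentials, VM id and the expiry value are
assumptions, and how the path and ticket are handed to noVNC depends on your
websocket proxy setup:

# Minimal sketch: read the VM display details, request a console ticket, and
# build the host:port "path".  Engine URL, credentials and VM id are
# placeholders; expiry units may differ between versions, so check your API.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"        # placeholder
AUTH = ("admin@internal", "password")            # placeholder
VM_ID = "00000000-0000-0000-0000-000000000000"   # placeholder

vm = ET.fromstring(requests.get("%s/vms/%s" % (ENGINE, VM_ID),
                                auth=AUTH, verify=False).content)
host = vm.findtext("display/address")
port = vm.findtext("display/secure_port") or vm.findtext("display/port")

ticket_req = "<action><ticket><expiry>120</expiry></ticket></action>"
action = ET.fromstring(requests.post("%s/vms/%s/ticket" % (ENGINE, VM_ID),
                                     data=ticket_req, auth=AUTH, verify=False,
                                     headers={"Content-Type": "application/xml"}).content)
ticket = action.findtext("ticket/value")

path = "%s:%s" % (host, port)   # the host/port combination described above
print("path=%s ticket=%s" % (path, ticket))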

 --
 Mit freundlichen Grüßen / Regards

 Sven Kieske

 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
 Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
 Oeynhausen

>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploying hosted engine on second host with different CPU model

2014-07-17 Thread Andrew Lau
I think you should be able to specify this within the ovirt-engine; just
modify the cluster's CPU compatibility. I hit this too, but I think I
just ended up provisioning the older machine first, and then the newer ones
joined with the older model.
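
For illustration, a minimal sketch (assuming the 3.4-era REST API; the engine
URL, credentials, cluster id and CPU type name are placeholders) of lowering
the cluster CPU compatibility so the older host's model is accepted:

# Minimal sketch: set the cluster CPU type via the REST API.  The CPU type
# string must be one the engine knows (e.g. "Intel Conroe Family"); engine
# URL, credentials and cluster id are placeholders.
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
CLUSTER_ID = "00000000-0000-0000-0000-000000000000"

body = '<cluster><cpu id="Intel Conroe Family"/></cluster>'
resp = requests.put("%s/clusters/%s" % (ENGINE, CLUSTER_ID),
                    data=body, auth=AUTH, verify=False,
                    headers={"Content-Type": "application/xml"})
print(resp.status_code)   # 200 on success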

On Thu, Jul 17, 2014 at 11:05 PM, George Machitidze
 wrote:
> Hello,
>
> I am deploying hosted engine (HA) on hosts with different CPU models on one
> of my oVirt labs.
>
> Hosts have different CPUs, and there is also the problem that the virtualization
> platform cannot detect the CPU at all; "The following CPU types are supported by
> this host:" is empty:
>
>
> 2014-07-17 16:51:42 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
> cpu._customization:124 Compatible CPU models are: []
>
> Is there any way to override this setting and use CPU of old machine for
> both hosts?
>
> ex.
> host1:
>
> cpu family  : 6
> model   : 15
> model name  : Intel(R) Xeon(R) CPU5160  @ 3.00GHz
> stepping: 11
>
> host2:
>
> cpu family  : 6
> model   : 42
> model name  : Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
> stepping: 7
>
>
>
> [root@ovirt2 ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
>   Continuing will configure this host for serving as hypervisor and
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]:
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140717165111-7tg2g7.log
>   Version: otopi-1.2.1 (otopi-1.2.1-1.el6)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>   During customization use CTRL-D to abort.
>   Please specify the storage you would like to use (nfs3,
> nfs4)[nfs3]:
>   Please specify the full shared storage connection path to use
> (example: host:/path): ovirt-hosted:/engine
>   The specified storage location already contains a data domain. Is
> this an additional host setup (Yes, No)[Yes]?
> [ INFO  ] Installing on additional host
>   Please specify the Host ID [Must be integer, default: 2]:
>
>   --== SYSTEM CONFIGURATION ==--
>
> [WARNING] A configuration file must be supplied to deploy Hosted Engine on
> an additional host.
>   The answer file may be fetched from the first host using scp.
>   If you do not want to download it automatically you can abort the
> setup answering no to the following question.
>   Do you want to scp the answer file from the first host? (Yes,
> No)[Yes]:
>   Please provide the FQDN or IP of the first host: ovirt1.test.ge
>   Enter 'root' user password for host ovirt1.test.ge:
> [ INFO  ] Answer file successfully downloaded
>
>   --== NETWORK CONFIGURATION ==--
>
>   The following CPU types are supported by this host:
> [ ERROR ] Failed to execute stage 'Environment customization': Invalid CPU
> type specified: model_Conroe
> [ INFO  ] Stage: Clean up
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
> --
> BR
>
> George Machitidze
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] safest way to backup a VM

2014-07-17 Thread Niklas Fondberg
Hi

I have had problems with the export functionality: the qemu-img convert
command hangs after being orphaned, and the VM stays in LOCKED mode.
I am therefore wondering if there is a command-line tool to back up a VM,
possibly with its snapshots and the OVF metadata file?

Niklas
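
As a rough sketch of the manual route (copying a single volume out of the
storage domain with qemu-img, driven from Python): the mount path and UUIDs
are placeholders, the VM should be down or the disk snapshotted first, and
this does not capture snapshots or the OVF metadata:

# Minimal sketch: copy one VM disk volume out of the storage domain with
# qemu-img.  Paths/UUIDs are placeholders; copy only while the VM is down
# (or from a snapshotted volume), otherwise the result may be inconsistent.
import subprocess

SRC = ("/rhev/data-center/mnt/server:_export/"
       "<domain-uuid>/images/<image-uuid>/<volume-uuid>")   # placeholder
DST = "/backup/vm-disk-backup.qcow2"

subprocess.check_call(["qemu-img", "convert", "-O", "qcow2", SRC, DST])
print("backup written to %s" % DST)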


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Relationship bw storage domain uuid/images/children and VM's

2014-07-17 Thread Steve Dainard
Hello,

I'd like to get an understanding of the relationship between VMs using a
storage domain, and the child directories listed under ...///images/.

Running through some backup scenarios I'm noticing a significant difference
between the number of provisioned VMs using a storage domain (21) +
templates (6) and the number of child directories under images/ (107).

Running RHEV 3.4 trial.
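
For comparison, a minimal sketch (assuming the 3.x REST API; the engine URL,
credentials and mount path are placeholders) that lists the disk ids the
engine knows about. Each directory name under images/ is normally one disk
(image group) id, and a VM or template can have several disks, so the counts
will not match one-to-one; directories with no matching disk id may be
leftovers from removed disks or failed operations:

# Minimal sketch: compare the image directories of a storage domain with the
# disk ids reported by the engine.  Each directory name under images/ is
# normally a disk (image group) id; engine URL, credentials and the mount
# path are placeholders.
import os
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
IMAGES_DIR = "/rhev/data-center/mnt/server:_export/<domain-uuid>/images"  # placeholder

disks = ET.fromstring(requests.get(ENGINE + "/disks", auth=AUTH, verify=False).content)
engine_ids = set(d.get("id") for d in disks.findall("disk"))
on_disk = set(os.listdir(IMAGES_DIR))

print("directories without a matching disk id:")
for name in sorted(on_disk - engine_ids):
    print("  %s" % name)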

Thanks,
Steve
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker

2014-07-17 Thread Nir Soffer
- Original Message -
> From: "Sandro Bonazzola" 
> To: "Nir Soffer" 
> Cc: de...@ovirt.org, Users@ovirt.org
> Sent: Thursday, July 17, 2014 6:29:43 PM
> Subject: Re: [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker
> 
> Il 17/07/2014 17:25, Nir Soffer ha scritto:
> > - Original Message -
> >> From: "Sandro Bonazzola" 
> >> To: de...@ovirt.org, Users@ovirt.org
> >> Sent: Thursday, July 17, 2014 6:17:13 PM
> >> Subject: [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker
> >>
> >> Hi,
> >> recent python upgrade in Fedora 19 broke vdsmd service.
> > 
> > Can you provide more details?
> 
> Sure:
> python-2.7.5-13.fc19.x86_64 hit F19 updates where
> python-cpopen-1.3-1.fc19.x86_64 and existing vdsm code cause the following
> stack trace:
> 
> vdsm: Running mkdirs
> vdsm: Running configure_coredump
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running gencerts
> vdsm: Running check_is_configured
> Traceback (most recent call last):
>   File "/usr/bin/vdsm-tool", line 145, in 
> sys.exit(main())
>   File "/usr/bin/vdsm-tool", line 142, in main
> return tool_command[cmd]["command"](*args[1:])
>   File "/usr/lib64/python2.7/site-packages/vdsm/tool/configurator.py", line
>   260, in isconfigured
> if c.getName() in args.modules and c.isconfigured() == NOT_CONFIGURED
>   File "/usr/lib64/python2.7/site-packages/vdsm/tool/configurator.py", line
>   113, in isconfigured
> self._exec_libvirt_configure("check_if_configured")
>   File "/usr/lib64/python2.7/site-packages/vdsm/tool/configurator.py", line
>   88, in _exec_libvirt_configure
> raw=True,
>   File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 645, in
>   execCmd
> deathSignal=deathSignal, childUmask=childUmask)
>   File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 50, in
>   __init__
> stderr=PIPE)
>   File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
> errread, errwrite)
> TypeError: _execute_child_v275() takes exactly 17 arguments (18 given)

OK, this is old news; it was fixed on Jun 24.

But it seems that the cpopen code is not available any more.

It used to be here:
https://github.com/ovirt-infra/cpopen

It was moved to gerrit, but does not contain any data yet:
https://github.com/oVirt/cpopen
http://gerrit.ovirt.org/gitweb?p=cpopen.git

Yaniv? Saggi?

Nir 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Logical network error

2014-07-17 Thread Dan Kenigsberg
On Thu, Jul 17, 2014 at 10:54:34AM -0400, Maurice James wrote:
> 
> I put the entire cluster into maintenance mode and rebooted. I was then able to
> make changes to the network.

Still, if you can trace back the bit of supervdsm.log with the
traceback, we may be able to nail down a bug that may harm others.

Thanks!
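
If it helps, a small sketch for pulling the traceback blocks out of
supervdsm.log so they can be attached here (/var/log/vdsm/supervdsm.log is
the usual location, but it may differ on your host):

# Minimal sketch: print every traceback block found in supervdsm.log.
LOG = "/var/log/vdsm/supervdsm.log"   # usual default location

block = []
with open(LOG) as f:
    for line in f:
        if line.startswith("Traceback"):
            block = [line]
        elif block:
            # traceback frames are indented; the first unindented line is the
            # exception itself and closes the block
            block.append(line)
            if not line.startswith((" ", "\t")):
                print("".join(block))
                block = []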
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] small cluster

2014-07-17 Thread Jeremiah Jahn
You probably shouldn't snapshot your data directory though, as
postgres pretty much does its own thing as far as journalling and
what not. It also has its own snapshotting capabilities. Again, just
an FYI, but chances are bringing back a DB from a snapshot could cause
all sorts of problems with your PG-XC cluster, especially in a multi-master
configuration. Generally we just drop the DB and resync from the
beginning, but we use Slony, and not XC, so your mileage may vary.
I've got nothing on the remote start though :-)

On Thu, Jul 17, 2014 at 10:23 AM, Jeremiah Jahn
 wrote:
> you can use puppet to remotely reboot the machine.. FYI
>
> http://stackoverflow.com/questions/8528392/remotely-shutdown-reboot-linux-boxes-without-ssh
>
> On Thu, Jul 17, 2014 at 10:17 AM, urthmover  wrote:
>> Thank you for the insight and advice Jeremiah.
>>
>> I think that creating a base image on one machine and using puppet/scripting
>> to create the others is a great idea (I’ve been using clonezilla and
>> manually editing IP/hostname afterwards).
>>
>> As for not using overt, the aspects that I’d be missing are remotely
>> rebooting a hung machine.  Also, taking a snapshot of the entire guest is
>> relatively easy/without reboot/started remotely.
>>
>> As for “Why Mini’s” we have a bunch of extra machines on hand and off lease.
>>
>> If anyone else has other thoughts and suggestions, please share.
>>
>> Thanks,
>>
>> --
>> urthmover
>>
>> On July 17, 2014 at 9:04:12 AM, Jeremiah Jahn
>> (jerem...@goodinassociates.com) wrote:
>>
>> Just a couple of thoughts for you. You are correct. Gluster would not
>> be a happy thing for for a DB. But for that matter, no network file
>> system would be good for postgres or any DB. Your SSD's probably max
>> out at 6Gb/s while your nics on a mini only go up to 1Gb/s. The whole
>> point of postgres-xc is that it takes care of all of the replication
>> and redundancy. Depending on your usage, your probably going to want
>> all of the 16 GB of ram for your indexes as well. I'd be very tempted
>> to make an install image from one mini and use it to add/create the
>> other nodes with puppet or just a custom script to configure the image
>> for addition to the pg-xc cluster. You're not going to gain a whole
>> lot of anything by running ovirt on your mini's except some slowdown.
>> if your servers that PG will be running on are linux, then you don't
>> really need more than 10GB for a linux install. If you're going to
>> use your mini's for other guests careful of the memory you use so you
>> don't make your dba unhappy. and using gluster on the exported 100G
>> partition would only net you about 500G for a storage domain if you've
>> got gluster replication going, which is not a bad idea. And finally,
>> why mac mini's? pretty pricey for server hardware unless your planning
>> on using them to host osx guests, which I'm not sure can actually be
>> done with anything but vmware, which is even a hack, at the moment.
>>
>> just my 2 cents as a person who runs gluster, ovirt, and a postgres cluster.
>>
>> On Wed, Jul 16, 2014 at 2:50 PM, urthmover  wrote:
>>> After further investigation and reading. Glusterfs is not really designed
>>> for database operations. So I am retracting one question and curious about
>>> anyone’s thoughts regrading the new set of questions.
>>>
>>> Should I partition the mirrored pair into two slices (100GB and 800GB).
>>> Then present ovirt with 10 storage domains each being a 100G partition for
>>> the OS of each guest. Then use nfs for the 800G /data partitions (not
>>> using
>>> /data as a storage domain within overt, just as plain old nfs mounts hard
>>> coded to each guest machine)
>>>
>>> Should I present each mac mini’s mirrored pair as an nfs share to
>>> ovirt-engine? This would create 10 1TB storage domains. Then create 10
>>> large 800GB /data partitions (a /data for each guest).
>>>
>>> Should I NOT use ovirt and just run each mac mini as a mirrored pair of
>>> disks and a standalone server?
>>>
>>> --
>>> urthmover
>>>
>>> On July 16, 2014 at 12:12:16 PM, urthmover (urthmo...@gmail.com) wrote:
>>>
>>> I have 10 mac minis at my disposal. (Currently, I’m booting each device
>>> using a centos6.5 usbstick leaving the 2 disks free for use)
>>>
>>> GOAL:
>>> To build a cluster of 10 servers running postgres-xc
>>>
>>> EQUIPMENT:
>>> 10 mac mini: i7-3720QM@2.60GHz/16G RAM/1x1gbit NIC/2x1TB SSDs (zfs
>>> mirrored)
>>>
>>> REQUEST:
>>> Please run the software application postgres-xc (a multi-master version of
>>> postgres). I'm told by the DBA that disk IO is the most important factor
>>> for the tasks that he’ll be running. The DBA wants 10 servers each with a
>>> 50G OS partition and a 800GB /data.
>>>
>>> THOUGHTS:
>>> I have a few ideas for how to accomplish but I'm unsure which is the best
>>> balance between disk IO and overall IT management of the environment.
>>>
>>> QUESTIONS FOR THE LIST:
>>> Should I present each of the 10 mac mini’s mirrored disks to glusterfs
>>

Re: [ovirt-users] [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker

2014-07-17 Thread Sandro Bonazzola
Il 17/07/2014 17:25, Nir Soffer ha scritto:
> - Original Message -
>> From: "Sandro Bonazzola" 
>> To: de...@ovirt.org, Users@ovirt.org
>> Sent: Thursday, July 17, 2014 6:17:13 PM
>> Subject: [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker
>>
>> Hi,
>> recent python upgrade in Fedora 19 broke vdsmd service.
> 
> Can you provide more details?

Sure:
python-2.7.5-13.fc19.x86_64 hit the F19 updates repository, where
python-cpopen-1.3-1.fc19.x86_64 and the existing vdsm code produce the following
stack trace:

vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running gencerts
vdsm: Running check_is_configured
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 145, in 
sys.exit(main())
  File "/usr/bin/vdsm-tool", line 142, in main
return tool_command[cmd]["command"](*args[1:])
  File "/usr/lib64/python2.7/site-packages/vdsm/tool/configurator.py", line 
260, in isconfigured
if c.getName() in args.modules and c.isconfigured() == NOT_CONFIGURED
  File "/usr/lib64/python2.7/site-packages/vdsm/tool/configurator.py", line 
113, in isconfigured
self._exec_libvirt_configure("check_if_configured")
  File "/usr/lib64/python2.7/site-packages/vdsm/tool/configurator.py", line 88, 
in _exec_libvirt_configure
raw=True,
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 645, in execCmd
deathSignal=deathSignal, childUmask=childUmask)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 50, in 
__init__
stderr=PIPE)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
TypeError: _execute_child_v275() takes exactly 17 arguments (18 given)
vdsm: stopped during execute check_is_configured task (task returned with error 
code 1).


We need python-cpopen-1.3-3 to be backported from Fedora 20 to F19 in order to
fix the issue, and then we can run the basic sanity tests again.
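
For anyone who wants to check whether a host is affected, a minimal
reproduction sketch (assuming the broken python/python-cpopen combination
described above); on an affected F19 host this raises the same TypeError as
the vdsm-tool traceback:

# Minimal reproduction sketch: spawning any child through CPopen exercises
# the subprocess._execute_child() call whose signature changed in the
# python update.
from cpopen import CPopen

proc = CPopen(["/bin/true"])   # raises TypeError on the broken combination
proc.wait()
print("cpopen OK, rc=%d" % proc.returncode)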

> 
> Nir
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker

2014-07-17 Thread Nir Soffer
- Original Message -
> From: "Sandro Bonazzola" 
> To: de...@ovirt.org, Users@ovirt.org
> Sent: Thursday, July 17, 2014 6:17:13 PM
> Subject: [ovirt-devel] oVirt 3.4.3 GA postponed due to blocker
> 
> Hi,
> recent python upgrade in Fedora 19 broke vdsmd service.

Can you provide more details?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] small cluster

2014-07-17 Thread Jeremiah Jahn
You can use puppet to remotely reboot the machine, FYI:

http://stackoverflow.com/questions/8528392/remotely-shutdown-reboot-linux-boxes-without-ssh

On Thu, Jul 17, 2014 at 10:17 AM, urthmover  wrote:
> Thank you for the insight and advice Jeremiah.
>
> I think that creating a base image on one machine and using puppet/scripting
> to create the others is a great idea (I’ve been using clonezilla and
> manually editing IP/hostname afterwards).
>
> As for not using overt, the aspects that I’d be missing are remotely
> rebooting a hung machine.  Also, taking a snapshot of the entire guest is
> relatively easy/without reboot/started remotely.
>
> As for “Why Mini’s” we have a bunch of extra machines on hand and off lease.
>
> If anyone else has other thoughts and suggestions, please share.
>
> Thanks,
>
> --
> urthmover
>
> On July 17, 2014 at 9:04:12 AM, Jeremiah Jahn
> (jerem...@goodinassociates.com) wrote:
>
> Just a couple of thoughts for you. You are correct. Gluster would not
> be a happy thing for for a DB. But for that matter, no network file
> system would be good for postgres or any DB. Your SSD's probably max
> out at 6Gb/s while your nics on a mini only go up to 1Gb/s. The whole
> point of postgres-xc is that it takes care of all of the replication
> and redundancy. Depending on your usage, your probably going to want
> all of the 16 GB of ram for your indexes as well. I'd be very tempted
> to make an install image from one mini and use it to add/create the
> other nodes with puppet or just a custom script to configure the image
> for addition to the pg-xc cluster. You're not going to gain a whole
> lot of anything by running ovirt on your mini's except some slowdown.
> if your servers that PG will be running on are linux, then you don't
> really need more than 10GB for a linux install. If you're going to
> use your mini's for other guests careful of the memory you use so you
> don't make your dba unhappy. and using gluster on the exported 100G
> partition would only net you about 500G for a storage domain if you've
> got gluster replication going, which is not a bad idea. And finally,
> why mac mini's? pretty pricey for server hardware unless your planning
> on using them to host osx guests, which I'm not sure can actually be
> done with anything but vmware, which is even a hack, at the moment.
>
> just my 2 cents as a person who runs gluster, ovirt, and a postgres cluster.
>
> On Wed, Jul 16, 2014 at 2:50 PM, urthmover  wrote:
>> After further investigation and reading. Glusterfs is not really designed
>> for database operations. So I am retracting one question and curious about
>> anyone’s thoughts regrading the new set of questions.
>>
>> Should I partition the mirrored pair into two slices (100GB and 800GB).
>> Then present ovirt with 10 storage domains each being a 100G partition for
>> the OS of each guest. Then use nfs for the 800G /data partitions (not
>> using
>> /data as a storage domain within overt, just as plain old nfs mounts hard
>> coded to each guest machine)
>>
>> Should I present each mac mini’s mirrored pair as an nfs share to
>> ovirt-engine? This would create 10 1TB storage domains. Then create 10
>> large 800GB /data partitions (a /data for each guest).
>>
>> Should I NOT use ovirt and just run each mac mini as a mirrored pair of
>> disks and a standalone server?
>>
>> --
>> urthmover
>>
>> On July 16, 2014 at 12:12:16 PM, urthmover (urthmo...@gmail.com) wrote:
>>
>> I have 10 mac minis at my disposal. (Currently, I’m booting each device
>> using a centos6.5 usbstick leaving the 2 disks free for use)
>>
>> GOAL:
>> To build a cluster of 10 servers running postgres-xc
>>
>> EQUIPMENT:
>> 10 mac mini: i7-3720QM@2.60GHz/16G RAM/1x1gbit NIC/2x1TB SSDs (zfs
>> mirrored)
>>
>> REQUEST:
>> Please run the software application postgres-xc (a multi-master version of
>> postgres). I'm told by the DBA that disk IO is the most important factor
>> for the tasks that he’ll be running. The DBA wants 10 servers each with a
>> 50G OS partition and a 800GB /data.
>>
>> THOUGHTS:
>> I have a few ideas for how to accomplish but I'm unsure which is the best
>> balance between disk IO and overall IT management of the environment.
>>
>> QUESTIONS FOR THE LIST:
>> Should I present each of the 10 mac mini’s mirrored disks to glusterfs
>> thus
>> creating a large 10TB storage area. Then connect the storage area to
>> ovirt-engine creating on 10TB storage domain, and use it as the storage
>> domain for 10 large 800GB disks (a /data for each guest) ?
>>
>> Should I present each mac mini’s mirrored pair as an nfs share to
>> ovirt-engine? This would create 10 1TB storage domains. Then create 10
>> large 800GB /data partitions (a /data for each guest).
>>
>> Should I NOT use ovirt and just run each mac mini as a mirrored pair of
>> disks and a sandalone server?
>>
>> LASTLY:
>> I’m open to any other thoughts or ideas for how to best accomplish this
>> task.
>>
>> Thanks in advance,
>>
>> --
>> urthmover
>

Re: [ovirt-users] small cluster

2014-07-17 Thread urthmover
Thank you for the insight and advice Jeremiah.

I think that creating a base image on one machine and using puppet/scripting to 
create the others is a great idea (I’ve been using clonezilla and manually 
editing IP/hostname afterwards).

As for not using oVirt, the aspects that I'd be missing are remotely rebooting
a hung machine, and taking a snapshot of the entire guest, which is relatively
easy, needs no reboot, and can be started remotely.

As for "Why Minis?": we have a bunch of extra machines on hand and off lease.

If anyone else has other thoughts and suggestions, please share.

Thanks,

-- 
urthmover

On July 17, 2014 at 9:04:12 AM, Jeremiah Jahn (jerem...@goodinassociates.com) 
wrote:

Just a couple of thoughts for you. You are correct. Gluster would not  
be a happy thing for for a DB. But for that matter, no network file  
system would be good for postgres or any DB. Your SSD's probably max  
out at 6Gb/s while your nics on a mini only go up to 1Gb/s. The whole  
point of postgres-xc is that it takes care of all of the replication  
and redundancy. Depending on your usage, your probably going to want  
all of the 16 GB of ram for your indexes as well. I'd be very tempted  
to make an install image from one mini and use it to add/create the  
other nodes with puppet or just a custom script to configure the image  
for addition to the pg-xc cluster. You're not going to gain a whole  
lot of anything by running ovirt on your mini's except some slowdown.  
if your servers that PG will be running on are linux, then you don't  
really need more than 10GB for a linux install. If you're going to  
use your mini's for other guests careful of the memory you use so you  
don't make your dba unhappy. and using gluster on the exported 100G  
partition would only net you about 500G for a storage domain if you've  
got gluster replication going, which is not a bad idea. And finally,  
why mac mini's? pretty pricey for server hardware unless your planning  
on using them to host osx guests, which I'm not sure can actually be  
done with anything but vmware, which is even a hack, at the moment.  

just my 2 cents as a person who runs gluster, ovirt, and a postgres cluster.  

On Wed, Jul 16, 2014 at 2:50 PM, urthmover  wrote:  
> After further investigation and reading. Glusterfs is not really designed  
> for database operations. So I am retracting one question and curious about  
> anyone’s thoughts regrading the new set of questions.  
>  
> Should I partition the mirrored pair into two slices (100GB and 800GB).  
> Then present ovirt with 10 storage domains each being a 100G partition for  
> the OS of each guest. Then use nfs for the 800G /data partitions (not using  
> /data as a storage domain within overt, just as plain old nfs mounts hard  
> coded to each guest machine)  
>  
> Should I present each mac mini’s mirrored pair as an nfs share to  
> ovirt-engine? This would create 10 1TB storage domains. Then create 10  
> large 800GB /data partitions (a /data for each guest).  
>  
> Should I NOT use ovirt and just run each mac mini as a mirrored pair of  
> disks and a standalone server?  
>  
> --  
> urthmover  
>  
> On July 16, 2014 at 12:12:16 PM, urthmover (urthmo...@gmail.com) wrote:  
>  
> I have 10 mac minis at my disposal. (Currently, I’m booting each device  
> using a centos6.5 usbstick leaving the 2 disks free for use)  
>  
> GOAL:  
> To build a cluster of 10 servers running postgres-xc  
>  
> EQUIPMENT:  
> 10 mac mini: i7-3720QM@2.60GHz/16G RAM/1x1gbit NIC/2x1TB SSDs (zfs  
> mirrored)  
>  
> REQUEST:  
> Please run the software application postgres-xc (a multi-master version of  
> postgres). I'm told by the DBA that disk IO is the most important factor  
> for the tasks that he’ll be running. The DBA wants 10 servers each with a  
> 50G OS partition and a 800GB /data.  
>  
> THOUGHTS:  
> I have a few ideas for how to accomplish but I'm unsure which is the best  
> balance between disk IO and overall IT management of the environment.  
>  
> QUESTIONS FOR THE LIST:  
> Should I present each of the 10 mac mini’s mirrored disks to glusterfs thus  
> creating a large 10TB storage area. Then connect the storage area to  
> ovirt-engine creating on 10TB storage domain, and use it as the storage  
> domain for 10 large 800GB disks (a /data for each guest) ?  
>  
> Should I present each mac mini’s mirrored pair as an nfs share to  
> ovirt-engine? This would create 10 1TB storage domains. Then create 10  
> large 800GB /data partitions (a /data for each guest).  
>  
> Should I NOT use ovirt and just run each mac mini as a mirrored pair of  
> disks and a sandalone server?  
>  
> LASTLY:  
> I’m open to any other thoughts or ideas for how to best accomplish this  
> task.  
>  
> Thanks in advance,  
>  
> --  
> urthmover  
>  
>  
> ___  
> Users mailing list  
> Users@ovirt.org  
> http://lists.ovirt.org/mailman/listinfo/users  
>  

[ovirt-users] oVirt 3.4.3 GA postponed due to blocker

2014-07-17 Thread Sandro Bonazzola
Hi,
recent python upgrade in Fedora 19 broke vdsmd service.
While we wait for an updated python-cpopen package to be built, we're 
postponing oVirt 3.4.3 GA.
The package should be built by tomorrow and will be hosted in the oVirt repo until
it is available in the Fedora repositories.
We'll release 3.4.3 after basic sanity testing with the new package.
Thanks,

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Logical network error

2014-07-17 Thread Maurice James

I put the entire cluster into maintenance mode and rebooted. I was then able to
make changes to the network.



- Original Message -
From: "Moti Asayag" 
To: "Maurice James" 
Cc: "users" , "Antoni Segura Puimedon" , 
"Dan Kenigsberg" 
Sent: Thursday, July 17, 2014 5:20:34 AM
Subject: Re: [ovirt-users] Logical network error



- Original Message -
> From: "Maurice James" 
> To: "users" 
> Sent: Wednesday, July 16, 2014 5:06:23 PM
> Subject: [ovirt-users] Logical network error
> 
> While attempting to remove a logical network from one of my hosts, I'm getting
> the following error:
> Error while executing action Setup Networks: Unexpected exception
> 
> 
> I'm seeing the following error in vdsm.log.
> 

Could you attach the entire vdsm.log and supervdsm.log ?

> 
> Thread-72::ERROR::2014-07-16
> 10:03:46,773::BindingXMLRPC::1086::vds::(wrapper) unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/BindingXMLRPC.py", line 1070, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/BindingXMLRPC.py", line 494, in setupNetworks
> return api.setupNetworks(networks, bondings, options)
> File "/usr/share/vdsm/API.py", line 1297, in setupNetworks
> supervdsm.getProxy().setupNetworks(networks, bondings, options)
> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
> File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
> File "", line 2, in setupNetworks
> File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [QE] Hardening Guide

2014-07-17 Thread Sandro Bonazzola
Il 17/07/2014 16:16, Jeremiah Jahn ha scritto:
> Did you ever get an answer to this?  I've got a compliance audit
> coming up, and would love to have a place to verify things against.

Not yet

> 
> On Thu, Jun 19, 2014 at 2:33 AM, Sandro Bonazzola  wrote:
>> Hi,
>> while I was working on Bug 1097022 - ovirt-engine-setup: weak default 
>> passwords for PostgreSQL database users
>> I was wondering where to write hardening tips described in comment #18.
>> It looks like we don't have any page on oVirt wiki about hardening.
>> Anyone interested in contributing to such page?
>> I guess it can be created as http://www.ovirt.org/OVirt_Hardening_Guide
>> Thoughts?
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [QE] Hardening Guide

2014-07-17 Thread Jeremiah Jahn
Did you ever get an answer to this?  I've got a compliance audit
coming up, and would love to have a place to verify things against.

On Thu, Jun 19, 2014 at 2:33 AM, Sandro Bonazzola  wrote:
> Hi,
> while I was working on Bug 1097022 - ovirt-engine-setup: weak default 
> passwords for PostgreSQL database users
> I was wondering where to write hardening tips described in comment #18.
> It looks like we don't have any page on oVirt wiki about hardening.
> Anyone interested in contributing to such page?
> I guess it can be created as http://www.ovirt.org/OVirt_Hardening_Guide
> Thoughts?
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] small cluster

2014-07-17 Thread Jeremiah Jahn
Just a couple of thoughts for you. You are correct: Gluster would not
be a happy thing for a DB.  But for that matter, no network file
system would be good for postgres or any DB. Your SSDs probably max
out at 6Gb/s while the NICs on a mini only go up to 1Gb/s. The whole
point of postgres-xc is that it takes care of all of the replication
and redundancy.  Depending on your usage, you're probably going to want
all of the 16 GB of RAM for your indexes as well.  I'd be very tempted
to make an install image from one mini and use it to add/create the
other nodes with puppet, or just a custom script to configure the image
for addition to the pg-xc cluster.  You're not going to gain a whole
lot of anything by running ovirt on your minis except some slowdown.
If the servers that PG will be running on are linux, then you don't
really need more than 10GB for a linux install.  If you're going to
use your minis for other guests, be careful of the memory you use so you
don't make your DBA unhappy.  And using gluster on the exported 100G
partition would only net you about 500G for a storage domain if you've
got gluster replication going, which is not a bad idea (see the quick
check below).  And finally, why mac minis? Pretty pricey for server
hardware unless you're planning on using them to host OS X guests, which
I'm not sure can actually be done with anything but vmware, and even
that is a hack at the moment.

Just my 2 cents as a person who runs gluster, ovirt, and a postgres cluster.
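
A quick back-of-the-envelope check of that capacity figure (assuming one
100G brick per mini and a replica-2 gluster volume):

# Back-of-the-envelope: usable capacity of a replica-2 gluster volume built
# from ten 100 GB bricks (assumptions: one brick per mini, replica count 2).
bricks, brick_gb, replica = 10, 100, 2
print("usable ~%d GB" % (bricks * brick_gb // replica))   # ~500 GB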

On Wed, Jul 16, 2014 at 2:50 PM, urthmover  wrote:
> After further investigation and reading.  Glusterfs is not really designed
> for database operations.  So I am retracting one question and curious about
> anyone’s thoughts regrading the new set of questions.
>
> Should I partition the mirrored pair into two slices (100GB and 800GB).
> Then present ovirt with 10 storage domains each being a 100G partition for
> the OS of each guest.  Then use nfs for the 800G /data partitions (not using
> /data as a storage domain within overt, just as plain old nfs mounts hard
> coded to each guest machine)
>
> Should I present each mac mini’s mirrored pair as an nfs share to
> ovirt-engine?  This would create 10 1TB storage domains.  Then create 10
> large 800GB /data partitions (a /data for each guest).
>
> Should I NOT use ovirt and just run each mac mini as a mirrored pair of
> disks and a standalone server?
>
> --
> urthmover
>
> On July 16, 2014 at 12:12:16 PM, urthmover (urthmo...@gmail.com) wrote:
>
> I have 10 mac minis at my disposal.  (Currently, I’m booting each device
> using a centos6.5 usbstick leaving the 2 disks free for use)
>
> GOAL:
> To build a cluster of 10 servers running postgres-xc
>
> EQUIPMENT:
> 10 mac mini:  i7-3720QM@2.60GHz/16G RAM/1x1gbit NIC/2x1TB SSDs (zfs
> mirrored)
>
> REQUEST:
> Please run the software application postgres-xc (a multi-master version of
> postgres).  I'm told by the DBA that disk IO is the most important factor
> for the tasks that he’ll  be running.  The DBA wants 10 servers each with a
> 50G OS partition and a 800GB /data.
>
> THOUGHTS:
> I have a few ideas for how to accomplish but I'm unsure which is the best
> balance between disk IO and overall IT management of  the environment.
>
> QUESTIONS FOR THE LIST:
> Should I present each of the 10 mac mini’s mirrored disks to glusterfs thus
> creating a large 10TB storage area.  Then connect the storage area to
> ovirt-engine creating on 10TB storage domain, and use it as the storage
> domain for 10 large 800GB disks (a /data for each guest) ?
>
> Should I present each mac mini’s mirrored pair as an nfs share to
> ovirt-engine?  This would create 10 1TB storage domains.  Then create 10
> large 800GB /data partitions (a /data for each guest).
>
> Should I NOT use ovirt and just run each mac mini as a mirrored pair of
> disks and a sandalone server?
>
> LASTLY:
> I’m open to any other thoughts or ideas for how to best accomplish this
> task.
>
> Thanks in advance,
>
> --
> urthmover
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Missing Storage domain

2014-07-17 Thread Dafna Ron

x10 :)
thanks for letting us know it was resolved

have a nice one.
Dafna

On 07/17/2014 02:20 PM, Maurice James wrote:

I ended up putting the problematic storage domain into maintenance mode. That
allowed the other hosts to come online. I then rebooted that storage domain
host. That seemed to clear up the problem.




- Original Message -
From: "Dafna Ron" 
To: "Maurice James" , "users" 
Sent: Thursday, July 17, 2014 2:53:46 AM
Subject: Re: [ovirt-users] Missing Storage domain

even if the other hosts can see the domain, it doesn't mean that there
is no problem from that particular host.
if you checked everything and you are positive that the host can see and
connect to the domain please restart vdsm to see that there is no cache
issue.

Dafna

On 07/16/2014 07:21 PM, Maurice James wrote:

What do I do when a host in a cluster can't find a storage domain that
"it thinks" doesn't exist? The storage domain is in the DB and is
online, because one of the other hosts is working just fine. I pulled
this out of the vdsm.log. I even tried rebooting.


Thread-30::ERROR::2014-07-16
14:19:10,522::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
Error while collecting domain b7663d70-e658-41fa-b9f0-8da83c9eddce
monitoring information
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in
_monitorDomain
 self.domain = sdCache.produce(self.sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
 domain.getRealDomain()
   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
 dom = findMethod(sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
('b7663d70-e658-41fa-b9f0-8da83c9eddce',)



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Missing Storage domain

2014-07-17 Thread Maurice James
I ended up putting the problematic storage domain into maintenance mode. That
allowed the other hosts to come online. I then rebooted that storage domain
host. That seemed to clear up the problem.




- Original Message -
From: "Dafna Ron" 
To: "Maurice James" , "users" 
Sent: Thursday, July 17, 2014 2:53:46 AM
Subject: Re: [ovirt-users] Missing Storage domain

even if the other hosts can see the domain, it doesn't mean that there 
is no problem from that particular host.
if you checked everything and you are positive that the host can see and 
connect to the domain please restart vdsm to see that there is no cache 
issue.

Dafna

On 07/16/2014 07:21 PM, Maurice James wrote:
> What do I do when a host in a cluster can't find a storage domain that
> "it thinks" doesn't exist? The storage domain is in the DB and is
> online, because one of the other hosts is working just fine. I pulled
> this out of the vdsm.log. I even tried rebooting.
>
>
> Thread-30::ERROR::2014-07-16 
> 14:19:10,522::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
>  
> Error while collecting domain b7663d70-e658-41fa-b9f0-8da83c9eddce 
> monitoring information
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in 
> _monitorDomain
> self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
> domain.getRealDomain()
>   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
> dom = findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist: 
> ('b7663d70-e658-41fa-b9f0-8da83c9eddce',)
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Deploying hosted engine on second host with different CPU model

2014-07-17 Thread George Machitidze


Hello,

I am deploying hosted engine (HA) on hosts with different CPU models on one
of my oVirt labs.

Hosts have different CPUs, and there is also the problem that the
virtualization platform cannot detect the CPU at all; "The following CPU
types are supported by this host:" is empty:

2014-07-17 16:51:42 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
cpu._customization:124 Compatible CPU models are: []

Is there any way to override this setting and use the CPU of the old machine
for both hosts?

ex.

host1:

cpu family  : 6
model   : 15
model name  : Intel(R) Xeon(R) CPU 5160  @ 3.00GHz
stepping: 11

host2:

cpu family  : 6
model   : 42
model name  : Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
stepping: 7

[root@ovirt2 ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
          Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
          Configuration files: []
          Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140717165111-7tg2g7.log
          Version: otopi-1.2.1 (otopi-1.2.1-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

          --== STORAGE CONFIGURATION ==--

          During customization use CTRL-D to abort.
          Please specify the storage you would like to use (nfs3, nfs4)[nfs3]:
          Please specify the full shared storage connection path to use
(example: host:/path): ovirt-hosted:/engine
          The specified storage location already contains a data domain. Is
this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
          Please specify the Host ID [Must be integer, default: 2]:

          --== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted Engine on
an additional host.
          The answer file may be fetched from the first host using scp.
          If you do not want to download it automatically you can abort the
setup answering no to the following question.
          Do you want to scp the answer file from the first host? (Yes,
No)[Yes]:
          Please provide the FQDN or IP of the first host: ovirt1.test.ge
          Enter 'root' user password for host ovirt1.test.ge:
[ INFO  ] Answer file successfully downloaded

          --== NETWORK CONFIGURATION ==--

          The following CPU types are supported by this host:
[ ERROR ] Failed to execute stage 'Environment customization': Invalid CPU
type specified: model_Conroe
[ INFO  ] Stage: Clean up
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

--
BR

George Machitidze
 ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] fileserver as a guest oVirt

2014-07-17 Thread Niklas Fondberg
Thanks for your thorough answer and explanation. I have gone with the direct
LUN to start with. I bought new SATA disks and added them to the storage chassis
for the purpose of file sharing.


Regards,
Niklas 

> On 17 jul 2014, at 14:33, "Daniel Helgenberger" 
>  wrote:
> 
> Hello,
> 
> For some reason the message was flagged as spam, so there was a delay.
> 
> oVirt supports direct LUNs. These LUNs are often already partitions of
> some RAID enclosure.
> AFAIK the MSA60 is a JBOD. You can use the p411 controller to create
> your partitions / LUNs.
> 
> The Virtio-SCSI paravirt driver supports a wide range of (=most) SCSI
> commands. This way clients can access them as 'real' SCSI devices.
> 
> If you did partition the LUN with parted, then the client(s) will see
> these partitions also, along with the file system on it. 
> 
> As you might know, you cannot have 'normal' file systems mounted rw on
> several machines at once; you need a cluster file system; see examples in
> [1]; there are several open source FS's around.
> 
> Also, nothing stops you from mounting a file system read only on several
> hosts.
> 
> One note on the subject, though:
> I consider shared disk file systems as 'old' approach. I support them in
> our setup because of historical reasons. 
> Today in a new deployment I would tend to use more 'modern' scale out
> file systems like GlusterFS (support in oVirt is quite well) or
> Ceph/Rados as object store file system. Using the native clients, you
> basically have a shared disk file system with less bottlenecks (MDC's in
> shared disk fs). Also, both examples have APIs - an application using
> this can greatly benefit in performance. Again, I consider using APIs
> for file storage the approach of the future.
> 
> If you need to attach NFS / CIFS clients, you can always reshare these
> file systems and use (p)NFS or CTDB if you want to cluster this.
> 
> Note, with one host and one JBOD this makes little sense to me.
> 
> [1] http://en.wikipedia.org/wiki/Clustered_file_system
> [2] https://ctdb.samba.org/
> 
>> On Mo, 2014-07-14 at 08:10 +, Niklas Fondberg wrote:
>> Thanks, after reading it makes sense. I suppose I need to drop my hope of
>> having sharing possibility with the host.
>> I have two questions that you might be able to answer:
>> 1. Does direct LUN support partitions or only whole devices
>> 2. Do you know of any open source way of making them shareable to Ovirt?
>> 
>> My setup is simple:
>> - HP DL380 with dual Xeon and lots of RAM. Boots from seperate disk (USB)
>> - MSA60 with p411 attached to the HP DL380
>> 
>> When we grow oVirt we will grow with DL360¹s and use all shared storage
>> from the guest fileserver on the first host.
>> 
>> 
>> 
>> On 14/07/14 08:57, "Daniel Helgenberger" 
>> wrote:
>> 
>>> Hello,
>>> 
>>> just add my 2ct: I did a lot of bench marking for our SAN (FC LUN's). I
>>> also need file servers for our SMB Clients.
>>> 
>>> I recommend using Direct Attached LUNs for your purpose and attach them
>>> to the VMs as VirtIO-SCSI disks. You can even add them as shareable to
>>> oVirt if you deploy some kind of SAN file system (we use Quantum's
>>> StorNext).
>>> 
>>> Bottom line, the implementation of VirtIO-SCSI is so well done and
>>> support in oVirt is great. I could not see any bottlenecks in the
>>> virtualization. For the foreseeable future I will not deploy bare metal
>>> file Servers again.
>>> 
>>> HTH,
>>> 
 On So, 2014-07-13 at 15:47 +, Niklas Fondberg wrote:
 
 From: Karli Sjöberg mailto:karli.sjob...@slu.se>>
 Date: Sunday 13 July 2014 14:51
 To: Niklas Fondberg mailto:nik...@vireone.com>>
 Cc: "users@ovirt.org"
 mailto:users@ovirt.org>>, Karli Sjöberg
 mailto:karli.sjob...@slu.se>>
 Subject: Re: [ovirt-users] fileserver as a guest oVirt
 
 
 Den 12 jul 2014 22:49 skrev Niklas Fondberg
 mailto:nik...@vireone.com>>:
> 
> 
> 
>> On 12 jul 2014, at 16:57, "Karli Sjöberg"
> mailto:karli.sjob...@slu.se>> wrote:
> 
>> 
>> Den 12 jul 2014 15:45 skrev Niklas Fondberg
 mailto:nik...@vireone.com>>:
>>> 
>>> Hi,
>>> 
>>> I'm new to oVirt but I must say I am impressed!
>>> I am running it on a HP DL380 with an external SAS chassi.
>>> Linux dist is Centos 6.5 and oVirt is 3.4 running all-in-one (for
 now until we need to have a second host).
>>> 
>>> Our company (www.vireone.com) deals with system architecture for
 many telco and media operators and is now setting up a small own
 datacenter for our internal tests as well as our IT infrastructure.
>>> We are in the process of installing Zentyal for the SMB purposes
 on a guest and it would be great to have that guest also serving a fs
 path directory with NFS + SMB (which is semi crippled on the host after
 oVirt installation with version 3 et.c.).
>>> 
>>> Does anyone have an idea of how I can through oVirt (seen seve

Re: [ovirt-users] fileserver as a guest oVirt

2014-07-17 Thread Daniel Helgenberger
Hello,

For some reason the message was flagged as spam, so there was a delay.

oVirt supports direct LUNs. These LUNs are often already partitions of
some RAID enclosure.
AFAIK the MSA60 is a JBOD. You can use the p411 controller to create
your partitions / LUNs.

The Virtio-SCSI paravirt driver supports a wide range of (=most) SCSI
commands. This way clients can access them as 'real' SCSI devices.

If you did partition the LUN with parted, then the client(s) will see
these partitions also, along with the file system on it. 

As you might know, you cannot have 'normal' file systems mounted rw on
several machines at once; you need a cluster file system; see examples in
[1]; there are several open source FS's around.

Also, nothing stops you from mounting a file system read only on several
hosts.

One note on the subject, though:
I consider shared disk file systems as 'old' approach. I support them in
our setup because of historical reasons. 
Today in a new deployment I would tend to use more 'modern' scale out
file systems like GlusterFS (support in oVirt is quite well) or
Ceph/Rados as object store file system. Using the native clients, you
basically have a shared disk file system with less bottlenecks (MDC's in
shared disk fs). Also, both examples have APIs - an application using
this can greatly benefit in performance. Again, I consider using APIs
for file storage the approach of the future.

If you need to attach NFS / CIFS clients, you can always reshare these
file systems and use (p)NFS or CTDB if you want to cluster this.

Note, with one host and one JBOD this makes little sense to me.

[1] http://en.wikipedia.org/wiki/Clustered_file_system
[2] https://ctdb.samba.org/

On Mo, 2014-07-14 at 08:10 +, Niklas Fondberg wrote:
> Thanks, after reading it makes sense. I suppose I need to drop my hope of
> having sharing possibility with the host.
> I have two questions that you might be able to answer:
> 1. Does direct LUN support partitions or only whole devices
> 2. Do you know of any open source way of making them shareable to Ovirt?
> 
> My setup is simple:
> - HP DL380 with dual Xeon and lots of RAM. Boots from seperate disk (USB)
> - MSA60 with p411 attached to the HP DL380
> 
> When we grow oVirt we will grow with DL360¹s and use all shared storage
> from the guest fileserver on the first host.
> 
> 
> 
> On 14/07/14 08:57, "Daniel Helgenberger" 
> wrote:
> 
> >Hello,
> >
> >just add my 2ct: I did a lot of bench marking for our SAN (FC LUN's). I
> >also need file servers for our SMB Clients.
> >
> >I recommend using Direct Attached LUNs for your purpose and attach them
> >to the VMs as VirtIO-SCSI disks. You can even add them as shareable to
> >oVirt if you deploy some kind of SAN file system (we use Quantum's
> >StorNext).
> >
> >Bottom line, the implementation of VirtIO-SCSI is so well done and
> >support in oVirt is great. I could not see any bottlenecks in the
> >virtualization. For the foreseeable future I will not deploy bare metal
> >file Servers again.
> >
> >HTH,
> >
> >On So, 2014-07-13 at 15:47 +, Niklas Fondberg wrote:
> >> 
> >> From: Karli Sjöberg mailto:karli.sjob...@slu.se>>
> >> Date: Sunday 13 July 2014 14:51
> >> To: Niklas Fondberg mailto:nik...@vireone.com>>
> >> Cc: "users@ovirt.org"
> >>mailto:users@ovirt.org>>, Karli Sjöberg
> >>mailto:karli.sjob...@slu.se>>
> >> Subject: Re: [ovirt-users] fileserver as a guest oVirt
> >> 
> >> 
> >> Den 12 jul 2014 22:49 skrev Niklas Fondberg
> >>mailto:nik...@vireone.com>>:
> >> >
> >> >
> >> >
> >> > On 12 jul 2014, at 16:57, "Karli Sjöberg"
> >>mailto:karli.sjob...@slu.se>> wrote:
> >> >
> >> >>
> >> >> Den 12 jul 2014 15:45 skrev Niklas Fondberg
> >>mailto:nik...@vireone.com>>:
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > I'm new to oVirt but I must say I am impressed!
> >> >> > I am running it on a HP DL380 with an external SAS chassi.
> >> >> > Linux dist is Centos 6.5 and oVirt is 3.4 running all-in-one (for
> >>now until we need to have a second host).
> >> >> >
> >> >> > Our company (www.vireone.com) deals with system architecture for
> >>many telco and media operators and is now setting up a small own
> >>datacenter for our internal tests as well as our IT infrastructure.
> >> >> > We are in the process of installing Zentyal for the SMB purposes
> >>on a guest and it would be great to have that guest also serving a fs
> >>path directory with NFS + SMB (which is semi crippled on the host after
> >>oVirt installation with version 3 et.c.).
> >> >> >
> >> >> > Does anyone have an idea of how I can through oVirt (seen several
> >>solutions using virsh and kvm) letting my Zentyal Ubuntu guest have
> >>access to a host mount point or if necessary (second best) a seperate
> >>partition?
> >> >> >
> >> >> > Best regards
> >> >> > Niklas
> >> >> >
> >> >>
> >> >> Why not just give the guest a thin provision virtual hard drive and
> >>expand it on a demand basis?
> >> >>
> >> >> /K
> >> >
> >> > Thanks for the adv

Re: [ovirt-users] 3.5 thinks HA is already running

2014-07-17 Thread Maël Lavault
Hi,

Just apply the two-line fix from this link to your initscripts:

http://gerrit.ovirt.org/#/c/29574/

Or wait for an update.

It worked well for me !

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setup of hosted Engine Fails

2014-07-17 Thread Maël Lavault
I had the same problem. You need to update your CentOS; there is a new
version of python-pthreading which solves the issue.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Misc architecture questions

2014-07-17 Thread Maël Lavault
Hi,

I'm a student currently doing my final study project based around Ovirt.

I played a lot with Ovirt to have an overview of what could or could not
be done, but I still have a few questions and stuff I want to clarify.

My project consists of an HA architecture using oVirt, powered by 8
reasonably powerful servers (4 x quad-core Opteron, 32 GB RAM, 2 x 10k RPM
HDD) equipped with 2 NICs each.

I followed this tutorial to use Ovirt with GlusterFS :

http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/

Worked well !

I installed 2 self-hosted engine hosts on 2 different servers, and GlusterFS
on 2 other servers.

But since we had only 2 x 1Gb NICs per server, we decided to go with
bonding and VLANs to separate the networks, inspired by this blog post:

http://captainkvm.com/2013/04/maximizing-your-10gb-ethernet-in-rhev/

Unfortunately, it seems like oVirt 3.4 does not support installing the
self-hosted engine on a bond + VLAN. I tried 3.5, but there were too many
bugs for it to be usable, and the project is set to be deployed in 2 months.

A colleague suggested a workaround using Open vSwitch between oVirt
and the NIC bond to "translate tagged packets into untagged packets" and
hide the bond from oVirt. Does this have a chance of working?

Since the GlusterFS is accessed by NFS, I was able to bond the two
servers.

A few questions :

- What is the purpose of the ovirtmgmt network? I did a lot of searching but
haven't found any clear explanation. Does it need to be publicly
accessible, or is a private IP fine too?

- Is the display network used for SPICE/VNC connections?

- How does oVirt differentiate a VM network from a storage network? They
both have the same VM role in the interface. Both networks could
(should) be on a private IP range, right?

- How can I add the storage network after (or better, before) the
self-hosted engine is installed? (Since the self-hosted engine is stored on
Gluster, I will lose the connection to the engine.)

- Using the self-hosted engine, do all my nodes need to be installed with
hosted-engine --deploy, or can I have only 2 self-hosted-engine nodes
and 4 classic nodes?

- I'm trying to cleanly reinstall the second hosted engine after some
experiments, but the behavior is strange:

[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
  To continue make a selection from the options below:
  (1) Continue setup - engine installation is complete
  (2) Power off and restart the VM
  (3) Abort setup
Isn't this supposed to give me the information to connect to the VM so I can
install the engine?

Thanks a lot for this truly well-made software!


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Guest VM CPU Information Different from the Host

2014-07-17 Thread Punit Dambiwal
Hi,

In my oVirt cluster I am using the following CPU (Intel(R) Xeon(R) CPU
E5-2648L v2 @ 1.90GHz), but the guest machine shows "Intel Xeon E312xx
(Sandy Bridge)".

It seems to recognize the E5 CPU as an E3 CPU... is there any way to change it
to "Intel Xeon E526xx (Sandy Bridge)" instead of "Intel Xeon E312xx (Sandy Bridge)"?

Thanks,
Punit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
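
For reference, "Intel Xeon E312xx (Sandy Bridge)" is, as far as I can tell,
simply QEMU's generic model name for the SandyBridge CPU type that the cluster
exposes, so the guest is not expected to show the exact host part number. A
hedged way to check which CPU type the cluster is presenting (the engine URL
and credentials below are placeholders):

# which CPU type does the cluster expose to its guests?
curl -s -k -u admin@internal:PASSWORD \
  "https://engine.example.com/api/clusters" | grep -i '<cpu id'
# and what the guest itself reports
grep 'model name' /proc/cpuinfo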


[ovirt-users] Can't create disks on gluster storage

2014-07-17 Thread andreas.ewert
Hi,

While creating a new disk on gluster storage I got some errors, and in the end
the disk is not created.
engine.log says that there is no such file or directory, but the gluster domain
(124e6273-c6f5-471f-88e7-e5d9d37d7385) is active:

[root@ipet etc]# vdsClient -s 0 getStoragePoolInfo 
5849b030-626e-47cb-ad90-3ce782d831b3
name = Test
isoprefix = 
/rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f/images/----
pool_status = connected
lver = 0
spm_id = 1
master_uuid = 54f86ad7-2c12-4322-b2d1-f129f3d20e57
version = 3
domains = 
9bdf01bd-78d6-4408-b3a9-e05469004d78:Active,e4a3928d-0475-4b99-bfb8-86606931296a:Active,c1582b82-9bfc-4e2b-ab2a-83551dcfba8f:Active,124e6273-c6f5-471f-88e7-e5d9d37d7385:Active,54f86ad7-2c12-4322-b2d1-f129f3d20e57:Active
type = FCP
master_ver = 8
9bdf01bd-78d6-4408-b3a9-e05469004d78 = {'status': 'Active', 'diskfree': 
'1043408617472', 'isoprefix': '', 'alerts': [], 'disktotal': '1099108974592', 
'version': 3}
e4a3928d-0475-4b99-bfb8-86606931296a = {'status': 'Active', 'diskfree': 
'27244910346240', 'isoprefix': '', 'alerts': [], 'disktotal': '34573945667584', 
'version': 3}
124e6273-c6f5-471f-88e7-e5d9d37d7385 = {'status': 'Active', 'diskfree': 
'12224677937152', 'isoprefix': '', 'alerts': [], 'disktotal': '16709735940096', 
'version': 3}
c1582b82-9bfc-4e2b-ab2a-83551dcfba8f = {'status': 'Active', 'diskfree': 
'115282018304', 'isoprefix': 
'/rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f/images/----',
 'alerts': [], 'disktotal': '539016298496', 'version': 0}
54f86ad7-2c12-4322-b2d1-f129f3d20e57 = {'status': 'Active', 'diskfree': 
'148579024896', 'isoprefix': '', 'alerts': [], 'disktotal': '1197490569216', 
'version': 3}


[root@ipet etc]# ll /rhev/data-center/mnt/glusterSD/moly\:_repo1/
insgesamt 0
drwxr-xr-x 4 vdsm kvm 96 17. Apr 10:02 124e6273-c6f5-471f-88e7-e5d9d37d7385
-rwxr-xr-x 1 vdsm kvm  0 11. Feb 12:52 __DIRECT_IO_TEST__
[root@ipet etc]# ll /rhev/data-center/
insgesamt 12
drwxr-xr-x 2 vdsm kvm 4096 14. Jul 10:42 5849b030-626e-47cb-ad90-3ce782d831b3
drwxr-xr-x 2 vdsm kvm 4096 16. Dez 2013  hsm-tasks
drwxr-xr-x 7 vdsm kvm 4096 14. Jul 10:24 mnt
[root@ipet etc]# ll /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/
insgesamt 16
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 54f86ad7-2c12-4322-b2d1-f129f3d20e57 -> 
/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 9bdf01bd-78d6-4408-b3a9-e05469004d78 -> 
/rhev/data-center/mnt/blockSD/9bdf01bd-78d6-4408-b3a9-e05469004d78
lrwxrwxrwx 1 vdsm kvm 84 14. Jul 10:42 c1582b82-9bfc-4e2b-ab2a-83551dcfba8f -> 
/rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 mastersd -> 
/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57


Here the directory of the gluster domain
124e6273-c6f5-471f-88e7-e5d9d37d7385 and the symbolic link belonging to it are
missing. The directory is missing on every host.
What can I do to fix this issue?

With best regards
Andreas



 

engine.log:
2014-07-17 05:30:33,347 INFO  [org.ovirt.engine.core.bll.AddDiskCommand] 
(ajp--127.0.0.1-8702-7) [19404214] Running command: AddDiskCommand internal: 
false. Entities affected :  ID: 124e6273-c6f5-471f-88e7-e5d9d37d7385 Type: 
Storage
2014-07-17 05:30:33,358 INFO  
[org.ovirt.engine.core.bll.AddImageFromScratchCommand] (ajp--127.0.0.1-8702-7) 
[3e7d9b07] Running command: AddImageFromScratchCommand internal: true. Entities 
affected :  ID: 124e6273-c6f5-471f-88e7-e5d9d37d7385 Type: Storage
2014-07-17 05:30:33,364 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] START, CreateImageVDSCommand( storagePoolId 
= 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, 
storageDomainId = 124e6273-c6f5-471f-88e7-e5d9d37d7385, imageGroupId = 
7594000c-23a9-4941-8218-2c0654518a3d, imageSizeInBytes = 68719476736, 
volumeFormat = RAW, newImageId = c48e46cd-9dd5-4c52-94a4-db0378aecc3c, 
newImageDescription = ), log id: 642dadf4
2014-07-17 05:30:33,366 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] -- executeIrsBrokerCommand: calling 
'createVolume' with two new parameters: description and UUID
2014-07-17 05:30:33,392 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] FINISH, CreateImageVDSCommand, return: 
c48e46cd-9dd5-4c52-94a4-db0378aecc3c, log id: 642dadf4
2014-07-17 05:30:33,403 INFO  [org.ovirt.engine.core.bll.CommandAsyncTask] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] CommandAsyncTask::Adding 
CommandMultiAsyncTasks object for command 7429f506-e57d-45e7-bc66-42bbdc90174c
2014-07-17 05:30:33,404 INFO  
[org.ovirt.eng
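
Before going further it is worth confirming, on the SPM host, whether the
gluster mount itself is present and whether vdsm still reports the domain; a
hedged set of checks (the UUIDs are the ones from the output above):

# is the gluster volume still mounted under /rhev/data-center/mnt/glusterSD ?
mount | grep glusterSD
# does vdsm still see the domain and list it in the pool ?
vdsClient -s 0 getStorageDomainInfo 124e6273-c6f5-471f-88e7-e5d9d37d7385
vdsClient -s 0 getStorageDomainsList 5849b030-626e-47cb-ad90-3ce782d831b3

If the mount is fine but the per-pool symlink is missing, putting the host into
maintenance and activating it again should recreate the links, since they are
rebuilt when the host reconnects to the storage pool.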

Re: [ovirt-users] Guest VM Console Creation/Access using REST API and noVNC

2014-07-17 Thread Shanil S
Hi,

We are waiting for the updates, it will be great if anyone can give the
helpful details.. :)

-- 
Regards
Shanil


On Thu, Jul 17, 2014 at 10:23 AM, Shanil S  wrote:

> Hi,
>
> we have enabled our portal ip address on the engine and hosts firewall but
> still the connection failed. so there should be no firewall issues.
>
> --
> Regards
> Shanil
>
>
> On Wed, Jul 16, 2014 at 3:26 PM, Shanil S  wrote:
>
>> Hi Sven,
>>
>> Regarding the ticket "path", Is it the direct combination of host and
>> port ? suppose if the host is 1.2.3.4 and the port is 5100 then what should
>> be the "path" value ? Is there encryption needs here ?
>>
>>
>> >>so you have access from the browser to the websocket-proxy, network
>> wise? can you ping the proxy?
>> and the websocket proxy can reach the host where the vm runs?
>>
>>  yes.. there should be no firewall issue as we can access the console
>> from ovirt engine portal
>>
>>  Do we need to allow our own portal ip address in the ovirt engine and
>> hypervisiors also ???
>>
>>
>> --
>> Regards
>> Shanil
>>
>>
>> On Wed, Jul 16, 2014 at 3:13 PM, Sven Kieske 
>> wrote:
>>
>>>
>>>
>>> Am 16.07.2014 11:30, schrieb Shanil S:
>>> > We will get the ticket details like host,port and password from the
>>> ticket
>>> > api funcion call but didn't get the "path" value. Will it get it from
>>> the
>>> > ticket details ? i couldn't find out any from the ticket details.
>>>
>>> the "path" is the combination of host and port.
>>>
>>> so you have access from the browser to the websocket-proxy, network
>>> wise? can you ping the proxy?
>>> and the websocket proxy can reach the host where the vm runs?
>>> are you sure there are no firewalls in between?
>>> also you should pay attention on how long your ticket
>>> is valid, you can specify the duration in minutes in your api call.
>>>
>>> --
>>> Mit freundlichen Grüßen / Regards
>>>
>>> Sven Kieske
>>>
>>> Systemadministrator
>>> Mittwald CM Service GmbH & Co. KG
>>> Königsberger Straße 6
>>> 32339 Espelkamp
>>> T: +49-5772-293-100
>>> F: +49-5772-293-333
>>> https://www.mittwald.de
>>> Geschäftsführer: Robert Meyer
>>> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
>>> Oeynhausen
>>> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
>>> Oeynhausen
>>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
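
For anyone following this thread, the two REST calls involved look roughly like
this (a hedged sketch; the engine URL, the credentials and the VM UUID are
placeholders):

# 1) ask the engine for a one-time console ticket (returns the ticket value and expiry)
curl -s -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -d '<action><ticket><expiry>120</expiry></ticket></action>' \
  "https://engine.example.com/api/vms/VM-UUID/ticket"
# 2) read the display element for the host and port the websocket proxy must reach
curl -s -k -u admin@internal:PASSWORD \
  "https://engine.example.com/api/vms/VM-UUID" | grep -A6 '<display>'

Per the note above that the "path" is built from host and port, the exact
encoding the noVNC client needs still depends on how the websocket proxy is set
up, so it is best verified against a console URL generated by the webadmin
portal rather than assumed.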


Re: [ovirt-users] Logical network error

2014-07-17 Thread Moti Asayag


- Original Message -
> From: "Maurice James" 
> To: "users" 
> Sent: Wednesday, July 16, 2014 5:06:23 PM
> Subject: [ovirt-users] Logical network error
> 
> While attempting to remove a logical network from one of my hosts, I'm getting
> the following error.
> Error while executing action Setup Networks: Unexpected exception
> 
> 
> I'm seeing the following error in vdsm.log.
> 

Could you attach the entire vdsm.log and supervdsm.log ?

> 
> Thread-72::ERROR::2014-07-16
> 10:03:46,773::BindingXMLRPC::1086::vds::(wrapper) unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/BindingXMLRPC.py", line 1070, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/BindingXMLRPC.py", line 494, in setupNetworks
> return api.setupNetworks(networks, bondings, options)
> File "/usr/share/vdsm/API.py", line 1297, in setupNetworks
> supervdsm.getProxy().setupNetworks(networks, bondings, options)
> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
> File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
> File "", line 2, in setupNetworks
> File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
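
In the meantime, the files being asked for live on the host itself; a hedged
way to collect the relevant window (the paths are the vdsm defaults) is:

# grab the tail of both logs from around the failed Setup Networks call
tail -n 1000 /var/log/vdsm/vdsm.log > vdsm-snippet.log
tail -n 1000 /var/log/vdsm/supervdsm.log > supervdsm-snippet.log
# a broken pipe towards supervdsm can also mean the supervdsmd service has died,
# so its status is worth checking as well
service supervdsmd status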


Re: [ovirt-users] oVirt Weekly Sync Meeting: July 16, 2014

2014-07-17 Thread Dan Kenigsberg
On Wed, Jul 16, 2014 at 11:26:18AM -0400, Brian Proffitt wrote:
> Minutes:
> http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.html
> Minutes (text): 
> http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.txt
> Log:
> http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.log.html
> 
> =
> #ovirt: oVirt Weekly Sync
> =
> 
> 
> Meeting started by bkp at 14:05:27 UTC. The full logs are available at
> http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.log.html .
> 
> 
> 
> Meeting summary
> ---

>   * 3.5 status sla Everything sla had for vdsm is in master now  (bkp,
> 15:09:41)
>   * 3.5 status sla iotune now needs to be merged to 3.5. Verified and
> unittests written. There are still strong concerns from danken.
> msivak to try to address and get this in the green.  (bkp, 15:09:42)

If there is indeed an exceptional need to rush this feature into the
stable branch at this stage, please invest exceptional testing in it.
I'd appreciate it if someone (preferably from outside the development team)
gave this feature a serious test-day-like examination, looking at its effects
on migration/hibernation in particular.

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users