[ovirt-users] oVirt with Gluster upgraded to 4.2: unable to boot vm with libgfapi

2018-01-01 Thread Gianluca Cecchi
Hello,
A system upgraded from 4.1.7 (with libgfapi not enabled) to 4.2;
3 hosts in a hyperconverged (HC) configuration.

Now I try to enable libgfapi:

Before, a CentOS 6 VM booted with a qemu-kvm command line of this type:

 -drive
file=/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/02731d5e-c222-4697-8f1f-d26a6a23ec79/1836df76-835b-4625-9ce8-0856176dc30c,format=raw,if=none,id=drive-virtio-disk0,serial=02731d5e-c222-4697-8f1f-d26a6a23ec79,cache=none,werror=stop,rerror=stop,aio=thread
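
After enabling libgfapi I would expect the disk to be opened directly through
gfapi instead of the FUSE mount, i.e. a qemu-kvm drive specification roughly of
this form (illustrative only: same volume and image path as above, with the
other drive options trimmed):

 -drive
file=gluster://ovirt01.localdomain.local/data/190f4096-003e-4908-825a-6c231e60276d/images/02731d5e-c222-4697-8f1f-d26a6a23ec79/1836df76-835b-4625-9ce8-0856176dc30c,format=raw,if=none,id=drive-virtio-disk0,cache=none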

Shut down the VM named centos6.

Set up the engine:
[root@ovengine log]# engine-config -s LibgfApiSupported=true
Please select a version:
1. 3.6
2. 4.0
3. 4.1
4. 4.2
4
[root@ovengine log]# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 3.6
LibgfApiSupported: false version: 4.0
LibgfApiSupported: false version: 4.1
LibgfApiSupported: true version: 4.2

Restart the engine:

[root@ovengine log]# systemctl restart ovirt-engine
[root@ovengine log]#

Reconnect to the web admin portal.

Power on the centos6 VM.
I get "Failed to run VM" on all 3 configured hosts:

Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 (User:
admin@internal-authz).
Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 on Host
ovirt02.localdomain.local.
Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 on Host
ovirt03.localdomain.local.
Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 on Host
ovirt01.localdomain.local.

In engine.log
2018-01-01 23:53:34,996+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] Failed in 'CreateBrokerVDS' method,
for vds: 'ovirt01.localdomain.local'; host: 'ovirt01.localdomain.local': 1
2018-01-01 23:53:34,996+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] Command
'CreateBrokerVDSCommand(HostName = ovirt01.localdomain.local,
CreateVDSCommandParameters:{hostId='e5079118-1147-469e-876f-e20013276ece',
vmId='64da5593-1022-4f66-ae3f-b273deda4c22', vm='VM [centos6]'})' execution
failed: 1
2018-01-01 23:53:34,996+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] FINISH, CreateBrokerVDSCommand, log
id: e3bbe56
2018-01-01 23:53:34,996+01 ERROR
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] Failed to create VM: 1
2018-01-01 23:53:34,997+01 ERROR
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] Command 'CreateVDSCommand(
CreateVDSCommandParameters:{hostId='e5079118-1147-469e-876f-e20013276ece',
vmId='64da5593-1022-4f66-ae3f-b273deda4c22', vm='VM [centos6]'})' execution
failed: java.lang.ArrayIndexOutOfBoundsException: 1
2018-01-01 23:53:34,997+01 INFO
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] FINISH, CreateVDSCommand, return:
Down, log id: ab299ce
2018-01-01 23:53:34,997+01 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-2885)
[8d7c68c6-b236-4e76-b7b2-f000e2b07425] Failed to run VM 'centos6':
EngineException: java.lang.RuntimeException:
java.lang.ArrayIndexOutOfBoundsException: 1 (Failed with error ENGINE and
code 5001)

The full engine.log file is here:
https://drive.google.com/file/d/1UZ9dWnGrBaFVnfx1E_Ch52CtYDDtzT3p/view?usp=sharing

The VM fails to start on all 3 hosts, but I don't see any particular error on
them; e.g. on ovirt01 the vdsm.log.1.xz is here:
https://drive.google.com/file/d/1yIlKtRtvftJVzWNlzV3WhJ3DaP4ksQvw/view?usp=sharing

The domain where the VM disk resides seems OK:
[root@ovirt01 vdsm]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 2238c6db-48c5-4071-8929-879cedcf39bf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick2/data
Brick2: ovirt02.localdomain.local:/gluster/brick2/data
Brick3: ovirt03.localdomain.local:/gluster/brick2/data (arbiter)
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt01 vdsm]#
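
One more thing I plan to double-check are the Gluster-side settings that are
often mentioned as needed for libgfapi access; I have not verified whether they
are actually required in this setup, so take the commands below as a sketch
(same "data" volume as above):

# allow clients connecting from non-privileged ports (as qemu does via gfapi)
gluster volume set data server.allow-insecure on
# on each host, add "option rpc-auth-allow-insecure on" to
# /etc/glusterfs/glusterd.vol and then restart glusterd
systemctl restart glusterd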

[root@ovirt01 vdsm]# gluster volume heal data info
Brick 

Re: [ovirt-users] Guest CPU reported 100%

2018-01-01 Thread Alex K
Hi Gianluca,

I am facing two different cases. Let's call the first case the "stuck VM" and
the second the "fake 100% CPU". In both I have verified that I have no storage
issues. Gluster volumes are up and accessible, with other VMs (Windows 10
and Windows Server 2016) running normally. The "stuck VM" case I have
observed more rarely. For the fake 100% CPU case, I suspect it could be
something with the guest agent drivers or something between qemu and Win10,
since I've never seen this with Windows Server 2016 or Linux VMs.
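
Next time I catch the fake 100% CPU case I plan to compare what the engine
reports with what the host itself sees for the qemu process, something like
this (just a rough sketch):

# list qemu processes with the CPU usage actually measured on the host
ps -eo pid,pcpu,etime,cmd | grep [q]emu-kvm
# then watch the suspect PID for a while
top -p <pid>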

Alex

On Mon, Jan 1, 2018 at 9:56 PM, Gianluca Cecchi wrote:

> On Mon, Jan 1, 2018 at 8:43 PM, Alex K  wrote:
>
>> Hi all and Happy New Year!
>>
>> I have an oVirt 4.1.3.5 cluster (running with 3 nodes and shared Gluster
>> storage).
>> I have randomly observed that some Windows 10 64-bit VMs are reported by the
>> engine dashboard at 100% CPU, while when connecting to the VM the CPU
>> utilization is normal.
>> Sometimes, when reported at 100% CPU, I cannot get a console on the VM (the
>> console gives a black screen) and then I have to force shutdown the VM and
>> start it up again. The only warning I see is in the qemu logs of the guest,
>> reporting that CPUs are not present in any NUMA nodes.
>>
>> Any ideas how to tackle this?
>>
>> Thanx,
>> Alex
>>
>>
> Hi Alex,
> I have seen something similar, but on an iSCSI domain environment rather
> than a GlusterFS one, when I got problems with the storage array (in my case
> it was a firmware update that took too long) and the VMs were paused and
> then reactivated again after some seconds.
> For some of them I saw the related qemu-kvm process going to a fixed
> 100% CPU usage and was unable to open the SPICE console (black screen). But
> in my case the VM itself was also stuck: unable to connect to it via network
> or ping.
> I had to force power off the VM and power it on again. Some other VMs
> resumed from the paused state without any apparent problem (apart from the
> clock being out of sync).
> Both the good and bad VMs had the oVirt guest agent running: they were
> CentOS 6.5 VMs.
> Perhaps your situation is something in the middle: verify that you didn't
> have any problem with your storage and that your problematic VM had not been
> paused/resumed because of that.
>
> Gianluca
>


Re: [ovirt-users] Q: GlusterFS and libgfapi

2018-01-01 Thread Gianluca Cecchi
On Mon, Jan 1, 2018 at 9:19 PM, Andrei V  wrote:

> Hi,
>
> Is it correct that, starting from oVirt 4.2, the GlusterFS storage domain is
> used with libgfapi instead of FUSE (based on the info linked below)?
>
> https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/
>
> https://gerrit.ovirt.org/#/c/44061/
>

Actually it was introduced in 4.1.5:
https://www.ovirt.org/release/4.1.5


Libgfapi support during Hyper Converged Self Hosted Engine deployments was
also introduced in 4.1.7:
https://www.ovirt.org/release/4.1.7
https://bugzilla.redhat.com/1471658

So it should be inherited in 4.2 too. There is a bug related to live
storage migration that does not seem to be solved yet:
https://bugzilla.redhat.com/show_bug.cgi?id=1306562

The first link that you provided also gives the way to enable it for
already existing environments that have been upgraded.
I'm going to try it myself on a cluster upgraded from 4.1.7 to 4.2, where I
had not enabled it yet, and will report back.
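
For reference, the enable sequence on the engine should be as simple as the
following (engine-config prompts for the cluster compatibility version; treat
this as a sketch until I have actually tested it):

engine-config -s LibgfApiSupported=true
engine-config -g LibgfApiSupported
systemctl restart ovirt-engine

Already-running VMs would then need to be shut down and started again to pick
up the new disk access method.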

HTH,
Gianluca


[ovirt-users] Q: GlusterFS and libgfapi

2018-01-01 Thread Andrei V
Hi,

Is it correct that, starting from oVirt 4.2, the GlusterFS storage domain is
used with libgfapi instead of FUSE (based on the info linked below)?

https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/

https://gerrit.ovirt.org/#/c/44061/



Re: [ovirt-users] Guest CPU reported 100%

2018-01-01 Thread Gianluca Cecchi
On Mon, Jan 1, 2018 at 8:43 PM, Alex K  wrote:

> Hi all and Happy New Year!
>
> I have an oVirt 4.1.3.5 cluster (running with 3 nodes and shared Gluster
> storage).
> I have randomly observed that some Windows 10 64-bit VMs are reported by the
> engine dashboard at 100% CPU, while when connecting to the VM the CPU
> utilization is normal.
> Sometimes, when reported at 100% CPU, I cannot get a console on the VM (the
> console gives a black screen) and then I have to force shutdown the VM and
> start it up again. The only warning I see is in the qemu logs of the guest,
> reporting that CPUs are not present in any NUMA nodes.
>
> Any ideas how to tackle this?
>
> Thanx,
> Alex
>
>
Hi Alex,
I have seen something similar, but on an iSCSI domain environment rather than
a GlusterFS one, when I got problems with the storage array (in my case it
was a firmware update that took too long) and the VMs were paused and then
reactivated again after some seconds.
For some of them I saw the related qemu-kvm process going to a fixed
100% CPU usage and was unable to open the SPICE console (black screen). But in
my case the VM itself was also stuck: unable to connect to it via network or
ping.
I had to force power off the VM and power it on again. Some other VMs
resumed from the paused state without any apparent problem (apart from the
clock being out of sync).
Both the good and bad VMs had the oVirt guest agent running: they were CentOS
6.5 VMs.
Perhaps your situation is something in the middle: verify that you didn't have
any problem with your storage and that your problematic VM had not been
paused/resumed because of that.

Gianluca


[ovirt-users] Guest CPU reported 100%

2018-01-01 Thread Alex K
Hi all and Happy New Year!

I have an oVirt 4.1.3.5 cluster (running with 3 nodes and shared Gluster
storage).
I have randomly observed that some Windows 10 64-bit VMs are reported by the
engine dashboard at 100% CPU, while when connecting to the VM the CPU
utilization is normal.
Sometimes, when reported at 100% CPU, I cannot get a console on the VM (the
console gives a black screen) and then I have to force shutdown the VM and
start it up again. The only warning I see is in the qemu logs of the guest,
reporting that CPUs are not present in any NUMA nodes.

Any ideas how to tackle this?

Thanx,
Alex


Re: [ovirt-users] Q: 2-Node Failover Setup - NFS or GlusterFS ?

2018-01-01 Thread Yaniv Kaul
On Mon, Jan 1, 2018 at 4:00 PM, Andrei V  wrote:

> On 01/01/2018 10:10 AM, Yaniv Kaul wrote:
>
>
> On Mon, Jan 1, 2018 at 12:50 AM, Andrei V  wrote:
>
>> Hi !
>>
>> I'm installing 2-node failover cluster (2 x Xeon servers with local RAID
>> 5 / ext4 for oVirt storage domains).
>> Now I have a dilemma - use either GlusterFS replica 2 or stick with NFS?
>>
>
> Replica 2 is not good enough, as it can leave you with split brain. It's
> been discussed in the mailing list several times.
> How do you plan to achieve HA with NFS? With drbd?
>
> Hi, Yaniv,
> Thanks a lot for detailed explanation!
>
> I know Replica 2 is not an optimal solution.
> Right now I have only 2 servers with internal RAIDs for the nodes, and by
> the end of this week the system has to be running in whatever condition.
> Maybe it's better to use a local storage domain on each node, set up an
> export domain on the backup node, and back up VMs to the 2nd (backup) node
> at a timed interval?
> It's not a highly available solution, but it is workable.
>
> 4.2 Engine is running on separate hardware.
>>
>
> Is the Engine also highly available?
>
>
> It's a KVM appliance; it could be launched on 2 SuSE servers.
>
> Each node have its own storage domain (on internal RAID).
>>
>
> So some sort of replica 1 with geo-replication between them?
>
>
> Could it be the following?
> 1) Local storage domain on each node
> 2) GlusterFS geo-replication over these directories? Not sure this will
> work.
>
>
>> All VMs must be highly available.
>>
>
> Without shared storage, it may be tricky.
>
>
> It seems that a timed VM backup to the 2nd node is enough for now.
> With the current hardware anything beyond that is too cumbersome to set up.
>

Agreed.
Y.


>
>
>
>> One of the VMs, an accounting/stock control system with a FireBird SQL
>> server on CentOS, is speed-critical.
>>
>
> But is IO the bottleneck? Are you using SSDs / NVMe drives?
> I'm not familiar enough with FireBird SQL server - does it have an
> application-layer replication you might opt to use?
> In that case, you could pass through an NVMe disk and have the application
> layer perform the replication between the nodes.
>
>
>> No load balancing between nodes is necessary. The 2nd is just for backup if
>> the 1st for whatever reason goes up in smoke. All VM disks must be
>> replicated to the backup node in near real time, or in the worst case every
>> 1 - 2 hours. GlusterFS solves this issue, but at a high performance penalty.
>>
>
> The problem with a passive backup is that you never know it'll really work
> when needed. This is why active-active is many times preferred.
> It's also usually more cost-effective - instead of having some HW lying
> around.
>
>
>>
>> From what I read here
>> http://lists.ovirt.org/pipermail/users/2017-July/083144.html
>> GlusterFS performance with oVirt is not very good right now because QEMU
>> uses FUSE instead of libgfapi.
>>
>> What is the optimal way to go?
>>
>
> It's hard to answer without additional details.
> Y.
>
>
>> Thanks in advance.
>> Andrei
>>


Re: [ovirt-users] Q: 2-Node Failover Setup - NFS or GlusterFS ?

2018-01-01 Thread Andrei V
On 01/01/2018 10:10 AM, Yaniv Kaul wrote:
>
> On Mon, Jan 1, 2018 at 12:50 AM, Andrei V wrote:
>
> Hi !
>
> I'm installing a 2-node failover cluster (2 x Xeon servers with local
> RAID 5 / ext4 for oVirt storage domains).
> Now I have a dilemma - use either GlusterFS replica 2 or stick with NFS?
>
>
> Replica 2 is not good enough, as it can leave you with split brain.
> It's been discussed in the mailing list several times.
> How do you plan to achieve HA with NFS? With drbd?
Hi, Yaniv,
Thanks a lot for the detailed explanation!

I know Replica 2 is not an optimal solution.
Right now I have only 2 servers with internal RAIDs for the nodes, and by the
end of this week the system has to be running in whatever condition.
Maybe it's better to use a local storage domain on each node, set up an export
domain on the backup node, and back up VMs to the 2nd (backup) node at a timed
interval?
It's not a highly available solution, but it is workable.

> 4.2 Engine is running on separate hardware.
>
>
> Is the Engine also highly available?

It's a KVM appliance; it could be launched on 2 SuSE servers.

> Each node has its own storage domain (on internal RAID).
>
>
> So some sort of replica 1 with geo-replication between them?

Could it be the following?
1) Local storage domain on each node
2) GlusterFS geo-replication over these directories? Not sure this
will work.
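
For my own reference, a rough sketch of what setting up geo-replication between
the two nodes might look like (volume and host names below are placeholders,
and I have not tested this with oVirt storage domains):

# one-time creation of the pem keys used by geo-replication
gluster system:: execute gsec_create
# create, start and check the session from the master volume to the slave volume
gluster volume geo-replication vmdata node2::vmdata-backup create push-pem
gluster volume geo-replication vmdata node2::vmdata-backup start
gluster volume geo-replication vmdata node2::vmdata-backup status

Geo-replication is asynchronous, so this would only cover the "every 1 - 2
hours" requirement, not real-time failover.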

>
> All VMs must be highly available.
>
>
> Without shared storage, it may be tricky.

It seems that a timed VM backup to the 2nd node is enough for now.
With the current hardware anything beyond that is too cumbersome to set up.

>
> One of the VMs, an accounting/stock control system with a FireBird SQL
> server on CentOS, is speed-critical.
>
>
> But is IO the bottleneck? Are you using SSDs / NVMe drives?
> I'm not familiar enough with FireBird SQL server - does it have an
> application-layer replication you might opt to use?
> In that case, you could pass through an NVMe disk and have the
> application layer perform the replication between the nodes.
>  
>
> No load balancing between nodes is necessary. The 2nd is just for backup if
> the 1st for whatever reason goes up in smoke. All VM disks must be replicated
> to the backup node in near real time, or in the worst case every 1 - 2 hours.
> GlusterFS solves this issue, but at a high performance penalty.
>
>
> The problem with a passive backup is that you never know it'll really
> work when needed. This is why active-active is many times preferred.
> It's also usually more cost-effective - instead of having some HW lying
> around.
>  
>
>
> From what I read here
> http://lists.ovirt.org/pipermail/users/2017-July/083144.html
> 
> GlusterFS performance with oVirt is not very good right now
> because QEMU
> uses FUSE instead of libgfapi.
>
> What is optimal way to go on ?
>
>
> It's hard to answer without additional details.
> Y.
>  
>
> Thanks in advance.
> Andrei
>


Re: [ovirt-users] Error importing Windows Server 2016 VMs from vCenter 6.5

2018-01-01 Thread Yaniv Kaul
On Mon, Jan 1, 2018 at 12:27 PM, Gianluca Cecchi wrote:

> On Sun, Dec 31, 2017 at 4:48 PM, Matthew Hoberg wrote:
>
>> I am trying to import VMs from VMware 6.5 into oVirt 4.2 and it is
>> failing. The only message I get in the Engine interface is that it failed
>> to convert. All my 2016 VMs are running in EFI mode. I built a fresh 2016
>> VM in MBR mode and that imported fine. VMware is a 2 host cluster running
>> vCenter. oVirt 4.2 is a 2 host cluster running a hosted engine. Both
>> clusters connect to iSCSI datastores. Is there anything else I have to do
>> to be able to import an EFI based VM?
>>
>>
>>
>> ~Matt
>>
>>
> Hello,
> I see this bug is still in "New" status, for the general feature of booting
> a VM via EFI, so I think it could be related to your conversion problems:
> https://bugzilla.redhat.com/show_bug.cgi?id=1327846
> You can add comments there to help speed up enabling it, perhaps.
>

And virt-v2v has to support this conversion as well.
Y.


>
>
> Gianluca
>


Re: [ovirt-users] Error importing Windows Server 2016 VMs from vCenter 6.5

2018-01-01 Thread Gianluca Cecchi
On Sun, Dec 31, 2017 at 4:48 PM, Matthew Hoberg wrote:

> I am trying to import VMs from VMware 6.5 into oVirt 4.2 and it is
> failing. The only message I get in the Engine interface is that it failed
> to convert. All my 2016 VMs are running in EFI mode. I built a fresh 2016
> VM in MBR mode and that imported fine. VMware is a 2 host cluster running
> vCenter. oVirt 4.2 is a 2 host cluster running a hosted engine. Both
> clusters connect to iSCSI datastores. Is there anything else I have to do
> to be able to import an EFI based VM?
>
>
>
> ~Matt
>
>
Hello,
I see this bug is still in "New" status, for the general feature of booting a
VM via EFI, so I think it could be related to your conversion problems:
https://bugzilla.redhat.com/show_bug.cgi?id=1327846
You can add comments there to help speed up enabling it, perhaps.

Gianluca


[ovirt-users] New post on oVirt blog: Customizing the host deploy process

2018-01-01 Thread Yaniv Kaul
A new oVirt blog post has been published on how you can customize the host
deploy process in oVirt 4.2 using Ansible. This is a very powerful feature
which allows extending the regular host deploy process with additional
post-deployment tasks, such as package installation, service configuration,
etc.

See https://ovirt.org/blog/2017/12/host-deploy-customization/ for the full
post.

Thanks,
Y.


Re: [ovirt-users] Q: 2-Node Failover Setup - NFS or GlusterFS ?

2018-01-01 Thread Yaniv Kaul
On Mon, Jan 1, 2018 at 12:50 AM, Andrei V  wrote:

> Hi !
>
> I'm installing a 2-node failover cluster (2 x Xeon servers with local RAID
> 5 / ext4 for oVirt storage domains).
> Now I have a dilemma - use either GlusterFS replica 2 or stick with NFS?
>

Replica 2 is not good enough, as it can leave you with split brain. It's
been discussed in the mailing list several times.
How do you plan to achieve HA with NFS? With drbd?


>
> 4.2 Engine is running on separate hardware.
>

Is the Engine also highly available?


> Each node has its own storage domain (on internal RAID).
>

So some sort of replica 1 with geo-replication between them?


>
> All VMs must be highly available.
>

Without shared storage, it may be tricky.

> One of the VMs, an accounting/stock control system with a FireBird SQL
> server on CentOS, is speed-critical.
>

But is IO the bottleneck? Are you using SSDs / NVMe drives?
I'm not familiar enough with FireBird SQL server - does it have an
application-layer replication you might opt to use?
In that case, you could pass through an NVMe disk and have the application
layer perform the replication between the nodes.


> No load balancing between nodes is necessary. The 2nd is just for backup if
> the 1st for whatever reason goes up in smoke. All VM disks must be replicated
> to the backup node in near real time, or in the worst case every 1 - 2 hours.
> GlusterFS solves this issue, but at a high performance penalty.
>

The problem with a passive backup is that you never know it'll really work
when needed. This is why active-active is many times preferred.
It's also usually more cost-effective - instead of having some HW lying
around.


>
> From what I read here
> http://lists.ovirt.org/pipermail/users/2017-July/083144.html
> GlusterFS performance with oVirt is not very good right now because QEMU
> uses FUSE instead of libgfapi.
>
> What is the optimal way to go?
>

It's hard to answer without additional details.
Y.


> Thanks in advance.
> Andrei
>