Re: [ovirt-users] oVirt newbie

2015-05-15 Thread Douglas Schilling Landgraf

Hello Fabio,

How are you?

On 05/15/2015 04:27 PM, Fábio Coelho wrote:

Hi Everyone,

I'm a SysAdmin and long-time VMware user, willing to advance in open source
alternatives. I hope I can help and be helped here :D.


For sure. Feel free to ask; there are a lot of cases of companies
migrating from VMware to oVirt/RHEV.




Currently, I'm testing a setup with ovirt-node-el7 and a CentOS 6
engine server, and at the moment all is going fine!


Thanks for your feedback, we appreciate it. Looking forward to more
good news from your side. :)


Some case studies:
http://www.ovirt.org/Category:Case_studies


Best regards!
--
Cheers
Douglas


[ovirt-users] oVirt newbie

2015-05-15 Thread Fábio Coelho
Hi Everyone, 

I'm a SysAdmin and long-time VMware user, willing to advance in open source
alternatives. I hope I can help and be helped here :D.

Currently, I'm testing a setup with ovirt-node-el7 and a CentOS 6 engine server,
and at the moment all is going fine!

Cheers, 

Fábio Coelho 




[ovirt-users] Possible SELinux problems with ovirt synchronizing networks

2015-05-15 Thread Jeremy Utley
Hello all!

We're currently running oVirt 3.5 on CentOS 7 and running into a little
problem. All my nodes are currently showing the ovirtmgmt network as
unsynchronized. When I try to force them to sync, it fails. Looking at the
/var/log/vdsm/supervdsm.log file on one of the nodes, it looks like it has
to do with SELinux. See:

http://pastebin.com/NX7yetVW

This contains a dump of the supervdsm.log file from when I tried to force
synchronization. Judging from what I'm seeing, after VDSM writes the new
network configuration files to /etc/sysconfig/network-scripts/ifcfg-*, it
attempts to run a selinux.restorecon function against those files. Since
we disable SELinux by default on all our servers, this action is failing
with Errno 61 (see lines 66-71 and 86-91 in the above-mentioned pastebin).
Is this normal? Is oVirt expected to run with SELinux enabled? Or am I
misinterpreting this log output?
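For context, the failing call boils down to libselinux's restorecon applied to
each rewritten ifcfg file. A minimal sketch of that pattern (not the actual
vdsm code), including the kind of guard that would skip relabeling when
SELinux is disabled:

import glob
import selinux  # libselinux-python bindings

def restore_ifcfg_contexts():
    # Only relabel when SELinux is actually enabled; otherwise restorecon
    # fails, as in the Errno 61 trace in the pastebin above.
    if not selinux.is_selinux_enabled():
        return
    for path in glob.glob('/etc/sysconfig/network-scripts/ifcfg-*'):
        selinux.restorecon(path)

restore_ifcfg_contexts()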

Thanks for any help or advice you can give me!

Jeremy


Re: [ovirt-users] Live migration qemu 2.1.2 -> 2.1.3: Unknown savevm section

2015-05-15 Thread Markus Stockhausen
> From: Markus Stockhausen
> Sent: Friday, April 10, 2015 20:51
> To: users@ovirt.org
> Subject: Live migration qemu 2.1.2 -> 2.1.3: Unknown savevm section
> 
> Hi,
> 
> I don't know what the best place for the following question is,
> so I'm starting with the oVirt mailing list.
> 
> We are using oVirt with FC20 nodes with virt-preview enabled.
> Thus we are running qemu 2.1.2. Everything is working smoothly,
> including live merge.
> 
> For testing purposes we compiled qemu 2.1.3 from Fedora koji
> and updated one of the hosts. Trying to migrate a running VM to
> the new host fails with the message
> 
> Unknown savevm section or instance 'kvm-tpr-opt' 0
> 
> I guess there is some incompatibility between the versions, but the qemu
> git history between 2.1.2 and 2.1.3 gives no hints about the reason.
> 
> Any ideas, or is that migration scenario not supported at all?

The qemu VENOM vulnerability brought me back to this topic. Just in case
someone is interested: the following patch breaks live migration
between 2.1.2 and 2.1.3.

pc: Fix disabling of vapic for compat PC models
http://git.qemu.org/?p=qemu.git;a=commit;h=8100812711ea480119f9796bd6c0895e6ac85d0f

I dropped this one during the rebuild and now everything works fine again.

Markus



Re: [ovirt-users] status of ovirt 3.5.1 with centos 7.1

2015-05-15 Thread Chris Adams
Once upon a time, Chris Adams  said:
> The power management appears to be a bug between vdsm and the fence
> agents.  Are you using fence_ipmilan?  It seems to not be seeing
> options.

Late follow-up, but I finally had some time to look at this.  Between
RHEL 7.0 and RHEL 7.1 (fence-agents 4.0.2 and 4.0.11), fence_ipmilan was
replaced, going from a compiled C program to a Python script.  In the
process, boolean option handling appears to have changed when reading
options from standard input (which is how vdsm passes them).

In the old version of fence_ipmilan, just setting "lanplus" as an option in
the oVirt host power management screen was sufficient, but in the new
version "lanplus=1" is required.  I changed all my hosts to use
"lanplus=1" and now the power management test succeeds.

I've filed a RH bug against fence-agents, as IMHO this is a regression.

https://bugzilla.redhat.com/show_bug.cgi?id=1222098

-- 
Chris Adams 


Re: [ovirt-users] missing disk after storage domain expansion

2015-05-15 Thread Andrea Ghelardi
After querying the hosted engine with the suggested command
https://pisa-ion-ovirtm-01/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks;unregistered
it indeed shows an (unregistered?) disk. The XML tags did not survive in this
mail; the values that did are:

    name / alias:   hertz_disk5
    description:    disk for SYBASE installation, on SAN shared storage
    image id:       4ab070c0-fb16-452d-8521-4ff0b004aef3
    size fields:    225485783040 (listed three times)
    status:         ok
    interface:      ide
    format:         raw
    boolean flags:  five, all false


I'm waiting for any comments on the two OVF disks, or any other advice, before
proceeding with the register command
(testing in a production environment, hm).
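For reference, the register call described on the ImportStorageDomain feature
page can be scripted against the REST API. A minimal sketch (Python with
requests; the credentials are placeholders, certificate verification is
skipped for brevity, and the endpoint and payload should be double-checked
against the feature page before touching a production system):

import requests

ENGINE = "https://pisa-ion-ovirtm-01/ovirt-engine/api"
AUTH = ("admin@internal", "password")  # placeholder credentials
SD_ID = "7a48fe46-2112-40a4-814f-24d74c760b2d"
DISK_ID = "4ab070c0-fb16-452d-8521-4ff0b004aef3"

# List the unregistered disks on the storage domain (same query as above).
resp = requests.get("%s/storagedomains/%s/disks;unregistered" % (ENGINE, SD_ID),
                    auth=AUTH, verify=False)
print(resp.text)

# Register the disk by POSTing its id back to the same collection.
resp = requests.post("%s/storagedomains/%s/disks;unregistered" % (ENGINE, SD_ID),
                     data='<disk id="%s"/>' % DISK_ID,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=False)
print(resp.status_code, resp.text)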

Cheers
AG

-Original Message-
From: Andrea Ghelardi [mailto:a.ghela...@iontrading.com]
Sent: Friday, May 15, 2015 12:30 PM
To: Maor Lipchuk
Cc: users@ovirt.org
Subject: RE: [ovirt-users] missing disk after storage domain expansion

Thank you for your reply!
I'm unsure whether the disk contained any snapshots; I do not think so.
Is the register action safe to do on a production system? I wouldn't want to
mess with any of my existing running servers.

Here are the relevant log entries from those dates:
./engine.log-20150501.gz:2015-04-29 16:10:40,868  : disk creation
("hertz_disk5", newImageId = 4ab070c0-fb16-452d-8521-4ff0b004aef3)
./engine.log-20150512.gz:2015-05-04 17:22:42,691  : last occurrence of the
image id

Please note, from
./engine.log-20150512.gz  2015-05-11 12:07:16,719 : the creation of two OVF
store disks (??) on the same storage domain 7a48fe46-2112-40a4-814f-24d74c760b2d
right after the volume expansion.

In instructions.txt is what I did to enlarge the volume (and lose the
disk); the guide is informational only, I followed the steps on a different
server.

Here are some cats to thank you in advance:
https://www.youtube.com/watch?v=nfTY41-zC7Y
AG


-Original Message-
From: Maor Lipchuk [mailto:mlipc...@redhat.com]
Sent: Wednesday, May 13, 2015 11:02 PM
To: Andrea Ghelardi
Cc: users@ovirt.org
Subject: Re: [ovirt-users] missing disk after storage domain expansion

Hi Andrea,

First of all, the issue sounds quite severe; can you please attach the engine
logs so we can try to figure out how that happened?
Second, does this disk contain any snapshots?
If not, can you try to register it back? (See
http://www.ovirt.org/Features/ImportStorageDomain#Register_an_unregistered_disk)


Regards,
Maor



- Original Message -
> From: "Andrea Ghelardi" 
> To: users@ovirt.org
> Sent: Monday, May 11, 2015 12:01:17 AM
> Subject: Re: [ovirt-users] missing disk after storage domain expansion
>
>
>
> Ok, so,
>
> After _ a lot _ of unsuccessful approaches, I finally connected to the
> PostgreSQL DB directly.
>
> Browsing the tables I found “unregistered_ovf_of_entities”, where there
> is a reference to the missing disk:
>
>
>
>
>
> @ovf:diskId:4ab070c0-fb16-452d-8521-4ff0b004aef3
>
> @ovf:size:210
>
> @ovf:actual_size:210
>
> @ovf:vm_snapshot_id:2e24b255-bb84-4284-8785-e2a042045882
>
> @ovf:fileRef:16736ce0-a9df-410f-9f29-3a28364cdd41/4ab070c0-fb16-452d-8
> 521-4ff0b004aef3
>
> @ovf:format: http://www.vmware.com/specifications/vmdk.html#sparse
>
> @ovf:volume-format:RAW
>
> @ovf:volume-type:Preallocated
>
> @ovf:disk-interface:VirtIO
>
> @ovf:boot:false
>
> @ovf:disk-alias:hertz_disk5
>
> @ovf:disk-description:disk for SYBASE installation, on SAN shared
> storage
>
> @ovf:wipe-after-delete:false
>
>
>
> However, I’ve been unable to find any other helpful details.
>
> I guess the disk is not recoverable at this point?
>
> Any guru with good oVirt DB knowledge willing to give me some advice?
>
>
>
> Thanks as usual
>
> AG
>
>
>
>
> From: Andrea Ghelardi [mailto: a.ghela...@iontrading.com ]
> Sent: Thursday, May 07, 2015 6:08 PM
> To: ' users@ovirt.org '
> Subject: missing disk after storage domain expansion
>
>
>
>
> Hi gentlemen,
>
>
>
> I recently found an error on one of my storage domains: it was complaining
> about no free space, but the VM was running and the disk was operational.
>
> Since I needed to perform some maintenance on the VM, I shut it down,
> and at restart the VM couldn’t boot up properly.
>
> I checked the VM via console and a disk was missing. I edited fstab (luckily
> this disk was not root but heck! It had a Sybase DB on it!) and
> restarted the VM; this time it was OK.
>
>
>
> Since the disk resides on the datastore with no space, I expanded the
> iSCSI LUN, then refreshed multipath on the hosts, then resized the PVs, and
> now oVirt is showing the correct size (the logs no longer complain about no
> free space).
>
>
>
> BUUUT
>
>
>
> Now the disk is missing. It is not shown anymore in the Disks tab or anywhere
> else.
>
> The problem is that the storage shows 214 GB of occupancy (the size of the
> missing disk), so the data is there but I cannot find it anymore.
>
>
>
> The logs show the original disk creation, errors from the lack of space, the
> refresh of the storage size, and then no more references to the disk.
>
>
>
> What can I do to find those missing ~210GBs?
>
>
>
> Cheers
>
> Andrea Ghelardi
>
>
>

Re: [ovirt-users] Problems with installing second host with hosted engine

2015-05-15 Thread Simone Tiraboschi


- Original Message -
> From: "Tigran Baluyan" 
> To: users@ovirt.org
> Sent: Tuesday, May 5, 2015 9:54:10 AM
> Subject: [ovirt-users] Problems with installing second host with hosted   
> engine
> 
> Hi,
> 
> I've stumbled across a very strange bug and been trying to work around it for
> a couple of days with no luck.
> 
> So my setup is:
> Host 1: centos 6.6 with ovirt 3.5 deployed as hosted engine. Had a few
> problems while deploying it, but eventually got it up and running.
> 
> Host 2: same centos 6.6, same hardware. While executing hosted-engine
> --deploy, I am getting this error:
> 
> [ INFO ] Stage: Setup validation
> [WARNING] Host name hyp-A2 has no domain suffix
> [WARNING] Failed to resolve hyp-A2 using DNS, it can be resolved only locally
> [ ERROR ] Failed to execute stage 'Setup validation': [Errno 2] No such file
> or directory:
> '/rhev/data-center/mnt/192.168.1.59:_media_ssd__raid10_ovirt_data__fast/f617dfce-4ac8-495b-a0f3-4f85ce39df27/ha_agent/hosted-engine.metadata'
> 
> My storage machine is a separate host running RHEL 6.4. /etc/exports looks
> like this:
> 
> /media/ssd_raid10/ *(rw,sync,all_squash)
> /media/ssd_raid10/ovirt
> *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)
> *some other shares*
> 
> Now here is the strange part: after repeatedly bashing my head against the
> wall I finally noticed the difference: on the first host it mounts as:
> 
> 
> 192.168.1.59:/media/ssd_raid10/ovirt/data_fast on
> /rhev/data-center/mnt/192.168.1.59:_media_ssd__raid10_ovirt_data__fast_
> 
> While on the second host it mounts as:
> 
> 192.168.1.59:/media/ssd_raid10/ovirt/data_fast on
> /rhev/data-center/mnt/192.168.1.59:_media_ssd__raid10_ovirt_data__fast
> 
> (note the lack of underscore in the end)

It's a known bug, at least on master, while it seems that it doesn't happen on 3.5.1.
We are investigating it:
https://bugzilla.redhat.com/show_bug.cgi?id=1215967

Did you face it on 3.5.2?
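For reference, vdsm builds the local mount directory by flattening the remote
path; the sketch below is a simplified illustration (not the actual vdsm code)
of why the trailing slash in the configured path turns into a trailing
underscore:

def mount_dir(remote_path, prefix="/rhev/data-center/mnt"):
    # Existing underscores are doubled, then every '/' becomes '_',
    # so a trailing '/' in the configured path becomes a trailing '_'.
    flat = remote_path.replace("_", "__").replace("/", "_")
    return "%s/%s" % (prefix, flat)

print(mount_dir("192.168.1.59:/media/ssd_raid10/ovirt/data_fast"))
# -> /rhev/data-center/mnt/192.168.1.59:_media_ssd__raid10_ovirt_data__fast
print(mount_dir("192.168.1.59:/media/ssd_raid10/ovirt/data_fast/"))
# -> /rhev/data-center/mnt/192.168.1.59:_media_ssd__raid10_ovirt_data__fast_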

> That is what caused the problem - this "hosted-engine.metadata" file is
> actually a symlink. And because of different mount points on the second host
> it was pointing to nowhere.
> 
> So I figured, maybe I should add / to the end of the nfs path. So I tried it
> like this:
> 
> Please specify the storage you would like to use (iscsi, nfs3, nfs4)[nfs3]:
> Please specify the full shared storage connection path to use (example:
> host:/path): 192.168.1.59:/media/ssd_raid10/ovirt/data_fast/
> [ INFO ] Installing on first host
> Please provide storage domain name. [hosted_storage]:
> 
> As you can see, in this case it doesn't detect that it already has a hosted
> engine installed.
> 
> So, I'm out of ideas how to make it work. There is clearly something unstable
> in the installation scripts, but does anyone have any idea how to make it
> work?
> 
> PS: Sorry for my crude English, not really a native speaker.
> 
> Thanks,
> Tigran.
> 
> 
> 


[ovirt-users] R: NFS export domain still remain in "Preparing for maintenance"

2015-05-15 Thread NUNIN Roberto
Apologies for spamming: when I try to put NFS_Export into maintenance mode, in
the engine GUI messages area I can see:

2015-May-15, 09:22
Storage Domain NFS_Export (Data Center x) was deactivated.

but the status of NFS_Export still remains “Preparing For Maintenance” and the
Detach function is disabled.

BR
RN

> -----Original Message-----
> From: NUNIN Roberto
> Sent: Friday, May 15, 2015 09:22
> To: 'Maor Lipchuk'
> Cc: users@ovirt.org
> Subject: R: [ovirt-users] NFS export domain still remain in "Preparing for
> maintenance"
>
> Hi Maor
>
> Thanks for answering.
>
> Probably my description wasn't so clear; I'll try to explain better.
>
> We have configured one storage domain as an NFS export (exported from a server
> that isn't part of the oVirt installation), to be used to move exported VMs
> between one DC and another.
>
> The export of the VMs ends correctly but, when we try to put the NFS export
> storage in maintenance mode (not the hosts) before detaching it from the
> current DC and, subsequently, attaching it to the DC where I want to import
> the VMs, the process remains hung indefinitely in the “Preparing for
> maintenance” phase.
>
> I'm sure that the VM export process ended correctly, judging by the logs in
> the engine GUI and also by the data in the NFS export.
>
> Because only one export storage domain is allowed per DC, I'm not able to
> move VMs between DCs.
>
> This is the behavior.
>
> Best regards
>
> Roberto
>
> > -----Original Message-----
> > From: Maor Lipchuk [mailto:mlipc...@redhat.com]
> > Sent: Wednesday, May 13, 2015 23:18
> > To: NUNIN Roberto
> > Cc: users@ovirt.org
> > Subject: Re: [ovirt-users] NFS export domain still remain in "Preparing for
> > maintenance"
> >
> > Hi NUNIN,
> >
> > I'm not sure that I clearly understood the problem.
> > You wrote that your NFS export is attached to a 6.6 cluster, though a
> > cluster is mainly an entity which contains Hosts.
> >
> > If it is the Host that was preparing for maintenance, then it could be that
> > there are VMs running on that Host which are currently in live migration.
> > In that case you could either manually migrate those VMs, shut them down,
> > or simply move the Host back to active.
> > Is that indeed the issue? If not, can you elaborate a bit more, please?
> >
> > Thanks,
> > Maor
> >
> > - Original Message -
> > > From: "NUNIN Roberto" <roberto.nu...@comifar.it>
> > > To: users@ovirt.org
> > > Sent: Tuesday, May 12, 2015 5:17:39 PM
> > > Subject: [ovirt-users] NFS export domain still remain in "Preparing for
> > > maintenance"
> > >
> > > Hi all
> > >
> > > We are using oVirt engine 3.5.1-0.0 on CentOS 6.6.
> > >
> > > We have two DCs. One with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
> > > the other vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6.
> > >
> > > No hosted-engine; it runs on a dedicated VM, outside oVirt.
> > >
> > > Behavior: when we try to put the NFS export that is currently active and
> > > attached to the 6.6 cluster, used to move VMs from one DC to the other,
> > > into maintenance, it remains indefinitely in the “Preparing for
> > > maintenance” phase.
> > >
> > > No DNS resolution issue in place. All parties involved are resolved
> > > directly and via reverse resolution.
> > >
> > > I've read about the issue with el7 and the IPv6 bug, but here we have the
> > > problem on CentOS 6.6 hosts.
> > >
> > > Any idea/suggestion/further investigation?
> > >
> > > Can we reinitialize the NFS export in some way? Only by erasing the content?
> > >
> > > Thanks in advance for any suggestion.
> > >
> > > Roberto Nunin
> > > Italy

[ovirt-users] R: NFS export domain still remain in "Preparing for maintenance"

2015-05-15 Thread NUNIN Roberto
Hi Maor

Thanks for answering.

Probably my description wasn't so clear; I'll try to explain better.

We have configured one storage domain as an NFS export (exported from a server
that isn't part of the oVirt installation), to be used to move exported VMs
between one DC and another.

The export of the VMs ends correctly but, when we try to put the NFS export
storage in maintenance mode (not the hosts) before detaching it from the
current DC and, subsequently, attaching it to the DC where I want to import
the VMs, the process remains hung indefinitely in the “Preparing for
maintenance” phase.

I'm sure that the VM export process ended correctly, judging by the logs in
the engine GUI and also by the data in the NFS export.

Because only one export storage domain is allowed per DC, I'm not able to
move VMs between DCs.

This is the behavior.
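For what it's worth, the same deactivation can also be requested through the
REST API, which may report a more explicit error than the GUI. A minimal
sketch (Python with requests; the engine URL, credentials, and IDs are
placeholders, and the action path should be checked against your API version):

import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder
AUTH = ("admin@internal", "password")                    # placeholder
DC_ID = "datacenter-uuid"                                # placeholder
SD_ID = "export-domain-uuid"                             # placeholder

# Ask the engine to deactivate (put into maintenance) the attached
# export storage domain; the response body carries any failure reason.
resp = requests.post(
    "%s/datacenters/%s/storagedomains/%s/deactivate" % (ENGINE, DC_ID, SD_ID),
    data="<action/>", auth=AUTH, verify=False,
    headers={"Content-Type": "application/xml"})
print(resp.status_code)
print(resp.text)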

Best regards


Roberto

> -----Original Message-----
> From: Maor Lipchuk [mailto:mlipc...@redhat.com]
> Sent: Wednesday, May 13, 2015 23:18
> To: NUNIN Roberto
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] NFS export domain still remain in "Preparing for
> maintenance"
>
> Hi NUNIN,
>
> I'm not sure that I clearly understood the problem.
> You wrote that your NFS export is attached to a 6.6 cluster, though a cluster
> is mainly an entity which contains Hosts.
>
> If it is the Host that was preparing for maintenance, then it could be that
> there are VMs running on that Host which are currently in live
> migration.
> In that case you could either manually migrate those VMs, shut them down,
> or simply move the Host back to active.
> Is that indeed the issue? If not, can you elaborate a bit more, please?
>
>
> Thanks,
> Maor
>
>
>
> - Original Message -
> > From: "NUNIN Roberto" 
> > To: users@ovirt.org
> > Sent: Tuesday, May 12, 2015 5:17:39 PM
> > Subject: [ovirt-users] NFS export domain still remain in "Preparing for
> > maintenance"
> >
> >
> >
> > Hi all
> >
> >
> >
> > We are using oVirt engine 3.5.1-0.0 on CentOS 6.6.
> >
> > We have two DCs. One with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
> > the other vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6.
> >
> > No hosted-engine; it runs on a dedicated VM, outside oVirt.
> >
> > Behavior: when we try to put the NFS export that is currently active and
> > attached to the 6.6 cluster, used to move VMs from one DC to the other,
> > into maintenance, it remains indefinitely in the “Preparing for
> > maintenance” phase.
> >
> > No DNS resolution issue in place. All parties involved are resolved
> > directly and via reverse resolution.
> >
> > I've read about the issue with el7 and the IPv6 bug, but here we have the
> > problem on CentOS 6.6 hosts.
> >
> > Any idea/suggestion/further investigation?
> >
> > Can we reinitialize the NFS export in some way? Only by erasing the content?
> >
> > Thanks in advance for any suggestion.
> >
> > Roberto Nunin
> > Italy
