[ovirt-users] OS versions?

2016-01-21 Thread gflwqs gflwqs
Hi list,

Is it supported/recommended to run CentOS 7.2 hosts on oVirt 3.5.6.2?

Thanks!

Christian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-21 Thread Dan Yasny
inline

On Thu, Jan 21, 2016 at 7:54 AM, Pavel Gashev  wrote:

> Hello,
>
> First of all I would like to ask if anybody has any experience with using
> Microsoft NFS server as a storage domain.
>
>
I have used one as an ISO domain for years. It wasn't great, but it was
good enough. Never as a data domain, though.


> The main issue with MS NFS is NTFS :) NTFS doesn't support sparse files.
> Technically it's possible by enabling NTFS compression, but that has bad
> performance on huge files, which is our case. Also, there is no option in the
> oVirt web interface to use COW format on NFS storage domains.
>
> Since it looks like oVirt doesn't support MS NFS, I decided to migrate all
> my VMs off MS NFS to other storage. And I hit a bug. Live storage
> migration *silently* *corrupts* *data* if you migrate a disk from an MS NFS
> storage domain. So if you shut down the just-migrated VM and check its
> filesystem, you find that it has a lot of unrecoverable errors.
>
> The symptoms are as follows:
> 1. It corrupts data if you migrate a disk from MS NFS to Linux NFS.
> 2. It corrupts data if you migrate a disk from MS NFS to iSCSI.
> 3. There is no corruption if you migrate from Linux NFS to iSCSI and vice
> versa.
> 4. There is no corruption if you migrate from anywhere to MS NFS.
> 5. Data corruption happens after the 'Auto-generated for Live Storage
> Migration' snapshot. So if you roll back the snapshot, you see an
> absolutely clean filesystem.
> 6. It doesn't depend on the SPM: data is corrupted whether the SPM is on the
> same host or another.
> 7. There are no error messages in the vdsm/qemu/system logs.
>
> Yes, of course I could migrate from MS NFS with downtime – that's not an
> issue. The issue is that oVirt silently corrupts data under some
> circumstances.
>
> Could you please help me understand the reason for the data corruption?
>
> vdsm-4.17.13-1.el7.noarch
> qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
> libvirt-daemon-1.2.17-13.el7_2.2.x86_64
> ovirt-engine-backend-3.6.1.3-1.el7.centos.noarch
>
> Thank you
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Recent hosts on older oVirt DC setup?

2016-01-21 Thread Sven Kieske
On 21/01/16 07:48, Nir Soffer wrote:
> 3.6 must work with engines > 3.3.

3.6 > 3.3 or 3.6 >= 3.3 ?



-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +495772 293100
F: +495772 29
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
Oeynhausen



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Using Microsoft NFS server as storage domain

2016-01-21 Thread Pavel Gashev
Hello,

First of all I would like to ask if anybody has any experience with using
Microsoft NFS server as a storage domain.

The main issue with MS NFS is NTFS :) NTFS doesn't support sparse files.
Technically it's possible by enabling NTFS compression, but that has bad
performance on huge files, which is our case. Also, there is no option in the
oVirt web interface to use COW format on NFS storage domains.
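
For reference, sparseness is easy to demonstrate from a Linux client (the
file name here is just an example; on an MS NFS export backed by NTFS the
full size ends up allocated, as described above):

  qemu-img create -f raw test.img 10G   # create a 10G raw image
  ls -lh test.img                       # apparent size: 10G
  du -h test.img                        # actual usage: near zero on a sparse-capable filesystem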

Since it looks like oVirt doesn't support MS NFS, I decided to migrate all my
VMs off MS NFS to other storage. And I hit a bug. Live storage migration
silently corrupts data if you migrate a disk from an MS NFS storage domain. So
if you shut down the just-migrated VM and check its filesystem, you find that
it has a lot of unrecoverable errors.

The symptoms are as follows:
1. It corrupts data if you migrate a disk from MS NFS to Linux NFS.
2. It corrupts data if you migrate a disk from MS NFS to iSCSI.
3. There is no corruption if you migrate from Linux NFS to iSCSI and vice versa.
4. There is no corruption if you migrate from anywhere to MS NFS.
5. Data corruption happens after the 'Auto-generated for Live Storage Migration'
snapshot. So if you roll back the snapshot, you see an absolutely clean
filesystem.
6. It doesn't depend on the SPM: data is corrupted whether the SPM is on the
same host or another.
7. There are no error messages in the vdsm/qemu/system logs.

Yes, of course I could migrate from MS NFS with downtime – that's not an issue.
The issue is that oVirt silently corrupts data under some circumstances.

Could you please help me understand the reason for the data corruption?

vdsm-4.17.13-1.el7.noarch
qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
libvirt-daemon-1.2.17-13.el7_2.2.x86_64
ovirt-engine-backend-3.6.1.3-1.el7.centos.noarch

Thank you


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] slow browsing in ovirt3.6

2016-01-21 Thread Nathanaël Blanchet
I'm working with ovirt-shell as a workaround; it's less intuitive but very
efficient :)


On 21/01/2016 08:15, Nicolas Ecarnot wrote:

On 21/01/2016 07:54, Nir Soffer wrote:

On Thu, Jan 21, 2016 at 8:36 AM, alireza sadeh seighalan wrote:

hi everyone

I don't want to know why browsing in oVirt 3.6's (3.6.2) admin console is
slow; I use Firefox 42.0 (on a Windows system), and it is a little faster in
Google Chrome, but it is slow too. How can I solve this problem? Thanks in
advance.


Maybe it is this issue?
https://bugzilla.redhat.com/1264809

It was closed because we could not reproduce it on later versions.


I just added my 2 cents on this bug, because I have been silently suffering
this same slooowwnesss for months.

Is it possible to re-open this bug and help me help you debug it?



--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm IOProcess request queue full

2016-01-21 Thread Nir Soffer
On Thu, Jan 21, 2016 at 9:09 AM, alireza sadeh seighalan wrote:
> hi everyone
>
> when I run 'systemctl status vdsmd', it shows me these strange messages:
>
> vdsm IOProcess  WARNING (32)  request queue full
> vdsm IOProcess  WARNING (33)  request queue full
> vdsm IOProcess  WARNING (34)  request queue full
> .
> .
> .
> vdsm IOProcess  WARNING (41)  request queue full
>
> is this a normal situation?

No. This means requests are stuck in the ioprocess queues: either your
storage is responding too slowly, there are networking issues, or there is a
bug in ioprocess.

Please open a bug, and attach:
- vdsm logs
- /var/log/messages
- output of nfsstat when you see these warnings

Please also check your storage server - do you see anything interesting
in the logs or in the health information available on your storage?
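
For example, something along these lines could capture that data while the
warnings are appearing (the output directory is just an example):

  mkdir -p /tmp/ioprocess-bug
  systemctl status vdsmd > /tmp/ioprocess-bug/vdsmd-status.txt
  nfsstat > /tmp/ioprocess-bug/nfsstat.txt
  cp /var/log/vdsm/vdsm.log /var/log/messages /tmp/ioprocess-bug/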

> I see these messages in oVirt 3.6, 3.6.1, and 3.6.2.
>
> OS: CentOS 7.1, latest update (20 Jan 2016)
> oVirt: 3.6.2
> Server: HP DL380 G7
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] slow browsing in ovirt3.6

2016-01-21 Thread Nathanaël Blanchet

On 21/01/2016 09:30, alireza sadeh seighalan wrote:

hi Nathanaël Blanchet,

would you explain more about ovirt-shell? Don't you use Firefox or
Chrome for administration?

Both, but they are both slow because of intensive JavaScript calls.
ovirt-shell is a Python CLI based on the REST API; have a look here:
http://www.ovirt.org/CLI#show
You can easily open a console (console ), start a VM in one command
(action vm start ) or change a parameter (update vm
--cluster-name ).

 is your friend :)
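
A few commands to illustrate (the VM and cluster names are placeholders, and
the exact syntax may differ slightly between ovirt-shell versions):

  list vms                                 # list all VMs
  show vm myvm                             # show one VM's properties
  action vm myvm start                     # start a VM
  update vm myvm --cluster-name mycluster  # change a parameter
  console myvm                             # open a console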

For everyone who hits this issue: we may be running the same configuration.
Mine is shared storage with 2 datacenters, 6 x 2TB FC LUNs, 20 hosts, and
more than 200 VMs.
I connect oVirt to Foreman for provisioning and errata integration. I
haven't upgraded to 3.6.1 yet.


On Thu, Jan 21, 2016 at 11:58 AM, Nathanaël Blanchet wrote:

I'm working with ovirt-shell as a workaround; it's less intuitive but
very efficient :)

On 21/01/2016 08:15, Nicolas Ecarnot wrote:

On 21/01/2016 07:54, Nir Soffer wrote:

On Thu, Jan 21, 2016 at 8:36 AM, alireza sadeh seighalan wrote:

hi everyone

I don't want to know why browsing in oVirt 3.6's (3.6.2) admin console is
slow; I use Firefox 42.0 (on a Windows system), and it is a little faster in
Google Chrome, but it is slow too. How can I solve this problem? Thanks in
advance.


Maybe it is this issue?
https://bugzilla.redhat.com/1264809

It was closed because we could not reproduce it on later versions.


I just added my 2 cents on this bug, because I have been silently
suffering this same slooowwnesss for months.
Is it possible to re-open this bug and help me help you debug it?


-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr 


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm IOProcess request queue full

2016-01-21 Thread Nathanaël Blanchet



On 21/01/2016 08:09, alireza sadeh seighalan wrote:

hi everyone

when I run 'systemctl status vdsmd', it shows me these strange messages:

vdsm IOProcess  WARNING (32)  request queue full
vdsm IOProcess  WARNING (33)  request queue full
vdsm IOProcess  WARNING (34)  request queue full
.
.
.
vdsm IOProcess  WARNING (41)  request queue full

is this a normal situation?
I see these messages in oVirt 3.6, 3.6.1, and 3.6.2.

OS: CentOS 7.1, latest update (20 Jan 2016)

The latest is 7.2.

oVirt: 3.6.2

It is still a release candidate (RC3).

Server: HP DL380 G7

I also hit a lot of issues with HP hosts; Dell ones work fine.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-21 Thread Nir Soffer
On Thu, Jan 21, 2016 at 2:54 PM, Pavel Gashev  wrote:
> Hello,
>
> First of all I would like to ask if anybody has any experience with using
> Microsoft NFS server as a storage domain.
>
> The main issue with MS NFS is NTFS :) NTFS doesn't support sparse files.
> Technically it's possible by enabling NTFS compression, but that has bad
> performance on huge files, which is our case. Also, there is no option in the
> oVirt web interface to use COW format on NFS storage domains.

You can:
1. create a small disk (1G)
2. create a snapshot
3. extend the disk to the final size

And you have an NFS disk in COW format. The performance difference with one
snapshot should be small.
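
On the storage domain, the resulting chain looks roughly like this (image
names are made up; oVirt generates UUIDs for the actual volumes):

  qemu-img create -f raw base.img 1G    # step 1: the small disk
  qemu-img create -f qcow2 -o backing_file=base.img,backing_fmt=raw top.qcow2   # step 2: the snapshot adds a qcow2 layer
  qemu-img resize top.qcow2 100G        # step 3: extend to the final size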

> Since it looks like oVirt doesn't support MS NFS, I decided to migrate all
> my VMs off MS NFS to other storage. And I hit a bug. Live storage
> migration silently corrupts data if you migrate a disk from an MS NFS storage
> domain. So if you shut down the just-migrated VM and check its filesystem,
> you find that it has a lot of unrecoverable errors.
>
> The symptoms are as follows:
> 1. It corrupts data if you migrate a disk from MS NFS to Linux NFS.
> 2. It corrupts data if you migrate a disk from MS NFS to iSCSI.
> 3. There is no corruption if you migrate from Linux NFS to iSCSI and vice
> versa.
> 4. There is no corruption if you migrate from anywhere to MS NFS.
> 5. Data corruption happens after the 'Auto-generated for Live Storage Migration'
> snapshot. So if you roll back the snapshot, you see an absolutely clean
> filesystem.

Can you try to create a live snapshot on MS NFS? It seems that this is the
issue, not live storage migration.

Do you have qemu-guest-agent on the VM? Without qemu-guest-agent, file
systems on the guest will not be frozen during the snapshot, which may cause
an inconsistent snapshot.

Can you reproduce this with virt-manager, or by creating a VM and taking
a snapshot using virsh?
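
A rough way to test that with virsh alone (domain and snapshot names are
placeholders):

  virsh snapshot-create-as myvm snap1 --disk-only --atomic   # external disk-only snapshot, like oVirt's live snapshot
  # ... write some data inside the guest ...
  virsh snapshot-list myvm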

> 6. It doesn't depend on the SPM: data is corrupted whether the SPM is on the
> same host or another.
> 7. There are no error messages in the vdsm/qemu/system logs.
>
> Yes, of course I could migrate from MS NFS with downtime – that's not an
> issue. The issue is that oVirt silently corrupts data under some
> circumstances.
>
> Could you please help me understand the reason for the data corruption?

Please file a bug and attach:

- /var/log/vdsm/vdsm.log
- /var/log/messages
- /var/log/sanlock.log
- output of nfsstat during the test; maybe run it every minute?
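
For example, a crude loop like this would do (the log path is arbitrary):

  while true; do date >> /var/tmp/nfsstat.log; nfsstat >> /var/tmp/nfsstat.log; sleep 60; done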

> vdsm-4.17.13-1.el7.noarch
> qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
> libvirt-daemon-1.2.17-13.el7_2.2.x86_64
> ovirt-engine-backend-3.6.1.3-1.el7.centos.noarch
>
> Thank you
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OS versions?

2016-01-21 Thread Nir Soffer
On Thu, Jan 21, 2016 at 3:19 PM, gflwqs gflwqs  wrote:
> Hi list,
>
> Is it supported/recommended to run CentOS 7.2 hosts on oVirt 3.5.6.2?

It is supported and recommended.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Recent hosts on older oVirt DC setup?

2016-01-21 Thread Nir Soffer
On Thu, Jan 21, 2016 at 3:04 PM, Sven Kieske  wrote:
> On 21/01/16 07:48, Nir Soffer wrote:
>> 3.6 must work with engines > 3.3.
>
> 3.6 > 3.3 or 3.6 >= 3.3 ?

Hi Sven,

Engine 3.3 is not supported by vdsm 3.6.

See /usr/lib/python2.7/site-packages/vdsm/dsaversion.py

'supportedENGINEs': ['3.4', '3.5', '3.6', '4.0'],
'clusterLevels': ['3.4', '3.5', '3.6', '4.0'],
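
A quick way to check this on a host (assuming the dict in that file is named
version_info, and Python 2, which vdsm 3.6 runs on):

  python -c 'from vdsm import dsaversion; print dsaversion.version_info["supportedENGINEs"]'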

Nir

>
>
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +495772 293100
> F: +495772 29
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
> Oeynhausen
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-21 Thread Pavel Gashev
On Thu, 2016-01-21 at 18:42 +, Nir Soffer wrote:

On Thu, Jan 21, 2016 at 2:54 PM, Pavel Gashev wrote:

Also, there is no option in the
oVirt web interface to use COW format on NFS storage domains.



You can:
1. create a small disk (1G)
2. create a snapshot
3. extend the disk to the final size

And you have an NFS disk in COW format. The performance difference with one
snapshot should be small.


Yes. And there are other workarounds:
1. Use some block (e.g. iSCSI) storage to create a thin-provisioned disk
(which is COW) and then move it to the required storage.
2. Keep an empty 1G COW disk and copy+resize it when required.
3. Use ovirt-shell to create disks.

Unfortunately, these are not native ways; these are ways for a hacker. A plain
user clicks "New" in the "Disks" tab and selects the "Thin Provision" allocation
policy. It's hard to explain to users that the simplest and most obvious way is
wrong. I hope it's wrong only for MS NFS.
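
For what it's worth, since ovirt-shell is a thin wrapper over the REST API,
workaround 3 boils down to a request like this (engine URL, credentials,
storage domain name, and size are placeholders, and the exact element set may
vary between API versions):

  curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
       -d '<disk>
             <storage_domains><storage_domain><name>ms-nfs-domain</name></storage_domain></storage_domains>
             <provisioned_size>10737418240</provisioned_size>
             <format>cow</format>
             <sparse>true</sparse>
             <interface>virtio</interface>
           </disk>' \
       https://engine.example.com/api/disks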


5. Data corruption happens after the 'Auto-generated for Live Storage Migration'
snapshot. So if you roll back the snapshot, you see an absolutely clean
filesystem.



Can you try to create a live snapshot on MS NFS? It seems that this is the
issue, not live storage migration.


Live snapshots work very well on MS NFS. Creating and deleting work live
without any issues; I have done it many times. Please note that everything
before the snapshot remains consistent. Data corruption occurs after the
snapshot, so only non-snapshotted data is corrupted.



Do you have qemu-guest-agent on the VM? Without qemu-guest-agent, file
systems on the guest will not be frozen during the snapshot, which may cause
an inconsistent snapshot.


I tried it with and without qemu-guest-agent; it makes no difference.



Can you reproduce this with virt-manager, or by creating a VM and taking
a snapshot using virsh?


Sorry, I'm not sure how I can reproduce the issue using virsh.




Please file a bug and attach:

- /var/log/vdsm/vdsm.log
- /var/log/messages
- /var/log/sanlock.log
- output of nfsstat during the test; maybe run it every minute?

OK, I will collect the logs and file a bug.

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-21 Thread Nir Soffer
On Thu, Jan 21, 2016 at 10:13 PM, Pavel Gashev  wrote:
> On Thu, 2016-01-21 at 18:42 +, Nir Soffer wrote:
>
> On Thu, Jan 21, 2016 at 2:54 PM, Pavel Gashev  wrote:
>
> Also, there is no option in the
> oVirt web interface to use COW format on NFS storage domains.
>
>
> You can:
> 1. create a small disk (1G)
> 2. create a snapshot
> 3. extend the disk to the final size
>
> And you have an NFS disk in COW format. The performance difference with one
> snapshot should be small.
>
>
> Yes. And there are other workarounds:
> 1. Use some block (e.g. iSCSI) storage to create a thin-provisioned disk
> (which is COW) and then move it to the required storage.
> 2. Keep an empty 1G COW disk and copy+resize it when required.
> 3. Use ovirt-shell to create disks.
>
> Unfortunately, these are not native ways; these are ways for a hacker. A plain
> user clicks "New" in the "Disks" tab and selects the "Thin Provision" allocation
> policy. It's hard to explain to users that the simplest and most obvious way is
> wrong. I hope it's wrong only for MS NFS.

Sure, I agree.

I think we do not use qcow format on file storage since there is no need for
it; the file system is always sparse. I guess we did not plan for MS NFS.

I would open a bug for supporting qcow format on file storage. If this works
for some users, I think this option should be available in the UI. Hopefully
there are not too many assumptions in the code about this.

Allon, do you see any reason not to support this for users who need this option?

>
> 5. Data corruption happens after the 'Auto-generated for Live Storage Migration'
> snapshot. So if you roll back the snapshot, you see an absolutely clean
> filesystem.
>
>
> Can you try to create a live snapshot on MS NFS? It seems that this is the
> issue, not live storage migration.
>
>
> Live snapshots work very well on MS NFS. Creating and deleting work live
> without any issues; I have done it many times. Please note that everything
> before the snapshot remains consistent. Data corruption occurs after the
> snapshot, so only non-snapshotted data is corrupted.

Live migration starts by creating a snapshot, then copying the disks to the
new storage, and then mirroring the active layer so that the old and the new
disks stay identical. Finally we switch to the new disk and delete the old one.

So the issue is probably in the mirroring step. This is most likely a qemu
issue.
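
At the libvirt level, the mirroring step corresponds roughly to a block copy
like this (domain name, disk target, and destination path are placeholders):

  virsh blockcopy myvm vda /new-storage/disk.qcow2 --wait --verbose --pivot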

>
> Do you have qemu-guest-agent on the VM? Without qemu-guest-agent, file
> systems on the guest will not be frozen during the snapshot, which may cause
> an inconsistent snapshot.
>
>
> I tried it with and without qemu-guest-agent; it makes no difference.
>
> Can you reproduce this with virt-manager, or by creating a VM and taking
> a snapshot using virsh?
>
>
> Sorry, I'm not sure how I can reproduce the issue using virsh.

I'll try to get instructions for this from libvirt developers. If this
happens with libvirt alone, it is a libvirt or qemu bug, and there is little
we (oVirt) can do about it.

>
>
> Please file a bug and attach:
>
> - /var/log/vdsm/vdsm.log
> - /var/log/messages
> - /var/log/sanlock.log
> - output of nfsstat during the test; maybe run it every minute?
>
>
> OK, I will collect the logs and file a bug.
>
> Thanks
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-21 Thread Nir Soffer
Adding Allon

On Thu, Jan 21, 2016 at 10:55 PM, Nir Soffer  wrote:
> [...]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users