On 02/07/2014 02:10 PM, Alexey Kardashevskiy wrote:
On 07.02.2014 18:46, Orit Wasserman wrote:
On 02/07/2014 06:35 AM, Alexey Kardashevskiy wrote:
Hi!

I have yet another problem with migration. Or NFS.

There is one NFS server and 2 test POWER8 machines. There is a shared NFS folder on the server, mounted on both test hosts. There is a qcow2 image (abc.qcow2) in that shared folder.

We start a guest with this abc.qcow2 on test machine #1, and start another guest on test machine #2 with "-incoming ..." and the same abc.qcow2.

Now we start migration. In most cases it goes fine. But if we put some load on machine #1, the destination guest sometimes crashes.

I blame out-of-sync NFS on the test machines. I looked a bit further into QEMU and could not find a spot where it would fflush(abc.qcow2), close it, or do any other sync, so it is up to the host NFS mountpoint to decide when to sync, and it definitely does not get a clue about when to do this.

I do not really understand why the abc.qcow2 image is still open; shouldn't it be closed after the migration succeeds?

What do I miss here? Should we switch from NFS to GlusterFS (is it always synchronized)? Or if we want NFS, should we just boot our guests with "root=/dev/nfs ip=dhcp nfsroot=..." and avoid keeping disk images on network storage? Thanks!


For NFS you need to use the sync mount option to force the NFS client to sync to the server on writes.

So there is no sync of any kind in QEMU after migration finishes, correct? Enforcing the "sync" option for NFS looks like too much when we really only need it once.


It is more an NFS issue: if you have a file on NFS that two users on two different hosts are accessing (and at least one of them writes to it), you will need to enforce the "sync" option.
Even if you flush all the data and close the file, the NFS client can still have cached data that it has not synced to the server.
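To make the flush-and-close sequence concrete, here is a minimal sketch (the file path is a stand-in for the shared image, not the actual abc.qcow2): even after flush(), fsync(), and close(), an NFS client mounted without "sync" only promises close-to-open consistency, so a reader on another host may see stale data until it re-opens the file.

```python
import os
import tempfile

# Placeholder path standing in for the shared NFS-hosted image;
# on a local filesystem this sequence is fully durable, but on an
# async-mounted NFS client the server may lag behind each step.
path = os.path.join(tempfile.gettempdir(), "abc-sync-demo.bin")

with open(path, "wb") as f:
    f.write(b"guest disk data")
    f.flush()              # drain the userspace buffer into the kernel
    os.fsync(f.fileno())   # ask the kernel to push dirty pages out
# close() happens here; NFS flushes dirty pages on close (close-to-open
# consistency), but a concurrent reader on another host can still hold
# stale cached pages until it re-opens the file.

with open(path, "rb") as f:
    print(f.read().decode())
```

The "sync" mount option removes the client-side write delay entirely, which is why it is the usual answer for two hosts sharing one writable file.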



