Hello,

I'm about to migrate from VirtualBox to Qemu with VGA passthrough. All my virtual disk images are stored in a BTRFS subvolume on top of an MD RAID 1. The host runs kernel 3.10 and Qemu 1.5.1. The test VM is a 64-bit Windows 7, using a raw virtio disk with cache=none; the same happens with qcow2, though.

With VirtualBox, and in the past VMware Workstation, I never had issues with corrupted disk images, but now with Qemu every attempt ends up with lots of errors like:

[ 4871.863009] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 3817758510 private 402306600
[ 4872.481013] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 3817758510 private 402306600
[ 4904.055514] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 4060166193 private 402306600
[ 4904.748130] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 4060166193 private 402306600
[ 4904.987540] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 3817758510 private 402306600
[ 4905.024700] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 3817758510 private 402306600
[ 4932.497793] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 4060166193 private 402306600
[ 4932.533634] BTRFS info (device md10): csum failed ino 687 off 46213922816 csum 4060166193 private 402306600

Trying to copy the disk image elsewhere causes I/O errors at some point.

I found a thread about the issue (http://comments.gmane.org/gmane.comp.file-systems.btrfs/20538) and also a bug report against Qemu from Josef Bacik describing the exact same problem: https://bugzilla.redhat.com/show_bug.cgi?id=693530 - Josef states it has been fixed for quite a while.

Is this a regression in BTRFS, a problem with my setup (md raid1 layer below btrfs), or (still) a bug in Qemu?
Would cache=writethrough or writeback be an option with BTRFS?
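For context, the kind of invocation I'd be testing would look roughly like this; the image path and VM options are placeholders, not my actual setup. The chattr line is a workaround I've seen suggested for VM images on BTRFS, not something from the threads above - note it only takes effect for files created after the flag is set, and it disables BTRFS checksumming for those files:

```shell
# Hypothetical test run with cache=writethrough instead of cache=none
# (image path and memory size are placeholders).
qemu-system-x86_64 -enable-kvm -m 4096 \
    -drive file=/srv/vmimages/win7.raw,if=virtio,format=raw,cache=writethrough

# Suggested workaround: mark the image directory nodatacow so that
# freshly created images are not copy-on-write. This avoids fragmentation
# and checksum churn on heavy random writes, but also means BTRFS no
# longer checksums those files.
chattr +C /srv/vmimages
```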

Thanks in advance for any input.

Best regards,
Thomas

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
