I've been experimenting with CephFS for running KVM images (Proxmox).

cephfs fuse version - 0.87

cephfs kernel module - kernel version 3.10
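
For reference, the two access paths being compared can be mounted roughly like this; the monitor address, mount point, and keyring path below are placeholders for your own setup, not values from this report:

```shell
# FUSE client (ceph-fuse 0.87 here); -m points at a monitor.
# 192.168.0.10:6789 and /mnt/cephfs are example placeholders.
ceph-fuse -m 192.168.0.10:6789 /mnt/cephfs

# Kernel client (kernel 3.10 here); credentials come from your keyring.
mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```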


Part of my testing involves bringing up a Windows 7 VM and running
CrystalDiskMark to check I/O inside the VM. It's surprisingly good with
both the fuse and the kernel driver; sequential reads & writes are
actually faster than the underlying disk, so I presume the FS is
aggressively caching.

With the fuse driver I have no problems.

With the kernel driver, the benchmark runs fine, but when I reboot the
VM the drive is corrupted and unreadable, every time. Rolling back to a
snapshot fixes the disk. This does not happen unless I run the
benchmark, which I presume writes a lot of data.

No problems with the same test for Ceph rbd, or NFS.


-- 
Lindsay
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com