On 10/10/2016 04:29 PM, Adam Thompson wrote:
> The default PVE setup puts an XFS filesystem onto each "full disk" assigned
> to CEPH. CEPH does **not** write directly to raw devices, so the choice of
> filesystem is largely irrelevant.
>
> Granted, ZFS is a "heavier" filesystem than XFS, but it's no better or worse
> than running CEPH on XFS on hardware RAID, which I've done elsewhere.
>
> CEPH gives you the ability to not need software or hardware RAID.
> ZFS gives you the ability to not need hardware RAID.
> Layering them - assuming you have enough memory and CPU cycles - can be very
> beneficial.
>
> Neither CEPH nor XFS does deduplication or compression, which ZFS does.
> Depending on what kind of CPU you have, turning on compression can
> dramatically *speed up* I/O. Depending on how much RAM you have, turning on
> deduplication can dramatically decrease disk space used.
>
> Although, TBH, at that point I'd just do what I have running in production
> right now: a reasonably-powerful SPARC64 NFS fileserver, and run QCOW2 files
> over NFS. Performs better than CEPH did on 1Gbps infrastructure.
>
> -Adam
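Re Adam's point about compression and dedup: for anyone wanting to try this, a minimal sketch of the relevant ZFS tuning follows. The pool name "tank" is an assumption for illustration, not something from this thread; note the usual caveat that dedup holds its table in RAM (commonly estimated at a few GB of RAM per TB of deduped data), so it is far more memory-hungry than compression.

```shell
# Sketch only: "tank" is a hypothetical pool name, not from the thread.

# lz4 compression is cheap on CPU and can speed up I/O by shrinking
# the number of bytes actually written to and read from disk.
zfs set compression=lz4 tank

# Deduplication keeps its dedup table in RAM; only enable it if you
# have plenty of memory for the amount of data being deduped.
zfs set dedup=on tank

# Verify what you're actually getting:
zfs get compressratio tank          # effective compression ratio
zpool list -o name,size,alloc,dedup tank   # dedup ratio per pool
```

These are one-time property changes; compression applies only to data written after it is enabled, not retroactively.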
Out of curiosity, I suppose you're using the default 'No cache' cache mode for those QCOW2 images?

_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
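For context on the cache-mode question above: Proxmox's "No cache" in the GUI corresponds to QEMU's cache=none (host page cache bypassed via O_DIRECT), while cache=writeback routes guest I/O through the host page cache, which can behave quite differently for qcow2 over NFS. A sketch of checking and changing it with the qm CLI follows; the VM ID (100), storage name, and disk slot are assumptions for illustration.

```shell
# Sketch only: VM ID 100, storage "nfs-store", and the scsi0 slot are
# hypothetical examples, not from the thread.

# Show the current disk definition, including any cache= option.
# No cache= option shown means the default, cache=none ("No cache").
qm config 100 | grep scsi0

# Switch the disk to writeback caching, e.g. to test qcow2 over NFS:
qm set 100 --scsi0 nfs-store:100/vm-100-disk-0.qcow2,cache=writeback
```

Whether writeback helps or hurts depends on the workload and on how much you trust the host (and the NFS server) not to lose buffered writes on power failure.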
