Hi,

The choice of filesystem is not "largely irrelevant"; filesystems are quite complex and the choice matters. With ZFS you're in unknown territory, AFAIK, as it is not regularly tested in ceph development; I think only ext4 and XFS are tested regularly. There are also known limits/problems with ext4, for example, and it seems they apply to zfsonlinux as well (I think ext4 has an even lower limit):
https://github.com/zfsonlinux/zfs/issues/4913
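
For reference, a quick way to see whether an OSD filesystem chokes on larger xattrs is something like this (path and size are just illustrative, not a recommendation):

  # create a test file on the OSD data filesystem and try a ~4 KiB xattr
  touch /var/lib/ceph/osd/ceph-0/xattr-test
  setfattr -n user.test -v "$(head -c 4096 /dev/zero | tr '\0' 'a')" /var/lib/ceph/osd/ceph-0/xattr-test
  # lists user.test if the filesystem accepted it; errors out otherwise
  getfattr -d /var/lib/ceph/osd/ceph-0/xattr-test
  rm /var/lib/ceph/osd/ceph-0/xattr-test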

Also, there is no word about ZFS in the filesystem recommendations:
http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/

Can it be done? Yes - Lindsay is doing it successfully.

Is it advisable? - I don't think so. :-)

Anyway, it seems they're getting rid of the filesystem altogether in the near future with bluestore ;)
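
If you want to play with bluestore already, from memory (so this may be slightly off) jewel ships it behind an experimental flag - on a spare test disk only, roughly:

  # ceph.conf, [global] section:
  #   enable experimental unrecoverable data corrupting features = bluestore rocksdb
  ceph-disk prepare --bluestore /dev/sdX   # sdX = throwaway disk, not production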

Cheers
Eneko

On 10/10/16 at 16:29, Adam Thompson wrote:
The default PVE setup puts an XFS filesystem onto each "full disk" assigned to 
CEPH.  CEPH does **not** write directly to raw devices, so the choice of filesystem is 
largely irrelevant.
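(If anyone wants to reproduce that default by hand, it's roughly the following on a PVE node - device names are just examples, and the journal option is from memory:)

  # wipes the disk and sets up a filestore OSD with the default XFS filesystem
  pveceph createosd /dev/sdb
  # or, IIRC, with the journal on an SSD partition:
  pveceph createosd /dev/sdb -journal_dev /dev/sdc1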
Granted, ZFS is a "heavier" filesystem than XFS, but it's no better or worse 
than running CEPH on XFS on Hardware RAID, which I've done elsewhere.
CEPH gives you the ability to not need software or hardware RAID.
ZFS gives you the ability to not need hardware RAID.
Layering them - assuming you have enough memory and CPU cycles - can be very 
beneficial.
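A rough sketch of what that layering might look like - dataset name and mountpoint are made up, and xattr=sa is the setting people usually recommend because CEPH leans heavily on xattrs:

  # one dataset per OSD, mounted where the OSD expects its data directory
  zfs create -o xattr=sa -o atime=off -o mountpoint=/var/lib/ceph/osd/ceph-0 tank/ceph-osd0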
Neither CEPH nor XFS does deduplication or compression, which ZFS does.  
Depending on what kind of CPU you have, turning on compression can dramatically 
*speed up* I/O.  Depending on how much RAM you have, turning on deduplication 
can dramatically decrease disk space used.
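On ZFS both are a single property each, something like this (pool/dataset names are placeholders, and dedup really does want a lot of RAM for its tables):

  zfs set compression=lz4 tank          # cheap on CPU, often a net I/O win
  zfs set dedup=on tank/vmdata          # only with plenty of RAM; hard to undo later
  zfs get compressratio tank            # shows what compression actually saves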
Although, TBH, at that point I'd just do what I have running in production 
right now: a reasonably-powerful SPARC64 NFS fileserver, and run QCOW2 files 
over NFS.  Performs better than CEPH did on 1Gbps infrastructure.
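Hooking that into PVE is a single storage entry, roughly (server and export path are placeholders):

  # add the NFS export as image storage, then create VM disks as qcow2 on it
  pvesm add nfs nfs-vmstore --server 192.168.1.10 --export /tank/vmstore --content images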
-Adam

-----Original Message-----
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On
Behalf Of Lindsay Mathieson
Sent: October 10, 2016 09:21
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] Ceph Cache Tiering

On 10/10/2016 10:22 PM, Eneko Lacunza wrote:
But this is nonsense, ZFS backed Ceph?! You're supposed to give full
disks to ceph, so that performance increases as you add more disks
I've tried it both ways, the performance is much the same. ZFS also
increases in performance the more disks you throw at it, which is passed
on to ceph.
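e.g. you just keep adding vdevs and ZFS stripes across all of them (pool and device names are examples):

  # add another mirrored pair; reads/writes spread over every vdev in the pool
  zpool add tank mirror /dev/sdc /dev/sdd
  zpool status tank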


+Compression

+Auto Bit rot detection and repair

+A lot of flexibility
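
The bit rot part is just a scrub away, roughly:

  # walk every block, verify checksums, repair from redundancy where possible
  zpool scrub tank
  zpool status -v tank   # scrub progress plus any repaired/unrepairable errors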

--
Lindsay Mathieson



--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943493611
      943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
