Re: [PVE-User] Ceph Cache Tiering

2016-10-12 Thread Adam Thompson
That makes sense - the ZIL acts as a writeback cache for you. If the latency between QEMU and ZFS were higher (e.g. across NFS, maybe across a router, too) then you'd likely be able to measure a small difference. If the latencies are all small enough, any single writeback cache will likely

Re: [PVE-User] Ceph Cache Tiering

2016-10-12 Thread Lindsay Mathieson
On 12/10/2016 9:55 PM, Adam Thompson wrote: Ultimately, "Always Use DirectIO" is a religious belief, not a technically sound belief. There are situations where it makes sense, and other situations where it doesn't. The same can be said of *every single* setting - if there were only One True

Re: [PVE-User] Ceph Cache Tiering

2016-10-12 Thread Adam Thompson
lace! -Adam > -Original Message- > From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On > Behalf Of Lindsay Mathieson > Sent: October 11, 2016 23:12 > To: PVE User List <pve-user@pve.proxmox.com> > Subject: Re: [PVE-User] Ceph Cache Tiering > > On 12 October 20

Re: [PVE-User] Ceph Cache Tiering

2016-10-11 Thread Lindsay Mathieson
On 12 October 2016 at 13:28, Adam Thompson wrote: > Not a bloody chance... WriteBack is the only thing that gives both acceptable > performance characteristics and data guarantees. Eh? I didn't think writeback gave data guarantees, quite the opposite. -- Lindsay

Re: [PVE-User] Ceph Cache Tiering

2016-10-11 Thread Adam Thompson
of a complete system failure.) -Adam > -Original Message- > From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On > Behalf Of Emmanuel Kasper > Sent: October 11, 2016 05:26 > To: PVE User List <pve-user@pve.proxmox.com> > Subject: Re: [PVE-User] Ceph Cache Tieri

Re: [PVE-User] Ceph Cache Tiering

2016-10-11 Thread Lindsay Mathieson
On 11/10/2016 6:39 PM, Eneko Lacunza wrote: I linked the bug report, not that I care really :) If you read through the bug report, it turns out to be: a) an xattr issue, not filename size; b) a bug in Ceph which assumes that the ZFS max xattr size is far smaller than it is. They're not

Re: [PVE-User] Ceph Cache Tiering

2016-10-11 Thread Lindsay Mathieson
On 11/10/2016 8:26 PM, Emmanuel Kasper wrote: Out of curiosity, I suppose you're using the default 'NoCache' as the cache mode of those QCOW2 images? Not on ZFS, which doesn't support O_DIRECT; you need to use writethrough or writeback. -- Lindsay Mathieson
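For reference, the cache mode can be changed per disk from the CLI; a minimal sketch, assuming VM ID 101 and a directory storage named 'local' holding the QCOW2 image (both placeholders):

  qm set 101 --virtio0 local:101/vm-101-disk-1.qcow2,cache=writeback
  # or, for the more conservative option that still avoids O_DIRECT:
  qm set 101 --virtio0 local:101/vm-101-disk-1.qcow2,cache=writethrough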

Re: [PVE-User] Ceph Cache Tiering

2016-10-11 Thread Emmanuel Kasper
On 10/10/2016 04:29 PM, Adam Thompson wrote: > The default PVE setup puts an XFS filesystem onto each "full disk" assigned > to CEPH. CEPH does **not** write directly to raw devices, so the choice of > filesystem is largely irrelevant. > Granted, ZFS is a "heavier" filesystem than XFS, but it's
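For context, a stock hyper-converged PVE install creates each OSD on a whole device with pveceph, which partitions the disk and lays down the default filesystem (XFS) itself; a minimal sketch, with /dev/sdb as a placeholder device:

  pveceph createosd /dev/sdb
  # the journal can optionally be placed on a separate (SSD) device;
  # see 'pveceph help createosd' for the exact option names on your version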

Re: [PVE-User] Ceph Cache Tiering

2016-10-11 Thread Eneko Lacunza
On 10/10/16 at 23:24, Lindsay Mathieson wrote: On 11/10/2016 2:05 AM, Eneko Lacunza wrote: And there are known limits/problems with ext4 for example, and it seems those also apply to zfsonlinux No they do not. I linked the bug report, not that I care really :) Anyway it seems they're

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 11/10/2016 2:05 AM, Eneko Lacunza wrote: The choice of filesystem is not "largely irrelevant"; filesystems are quite complex and the choice is relevant. With ZFS, you're in unknown territory AFAIK as it is not regularly tested in ceph development; I think only ext4 and XFS are regularly

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 11/10/2016 6:28 AM, Yannis Milios wrote: You can try to clone instead of rolling back an image to the snapshot. It's much faster and the method recommended by the official Ceph documentation. Not integrated with Proxmox, though. -- Lindsay Mathieson

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Yannis Milios
>>...but there is one deal breaker for us and that's snapshots - they are incredibly >> slow to restore. You can try to clone instead of rolling back an image to the snapshot. It's much faster and the method recommended by the official Ceph documentation.
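For context, the clone workflow from the Ceph documentation looks roughly like this (a sketch; pool, image and snapshot names are placeholders):

  rbd snap create rbd/vm-101-disk-1@before-upgrade
  rbd snap protect rbd/vm-101-disk-1@before-upgrade
  # instead of a slow 'rbd snap rollback', create a copy-on-write clone:
  rbd clone rbd/vm-101-disk-1@before-upgrade rbd/vm-101-disk-1-restored
  # optionally detach the clone from its parent snapshot later:
  rbd flatten rbd/vm-101-disk-1-restored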

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Eneko Lacunza
NFS fileserver, and run QCOW2 files over NFS. Performs better than CEPH did on 1Gbps infrastructure. -Adam -Original Message- From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of Lindsay Mathieson Sent: October 10, 2016 09:21 To: pve-user@pve.proxmox.com Subject: Re:

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Adam Thompson
n > Sent: October 10, 2016 09:21 > To: pve-user@pve.proxmox.com > Subject: Re: [PVE-User] Ceph Cache Tiering > > On 10/10/2016 10:22 PM, Eneko Lacunza wrote: > > But this is nonsense, ZFS backed Ceph?! You're supposed to give full > > disks to ceph, so that performance

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 10/10/2016 10:22 PM, Eneko Lacunza wrote: But this is nonsense, ZFS backed Ceph?! You're supposed to give full disks to ceph, so that performance increases as you add more disks I've tried it both ways; the performance is much the same. ZFS also increases in performance the more disks you

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Eneko Lacunza
Hi, On 10/10/16 at 13:46, Lindsay Mathieson wrote: On 10/10/2016 8:19 PM, Brian :: wrote: I think with clusters with a VM-type workload, at the scale that Proxmox users tend to build (< 20 OSD servers), a cache tier is adding a layer of complexity that isn't going to pay back. If you want

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 10/10/2016 8:19 PM, Brian :: wrote: I think with clusters with a VM-type workload, at the scale that Proxmox users tend to build (< 20 OSD servers), a cache tier is adding a layer of complexity that isn't going to pay back. If you want decent IOPS / throughput at this scale with Ceph no

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Brian ::
Hi Lindsay, I think with clusters with a VM-type workload, at the scale that Proxmox users tend to build (< 20 OSD servers), a cache tier is adding a layer of complexity that isn't going to pay back. If you want decent IOPS / throughput at this scale with Ceph, no spinning rust allowed anywhere :)

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Kautz, Valentin
was smaller than the maximum cache size. Kind regards, Valentin From: pve-user [pve-user-boun...@pve.proxmox.com] on behalf of Lindsay Mathieson [lindsay.mathie...@gmail.com] Sent: Sunday, 9 October 2016 00:21 To: PVE User List Subject: Re: [PVE-User] Ceph

Re: [PVE-User] Ceph Cache Tiering

2016-10-08 Thread Lindsay Mathieson
On 9/10/2016 7:45 AM, Lindsay Mathieson wrote: cache tiering was limited and a poor fit for VM hosting, generally the performance was with it That should read "was *worse* with it" :) -- Lindsay Mathieson

Re: [PVE-User] Ceph Cache tiering

2016-10-04 Thread gauthierl
Message - From: "Alwin Antreich" <sysadmin-...@cognitec.com> To: pve-user@pve.proxmox.com Sent: Tuesday, 4 October, 2016 08:51:12 Subject: Re: [PVE-User] Ceph Cache tiering Hi Lindsay, On 10/03/2016 11:59 PM, Lindsay Mathieson wrote: > Is it straightforward to setup

Re: [PVE-User] Ceph Cache tiering

2016-10-04 Thread Alwin Antreich
Hi Lindsay, On 10/03/2016 11:59 PM, Lindsay Mathieson wrote: > Is it straightforward to set up cache tiering under Proxmox these days? Last > time I checked (several years ago) it was > quite tricky with the CRUSH rule setup and keeping the integration with the > Proxmox web UI. Sadly I can't
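For reference, the Ceph-side steps are roughly the following (a sketch, assuming an existing data pool 'rbd' and an SSD-backed pool 'cache' already mapped to the SSD OSDs via a CRUSH rule; none of this is exposed in the Proxmox web UI):

  ceph osd tier add rbd cache
  ceph osd tier cache-mode cache writeback
  ceph osd tier set-overlay rbd cache
  ceph osd pool set cache hit_set_type bloom
  ceph osd pool set cache target_max_bytes 100000000000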

Re: [PVE-User] Ceph cache tiering

2015-05-16 Thread Alexandre DERUMIER
A lot of users on the Ceph mailing list have reported problems with Samsung EVO drives, mainly because they are pretty slow for O_DSYNC writes. See this for benchmarking them: http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ - Original Message -
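The test from that article boils down to single-threaded 4k sync writes; a minimal sketch with fio (destructive - /dev/sdX is a placeholder for an empty SSD):

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test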