Re: [pve-devel] Ceph Octopus is out!
Also, on the Ceph server side, the new io_uring support seems exciting too :)
https://github.com/ceph/ceph/pull/27392

----- Original message -----
From: "Thomas Lamprecht"
To: "pve-devel", "aderumier"
Sent: Wednesday, 25 March 2020 12:00:21
Subject: Re: [pve-devel] Ceph Octopus is out!

On 3/25/20 11:17 AM, Alexandre DERUMIER wrote:
> I'm pretty excited by the new write-around cache policy for librbd && IO
> scheduler :)
>
> (better than writeback, no read latency impact :)
> Better for high-performance write workloads:
> Writes return immediately under both the write-around and write-back
> policies, unless there are more than "rbd cache max dirty" unwritten bytes
> to the storage cluster. The write-around policy differs from the write-back
> policy in that it does not attempt to service read requests from the cache,
> unlike the write-back policy, and is therefore faster for high-performance
> write workloads. Under the write-through policy, writes return only when
> the data is on disk on all replicas, but reads may come from the cache.
> -- https://docs.ceph.com/docs/octopus/rbd/rbd-config-ref/#cache-settings

Not sure if all workloads benefit from that.

> librbd now uses a write-around cache policy by default,
> replacing the previous write-back cache policy default.
> This cache policy allows librbd to immediately complete
> write IOs while they are still in-flight to the OSDs.
> Subsequent flush requests will ensure all in-flight
> write IOs are completed prior to completing. The
> librbd cache policy can be controlled via a new
> "rbd_cache_policy" configuration option.
>
> librbd now includes a simple IO scheduler which attempts to
> batch together multiple IOs against the same backing RBD
> data block object. The librbd IO scheduler policy can be
> controlled via a new "rbd_io_scheduler" configuration
> option.

Let's see how much this brings in performance :)

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
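[For context: if the io_uring work from that PR ships as an OSD-side option, enabling it would presumably be a plain ceph.conf toggle along these lines. The option name `bdev_ioring` is taken from the linked PR and should be treated as an assumption until it appears in a released Octopus build.]

```ini
# ceph.conf -- OSD side (sketch; option name from ceph PR #27392, not verified
# against a released Octopus build)
[osd]
# Let BlueStore submit block-device IO through io_uring instead of libaio.
# Requires a kernel with io_uring support (5.1+).
bdev_ioring = true
```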
Re: [pve-devel] Ceph Octopus is out!
I'm pretty excited by the new write-around cache policy for librbd && IO scheduler :)

(better than writeback, no read latency impact :)

librbd now uses a write-around cache policy by default,
replacing the previous write-back cache policy default.
This cache policy allows librbd to immediately complete
write IOs while they are still in-flight to the OSDs.
Subsequent flush requests will ensure all in-flight
write IOs are completed prior to completing. The
librbd cache policy can be controlled via a new
"rbd_cache_policy" configuration option.

librbd now includes a simple IO scheduler which attempts to
batch together multiple IOs against the same backing RBD
data block object. The librbd IO scheduler policy can be
controlled via a new "rbd_io_scheduler" configuration
option.

----- Original message -----
From: "Thomas Lamprecht"
To: "pve-devel", "Victor Hooi"
Sent: Tuesday, 24 March 2020 14:00:30
Subject: Re: [pve-devel] Ceph Octopus is out!

Hi,

On 3/24/20 1:53 PM, Victor Hooi wrote:
> Hi,
>
> So I just saw the release announcement for Ceph Octopus:
>
> https://ceph.io/releases/v15-2-0-octopus-released/
>
> YAY! =)
>
> So I'm going to ask the question - when will this be in Proxmox? =) (e.g.
> in Ceph Testing).

I'd guess we get some testing stuff out in Q2/2020, depending on how well
the changes play with our integration in detail. As I know Ceph, waiting
for a stable point release, which should also include our dev testing
feedback, will probably make sense. After all we want a smooth (upgrade)
experience :)

cheers,
Thomas
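[For context: the two new options quoted above are ordinary librbd client settings, so comparing policies should only need a `[client]` entry in ceph.conf. A minimal sketch based on the quoted release notes and the Octopus rbd config reference; the value names are assumptions to verify against that reference:]

```ini
# ceph.conf -- librbd client side (sketch)
[client]
# New Octopus default; switch back to "writeback" or "writethrough"
# to benchmark the old behaviour against write-around.
rbd_cache_policy = writearound
# "simple" batches IOs targeting the same backing RBD data object;
# "none" disables the new IO scheduler.
rbd_io_scheduler = simple
```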