On 11/28/19 12:56 PM, David Majchrzak, ODERLAND Webbhotell AB wrote:
> Hi!
> 
> We've deployed a new flash-only Ceph cluster running Nautilus, and I'm
> currently looking into which tunables we should set to get the most out
> of our NVMe SSDs.
> 
> I've been looking a bit at the options from the blog post here:
> 
> https://ceph.io/community/bluestore-default-vs-tuned-performance-comparison/
> 
> with the conf here:
> https://gist.github.com/likid0/1b52631ff5d0d649a22a3f30106ccea7
> 
> However, some of them, like disabling checksumming, are meant for speed
> testing only and are not really applicable in a real-life scenario with
> critical data.
> 
> Should we stick with defaults or is there anything that could help?
> 
> We have 256 GB of RAM on each OSD host, 8 OSD hosts with 10 SSDs each,
> and 2 OSD daemons per SSD. Should we raise the SSD bluestore cache to 8 GB?
> 
> The workload is about 50/50 read/write ops from QEMU VMs through librbd,
> so mixed block sizes.
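
On the cache question, a rough sketch of the arithmetic (the 8 GB figure
is your own suggestion, not a recommendation, and the host numbers are
taken from your mail): with 2 OSD daemons per SSD and 10 SSDs that is
~20 OSDs per host, so 20 x 8 GiB = 160 GiB, which still leaves close to
100 GiB of the 256 GiB for the OS and peak usage. On Nautilus the
BlueStore cache is autotuned from osd_memory_target, so that is usually
the knob to raise rather than bluestore_cache_size_ssd:

  # sketch only -- 8 GiB per OSD daemon, applied to all OSDs
  ceph config set osd osd_memory_target 8589934592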

Pin the C-state of your CPUs to C1 and disable power saving. That can
vastly reduce latency.
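
One way to do that on Linux, as a sketch (the exact flags depend on your
platform; the profile name below is the stock one shipped with tuned):

  # latency-performance pins the governor to performance and caps the
  # PM QoS latency so the CPUs stay in shallow C-states
  tuned-adm profile latency-performance

  # or by hand: performance governor now, C-state cap on the next boot
  cpupower frequency-set -g performance
  # kernel command line: intel_idle.max_cstate=1 processor.max_cstate=1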

Testing with rados bench -t 1 -b 4096 -o 4096 you should be able to get
down to about 0.8 ms write latency with 3x replication.
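
For reference, a complete invocation of that test might look like this
(the pool name "bench" and the 60-second duration are just examples):

  # 60 seconds of single-threaded 4 KiB writes against a throwaway pool;
  # compare the "Average Latency" line of the summary output
  ceph osd pool create bench 64
  rados bench -p bench 60 write -t 1 -b 4096 -o 4096
  ceph osd pool delete bench bench --yes-i-really-really-mean-it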

> 
> 3 replicas.
> 
> Appreciate any advice!
> 
> Kind Regards,
> 