[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
On 10/26/21 10:22, Frank Schilder wrote: > It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one running per OSD daemon, independent of shard number.

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Stefan Kooman
On 10/26/21 10:22, Frank Schilder wrote: It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one running per OSD daemon, independent of shard number. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deploying even more OSD daemons per device?

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
Isn't 4 OSDs per SSD too much? Normally it's NVMe that is suitable for 4 OSDs, isn't it? Istvan Szabo, Senior Infrastructure Engineer

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Szabo, Istvan (Agoda)
Isn't 4 OSDs per SSD too much? Normally it's NVMe that is suitable for 4 OSDs, isn't it? Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com
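
For context, a minimal sketch of how several OSDs are typically carved out of one fast drive with ceph-volume's lvm batch mode; the device path is a placeholder, not something taken from this thread:

  # Placeholder device path; split one SSD/NVMe drive into 4 OSDs.
  ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1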

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one running per OSD daemon, independent of shard number. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deploying even more OSD daemons per device?
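
A hedged sketch of how one could check this on an OSD host; the OSD id, the pid selection, and the exact counter names are assumptions and may differ between Ceph releases:

  # The bstore_kv_sync thread should show up exactly once per ceph-osd process.
  ps -T -p "$(pgrep -o ceph-osd)" | grep bstore_kv

  # RocksDB sync/commit latencies from the admin socket (bluestore perf section).
  ceph daemon osd.0 perf dump | jq '.bluestore | {kv_sync_lat, kv_commit_lat}'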