Re: [ceph-users] How to maximize the OSD effective queue depth in Ceph?

2019-08-06 Thread Anthony D'Atri
> However, I'm starting to think that the problem isn't with the number
> of threads that have work to do... the problem may just be that the
> OSD & PG code has enough thread locking happening that there is no
> possible way to have more than a few things happening on a single OSD
> (or perhaps a
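
The ceiling being described maps to the sharded op queue inside each OSD. One way to check the relevant knobs on a running OSD (option names assume a Luminous-or-later release; the SSD defaults were 8 shards x 2 threads per shard at the time, if memory serves):

    # Inspect the sharded op queue settings via the admin socket
    # (osd.0 is just an example OSD id):
    ceph daemon osd.0 config get osd_op_num_shards_ssd
    ceph daemon osd.0 config get osd_op_num_threads_per_shard_ssd
    # Rough upper bound on ops an OSD works on concurrently:
    # shards * threads_per_shard (8 * 2 = 16 with those defaults)

Raising these is sometimes tried, but per the point above, locking inside the OSD/PG code rather than the thread count may be the real limiter.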

Re: [ceph-users] How to maximize the OSD effective queue depth in Ceph?

2019-08-06 Thread Mark Lehrer
Thanks, that looks quite useful. I did a few tests and got basically a null result. In fact, when I put the RBDs on different pools backed by the same SSDs, or on pools backed by different SSDs, performance was a few percent worse than leaving them on the same pool. I definitely wasn't expecting this! It looks
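
A sketch of the kind of A/B test being described, with made-up pool and image names, assuming fio was built with the rbd engine:

    # Same 4K random-write job against two images, one per pool; compare the
    # aggregate IOPS with the case where both images live in a single pool.
    fio --name=imgA --ioengine=rbd --clientname=admin --pool=ssd-a --rbdname=imgA \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based &
    fio --name=imgB --ioengine=rbd --clientname=admin --pool=ssd-b --rbdname=imgB \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based &
    wait

If splitting the images across pools (or across different SSDs) changes little, that points back at a per-OSD or per-client bottleneck rather than contention between the images.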

Re: [ceph-users] How to maximize the OSD effective queue depth in Ceph?

2019-08-06 Thread Mark Nelson
You may be interested in using my wallclock profiler to look at lock contention: https://github.com/markhpc/gdbpmp. It will greatly slow down the OSD, but it will show you where time is being spent, and so far the results appear to be at least relatively informative. I used it recently when
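
For anyone who has not used it, invocation is roughly as follows; the flags here are from memory of the gdbpmp README, so treat them as an assumption and check the repository for the exact syntax:

    # Attach the GDB-based wallclock profiler to one running OSD and collect
    # a batch of samples (the pid selection is just illustrative):
    git clone https://github.com/markhpc/gdbpmp
    cd gdbpmp
    ./gdbpmp.py -p $(pidof ceph-osd | awk '{print $1}') -n 1000 -o osd.gdbpmp
    # Print the collected call tree afterwards:
    ./gdbpmp.py -i osd.gdbpmp

Because it repeatedly stops the process to sample every thread's stack, expect the OSD to slow down noticeably while the profiler is attached, as noted above.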

Re: [ceph-users] How to maximize the OSD effective queue depth in Ceph?

2019-08-06 Thread Mark Lehrer
I have a few more cycles this week to dedicate to the problem of making OSDs do more than maybe 5 simultaneous operations (as measured by the iostat effective queue depth of the drive). However, I'm starting to think that the problem isn't with the number of threads that have work to do... the
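
For context, the "effective queue depth" measurement mentioned here is just the device-level queue size reported by iostat on the OSD hosts while a client drives a much deeper iodepth; the device names below are examples:

    # Watch the OSD data devices at 1-second intervals:
    iostat -x 1 /dev/sdb /dev/sdc
    # The aqu-sz column (avgqu-sz on older sysstat releases) is the in-flight
    # queue depth at the drive; the complaint in this thread is that it sits
    # around 5 even when clients submit far more parallel I/O.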

Re: [ceph-users] How to maximize the OSD effective queue depth in Ceph?

2019-05-11 Thread Maged Mokhtar
On 10/05/2019 19:54, Mark Lehrer wrote: I'm setting up a new Ceph cluster with fast SSD drives, and there is one problem I want to make sure to address straight away: comically-low OSD queue depths. On the past several clusters I built, there was one major performance problem that I never had

[ceph-users] How to maximize the OSD effective queue depth in Ceph?

2019-05-10 Thread Mark Lehrer
I'm setting up a new Ceph cluster with fast SSD drives, and there is one problem I want to make sure to address straight away: comically-low OSD queue depths. On the past several clusters I built, there was one major performance problem that I never had time to really solve, which is this:
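
A minimal way to reproduce the symptom, assuming fio with the rbd engine and a test image (the pool and image names here are hypothetical):

    # Drive a single RBD image with a deep queue from the client side...
    fio --name=qd-test --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=testimg --rw=randwrite --bs=4k --iodepth=64 --numjobs=4 \
        --runtime=60 --time_based --group_reporting
    # ...and on the OSD hosts watch iostat -x to see how much of that
    # client-side queue depth actually reaches the SSDs.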