On Mon, Feb 5, 2018 at 6:20 PM, Bart Van Assche <bart.vanass...@wdc.com> wrote:
> On Mon, 2018-02-05 at 18:16 +0100, Roman Penyaev wrote:
>> Everything (fio jobs, setup, etc) is given in the same link:
>>
>> https://www.spinics.net/lists/linux-rdma/msg48799.html
>>
>> at the bottom you will find links to Google docs with many pages
>> and the archived fio jobs and scripts. (I do not remember exactly,
>> a year has passed, but everything should be there.)
>>
>> Regarding a smaller iodepth_batch_submit - that decreases performance.
>> I played with that once, and even introduced the new
>> iodepth_batch_complete_max option for fio, but then I decided to stop
>> and simply chose this configuration, which gives me the fastest results.
>
> Hello Roman,
>
> That's weird. For which protocols did reducing iodepth_batch_submit lead
> to lower performance: all the tested protocols or only some of them?

Hi Bart,

It seems it does not depend on the protocol (when I tested, it was true for
both nvme and ibnbd).  It depends on the load.  Under high load (one or a few
fio jobs dedicated to each CPU, and we have 64 CPUs) it turns out to be faster
to wait for completions for the whole queue of that particular block device,
instead of switching from kernel to userspace for each completed IO.
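For context, the batching behaviour described above maps directly onto fio job
options. The snippet below is a minimal sketch, not the original job file: the
device path, depths, and job counts are illustrative assumptions, but
iodepth_batch_submit and iodepth_batch_complete_max are the actual fio
parameters under discussion.

```ini
; Hypothetical fio job sketch (values are illustrative, not the
; original test configuration).
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=128
; submit up to the full queue depth in one batch
iodepth_batch_submit=128
; reap as many completions as are available per reap call, instead of
; returning to userspace after each completed IO
iodepth_batch_complete_min=1
iodepth_batch_complete_max=128
numjobs=64
cpus_allowed_policy=split

[job1]
filename=/dev/nvme0n1
```

With iodepth_batch_complete_max raised to the queue depth, one getevents call
can drain many completions at once, which is the kernel/userspace switching
cost referred to above.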

But I can assure you that the performance difference is very minor: it exists,
but it does not change the overall picture of what you see in this Google
sheet. All I tried to do was squeeze out everything I could, nothing more.

--
Roman
