Re: [ceph-users] io-schedulers

2018-11-05 Thread Sergey Malinin
Using "noop" makes sense only with ssd/nvme drives. "noop" is a simple fifo and 
using it with HDDs can result in unexpected blocking of useful IO in case when 
the queue is poisoned with burst of IO requests like background purge, which 
would become foreground in such case.
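
If you want to guard against that, the scheduler choice can be checked and 
applied per device through sysfs. Here is a minimal Python sketch ("sda" is 
only an example device name, and it must run as root) that inspects the 
rotational flag and only applies "noop" on non-rotational devices; on blk-mq 
kernels the equivalent scheduler is named "none":

#!/usr/bin/env python3
"""Apply "noop" only to non-rotational (SSD/NVMe) devices via sysfs."""
from pathlib import Path

def queue_dir(dev):
    return Path("/sys/block") / dev / "queue"

def is_rotational(dev):
    return (queue_dir(dev) / "rotational").read_text().strip() == "1"

def available_schedulers(dev):
    # The file looks like "noop deadline [cfq]"; brackets mark the active one.
    return [s.strip("[]") for s in (queue_dir(dev) / "scheduler").read_text().split()]

def set_scheduler(dev, name):
    (queue_dir(dev) / "scheduler").write_text(name)

if __name__ == "__main__":
    dev = "sda"  # example device name; adjust for your host
    if is_rotational(dev):
        print(f"{dev} is rotational (HDD); leaving its scheduler alone")
    else:
        target = "noop" if "noop" in available_schedulers(dev) else "none"
        set_scheduler(dev, target)
        print(f"{dev}: scheduler set to {target}")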


> On 5.11.2018, at 23:39, Jack wrote:
> 
> We simply use the "noop" scheduler on our NAND-based Ceph cluster.
> [...]



Re: [ceph-users] io-schedulers

2018-11-05 Thread Sergey Malinin
It depends on the store backend. BlueStore has its own scheduler, which works 
properly only with CFQ, while FileStore configuration is limited to setting the 
OSD IO threads' scheduling class and priority, much like the 'ionice' utility does.
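
For FileStore, the knobs I mean are the OSD disk thread ioprio options, 
roughly like the snippet below in ceph.conf. The option names follow the 
Jewel/Luminous-era documentation, which notes they only take effect when the 
OSD's data disk uses CFQ; the values here are illustrative, not a 
recommendation:

[osd]
# Scheduling class for the OSD disk thread: idle, be (best effort) or rt.
# Per the docs, only honored when the underlying disk uses CFQ.
osd disk thread ioprio class = idle
# Priority within that class, 0 (highest) to 7 (lowest).
osd disk thread ioprio priority = 7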


> On 5.11.2018, at 21:45, Bastiaan Visser wrote:
> 
> There are lots of rumors around about the benefit of changing io-schedulers 
> for OSD disks.
> [...]



Re: [ceph-users] io-schedulers

2018-11-05 Thread Jack
We simply use the "noop" scheduler on our NAND-based Ceph cluster.
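
To make that persist across reboots, a udev rule along these lines is one 
common approach (the path and match pattern are illustrative; on blk-mq 
kernels the scheduler is named "none" instead of "noop"):

# /etc/udev/rules.d/60-io-scheduler.rules (illustrative path)
# Set "noop" on non-rotational sd* devices at add/change time.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"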


On 11/05/2018 09:33 PM, solarflow99 wrote:
> I'm interested to know about this too.
> [...]



Re: [ceph-users] io-schedulers

2018-11-05 Thread solarflow99
I'm interested to know about this too.


On Mon, Nov 5, 2018 at 10:45 AM Bastiaan Visser wrote:

>
> There are lots of rumors around about the benefit of changing
> io-schedulers for OSD disks.
> [...]


[ceph-users] io-schedulers

2018-11-05 Thread Bastiaan Visser


There are lots of rumors around about the benefit of changing io-schedulers for 
OSD disks.
Some benchmarks can even be found, but they are all more than a few years old. 
Since Ceph is moving forward at quite a pace, I am wondering what the common 
practice is for the io-scheduler on OSDs.

And since blk-mq is around these days, are the multi-queue schedulers already 
being used in production clusters?
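
For context, which schedulers (single-queue or multi-queue) a device offers 
can be listed from sysfs; a small Python sketch, where names like 
"mq-deadline", "kyber", "bfq" or "none" indicate blk-mq:

from pathlib import Path

# Print each block device's offered schedulers; the bracketed entry is active.
for dev in sorted(Path("/sys/block").iterdir()):
    sched = dev / "queue" / "scheduler"
    if sched.exists():
        print(f"{dev.name}: {sched.read_text().strip()}")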

Regards,
 Bastiaan