On Wed, Mar 15, 2017 at 2:41 AM, Alex Gorbachev <a...@iss-integration.com> 
wrote:
> On Mon, Mar 13, 2017 at 6:09 AM, Florian Haas <flor...@hastexo.com> wrote:
>> On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster <d...@vanderster.com> 
>> wrote:
>>>> I'm sorry, I may have worded that in a manner that's easy to
>>>> misunderstand. I generally *never* suggest that people use CFQ on
>>>> reasonably decent I/O hardware, and thus have never come across any
>>>> need to set this specific ceph.conf parameter.
>>>
>>> OTOH, cfq *does* help our hammer clusters. deadline's default
>>> behaviour is to delay writes up to 5 seconds if the disk is busy
>>> reading -- which it is, of course, while deep scrubbing. And deadline
>>> does not offer any sort of fairness between processes accessing the
>>> same disk (which is admittedly less of an issue in jewel). But back in
>>> hammer days it was nice to be able to make the disk threads only read
>>> while the disk was otherwise idle.
>>
>> Thanks for pointing out the default 5000-ms write deadline. We
>> frequently tune that down to 1500ms. Disabling front merges also
>> sometimes seems to help.
>>
>> For the archives: those settings are in
>> /sys/block/*/queue/iosched/{write_expire,front_merges} and can be
>> persisted on Debian/Ubuntu with sysfsutils.
>
> Hey Florian :).

Hi Lyosha! Long time no talk. :)
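
For anyone who wants to experiment before persisting anything: the
deadline knobs discussed above can be inspected and changed on the fly
through sysfs. A rough sketch only, assuming a rotational OSD disk at
/dev/sdb that is actually using the deadline scheduler -- substitute
your own devices:

# check which scheduler is active (the one in brackets)
cat /sys/block/sdb/queue/scheduler

# deadline defaults: write_expire is 5000 ms, front_merges is 1
cat /sys/block/sdb/queue/iosched/write_expire
cat /sys/block/sdb/queue/iosched/front_merges

# tighten the write deadline and disable front merges
# (takes effect immediately, but is lost on reboot)
echo 1500 > /sys/block/sdb/queue/iosched/write_expire
echo 0 > /sys/block/sdb/queue/iosched/front_merges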

> I wrote a quick udev rule to do this on Ubuntu; here it is for others'
> reference. I also saw earlier a recommendation to increase
> read_ahead_kb to 4096 for slower spinning disks.
>
> root@croc1:/etc/udev/rules.d# cat 99-storcium-hdd.rules
>
> # Set the write deadline to 1500 ms, disable front merges, and increase read_ahead_kb to 4096
> ACTION=="add|change", KERNEL=="sd*", SUBSYSTEM=="block", ATTR{queue/iosched/write_expire}="1500", ATTR{queue/iosched/front_merges}="0", ATTR{queue/read_ahead_kb}="4096"

Sure, that's an entirely viable alternative to using sysfsutils.
Indeed, rather a more elegant and contemporary one. :)
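
For completeness, the sysfsutils flavour of the same thing is a few
lines in /etc/sysfs.conf, roughly like the sketch below. As far as I
remember, sysfs.conf wants one entry per device (no sd* globbing), so
list your actual OSD disks -- sdb here is only a placeholder:

# /etc/sysfs.conf -- applied by the sysfsutils init script at boot
block/sdb/queue/iosched/write_expire = 1500
block/sdb/queue/iosched/front_merges = 0
block/sdb/queue/read_ahead_kb = 4096

Whichever route you take, a quick cat of the files under
/sys/block/<disk>/queue/ after a reboot -- or, for the udev variant,
after "udevadm trigger --subsystem-match=block --action=change" --
confirms the values actually stuck.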

Cheers,
Florian