[ https://issues.apache.org/jira/browse/CASSANDRA-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14648676#comment-14648676 ]

Ariel Weisberg commented on CASSANDRA-9946:
-------------------------------------------

The IO scheduler is set per block device, so it can also vary across devices. 
If you wanted to know, you could grab the setting from the kernel (read /sys) 
and then correlate it with the mounted filesystem that contains the directory 
you are writing to.
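For example, a best-effort probe could look something like the sketch below. 
This is illustrative only: it assumes plain /dev/sdXN-style device names and 
does not handle device-mapper/LVM or NVMe partition naming.

import java.io.IOException;
import java.nio.file.*;

public final class IoSchedulerProbe
{
    /** Best-effort lookup of the I/O scheduler for the device backing a directory. */
    public static String schedulerFor(Path dir) throws IOException
    {
        Path real = dir.toRealPath();
        String device = null;
        int bestLen = -1;
        // Find the longest mount point that is a prefix of the directory.
        for (String line : Files.readAllLines(Paths.get("/proc/self/mounts")))
        {
            String[] f = line.split(" ");
            if (f.length < 2 || !f[0].startsWith("/dev/"))
                continue;
            if (real.startsWith(f[1]) && f[1].length() > bestLen)
            {
                bestLen = f[1].length();
                device = f[0].substring("/dev/".length());
            }
        }
        if (device == null)
            return null;
        // Strip a trailing partition number (e.g. sda1 -> sda); naive for nvme devices.
        String disk = device.replaceAll("\\d+$", "");
        Path sched = Paths.get("/sys/block", disk, "queue", "scheduler");
        // The active scheduler is bracketed, e.g. "noop deadline [cfq]".
        return Files.exists(sched) ? Files.readAllLines(sched).get(0) : null;
    }
}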

CFQ is not the most popular choice for databases. The standard recommendation 
is deadline or noop. Disk controllers and SSDs do their own scheduling, so the 
only useful things the IO scheduler can do are balance reads against writes 
and maybe respect latency for reads. I am out of my league here because I have 
never worked on a workload with a random read component, and that is where 
things get hard: unlike writes, you can't buffer a random read, so reads are 
latency sensitive.

We definitely should have a recommendation for which IO scheduler people 
should use and then do our benchmarking based on that.

> use ioprio_set on compaction threads by default instead of manually throttling
> ------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-9946
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9946
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Ariel Weisberg
>             Fix For: 3.x
>
>
> Compaction throttling works as designed, but it has two drawbacks:
> * it requires manual tuning to choose the "right" value for a given machine
> * it does not allow compaction to "burst" above its limit if there is 
> additional i/o capacity available while there are less application requests 
> to serve
> Using ioprio_set instead solves both of these problems.
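
For reference, a minimal sketch of what calling ioprio_set from a compaction 
thread could look like via JNA. This is an illustration, not the patch: the 
class and method names are hypothetical, the syscall number is x86_64-specific, 
and the constants come from linux/ioprio.h.

import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical helper; names here are illustrative, not from the patch.
public final class CompactionIoPriority
{
    private interface CLib extends Library
    {
        CLib INSTANCE = Native.load("c", CLib.class);
        int syscall(int number, Object... args);
    }

    private static final int SYS_IOPRIO_SET = 251;    // x86_64-specific syscall number
    private static final int IOPRIO_WHO_PROCESS = 1;  // target is a thread/process
    private static final int IOPRIO_CLASS_IDLE = 3;   // serviced only when the disk is otherwise idle
    private static final int IOPRIO_CLASS_SHIFT = 13;

    /** Mark the calling thread's I/O as idle class. */
    public static int setIdlePriority()
    {
        int ioprio = IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT;
        // who == 0 selects the calling thread for IOPRIO_WHO_PROCESS
        return CLib.INSTANCE.syscall(SYS_IOPRIO_SET, IOPRIO_WHO_PROCESS, 0, ioprio);
    }
}

Worth noting the tension with the scheduler discussion above: I/O priority 
classes are only honored by CFQ, so the ioprio_set approach presumes exactly 
the scheduler we would not otherwise recommend.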


