Oh, I should have added: my compression-settings comment only applies to read-heavy 
workloads. Reading 64 KB off disk in order to return a handful of bytes is wasteful 
by orders of magnitude, but it doesn’t really cause any problems on write-heavy 
workloads.
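To put a rough number on that waste, here is the arithmetic for a single small read served from a 64 KB compression chunk (the 100-byte row size is purely illustrative):

```shell
# Illustrative only: decompressing a 64 KB chunk to return a ~100-byte
# row touches roughly 650x more data than the query actually needs.
chunk_bytes=$((64 * 1024))
row_bytes=100
echo $((chunk_bytes / row_bytes))   # prints 655
```

On a read-heavy table the chunk size can be lowered with a schema change along the lines of `ALTER TABLE ks.tbl WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};` (keyspace and table names are hypothetical).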

> On Jan 5, 2018, at 5:48 PM, Jon Haddad <j...@jonhaddad.com> wrote:
> Generally speaking, disable readahead.  After that, it's very likely the issue 
> isn’t in the disk settings but is actually in your Cassandra config or the data 
> model.  How are you measuring things?  Are you saturating your disks?  What 
> resource is your bottleneck?
> *Every* single time I’ve handled a question like this, without exception, it 
> ends up being a mix of incorrect compression settings (use 4K at most), some 
> crazy readahead setting like 1MB, and terrible JVM settings that are the bulk 
> of the problem.  
> Without knowing how you are testing things, or seeing *any* metrics whatsoever, 
> whether from C* or the OS, it’s going to be hard to help you out.
> Jon
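For context on the "crazy readahead setting like 1MB" above: `blockdev` measures readahead in 512-byte sectors, so a 1 MB readahead shows up as 2048 sectors. A small sketch of the conversion, plus the check/set commands (device name is hypothetical, and setting it requires root):

```shell
# blockdev readahead is counted in 512-byte sectors, so the 1 MB
# readahead mentioned above corresponds to 2048 sectors:
sectors=2048
echo $((sectors * 512 / 1024))   # prints the readahead in KB: 1024

# Checking and reducing it on a hypothetical NVMe device (needs root):
#   blockdev --getra /dev/nvme0n1
#   blockdev --setra 8 /dev/nvme0n1   # 8 sectors = 4 KB
```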
>> On Jan 5, 2018, at 5:41 PM, Justin Sanciangco <jsancian...@blizzard.com> wrote:
>> Hello,
>> I am currently benchmarking NVMe SSDs with Cassandra and am getting very bad 
>> performance when my workload exceeds the memory size. What mount settings 
>> for NVMe should be used? Right now the SSD is formatted as XFS and uses the 
>> noop scheduler. Are there any additional mount options that should be used? Any 
>> specific kernel parameters that should be set in order to make the best use of 
>> the PCIe NVMe SSD? Your insight would be well appreciated.
>> Thank you,
>> Justin Sanciangco
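As a sketch of the mount side of the question, the option most commonly recommended for XFS data volumes under Cassandra is `noatime` (it avoids an access-time metadata update on every read); device and mountpoint below are hypothetical:

```shell
# Hypothetical device/mountpoint; noatime skips access-time writes on reads.
mount -o noatime /dev/nvme0n1 /var/lib/cassandra

# Note: on modern blk-mq kernels, NVMe devices expose the scheduler "none"
# rather than "noop"; check what the kernel actually offers:
cat /sys/block/nvme0n1/queue/scheduler
```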
