You should run a benchmark with cassandra-stress to find the sweet spot. With 
NVMe I would guess you can start with a high value, say 128.
Please let us know the results of your findings; it would be interesting to know 
whether we can go crazy with such hardware :-)
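A minimal sketch of such a sweep, assuming one candidate concurrent_reads value per run (you would edit cassandra.yaml and restart the node between runs; the thread counts, operation counts, and log file names below are illustrative, not recommendations). DRY_RUN=1 only prints the commands so the loop is harmless to paste:

```shell
#!/bin/sh
# Sweep candidate concurrent_reads values; for each one, the idea is to
# set it in cassandra.yaml, restart Cassandra, then measure read throughput.
# DRY_RUN=1 prints the cassandra-stress command instead of executing it.
DRY_RUN=1
for cr in 32 64 96 128; do
  cmd="cassandra-stress read n=1000000 -rate threads=200 -log file=reads_cr${cr}.log"
  if [ "$DRY_RUN" = "1" ]; then
    echo "concurrent_reads=$cr -> $cmd"
  else
    $cmd
  fi
done
```

Compare the op rates and latencies in the resulting logs; the sweet spot is where throughput stops improving before latency degrades.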

    On Tuesday, 20 September 2016 at 12:11, Thomas Julian <thomasjul...@zoho.com> 
wrote:


We are using Cassandra 2.1.13, with each node having an NVMe disk (total 
capacity 1.2TB, allotted capacity 880GB). We would like to increase the default 
value of 32 for the concurrent_reads parameter, but the documentation says:

"(Default: 32) For workloads with more data than can fit in memory, the 
bottleneck is reads fetching data from disk. Setting to (16 × number_of_drives) 
allows operations to queue low enough in the stack so that the OS and drives 
can reorder them. The default setting applies to both logical volume managed 
(LVM) and RAID drives."


Given this hardware specification, what would be the optimal value for 
concurrent_reads?
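For reference, the parameter in question lives in cassandra.yaml; a fragment assuming the benchmark eventually points at 128 (a hypothetical value, not a recommendation):

```yaml
# cassandra.yaml (hypothetical value; benchmark before committing to it)
# Default is 32. The docs' 16 x number_of_drives rule gives 16 for a single
# drive, but NVMe queues are deep, so higher values are worth testing.
concurrent_reads: 128
```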

Best Regards,

