As I mentioned in today's meeting, we have a first-pass set of numbers from
Infernalis.  The hardware, configuration (including ceph.conf), and clients
are the same as for the previous data in this presentation:
http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds

The comparison data is here: 
https://www.docdroid.net/X0kJcIp/quick-hammer-vs-infernalis-nvme-comparison.pdf.html

Reads are a bit slower than we measured before at higher queue depths, but
roughly the same up to 1M IOPS.  Writes look a lot better!  The long-tail
latency numbers (not graphed, in the backup slides) are also much lower for
writes and mixed workloads, down to roughly 1/3rd to 1/8th of the previous
values.  We are now doing analysis and building tools that focus on
80th/90th/95th/99th percentile latency across workloads at various queue
depths, so we'll see if we can get any more insight.
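
In case it helps anyone doing similar digging, here's a rough sketch of the
kind of percentile report we mean.  The input format is hypothetical (a CSV
of per-I/O samples with workload, queue_depth, and latency_ms columns);
adapt it to whatever your benchmark actually emits:

#!/usr/bin/env python3
# Rough sketch: per-workload tail-latency percentile report.
# Assumes a CSV of per-I/O latency samples with hypothetical columns
# "workload,queue_depth,latency_ms" -- adjust for your own tool's output.
import csv
import sys
from collections import defaultdict

PERCENTILES = (80, 90, 95, 99)

def percentile(sorted_samples, pct):
    """Nearest-rank percentile of an already-sorted list."""
    if not sorted_samples:
        return float('nan')
    rank = max(0, int(round(pct / 100.0 * len(sorted_samples))) - 1)
    return sorted_samples[rank]

def main(path):
    # Group latency samples by (workload, queue depth).
    buckets = defaultdict(list)
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            key = (row['workload'], int(row['queue_depth']))
            buckets[key].append(float(row['latency_ms']))

    print('workload      QD   ' + '  '.join('p%d' % p for p in PERCENTILES))
    for (workload, qd), samples in sorted(buckets.items()):
        samples.sort()
        cols = '  '.join('%7.2f' % percentile(samples, p)
                         for p in PERCENTILES)
        print('%-12s %4d  %s (ms, n=%d)' % (workload, qd, cols, len(samples)))

if __name__ == '__main__':
    main(sys.argv[1])

Nearest-rank is crude, but it's good enough for spotting tail regressions
between runs.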

At some point during the upgrade we found that SELinux was set to
'enforcing', which originally gave us much lower performance measurements.
We have comparison data for that if anyone is interested.
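
For anyone who wants to rule this out on their own benchmark nodes, a
minimal check might look like the following (it reads the standard
selinuxfs enforce flag and falls back to the getenforce binary; both are
stock Linux interfaces, nothing Ceph-specific):

#!/usr/bin/env python3
# Quick check of the SELinux enforcement mode before benchmarking.
# /sys/fs/selinux/enforce contains "1" (enforcing) or "0" (permissive);
# fall back to getenforce(8) if selinuxfs isn't mounted there.
import subprocess

def selinux_mode():
    try:
        with open('/sys/fs/selinux/enforce') as f:
            return 'enforcing' if f.read().strip() == '1' else 'permissive'
    except OSError:
        pass
    try:
        out = subprocess.check_output(['getenforce'], text=True)
        return out.strip().lower()
    except (OSError, subprocess.CalledProcessError):
        return 'disabled or unavailable'

if __name__ == '__main__':
    print('SELinux mode:', selinux_mode())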

If you're in the U.S., have a great holiday!

Thanks,

Stephen

