Mark, many thanks for your effort and the Ceph performance tests. This puts 
things in perspective. 

Looking at the results, I was a bit concerned that the IOPS performance of 
neither release comes anywhere close to the capabilities of the underlying 
SSD device. Even the fastest PCIe SSDs managed to achieve only about 1/6th of 
the raw device's IOPS. 

I guess a great deal more optimisation remains to be done in the upcoming LTS 
releases to bring the IOPS rate closer to raw device performance. 

I have done some testing in the past and noticed that, despite the server 
having plenty of unused resources (roughly 40-50% CPU idle and 60-70% SSD 
idle), Ceph would not perform well when used with SSDs. I was testing Firefly 
with auth enabled, and my IOPS rate was around the 3K mark. Something is 
holding Ceph back from performing well with SSDs ((( 
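
One way to sanity-check the gap is to benchmark the raw device and RADOS with 
comparable settings. A rough sketch of the two measurements (the device path, 
pool name and queue depth below are placeholders, not values from Mark's 
tests): 

  # 4k random writes against the raw device (destroys data on /dev/sdX!)
  fio --name=raw-randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

  # comparable 4k writes through RADOS, against a throwaway test pool
  rados bench -p testpool 60 write -b 4096 -t 32

Comparing the two IOPS figures on the same hardware gives the kind of 
raw-vs-Ceph ratio discussed above. 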

Andrei 

----- Original Message -----

> From: "Mark Nelson" <[email protected]>
> To: "ceph-devel" <[email protected]>
> Cc: [email protected]
> Sent: Tuesday, 17 February, 2015 5:37:01 PM
> Subject: [ceph-users] Ceph Dumpling/Firefly/Hammer SSD/Memstore
> performance comparison

> Hi All,

> I wrote up a short document describing some tests I ran recently to look
> at how SSD backed OSD performance has changed across our LTS releases.
> This is just looking at RADOS performance and not RBD or RGW. It also
> doesn't offer any real explanations regarding the results. It's just a
> first high level step toward understanding some of the behaviors folks
> on the mailing list have reported over the last couple of releases. I
> hope you find it useful.

> Mark

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
