Hello,

On Thu, 2 Oct 2014 13:48:27 -0400 (EDT) Adam Boyhan wrote:

> Hey everyone, loving Ceph so far! 
> 
> We are looking to roll out a Ceph cluster with all SSDs. Our
> application is around 30% writes and 70% reads, random IO. The plan is
> to start with roughly 8 servers with 8 800GB Intel DC S3500s per
> server. I wanted to get some input on the use of the DC S3500. Seeing
> that we are primarily a read environment, I was thinking we could
> easily get away with the S3500 instead of the S3700, but I am unsure.
> Obviously the price point of the S3500 is very attractive, but if they
> start failing on us too soon, it might not be worth the savings. My
> largest concern is the journaling of Ceph, so maybe I could use the
> S3500s for the bulk of the data and utilize an S3700 for the
> journaling? 
> 
Essentially Mark touched on all the pertinent points in his answer: run
the numbers, and don't bother with extra journal devices.

And not directed at you in particular, but searching the ML archives can
be quite helpful, as part of this was discussed as recently as
yesterday; see the "SSD MTBF" thread.

As another example, I recently purchased journal SSDs and did run all the
numbers. A DC S3500 240GB was going to survive 5 years according to my
calculations, but a DC S3700 100GB was still going to be fast enough
while being CHEAPER to purchase, and of course a no-brainer when it
comes to TBW/$ and a good night's sleep. ^^
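
For anyone who wants to repeat that exercise, here is a rough Python
sketch of the arithmetic. The daily write volume and the rounded TBW
ratings in it are assumptions for illustration, not my actual figures;
measure your own write rate and take the endurance numbers from the
drives' data sheets.

# Back-of-the-envelope journal SSD endurance estimate.
# All inputs below are illustrative assumptions, not measurements.

# Assumed client write volume hitting the OSDs behind one journal SSD.
journal_writes_gb_per_day = 70  # assumption for illustration

# Approximate total-bytes-written (TBW) endurance ratings, assumed from
# the data sheets; check the exact model you intend to buy.
endurance_tb_written = {
    "Intel DC S3500 240GB": 140,   # roughly 140 TB written
    "Intel DC S3700 100GB": 1800,  # roughly 1.8 PB written
}

def years_until_worn_out(tbw, gb_per_day):
    """Years until the rated write endurance is exhausted."""
    return tbw * 1000 / gb_per_day / 365

for drive, tbw in endurance_tb_written.items():
    years = years_until_worn_out(tbw, journal_writes_gb_per_day)
    print(f"{drive}: ~{years:.1f} years at {journal_writes_gb_per_day} GB/day")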

I'd venture that a cluster made of 2 really fast (and thus expensive)
nodes for a cache pool and 6 "classic" Ceph storage nodes for permanent
storage might be good enough for your use case with a future version of
Ceph. Unfortunately, cache pools aren't quite there yet in the current
release.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
[email protected]           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
