Hi Mark,

Would you see any benefit in using an Intel P3700 NVMe drive as a journal
for, say, 6x Intel S3700 OSDs?



On Fri, Oct 3, 2014 at 6:58 AM, Mark Nelson <[email protected]> wrote:

> On 10/02/2014 12:48 PM, Adam Boyhan wrote:
>
>> Hey everyone, loving Ceph so far!
>>
>
> Hi!
>
>
>
>> We are looking to roll out a Ceph cluster with all SSDs.  Our
>> application is around 30% writes and 70% reads, random IO.  The plan is
>> to start with roughly 8 servers with 8 800GB Intel DC S3500s per
>> server.  I wanted to get some input on the use of the DC S3500.  Seeing
>> that we are primarily a read environment, I was thinking we could easily
>> get away with the S3500 instead of the S3700, but I am unsure.  Obviously
>> the price point of the S3500 is very attractive, but if they start
>> failing on us too soon, it might not be worth the savings.  My largest
>> concern is the journaling of Ceph, so maybe I could use the S3500s for
>> the bulk of the data and utilize an S3700 for the journaling?
>>
>
> I'd suggest that if you are using SSDs for OSDs anyway, you are better off
> just putting the journal on the same SSD, so you don't increase the number
> of devices whose failure can take down an OSD.  As for the S3500 vs the
> S3700, it's all a numbers game.  Figure out how much data you expect to
> write, how many drives you have, what the expected write endurance of each
> drive is, replication, journaling, etc., and figure out what you need! :)
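That numbers game can be sketched as a quick back-of-the-envelope check. All the figures below (host writes per day, replica count, the DWPD rating) are illustrative assumptions, not measurements; plug in your own workload and the drive's datasheet endurance:

```python
# Back-of-the-envelope SSD endurance check -- all inputs are assumptions.
host_writes_per_day_tb = 2.0   # assumed cluster-wide host writes, TB/day
replication = 3                # assumed replica count
journal_multiplier = 2         # journal write + data write hit the same SSD
num_drives = 64                # 8 servers x 8 SSDs (from the original plan)
drive_capacity_tb = 0.8        # 800 GB DC S3500
endurance_dwpd = 0.3           # assumed drive-writes-per-day rating

# Total device-level writes = host writes amplified by replication and
# by the journal double-write, then spread across all drives.
device_writes_per_day_tb = host_writes_per_day_tb * replication * journal_multiplier
writes_per_drive_tb = device_writes_per_day_tb / num_drives
dwpd_needed = writes_per_drive_tb / drive_capacity_tb

print(f"per-drive writes: {writes_per_drive_tb:.4f} TB/day")
print(f"DWPD needed: {dwpd_needed:.3f} vs rated {endurance_dwpd}")
```

If the DWPD you need comes out comfortably below the drive's rating, the cheaper drive survives; if it is close or above, the higher-endurance part pays for itself.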
>
> The S3500 may be just fine, but it depends entirely on your write workload.
>
>
>> I appreciate the input!
>>
>> Thanks All!
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>