Hi,

We started with consumer-grade SSDs. In normal operation this was no
problem, but it caused terrible performance during recovery and other
platform adjustments that involved data movement. We finally decided to
replace everything with SM863 disks, which still perform great after a
few years.
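
Where consumer disks typically fall over is on the small synchronous
writes Ceph issues for its journal/WAL, even when they look fast in
ordinary benchmarks. If you want a rough idea of how a disk behaves
before buying a rack full of them, a tiny O_DSYNC write test already
tells a lot. Just a minimal sketch in Python (the path, block size and
duration are placeholders; only point it at a scratch file or an empty
device):

#!/usr/bin/env python3
# Rough synchronous-write test: every write must reach stable storage
# before the call returns, which is the pattern Ceph's journal/WAL
# produces. TARGET is a placeholder; use a scratch file on the SSD
# under test, never a device that holds data.
import os
import time

TARGET = "/mnt/testssd/dsync-test.bin"   # placeholder path
BLOCK = b"\0" * 4096                     # 4 KiB, roughly journal-sized
SECONDS = 10

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
ops = 0
start = time.time()
try:
    while time.time() - start < SECONDS:
        os.write(fd, BLOCK)              # blocks until the data is durable
        os.lseek(fd, 0, os.SEEK_SET)     # rewrite the same block
        ops += 1
finally:
    os.close(fd)

print(f"{ops / (time.time() - start):.0f} sync 4 KiB write IOPS")

Enterprise SSDs with power-loss protection typically sustain thousands
to tens of thousands of these; many consumer models drop to a few
hundred or less.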

This was still in the Firefly era; a lot of improvements were made
between Firefly and the Jewel+ releases of Ceph around data recovery
and its impact on performance during those operations.

Still, I would be careful with consumer SSDs.
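
If you do run consumer SSDs, it is also worth keeping an eye on their
wear level, since endurance is where they differ most from drives like
the SM863. A rough sketch of how that could be scripted across hosts,
assuming smartmontools 7+ for the --json output (the wear attribute
name varies per vendor, so the match below may need adjusting):

#!/usr/bin/env python3
# Rough SSD wear check via smartctl's JSON output (needs root and
# smartmontools 7+). Vendor attribute names differ, e.g.
# Wear_Leveling_Count on Samsung or Media_Wearout_Indicator on Intel.
import json
import subprocess
import sys

dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"   # placeholder default
out = subprocess.run(["smartctl", "--json", "-A", dev],
                     capture_output=True, text=True)
data = json.loads(out.stdout)

nvme = data.get("nvme_smart_health_information_log")
if nvme:
    # NVMe reports endurance used as a plain percentage.
    print(f"{dev}: {nvme.get('percentage_used')}% of rated endurance used")
else:
    # SATA/SAS: scan the SMART attribute table for a wear-related entry.
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if "wear" in attr["name"].lower():
            print(f"{dev}: {attr['name']} normalized value {attr['value']}")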


regards,


mart



On 07/11/2018 02:49 PM, Satish Patel wrote:
> Prices go way up if I pick the Samsung SM863a for all data drives.
>
> We have many servers running on consumer-grade SSD drives and we have
> never noticed any performance issues or faults so far (but we have
> never used Ceph before).
>
> I thought that was the whole point of Ceph: to provide high
> availability if a drive goes down, plus parallel reads from multiple
> OSD nodes.
>
> Sent from my iPhone
>
> On Jul 11, 2018, at 6:57 AM, Paul Emmerich <paul.emmer...@croit.io> wrote:
>
>> Hi,
>>
>> we've no long-term data for the SM variant.
>> Performance is fine as far as we can tell, but the main difference
>> between these two models should be endurance.
>>
>>
>> Also, I forgot to mention that my experiences are only for the 1, 2,
>> and 4 TB variants. Smaller SSDs are often proportionally slower
>> (especially below 500GB).
>>
>> Paul
>>
>> Robert Stanford <rstanford8...@gmail.com>:
>>
>>> Paul -
>>>
>>>  That's extremely helpful, thanks.  I do have another cluster that
>>> uses Samsung SM863a just for journal (spinning disks for data).  Do
>>> you happen to have an opinion on those as well?
>>>
>>> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich
>>> <paul.emmer...@croit.io> wrote:
>>>
>>>     PM/SM863a are usually great disks and should be the default
>>>     go-to option; they outperform even the more expensive PM1633 in
>>>     our experience.
>>>     (But that really doesn't matter if it's for the full OSD and not
>>>     as a dedicated WAL/journal.)
>>>
>>>     We got a cluster with a few hundred SanDisk Ultra II disks
>>>     (discontinued, I believe) that was built on a budget.
>>>     Not the best disk, but great value. They have been running for
>>>     ~3 years now with very few failures and okayish overall
>>>     performance.
>>>
>>>     We also got a few clusters with a few hundred SanDisk Extreme
>>>     Pro, but we are not yet sure about their long-term durability,
>>>     as they are only ~9 months old (an average of ~1000 write IOPS
>>>     on each disk over that time).
>>>     Some of them report only 50-60% lifetime left.
>>>
>>>     For NVMe, the Intel NVMe 750 is still a great disk.
>>>
>>>     Be careful to get these exact models. Seemingly similar disks
>>>     can be completely bad; for example, the Samsung PM961 is just
>>>     unusable for Ceph in our experience.
>>>
>>>     Paul
>>>
>>>     2018-07-11 10:14 GMT+02:00 Wido den Hollander <w...@42on.com>:
>>>
>>>
>>>
>>>         On 07/11/2018 10:10 AM, Robert Stanford wrote:
>>>         >
>>>         >  In a recent thread the Samsung SM863a was recommended as
>>>         > a journal SSD.  Are there any recommendations for data
>>>         > SSDs, for people who want to use just SSDs in a new Ceph
>>>         > cluster?
>>>         >
>>>
>>>         Depends on what you are looking for: SATA, SAS3, or NVMe?
>>>
>>>         I have very good experience with these drives running
>>>         BlueStore, in SuperMicro machines:
>>>
>>>         - SATA: Samsung PM863a
>>>         - SATA: Intel S4500
>>>         - SAS: Samsung PM1633
>>>         - NVMe: Samsung PM963
>>>
>>>         Running WAL+DB+DATA with BlueStore on the same drives.
>>>
>>>         Wido
>>>
>>>         >  Thank you
>>>         >
>>>         >
>>>
>>>
>>>
>>>
>>>     -- 
>>>     Paul Emmerich
>>>
>>>     Looking for help with your Ceph cluster? Contact us at
>>>     https://croit.io
>>>
>>>     croit GmbH
>>>     Freseniusstr. 31h
>>>     81247 München
>>>     www.croit.io
>>>     Tel: +49 89 1896585 90
>>>
>>>

-- 
Mart van Santen
Greenhost
E: m...@greenhost.nl
T: +31 20 4890444
W: https://greenhost.nl

A PGP signature can be attached to this e-mail;
you need PGP software to verify it.
My public key is available in keyserver(s)

PGP Fingerprint: CA85 EB11 2B70 042D AF66  B29A 6437 01A1 10A3 D3A5


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
