I dunno, to me benchmark tests are only really useful for comparing different 
drives.


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul 
Emmerich
Sent: Monday, July 16, 2018 8:41 AM
To: Satish Patel
Cc: ceph-users
Subject: Re: [ceph-users] SSDs for data drives

This doesn't look like a good benchmark:

(from the blog post)

dd if=/dev/zero of=/mnt/rawdisk/data.bin bs=1G count=20 oflag=direct
1. It writes compressible data, which some SSDs might compress; you should use 
urandom instead.
2. That workload does not look anything like what Ceph will do to your disk.

If you want a quick estimate of an SSD in a worst-case scenario, run the usual 4k 
oflag=direct,dsync test (or better: fio).
A bad SSD will get < 1k IOPS, a good one > 10k.
But that doesn't test everything. In particular, performance might degrade as 
the disks fill up. Also, it's the absolute
worst case, i.e., a disk used for multiple journal/WAL devices.
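
For example, a quick version of that worst-case test could look like this 
(illustrative only; /dev/sdX is a placeholder for the device under test, and 
the write tests will destroy any data on it):

# generate incompressible test data first (~400 MB)
dd if=/dev/urandom of=/tmp/randdata.bin bs=4k count=100000

# single-threaded 4k direct+sync writes, the worst case for a journal/WAL device
dd if=/tmp/randdata.bin of=/dev/sdX bs=4k count=100000 oflag=direct,dsync

# roughly the same test with fio, which also reports latency percentiles
fio --name=ssd-test --filename=/dev/sdX --direct=1 --sync=1 --rw=write \
    --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting

The sync-write IOPS reported there is the number to compare against the 
< 1k / > 10k rule of thumb above.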


Paul

2018-07-16 10:09 GMT-04:00 Satish Patel <satish....@gmail.com>:
https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/

On Thu, Jul 12, 2018 at 3:37 AM, Adrian Saul <adrian.s...@tpgtelecom.com.au> wrote:
>
>
> We started our cluster with consumer (Samsung EVO) disks and the write
> performance was pitiful: they had periodic latency spikes (an average of
> 8ms, with much higher spikes) and just did not perform anywhere near what we
> were expecting.
>
>
>
> When replaced with SM863-based devices the difference was night and day.
> The DC-grade disks held a nearly constant low latency (consistently sub-ms), no
> spiking, and performance was massively better.  For a period I ran both
> disks in the cluster and was able to graph them side by side with the same
> workload.  This was not even a moderately loaded cluster, so I am glad we
> discovered this before we went full scale.
>
>
>
> So while you certainly can do cheap and cheerful and let the data
> availability be handled by Ceph, don’t expect the performance to keep up.
>
>
>
>
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Satish Patel
> Sent: Wednesday, 11 July 2018 10:50 PM
> To: Paul Emmerich <paul.emmer...@croit.io>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] SSDs for data drives
>
>
>
> Prices go way up if I pick the Samsung SM863a for all data drives.
>
>
>
> We have many servers running on consumer-grade SSD drives and we have never
> noticed any performance issues or faults so far (but we have never used Ceph before).
>
>
>
> I thought that was the whole point of Ceph: to provide high availability if a
> drive goes down, plus parallel reads from multiple OSD nodes.
>
>
>
> Sent from my iPhone
>
>
> On Jul 11, 2018, at 6:57 AM, Paul Emmerich <paul.emmer...@croit.io> wrote:
>
> Hi,
>
>
>
> we've no long-term data for the SM variant.
>
> Performance is fine as far as we can tell, but the main difference between
> these two models should be endurance.
>
>
>
>
>
> Also, I forgot to mention that my experiences are only for the 1, 2, and 4
> TB variants. Smaller SSDs are often proportionally slower (especially below
> 500GB).
>
>
>
> Paul
>
>
> Robert Stanford <rstanford8...@gmail.com>:
>
> Paul -
>
>
>
>  That's extremely helpful, thanks.  I do have another cluster that uses
> Samsung SM863a just for journal (spinning disks for data).  Do you happen to
> have an opinion on those as well?
>
>
>
> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich <paul.emmer...@croit.io>
> wrote:
>
> PM/SM863a are usually great disks and should be the default go-to option;
> they outperform even the more expensive PM1633 in our experience.
>
> (But that really doesn't matter if it's for the full OSD and not as a
> dedicated WAL/journal device.)
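>
> As a rough sketch (device names are placeholders), the two setups look
> roughly like this when creating OSDs with ceph-volume:
>
> # whole OSD on one SSD: data, RocksDB and WAL all on the same device
> ceph-volume lvm create --bluestore --data /dev/sdb
>
> # HDD for data, with a fast SSD partition as dedicated DB/WAL device
> ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1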
>
>
>
> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
> believe) that was built on a budget.
>
> Not the best disk, but great value. They have been running for ~3 years now
> with very few failures and okayish overall performance.
>
>
>
> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
> are not yet sure about their long-term durability, as they are only ~9 months
> old (an average of ~1000 write IOPS on each disk over that time).
>
> Some of them report only 50-60% lifetime left.
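>
> (A quick way to check that, assuming smartmontools is installed; the exact
> SMART attribute name varies by vendor, e.g. Wear_Leveling_Count on Samsung
> or Media_Wearout_Indicator on Intel. /dev/sdb is a placeholder.)
>
> # dump SMART data and pick out the wear/lifetime attributes
> smartctl -a /dev/sdb | grep -iE 'wear|life'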
>
>
>
> For NVMe, the Intel NVMe 750 is still a great disk.
>
>
>
> Be careful to get these exact models. Seemingly similar disks might be just
> completely bad; for example, the Samsung PM961 is just unusable for Ceph in
> our experience.
>
>
>
> Paul
>
>
>
> 2018-07-11 10:14 GMT+02:00 Wido den Hollander <w...@42on.com>:
>
>
>
> On 07/11/2018 10:10 AM, Robert Stanford wrote:
>>
>>  In a recent thread the Samsung SM863a was recommended as a journal
>> SSD.  Are there any recommendations for data SSDs, for people who want
>> to use just SSDs in a new Ceph cluster?
>>
>
> Depends on what you are looking for: SATA, SAS3 or NVMe?
>
> I have very good experiences with these drives running BlueStore in
> SuperMicro machines:
>
> - SATA: Samsung PM863a
> - SATA: Intel S4500
> - SAS: Samsung PM1633
> - NVMe: Samsung PM963
>
> Running WAL+DB+DATA with BlueStore on the same drives.
>
> Wido
>
>>  Thank you
>>
>>



--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
