Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Satish Patel
I just ran a test on a Samsung 850 Pro 500GB (how do I interpret the result of
the following output?)



[root@compute-01 tmp]# fio --filename=/dev/sda --direct=1 --sync=1
--rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
--group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B,
(T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=76.0MiB/s][r=0,w=19.7k
IOPS][eta 00m:00s]
journal-test: (groupid=0, jobs=1): err= 0: pid=6969: Mon Jul 16 14:21:27 2018
  write: IOPS=20.1k, BW=78.6MiB/s (82.5MB/s)(4719MiB/60001msec)
clat (usec): min=36, max=4525, avg=47.22, stdev=16.65
 lat (usec): min=36, max=4526, avg=47.57, stdev=16.69
clat percentiles (usec):
 |  1.00th=[   39],  5.00th=[   40], 10.00th=[   40], 20.00th=[   41],
 | 30.00th=[   43], 40.00th=[   48], 50.00th=[   49], 60.00th=[   50],
 | 70.00th=[   50], 80.00th=[   51], 90.00th=[   52], 95.00th=[   53],
 | 99.00th=[   62], 99.50th=[   65], 99.90th=[  108], 99.95th=[  363],
 | 99.99th=[  396]
   bw (  KiB/s): min=72152, max=96464, per=100.00%, avg=80581.45,
stdev=7032.18, samples=119
   iops: min=18038, max=24116, avg=20145.34, stdev=1758.05, samples=119
  lat (usec)   : 50=71.83%, 100=28.06%, 250=0.03%, 500=0.08%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 10=0.01%
  cpu  : usr=9.44%, sys=31.95%, ctx=1209952, majf=0, minf=78
  IO depths: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued rwt: total=0,1207979,0, short=0,0,0, dropped=0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=78.6MiB/s (82.5MB/s), 78.6MiB/s-78.6MiB/s
(82.5MB/s-82.5MB/s), io=4719MiB (4948MB), run=60001-60001msec

Disk stats (read/write):
  sda: ios=0/1205921, merge=0/29, ticks=0/41418, in_queue=40965, util=68.35%
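
A quick sanity check on those numbers (illustrative arithmetic): with bs=4k and
iodepth=1 every write is a single synced 4 KiB I/O, so IOPS and bandwidth have
to agree:

    20100 IOPS * 4096 B ~= 82.3 MB/s   (fio reports 82.5 MB/s, i.e. 78.6 MiB/s)

By the rule of thumb quoted below (< 1k sync-write IOPS is bad, > 10k is good),
~20k IOPS is a good result for this journal-style workload.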

On Mon, Jul 16, 2018 at 1:18 PM, Michael Kuriger  wrote:
> I dunno, to me benchmark tests are only really useful to compare different
> drives.
>
>
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Paul Emmerich
> Sent: Monday, July 16, 2018 8:41 AM
> To: Satish Patel
> Cc: ceph-users
>
>
> Subject: Re: [ceph-users] SSDs for data drives
>
>
>
> This doesn't look like a good benchmark:
>
> (from the blog post)
>
> dd if=/dev/zero of=/mnt/rawdisk/data.bin bs=1G count=20 oflag=direct
>
> 1. it writes compressible data, which some SSDs might compress; you should
> use urandom instead
>
> 2. that workload does not look like something Ceph will do to your disk,
> like not at all
>
> If you want a quick estimate of an SSD in a worst-case scenario: run the usual
> 4k oflag=direct,dsync test (or better: fio).
>
> A bad SSD will get < 1k IOPS, a good one > 10k
>
> But that doesn't test everything. In particular, performance might degrade
> as the disks fill up. Also, it's the absolute
>
> worst-case, i.e., a disk used for multiple journal/wal devices
>
>
>
>
>
> Paul
>
>
>
> 2018-07-16 10:09 GMT-04:00 Satish Patel :
>
> https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/
>
>
> On Thu, Jul 12, 2018 at 3:37 AM, Adrian Saul
>  wrote:
>>
>>
>> We started our cluster with consumer (Samsung EVO) disks and the write
>> performance was pitiful; they had periodic spikes in latency (an average of
>> 8ms, but much higher spikes) and just did not perform anywhere near where we
>> were expecting.
>>
>>
>>
>> When replaced with SM863 based devices the difference was night and day.
>> The DC grade disks held a nearly constant low latency (constantly sub-ms), no
>> spiking and performance was massively better.  For a period I ran both
>> disks in the cluster and was able to graph them side by side with the same
>> workload.  This was not even a moderately loaded cluster so I am glad we
>> discovered this before we went full scale.
>>
>>
>>
>> So while you certainly can do cheap and cheerful and let the data
>> availability be handled by Ceph, don’t expect the performance to keep up.
>>
>>
>>
>>
>>
>>
>>
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Satish Patel
>> Sent: Wednesday, 11 July 2018 10:50 PM
>> To: Paul Emmerich 
>> Cc: ceph-users 
>> Subject: Re: [ceph-users] SSDs for data drives
>>
>>
>>
>> Prices go way up if I pick Samsung SM863a for all data drives.
>>
>>

Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Michael Kuriger
I dunno, to me benchmark tests are only really useful to compare different 
drives.


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul 
Emmerich
Sent: Monday, July 16, 2018 8:41 AM
To: Satish Patel
Cc: ceph-users
Subject: Re: [ceph-users] SSDs for data drives

This doesn't look like a good benchmark:

(from the blog post)

dd if=/dev/zero of=/mnt/rawdisk/data.bin bs=1G count=20 oflag=direct
1. it writes compressible data, which some SSDs might compress; you should use
urandom instead
2. that workload does not look like something Ceph will do to your disk, like 
not at all
If you want a quick estimate of an SSD in a worst-case scenario: run the usual
4k oflag=direct,dsync test (or better: fio).
A bad SSD will get < 1k IOPS, a good one > 10k
But that doesn't test everything. In particular, performance might degrade as 
the disks fill up. Also, it's the absolute
worst-case, i.e., a disk used for multiple journal/wal devices


Paul

2018-07-16 10:09 GMT-04:00 Satish Patel :
https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/

On Thu, Jul 12, 2018 at 3:37 AM, Adrian Saul
 wrote:
>
>
> We started our cluster with consumer (Samsung EVO) disks and the write
> performance was pitiful; they had periodic spikes in latency (an average of
> 8ms, but much higher spikes) and just did not perform anywhere near where we
> were expecting.
>
>
>
> When replaced with SM863 based devices the difference was night and day.
> The DC grade disks held a nearly constant low latency (constantly sub-ms), no
> spiking and performance was massively better.   For a period I ran both
> disks in the cluster and was able to graph them side by side with the same
> workload.  This was not even a moderately loaded cluster so I am glad we
> discovered this before we went full scale.
>
>
>
> So while you certainly can do cheap and cheerful and let the data
> availability be handled by Ceph, don’t expect the performance to keep up.
>
>
>
>
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Satish Patel
> Sent: Wednesday, 11 July 2018 10:50 PM
> To: Paul Emmerich 
> Cc: ceph-users 
> Subject: Re: [ceph-users] SSDs for data drives
>
>
>
> Prices go way up if I pick Samsung SM863a for all data drives.
>
>
>
> We have many servers running on consumer grade SSD drives and we never
> noticed any performance issues or faults so far (but we never used Ceph before)
>
>
>
> I thought that is the whole point of Ceph: to provide high availability if a
> drive goes down, plus parallel reads from multiple OSD nodes
>
>
>
> Sent from my iPhone
>
>
> On Jul 11, 2018, at 6:57 AM, Paul Emmerich  wrote:
>
> Hi,
>
>
>
> we've no long-term data for the SM variant.
>
> Performance is fine as far as we can tell, but the main difference between
> these two models should be endurance.
>
>
>
>
>
> Also, I forgot to mention that my experiences are only for the 1, 2, and 4
> TB variants. Smaller SSDs are often proportionally slower (especially below
> 500GB).
>
>
>
> Paul
>
>
> Robert Stanford :
>
> Paul -
>
>
>
>  That's extremely helpful, thanks.  I do have another cluster that uses
> Samsung SM863a just for journal (spinning disks for data).  Do you happen to
> have an opinion on those as well?
>
>
>
> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich 
> wrote:
>
> PM/SM863a are usually great disks and should be the default go-to option,
> they outperform
>
> even the more expensive PM1633 in our experience.
>
> (But that really doesn't matter if it's for the full OSD and not as
> dedicated WAL/journal)
>
>
>
> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
> believe) that was built on a budget.
>
> Not the best disk but great value. They have been running for ~3 years now
> with very few failures and
>
> okayish overall performance.
>
>
>
> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
> are not yet sure about their
>
> long-time durability as they are only ~9 months old (average of ~1000 write
> IOPS on each disk over that time).

Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Paul Emmerich
This doesn't look like a good benchmark:

(from the blog post)

dd if=/dev/zero of=/mnt/rawdisk/data.bin bs=1G count=20 oflag=direct

1. it writes compressible data, which some SSDs might compress; you should
use urandom instead
2. that workload does not look like something Ceph will do to your disk,
like not at all

If you want a quick estimate of an SSD in a worst-case scenario: run the
usual 4k oflag=direct,dsync test (or better: fio).
A bad SSD will get < 1k IOPS, a good one > 10k
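
The usual test looks something like this (a sketch only; /dev/sdX is a
placeholder and the test overwrites data on the target device):

dd if=/dev/urandom of=/dev/sdX bs=4k count=100000 oflag=direct,dsync

or, equivalently with fio:

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test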

But that doesn't test everything. In particular, performance might degrade
as the disks fill up. Also, it's the absolute
worst-case, i.e., a disk used for multiple journal/wal devices



Paul

2018-07-16 10:09 GMT-04:00 Satish Patel :

> https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/
>
> On Thu, Jul 12, 2018 at 3:37 AM, Adrian Saul
>  wrote:
> >
> >
> > We started our cluster with consumer (Samsung EVO) disks and the write
> > performance was pitiful; they had periodic spikes in latency (an average of
> > 8ms, but much higher spikes) and just did not perform anywhere near
> where we
> > were expecting.
> >
> >
> >
> > When replaced with SM863 based devices the difference was night and day.
> > The DC grade disks held a nearly constant low latency (constantly
> sub-ms), no
> > spiking and performance was massively better.   For a period I ran both
> > disks in the cluster and was able to graph them side by side with the
> same
> > workload.  This was not even a moderately loaded cluster so I am glad we
> > discovered this before we went full scale.
> >
> >
> >
> > So while you certainly can do cheap and cheerful and let the data
> > availability be handled by Ceph, don’t expect the performance to keep up.
> >
> >
> >
> >
> >
> >
> >
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Satish Patel
> > Sent: Wednesday, 11 July 2018 10:50 PM
> > To: Paul Emmerich 
> > Cc: ceph-users 
> > Subject: Re: [ceph-users] SSDs for data drives
> >
> >
> >
> > Prices go way up if I pick Samsung SM863a for all data drives.
> >
> >
> >
> > We have many servers running on consumer grade SSD drives and we never
> > noticed any performance issues or faults so far (but we never used Ceph
> before)
> >
> >
> >
> > I thought that is the whole point of Ceph: to provide high availability if
> > a drive goes down, plus parallel reads from multiple OSD nodes
> >
> >
> >
> > Sent from my iPhone
> >
> >
> > On Jul 11, 2018, at 6:57 AM, Paul Emmerich 
> wrote:
> >
> > Hi,
> >
> >
> >
> > we've no long-term data for the SM variant.
> >
> > Performance is fine as far as we can tell, but the main difference
> between
> > these two models should be endurance.
> >
> >
> >
> >
> >
> > Also, I forgot to mention that my experiences are only for the 1, 2, and
> 4
> > TB variants. Smaller SSDs are often proportionally slower (especially
> below
> > 500GB).
> >
> >
> >
> > Paul
> >
> >
> > Robert Stanford :
> >
> > Paul -
> >
> >
> >
> >  That's extremely helpful, thanks.  I do have another cluster that uses
> > Samsung SM863a just for journal (spinning disks for data).  Do you
> happen to
> > have an opinion on those as well?
> >
> >
> >
> > On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich 
> > wrote:
> >
> > PM/SM863a are usually great disks and should be the default go-to option,
> > they outperform
> >
> > even the more expensive PM1633 in our experience.
> >
> > (But that really doesn't matter if it's for the full OSD and not as
> > dedicated WAL/journal)
> >
> >
> >
> > We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
> > believe) that was built on a budget.
> >
> > Not the best disk but great value. They have been running for ~3 years
> now
> > with very few failures and
> >
> > okayish overall performance.
> >
> >
> >
> > We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
> > are not yet sure about their
> >
> > long-time durability as they are only ~9 months old (average of ~1000
> write
> > IOPS on each disk over that time).
> >
> > Some of them report only 50-60% lifetime left.
> >
> >
> >
> > For NVMe, the Intel NVMe 750 is still a great disk
> >
> >
> >
> > 

Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Satish Patel
https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/

On Thu, Jul 12, 2018 at 3:37 AM, Adrian Saul
 wrote:
>
>
> We started our cluster with consumer (Samsung EVO) disks and the write
> performance was pitiful; they had periodic spikes in latency (an average of
> 8ms, but much higher spikes) and just did not perform anywhere near where we
> were expecting.
>
>
>
> When replaced with SM863 based devices the difference was night and day.
> The DC grade disks held a nearly constant low latency (constantly sub-ms), no
> spiking and performance was massively better.   For a period I ran both
> disks in the cluster and was able to graph them side by side with the same
> workload.  This was not even a moderately loaded cluster so I am glad we
> discovered this before we went full scale.
>
>
>
> So while you certainly can do cheap and cheerful and let the data
> availability be handled by Ceph, don’t expect the performance to keep up.
>
>
>
>
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Satish Patel
> Sent: Wednesday, 11 July 2018 10:50 PM
> To: Paul Emmerich 
> Cc: ceph-users 
> Subject: Re: [ceph-users] SSDs for data drives
>
>
>
> Prices go way up if I pick Samsung SM863a for all data drives.
>
>
>
> We have many servers running on consumer grade SSD drives and we never
> noticed any performance issues or faults so far (but we never used Ceph before)
>
>
>
> I thought that is the whole point of Ceph: to provide high availability if a
> drive goes down, plus parallel reads from multiple OSD nodes
>
>
>
> Sent from my iPhone
>
>
> On Jul 11, 2018, at 6:57 AM, Paul Emmerich  wrote:
>
> Hi,
>
>
>
> we've no long-term data for the SM variant.
>
> Performance is fine as far as we can tell, but the main difference between
> these two models should be endurance.
>
>
>
>
>
> Also, I forgot to mention that my experiences are only for the 1, 2, and 4
> TB variants. Smaller SSDs are often proportionally slower (especially below
> 500GB).
>
>
>
> Paul
>
>
> Robert Stanford :
>
> Paul -
>
>
>
>  That's extremely helpful, thanks.  I do have another cluster that uses
> Samsung SM863a just for journal (spinning disks for data).  Do you happen to
> have an opinion on those as well?
>
>
>
> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich 
> wrote:
>
> PM/SM863a are usually great disks and should be the default go-to option,
> they outperform
>
> even the more expensive PM1633 in our experience.
>
> (But that really doesn't matter if it's for the full OSD and not as
> dedicated WAL/journal)
>
>
>
> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
> believe) that was built on a budget.
>
> Not the best disk but great value. They have been running for ~3 years now
> with very few failures and
>
> okayish overall performance.
>
>
>
> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
> are not yet sure about their
>
> long-time durability as they are only ~9 months old (average of ~1000 write
> IOPS on each disk over that time).
>
> Some of them report only 50-60% lifetime left.
>
>
>
> For NVMe, the Intel NVMe 750 is still a great disk
>
>
>
> Be careful to get these exact models. Seemingly similar disks might be just
> completely bad, for
>
> example, the Samsung PM961 is just unusable for Ceph in our experience.
>
>
>
> Paul
>
>
>
> 2018-07-11 10:14 GMT+02:00 Wido den Hollander :
>
>
>
> On 07/11/2018 10:10 AM, Robert Stanford wrote:
>>
>>  In a recent thread the Samsung SM863a was recommended as a journal
>> SSD.  Are there any recommendations for data SSDs, for people who want
>> to use just SSDs in a new Ceph cluster?
>>
>
> Depends on what you are looking for, SATA, SAS3 or NVMe?
>
> I have very good experiences with these drives running with BlueStore in
> them in SuperMicro machines:
>
> - SATA: Samsung PM863a
> - SATA: Intel S4500
> - SAS: Samsung PM1633
> - NVMe: Samsung PM963
>
> Running WAL+DB+DATA with BlueStore on the same drives.
>
> Wido
>
>>  Thank you
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
>
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io

Re: [ceph-users] SSDs for data drives

2018-07-12 Thread Adrian Saul

We started our cluster with consumer (Samsung EVO) disks and the write 
performance was pitiful; they had periodic spikes in latency (an average of 8ms,
but much higher spikes) and just did not perform anywhere near where we were 
expecting.

When replaced with SM863 based devices the difference was night and day.  The 
DC grade disks held a nearly constant low latency (constantly sub-ms), no
spiking and performance was massively better.   For a period I ran both disks 
in the cluster and was able to graph them side by side with the same workload.  
This was not even a moderately loaded cluster so I am glad we discovered this 
before we went full scale.

So while you certainly can do cheap and cheerful and let the data availability 
be handled by Ceph, don’t expect the performance to keep up.



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Satish 
Patel
Sent: Wednesday, 11 July 2018 10:50 PM
To: Paul Emmerich 
Cc: ceph-users 
Subject: Re: [ceph-users] SSDs for data drives

Prices go way up if I pick Samsung SM863a for all data drives.

We have many servers running on consumer grade SSD drives and we never noticed
any performance issues or faults so far (but we never used Ceph before)

I thought that is the whole point of Ceph: to provide high availability if a
drive goes down, plus parallel reads from multiple OSD nodes

Sent from my iPhone

On Jul 11, 2018, at 6:57 AM, Paul Emmerich  wrote:
Hi,

we've no long-term data for the SM variant.
Performance is fine as far as we can tell, but the main difference between 
these two models should be endurance.


Also, I forgot to mention that my experiences are only for the 1, 2, and 4 TB 
variants. Smaller SSDs are often proportionally slower (especially below 500GB).

Paul

Robert Stanford :
Paul -

 That's extremely helpful, thanks.  I do have another cluster that uses Samsung 
SM863a just for journal (spinning disks for data).  Do you happen to have an 
opinion on those as well?

On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich  wrote:
PM/SM863a are usually great disks and should be the default go-to option, they 
outperform
even the more expensive PM1633 in our experience.
(But that really doesn't matter if it's for the full OSD and not as dedicated 
WAL/journal)

We got a cluster with a few hundred SanDisk Ultra II (discontinued, I believe)
that was built on a budget.
Not the best disk but great value. They have been running for ~3 years now
with very few failures and
okayish overall performance.

We also got a few clusters with a few hundred SanDisk Extreme Pro, but we are 
not yet sure about their
long-time durability as they are only ~9 months old (average of ~1000 write 
IOPS on each disk over that time).
Some of them report only 50-60% lifetime left.

For NVMe, the Intel NVMe 750 is still a great disk

Be careful to get these exact models. Seemingly similar disks might be just
completely bad, for
example, the Samsung PM961 is just unusable for Ceph in our experience.

Paul

2018-07-11 10:14 GMT+02:00 Wido den Hollander :


On 07/11/2018 10:10 AM, Robert Stanford wrote:
>
>  In a recent thread the Samsung SM863a was recommended as a journal
> SSD.  Are there any recommendations for data SSDs, for people who want
> to use just SSDs in a new Ceph cluster?
>

Depends on what you are looking for, SATA, SAS3 or NVMe?

I have very good experiences with these drives running with BlueStore in
them in SuperMicro machines:

- SATA: Samsung PM863a
- SATA: Intel S4500
- SAS: Samsung PM1633
- NVMe: Samsung PM963

Running WAL+DB+DATA with BlueStore on the same drives.

Wido

>  Thank you
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Konstantin Shalygin

  In a recent thread the Samsung SM863a was recommended as a journal SSD.
Are there any recommendations for data SSDs, for people who want to use
just SSDs in a new Ceph cluster?


Take a look at the HGST SN260; these are MLC NVMe drives [1]



[1] 
https://www.hgst.com/products/solid-state-solutions/ultrastar-sn200-series





k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread leo David
I am using the S3510 for both filestore and bluestore.
Performance seems pretty good.

On Wed, Jul 11, 2018 at 5:44 PM, Robert Stanford 
wrote:

>
>  Any opinions on the Dell DC S3520 (for journals)?  That's what I have,
> stock, and I wonder if I should replace them.
>
> On Wed, Jul 11, 2018 at 8:34 AM, Simon Ironside 
> wrote:
>
>>
>> On 11/07/18 14:26, Simon Ironside wrote:
>>
>> The 2TB Samsung 850 EVO for example is only rated for 300TBW (terabytes
>>> written). Over the 5 year warranty period that's only 165GB/day, not even
>>> 0.01 full drive writes per day. The SM863a part of the same size is rated
>>> for 12,320TBW, over 3 DWPD.
>>>
>>
>> Sorry, my maths is out above - that should be "not even 0.1 full drive
>> writes per day".
>>
>> Simon
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
*Leo David*
*  DevOps*
 *Syncrasy LTD*
www.syncrasy.io
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Robert Stanford
 Any opinions on the Dell DC S3520 (for journals)?  That's what I have,
stock, and I wonder if I should replace them.

On Wed, Jul 11, 2018 at 8:34 AM, Simon Ironside 
wrote:

>
> On 11/07/18 14:26, Simon Ironside wrote:
>
> The 2TB Samsung 850 EVO for example is only rated for 300TBW (terabytes
>> written). Over the 5 year warranty period that's only 165GB/day, not even
>> 0.01 full drive writes per day. The SM863a part of the same size is rated
>> for 12,320TBW, over 3 DWPD.
>>
>
> Sorry, my maths is out above - that should be "not even 0.1 full drive
> writes per day".
>
> Simon
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Simon Ironside



On 11/07/18 14:26, Simon Ironside wrote:

The 2TB Samsung 850 EVO for example is only rated for 300TBW (terabytes 
written). Over the 5 year warranty period that's only 165GB/day, not 
even 0.01 full drive writes per day. The SM863a part of the same size is 
rated for 12,320TBW, over 3 DWPD.


Sorry, my maths is out above - that should be "not even 0.1 full drive 
writes per day".


Simon
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Simon Ironside



On 11/07/18 13:49, Satish Patel wrote:

Prices go way up if I pick Samsung SM863a for all data drives.

We have many servers running on consumer grade SSD drives and we never
noticed any performance issues or faults so far (but we never used Ceph before)


I thought that is the whole point of Ceph: to provide high availability
if a drive goes down, plus parallel reads from multiple OSD nodes


I wouldn't use consumer drives. They tend not to have power loss
protection, their performance can degrade sharply as queue depth increases, and
the endurance is nowhere near enterprise drives. Depending on your use
pattern, you may get a real shock at how quickly they'll wear out.


The 2TB Samsung 850 EVO for example is only rated for 300TBW (terabytes 
written). Over the 5 year warranty period that's only 165GB/day, not 
even 0.01 full drive writes per day. The SM863a part of the same size is 
rated for 12,320TBW, over 3 DWPD.
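
For reference, the arithmetic behind those figures (illustrative; note the
follow-up correction of 0.01 to 0.1):

    850 EVO: 300 TBW / ~1825 days ~= 165 GB/day; 165 / 2000 GB ~= 0.08 DWPD
    SM863a: 12,320 TBW / ~1825 days ~= 6.75 TB/day; 6750 / 2000 GB ~= 3.4 DWPD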


Simon.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Piotr Dałek

On 18-07-11 02:35 PM, David Blundell wrote:

Hi,

I’m looking at 4TB Intel DC P4510 for data drives running BlueStore with WAL, 
DB and data on the same drives.  Has anyone had any good / bad experiences with 
them?  As Intel’s new data centre NVMe SSD it should be fast and reliable but 
then I would have thought the same about the DC S4600 drives which currently 
seem best to avoid…

David


tl;dr - try to avoid TLC NAND flash at all costs if consistent write 
performance is your target.


Lately I was benchmarking Intel DC P4500 (not DC P4510, mind you) and I 
easily ran into performance issues. Both DC P4500 and DC P4510 utilize 3d 
TLC NAND flash chips, so you won't get great speeds on very low queue 
depths, but what's interesting in DC P4500 is that it seems to use SLC cache 
that provides fast qd=1 4k random writes, close to 300MB/s (or ~90k IOPS), 
but qd=1 4k random reads are from totally different league (~38MB/s, ~10k 
IOPS). What is worse, it's not that difficult to exhaust that SLC cache and 
then your overall write performance drops BADLY. In my case, I was getting 
RBD write IOPS varying from 10 to 40k depending on whether and for how long
the write test was running and how heavy it was.
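
A sketch of the kind of sustained-write test that exposes this (assuming fio
and a scratch device; /dev/nvme0n1 is a placeholder and the test destroys data):

fio --filename=/dev/nvme0n1 --direct=1 --rw=randwrite --bs=4k --iodepth=32 \
    --numjobs=4 --runtime=1800 --time_based --group_reporting --name=sustained-write

Compare the IOPS over the first minutes with the last ones; a drive leaning on
an SLC cache shows a sharp drop once the cache is exhausted.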


--
Piotr Dałek
piotr.da...@corp.ovh.com
https://www.ovhcloud.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Mart van Santen

Hi,


We started with consumer grade SSDs. In normal operation this was no
problem, but it caused terrible performance during recovery or other
platform adjustments which involved data movements. We finally decided to
replace everything with SM863 disks, which after a few years still
perform great.

This was back in the Firefly era; a lot of improvements happened between
the Firefly and Jewel+ versions of Ceph regarding data recovery and its
impact on performance during those operations.

Still, I would be careful with consumer SSDs.


regards,


mart



On 07/11/2018 02:49 PM, Satish Patel wrote:
> Prices go way up if I pick Samsung SM863a for all data drives.
>
> We have many servers running on consumer grade SSD drives and we never
> noticed any performance issues or faults so far (but we never used Ceph
> before)
>
> I thought that is the whole point of Ceph: to provide high availability
> if a drive goes down, plus parallel reads from multiple OSD nodes
>
> Sent from my iPhone
>
> On Jul 11, 2018, at 6:57 AM, Paul Emmerich  wrote:
>
>> Hi,
>>
>> we've no long-term data for the SM variant.
>> Performance is fine as far as we can tell, but the main difference
>> between these two models should be endurance.
>>
>>
>> Also, I forgot to mention that my experiences are only for the 1, 2,
>> and 4 TB variants. Smaller SSDs are often proportionally slower
>> (especially below 500GB).
>>
>> Paul
>>
>> Robert Stanford :
>>
>>> Paul -
>>>
>>>  That's extremely helpful, thanks.  I do have another cluster that
>>> uses Samsung SM863a just for journal (spinning disks for data).  Do
>>> you happen to have an opinion on those as well?
>>>
>>> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich  wrote:
>>>
>>> PM/SM863a are usually great disks and should be the default
>>> go-to option, they outperform
>>> even the more expensive PM1633 in our experience.
>>> (But that really doesn't matter if it's for the full OSD and not
>>> as dedicated WAL/journal)
>>>
>>> We got a cluster with a few hundred SanDisk Ultra II
>>> (discontinued, I believe) that was built on a budget.
>>> Not the best disk but great value. They have been running for
>>> ~3 years now with very few failures and
>>> okayish overall performance.
>>>
>>> We also got a few clusters with a few hundred SanDisk Extreme
>>> Pro, but we are not yet sure about their
>>> long-time durability as they are only ~9 months old (average of
>>> ~1000 write IOPS on each disk over that time).
>>> Some of them report only 50-60% lifetime left.
>>>
>>> For NVMe, the Intel NVMe 750 is still a great disk
>>>
>>> Be careful to get these exact models. Seemingly similar disks
>>> might be just completely bad, for
>>> example, the Samsung PM961 is just unusable for Ceph in our
>>> experience.
>>>
>>> Paul
>>>
>>> 2018-07-11 10:14 GMT+02:00 Wido den Hollander :
>>>
>>>
>>>
>>> On 07/11/2018 10:10 AM, Robert Stanford wrote:
>>> >
>>> >  In a recent thread the Samsung SM863a was recommended as
>>> a journal
>>> > SSD.  Are there any recommendations for data SSDs, for
>>> people who want
>>> > to use just SSDs in a new Ceph cluster?
>>> >
>>>
>>> Depends on what you are looking for, SATA, SAS3 or NVMe?
>>>
>>> I have very good experiences with these drives running with
>>> BlueStore in
>>> them in SuperMicro machines:
>>>
>>> - SATA: Samsung PM863a
>>> - SATA: Intel S4500
>>> - SAS: Samsung PM1633
>>> - NVMe: Samsung PM963
>>>
>>> Running WAL+DB+DATA with BlueStore on the same drives.
>>>
>>> Wido
>>>
>>> >  Thank you
>>> >
>>> >
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com 
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 
>>> >
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com 
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 
>>>
>>>
>>>
>>>
>>> -- 
>>> Paul Emmerich
>>>
>>> Looking for help with your Ceph cluster? Contact us at
>>> https://croit.io
>>>
>>> croit GmbH
>>> Freseniusstr. 31h
>>> 81247 München
>>> www.croit.io
>>> Tel: +49 89 1896585 90

Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Satish Patel
Prices go way up if I pick Samsung SM863a for all data drives.

We have many servers running on consumer grade SSD drives and we never noticed
any performance issues or faults so far (but we never used Ceph before)

I thought that is the whole point of Ceph: to provide high availability if a
drive goes down, plus parallel reads from multiple OSD nodes

Sent from my iPhone

> On Jul 11, 2018, at 6:57 AM, Paul Emmerich  wrote:
> 
> Hi,
> 
> we've no long-term data for the SM variant.
> Performance is fine as far as we can tell, but the main difference between 
> these two models should be endurance.
> 
> 
> Also, I forgot to mention that my experiences are only for the 1, 2, and 4 TB 
> variants. Smaller SSDs are often proportionally slower (especially below 
> 500GB).
> 
> Paul
> 
> Robert Stanford :
> 
>> Paul -
>> 
>>  That's extremely helpful, thanks.  I do have another cluster that uses 
>> Samsung SM863a just for journal (spinning disks for data).  Do you happen to 
>> have an opinion on those as well?
>> 
>>> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich  
>>> wrote:
>>> PM/SM863a are usually great disks and should be the default go-to option, 
>>> they outperform
>>> even the more expensive PM1633 in our experience.
>>> (But that really doesn't matter if it's for the full OSD and not as 
>>> dedicated WAL/journal)
>>> 
>>> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
>>> believe) that was built on a budget.
>>> Not the best disk but great value. They have been running for ~3 years
>>> now with very few failures and
>>> okayish overall performance.
>>> 
>>> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we 
>>> are not yet sure about their 
>>> long-time durability as they are only ~9 months old (average of ~1000 write 
>>> IOPS on each disk over that time).
>>> Some of them report only 50-60% lifetime left.
>>> 
>>> For NVMe, the Intel NVMe 750 is still a great disk
>>> 
>>> Be careful to get these exact models. Seemingly similar disks might be
>>> just completely bad, for
>>> example, the Samsung PM961 is just unusable for Ceph in our experience.
>>> 
>>> Paul
>>> 
>>> 2018-07-11 10:14 GMT+02:00 Wido den Hollander :
 
 
 On 07/11/2018 10:10 AM, Robert Stanford wrote:
 > 
 >  In a recent thread the Samsung SM863a was recommended as a journal
 > SSD.  Are there any recommendations for data SSDs, for people who want
 > to use just SSDs in a new Ceph cluster?
 > 
 
 Depends on what you are looking for, SATA, SAS3 or NVMe?
 
 I have very good experiences with these drives running with BlueStore in
 them in SuperMicro machines:
 
 - SATA: Samsung PM863a
 - SATA: Intel S4500
 - SAS: Samsung PM1633
 - NVMe: Samsung PM963
 
 Running WAL+DB+DATA with BlueStore on the same drives.
 
 Wido
 
 >  Thank you
 > 
 > 
 > ___
 > ceph-users mailing list
 > ceph-users@lists.ceph.com
 > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 > 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 
>>> 
>>> 
>>> -- 
>>> Paul Emmerich
>>> 
>>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>>> 
>>> croit GmbH
>>> Freseniusstr. 31h
>>> 81247 München
>>> www.croit.io
>>> Tel: +49 89 1896585 90
>> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread David Blundell
Hi,

I’m looking at 4TB Intel DC P4510 for data drives running BlueStore with WAL, 
DB and data on the same drives.  Has anyone had any good / bad experiences with 
them?  As Intel’s new data centre NVMe SSD it should be fast and reliable but 
then I would have thought the same about the DC S4600 drives which currently 
seem best to avoid…

David

> On 11 Jul 2018, at 11:57, Paul Emmerich  wrote:
> 
> Hi,
> 
> we've no long-term data for the SM variant.
> Performance is fine as far as we can tell, but the main difference between 
> these two models should be endurance.
> 
> 
> Also, I forgot to mention that my experiences are only for the 1, 2, and 4 TB 
> variants. Smaller SSDs are often proportionally slower (especially below 
> 500GB).
> 
> Paul
> 
> Robert Stanford :
> 
>> Paul -
>> 
>>  That's extremely helpful, thanks.  I do have another cluster that uses 
>> Samsung SM863a just for journal (spinning disks for data).  Do you happen to 
>> have an opinion on those as well?
>> 
>> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich  
>> wrote:
>> PM/SM863a are usually great disks and should be the default go-to option, 
>> they outperform
>> even the more expensive PM1633 in our experience.
>> (But that really doesn't matter if it's for the full OSD and not as 
>> dedicated WAL/journal)
>> 
>> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
>> believe) that was built on a budget.
>> Not the best disk but great value. They have been running for ~3 years now
>> with very few failures and
>> okayish overall performance.
>> 
>> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we 
>> are not yet sure about their 
>> long-time durability as they are only ~9 months old (average of ~1000 write 
>> IOPS on each disk over that time).
>> Some of them report only 50-60% lifetime left.
>> 
>> For NVMe, the Intel NVMe 750 is still a great disk
>> 
>> Be careful to get these exact models. Seemingly similar disks might be just
>> completely bad, for
>> example, the Samsung PM961 is just unusable for Ceph in our experience.
>> 
>> Paul
>> 
>> 2018-07-11 10:14 GMT+02:00 Wido den Hollander :
>> 
>> 
>> On 07/11/2018 10:10 AM, Robert Stanford wrote:
>> > 
>> >  In a recent thread the Samsung SM863a was recommended as a journal
>> > SSD.  Are there any recommendations for data SSDs, for people who want
>> > to use just SSDs in a new Ceph cluster?
>> > 
>> 
>> Depends on what you are looking for, SATA, SAS3 or NVMe?
>> 
>> I have very good experiences with these drives running with BlueStore in
>> them in SuperMicro machines:
>> 
>> - SATA: Samsung PM863a
>> - SATA: Intel S4500
>> - SAS: Samsung PM1633
>> - NVMe: Samsung PM963
>> 
>> Running WAL+DB+DATA with BlueStore on the same drives.
>> 
>> Wido
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Paul Emmerich
Hi,

we've no long-term data for the SM variant.
Performance is fine as far as we can tell, but the main difference between 
these two models should be endurance.


Also, I forgot to mention that my experiences are only for the 1, 2, and 4 TB 
variants. Smaller SSDs are often proportionally slower (especially below 500GB).

Paul

> Robert Stanford :
> 
> Paul -
> 
>  That's extremely helpful, thanks.  I do have another cluster that uses 
> Samsung SM863a just for journal (spinning disks for data).  Do you happen to 
> have an opinion on those as well?
> 
>> On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich  
>> wrote:
>> PM/SM863a are usually great disks and should be the default go-to option, 
>> they outperform
>> even the more expensive PM1633 in our experience.
>> (But that really doesn't matter if it's for the full OSD and not as 
>> dedicated WAL/journal)
>> 
>> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
>> believe) that was built on a budget.
>> Not the best disk but great value. They have been running for ~3 years now
>> with very few failures and
>> okayish overall performance.
>> 
>> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we 
>> are not yet sure about their 
>> long-time durability as they are only ~9 months old (average of ~1000 write 
>> IOPS on each disk over that time).
>> Some of them report only 50-60% lifetime left.
>> 
>> For NVMe, the Intel NVMe 750 is still a great disk
>> 
>> Be careful to get these exact models. Seemingly similar disks might be just
>> completely bad, for
>> example, the Samsung PM961 is just unusable for Ceph in our experience.
>> 
>> Paul
>> 
>> 2018-07-11 10:14 GMT+02:00 Wido den Hollander :
>>> 
>>> 
>>> On 07/11/2018 10:10 AM, Robert Stanford wrote:
>>> > 
>>> >  In a recent thread the Samsung SM863a was recommended as a journal
>>> > SSD.  Are there any recommendations for data SSDs, for people who want
>>> > to use just SSDs in a new Ceph cluster?
>>> > 
>>> 
>>> Depends on what you are looking for, SATA, SAS3 or NVMe?
>>> 
>>> I have very good experiences with these drives running with BlueStore in
>>> them in SuperMicro machines:
>>> 
>>> - SATA: Samsung PM863a
>>> - SATA: Intel S4500
>>> - SAS: Samsung PM1633
>>> - NVMe: Samsung PM963
>>> 
>>> Running WAL+DB+DATA with BlueStore on the same drives.
>>> 
>>> Wido
>>> 
>>> >  Thank you
>>> > 
>>> > 
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> > 
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
>> 
>> 
>> -- 
>> Paul Emmerich
>> 
>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>> 
>> croit GmbH
>> Freseniusstr. 31h
>> 81247 München
>> www.croit.io
>> Tel: +49 89 1896585 90
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Robert Stanford
Paul -

 That's extremely helpful, thanks.  I do have another cluster that uses
Samsung SM863a just for journal (spinning disks for data).  Do you happen
to have an opinion on those as well?

On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich 
wrote:

> PM/SM863a are usually great disks and should be the default go-to option,
> they outperform
> even the more expensive PM1633 in our experience.
> (But that really doesn't matter if it's for the full OSD and not as
> dedicated WAL/journal)
>
> We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
> believe) that was built on a budget.
> Not the best disk but great value. They have been running for ~3 years
> now with very few failures and
> okayish overall performance.
>
> We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
> are not yet sure about their
> long-time durability as they are only ~9 months old (average of ~1000
> write IOPS on each disk over that time).
> Some of them report only 50-60% lifetime left.
>
> For NVMe, the Intel NVMe 750 is still a great disk
>
> Be careful to get these exact models. Seemingly similar disks might be
> just completely bad, for
> example, the Samsung PM961 is just unusable for Ceph in our experience.
>
> Paul
>
> 2018-07-11 10:14 GMT+02:00 Wido den Hollander :
>
>>
>>
>> On 07/11/2018 10:10 AM, Robert Stanford wrote:
>> >
>> >  In a recent thread the Samsung SM863a was recommended as a journal
>> > SSD.  Are there any recommendations for data SSDs, for people who want
>> > to use just SSDs in a new Ceph cluster?
>> >
>>
>> Depends on what you are looking for, SATA, SAS3 or NVMe?
>>
>> I have very good experiences with these drives running with BlueStore in
>> them in SuperMicro machines:
>>
>> - SATA: Samsung PM863a
>> - SATA: Intel S4500
>> - SAS: Samsung PM1633
>> - NVMe: Samsung PM963
>>
>> Running WAL+DB+DATA with BlueStore on the same drives.
>>
>> Wido
>>
>> >  Thank you
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Paul Emmerich
PM/SM863a are usually great disks and should be the default go-to option,
they outperform
even the more expensive PM1633 in our experience.
(But that really doesn't matter if it's for the full OSD and not as
dedicated WAL/journal)

We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
believe) that was built on a budget.
Not the best disk but great value. They have been running for ~3 years
now with very few failures and
okayish overall performance.

We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
are not yet sure about their
long-time durability as they are only ~9 months old (average of ~1000 write
IOPS on each disk over that time).
Some of them report only 50-60% lifetime left.
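
You can check this with smartmontools (a sketch; the SMART attribute name
varies by vendor, e.g. Wear_Leveling_Count on Samsung or
Media_Wearout_Indicator on Intel):

smartctl -A /dev/sdX | grep -Ei 'wear|lifetime|percent'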

For NVMe, the Intel NVMe 750 is still a great disk

Be careful to get these exact models. Seemingly similar disks might be
just completely bad, for
example, the Samsung PM961 is just unusable for Ceph in our experience.

Paul

2018-07-11 10:14 GMT+02:00 Wido den Hollander :

>
>
> On 07/11/2018 10:10 AM, Robert Stanford wrote:
> >
> >  In a recent thread the Samsung SM863a was recommended as a journal
> > SSD.  Are there any recommendations for data SSDs, for people who want
> > to use just SSDs in a new Ceph cluster?
> >
>
> Depends on what you are looking for, SATA, SAS3 or NVMe?
>
> I have very good experiences with these drives running with BlueStore in
> them in SuperMicro machines:
>
> - SATA: Samsung PM863a
> - SATA: Intel S4500
> - SAS: Samsung PM1633
> - NVMe: Samsung PM963
>
> Running WAL+DB+DATA with BlueStore on the same drives.
>
> Wido
>
> >  Thank you
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Robert Stanford
 Wido -

 You're using the same SATA drives for both journals and data?  I want
to make sure my question was understood, since you mention BlueStore (maybe
you were just using them for journals).

 Thanks

On Wed, Jul 11, 2018 at 3:14 AM, Wido den Hollander  wrote:

>
>
> On 07/11/2018 10:10 AM, Robert Stanford wrote:
> >
> >  In a recent thread the Samsung SM863a was recommended as a journal
> > SSD.  Are there any recommendations for data SSDs, for people who want
> > to use just SSDs in a new Ceph cluster?
> >
>
> Depends on what you are looking for, SATA, SAS3 or NVMe?
>
> I have very good experiences with these drives running with BlueStore in
> them in SuperMicro machines:
>
> - SATA: Samsung PM863a
> - SATA: Intel S4500
> - SAS: Samsung PM1633
> - NVMe: Samsung PM963
>
> Running WAL+DB+DATA with BlueStore on the same drives.
>
> Wido
>
> >  Thank you
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Wido den Hollander


On 07/11/2018 10:10 AM, Robert Stanford wrote:
> 
>  In a recent thread the Samsung SM863a was recommended as a journal
> SSD.  Are there any recommendations for data SSDs, for people who want
> to use just SSDs in a new Ceph cluster?
> 

Depends on what you are looking for, SATA, SAS3 or NVMe?

I have very good experiences with these drives running with BlueStore in
them in SuperMicro machines:

- SATA: Samsung PM863a
- SATA: Intel S4500
- SAS: Samsung PM1633
- NVMe: Samsung PM963

Running WAL+DB+DATA with BlueStore on the same drives.
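
For reference, such a collocated OSD can be created with a single ceph-volume
call (a minimal sketch, assuming Luminous or later; /dev/sdb is a placeholder,
and with no separate --block.db/--block.wal both end up on the data device):

ceph-volume lvm create --bluestore --data /dev/sdb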

Wido

>  Thank you
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com