On 31/05/2018 14:41, Simon Ironside wrote:
> On 24/05/18 19:21, Lionel Bouton wrote:
>
>> Unfortunately I just learned that Supermicro found an incompatibility
>> between this motherboard and SM863a SSDs (I don't have more information
>> yet) and they proposed S4600 as an alternative.
On 24/05/18 19:21, Lionel Bouton wrote:
Unfortunately I just learned that Supermicro found an incompatibility
between this motherboard and SM863a SSDs (I don't have more information
yet) and they proposed S4600 as an alternative. I immediately remembered
that there were problems and asked for a
On 24/05/18 19:21, Lionel Bouton wrote:
> Has anyone successfully used Ceph with S4600 ? If so could you share if
> you used filestore or bluestore, which firmware was used and
> approximately how much data was written on the most used SSDs ?
I have 4 new OSD nodes which have 480GB S4600s (Firmware
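For reference, a quick way to read the firmware revision off each drive before comparing notes is smartctl (the device path below is only an example):

  smartctl -i /dev/sdd | grep -i firmware

Intel's own tool, if installed, reports the same for the DC series: isdct show -intelssd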
I have some bluestore DC S4500s in my 3-node home cluster. I haven't ever
had any problems with them. I've used them with an EC cache tier, cephfs
metadata, and VM RBDs.
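For anyone unfamiliar with the cache-tier arrangement mentioned there, a minimal sketch of putting a replicated cache pool in front of an erasure-coded pool looks roughly like this (pool names and the size limit are placeholders, not from the thread):

  ceph osd tier add ecpool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay ecpool cachepool
  ceph osd pool set cachepool hit_set_type bloom
  ceph osd pool set cachepool target_max_bytes 500000000000

Clients are then pointed at ecpool and reads/writes pass transparently through cachepool.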
On Thu, May 24, 2018 at 2:21 PM Lionel Bouton wrote:
Hi,
On 22/02/2018 23:32, Mike Lovell wrote:
> hrm. intel has, until a year ago, been very good with ssds. the
> description of your experience definitely doesn't inspire confidence.
> intel also dropping the entire s3xxx and p3xxx series last year before
> having a viable replacement has been
Hi all,
Thanks for all your follow-ups on this. The Samsung SM863a is indeed a very
good alternative, thanks!
We ordered both (SM863a & DC S4600) so we can compare.
Intel's response (I mean the lack of it) is not very promising. Although
we have very good experiences with Intel DC SSDs we
adding ceph-users back on.
it sounds like the enterprise samsungs and hitachis have been mentioned on
the list as alternatives. i have 2 micron 5200 (pro i think) that i'm
beginning testing on and have some micron 9100 nvme drives to use as
journals. so the enterprise micron might be good. i did
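In case it helps anyone building something similar, a rough sketch of creating an OSD with the data on the SATA SSD and the journal/DB on an NVMe partition, using ceph-volume (device names are placeholders only):

  # filestore with an NVMe journal partition
  ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvme0n1p1
  # or bluestore with the RocksDB device on NVMe
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2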
> turned the lot and ended up with 9/12 of the drives failing in the
> same manner. The replaced drives, which had different serial number ranges,
> also failed. Very frustrating is that the drives fail in a way that results
> in unbootable servers, unless one adds ‘rootdelay=240’ to the kernel.
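For anyone needing that workaround, making the parameter persistent on a GRUB-based system looks roughly like this (file location and regeneration command vary slightly by distribution):

  # /etc/default/grub -- append rootdelay=240 to the existing parameters
  GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=240"

  # then regenerate the config and reboot
  update-grub                                # Debian/Ubuntu
  # grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS equivalent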
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Many concurrent drive failures - How do I activate
pgs?
has anyone tried with the most recent firmwares from intel? i've had a number
of s4600 960gb drives that have been waiting for me to get around to adding
them to a ceph cluster
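For what it's worth, checking and updating firmware on the Intel DC series is normally done with Intel's isdct utility (the drive index below is just an example):

  isdct show -intelssd          # list drives and their current firmware
  isdct load -intelssd 0        # flash the newest firmware bundled with the tool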
>>>>> the same manner. The replaced drives, which had different serial number
>>>>> ranges, also failed. Very frustrating is that the drives fail in a way
>>>>> that results in unbootable servers, unless one adds ‘rootdelay=240’ to the
>>>>> kernel.
>>>>
>>>> I would be interested to know what platform your drives were in and
>>>> whether or not t
>> PS: After much searching we’ve decided to order the NVMe conversion kit
>> and have ordered HGST UltraStar SN200 2.5 inch SFF drives with a 3 DWPD
>> rating.
>>
>> Regards
>>
>> David Herselman
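(For scale, 3 DWPD means the drive is rated for three full overwrites per day across its warranty period; on, say, a 1.6 TB SN200 that works out to roughly 1.6 TB × 3 × 365 × 5 ≈ 8.8 PB written over five years. The capacity here is only an example.)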
ecovery]# df -h | grep -P 'ceph-(27|30|31|32|34)$'
> /dev/sdd4 140G 5.2G 135G 4% /var/lib/ceph/osd/ceph-27
> /dev/sdd7 140G 14G 127G 10% /var/lib/ceph/osd/ceph-30
> /dev/sdd8 140G 14G 127G 10% /var/lib/ceph/osd/ceph-31
> /dev/sdd9 140G 22G 119G 16% /var/lib/ceph/osd/ceph-32
> /dev/sdd11 140G 22G 119G 16% /var/lib/ceph/osd/ceph-34
2018 12:45 AM
To: David Herselman <d...@syrex.co>
Cc: Christian Balzer <ch...@gol.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Many concurrent drive failures - How do I activate
pgs?
Hi,
I have a case where 3 out of 12 of these Intel S4600 2TB models failed within a ma
> How do I tell Ceph to read these object shards?
>
> PS: It's probably a good idea to reweight the OSDs to 0 before starting
> again. This should prevent data flowing on to them, if they
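On the shard question above, the tool generally used to pull PG shards off an offline filestore OSD is ceph-objectstore-tool; a rough sketch (the pgid is a placeholder, the OSD path matches the df output above) would be:

  systemctl stop ceph-osd@27          # the OSD must not be running
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-27 \
      --journal-path /var/lib/ceph/osd/ceph-27/journal --op list-pgs
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-27 \
      --journal-path /var/lib/ceph/osd/ceph-27/journal \
      --op export --pgid 1.2f --file /root/1.2f.export

The export can then be loaded into a healthy OSD with --op import so the cluster can backfill from it.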
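Regarding the PS about reweighting: one common way, assuming the OSD ids visible in the df output above, is to set their CRUSH weight to zero so no new data is mapped to them:

  for id in 27 30 31 32 34; do
      ceph osd crush reweight osd.$id 0
  done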
From: David Herselman
Sent: Thursday, 21 December 2017 3:49 AM
To: 'Christian Balzer' <ch...@gol.com>; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Many concurrent drive failures - How do I activate
pgs?
Hi Christian,
Thanks for taking the time, I haven't been contacted by anyone yet
Regards
David Herselman
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Thursday, 21 December 2017 3:24 AM
To: ceph-users@lists.ceph.com
Cc: David Herselman <d...@syrex.co>
Subject: Re: [ceph-users] Many concurrent drive failures - How do I activate
pgs?
Hello,
first off, I don't have anything to add to your conclusions of the current
status, alas there are at least 2 folks here on the ML making a living
from Ceph disaster recovery, so I hope you have been contacted already.
Now once your data is safe or you have a moment, I and others here