> This is worse than I feared, but very much in the realm of concerns I
> had with using single-disk RAID0 setups. Thank you very much for
> posting your experience! My money would still be on using *high write
> endurance* NVMes for DB/WAL and whatever I could afford for block.
yw. Of
On 7/25/19 9:27 PM, Anthony D'Atri wrote:
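For anyone wanting to try the DB/WAL-on-NVMe split suggested in the quote
above, it comes down to pointing ceph-volume at two devices. A minimal,
untested sketch; the HDD device name and the NVMe partition are placeholders
for your own layout:

    # One BlueStore OSD: data (block) on the HDD, RocksDB + WAL on an NVMe partition.
    # The WAL lands on the DB device automatically when --block.wal is not given.
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1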
> We run a few hundred HDD OSDs for our backup cluster; we set one RAID 0 per
> HDD in order to be able to use the (battery-protected) write cache from the
> RAID controller. It really improves performance, for both bluestore and
> filestore OSDs.
Having run something like 6000 HDD-based FileStore
with them.
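For reference, the one-RAID0-per-HDD layout described in the quote is usually
set up per slot on the controller. A hedged sketch with storcli; controller
/c0, enclosure:slot 252:0 and the VD number are placeholders, and write-back
should only be enabled with a healthy BBU/supercap:

    # Create a single-drive RAID0 virtual drive with write-back cache enabled
    storcli64 /c0 add vd type=raid0 drives=252:0 wb ra cached
    # Check the resulting cache policy and the battery before trusting WB mode
    storcli64 /c0/v0 show all
    storcli64 /c0/bbu show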
Xavier
-Original Message-
From: ceph-users On Behalf Of Simon Ironside
Sent: Thursday, 25 July 2019 0:38
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New best practices for osds???
RAID0 mode being discussed here means several RAID0 "arrays", each with
a single physical disk as a member of it.
I.e. the number of OSDs is the same whether in RAID0 or JBOD mode.
E.g. 12x physical disks = 12x RAID0 single-disk "arrays" or 12x JBOD
physical disks = 12x OSDs.
Simon
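Either way the OS sees the same number of block devices, so OSD creation looks
identical. A sketch, assuming the 12 drives show up as /dev/sdb through
/dev/sdm:

    # 12 block devices -> 12 BlueStore OSDs, whether the controller presents
    # them as JBOD disks or as 12 single-disk RAID0 virtual drives
    ceph-volume lvm batch --bluestore /dev/sd{b..m}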
On
One RAID0 array per drive :)
I can't understand how using RAID0 is better than JBOD, considering jbod
would be many individual disks, each used as OSDs, instead of a single
big one used as a single OSD.
--
With best regards,
Vitaliy Filippov
On Mon, Jul 22, 2019 at 4:05 AM Vitaliy Filippov wrote:
OK, I meant "it may help performance" :) the main point is that we had at
least one case of data loss due to some Adaptec controller in RAID0 mode
discussed recently in our ceph chat...
--
With best regards,
Vitaliy Filippov
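A controller that loses data on power failure is usually one that acknowledges
writes it never persists, so one sanity check is single-depth sync write
latency. A sketch with fio, assuming a scratch device /dev/sdX that may be
overwritten:

    # WARNING: destructive - writes directly to the raw device
    fio --name=fsync-latency --filename=/dev/sdX --rw=write --bs=4k \
        --iodepth=1 --numjobs=1 --direct=1 --fsync=1 --time_based --runtime=30
    # A bare 7.2k HDD typically shows sync latencies in the 10-30 ms range;
    # sub-millisecond results either mean a battery-protected cache is doing
    # its job or that flushes are being ignored - worth finding out which.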
On Mon, Jul 22, 2019 at 12:52 PM Vitaliy Filippov
wrote:
> It helps performance,
Not necessarily, I've seen several setups where disabling the cache
increases performance
Paul
> but it can also lead to data loss if the raid
> controller is crap (not flushing data correctly)
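If you want to check Paul's observation on your own hardware, the cache policy
can usually be flipped per virtual drive and benchmarked both ways. A sketch
with storcli; controller and VD numbers are placeholders:

    # Switch VD 0 to write-through, run your benchmark, then switch back
    storcli64 /c0/v0 set wrcache=wt
    storcli64 /c0/v0 set wrcache=wb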
In most cases a write-back cache does help a lot with HDD write latency;
either RAID-0 or some Areca cards support write-back in JBOD mode. Our
observation is that it can help by a 3-5x factor with Bluestore, whereas
db/wal on flash will be about 2x. It does depend on hardware, but in
general we see
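The latency effect is easy to watch from the cluster side while changing cache
settings, e.g. with the built-in OSD latency counters:

    # Per-OSD commit/apply latency in milliseconds
    ceph osd perf
    # Keep an eye on the slowest OSDs over time
    watch -n 5 "ceph osd perf | sort -nk2 | tail"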
On 2019-07-17T08:27:46, John Petrini wrote:
The main problem we've observed is that not all HBAs can just
efficiently and easily pass through disks 1:1. Some of those from a
more traditional server background insist on having some form of
mapping via RAID.
In that case it depends on whether 1
Some of the first performance studies we did back at Inktank were
looking at RAID-0 vs JBOD setups! :) You are absolutely right that the
controller cache (especially write-back with a battery or supercap) can
help with HDD-only configurations. Where we typically saw problems was
when you
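For anyone repeating that kind of RAID-0 vs JBOD comparison today, a quick
first pass is rados bench against a throwaway pool, run once per controller
configuration. A sketch; pool name and PG count are placeholders:

    ceph osd pool create bench 64 64
    rados bench -p bench 60 write -b 4M -t 16 --no-cleanup
    rados bench -p bench 60 seq -t 16
    ceph osd pool delete bench bench --yes-i-really-really-mean-it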
Dell has a whitepaper that compares Ceph performance using JBOD and RAID-0
per disk and recommends RAID-0 for HDDs:
en.community.dell.com/techcenter/cloud/m/dell_cloud_resources/20442913/download
After switching from JBOD to RAID-0 we saw a huge reduction in latency, the
difference was much
On 17/7/19 1:12 am, Stolte, Felix wrote:
> Hi guys,
>
> our ceph cluster is performing way less than it could, based on the disks we
> are using. We could narrow it down to the storage controller (LSI SAS3008
> HBA) in combination with an SAS expander. Yesterday we had a meeting with our
>
Hi, I concur with Paul. Any kind of RAID for OSD devices would be ill advised.
Make sure the 3008 SAS controller is flashed with the IT firmware so it will
operate in JBOD mode.
Regards, Kaspar
On 16 July 2019 at 17:56 Paul Emmerich wrote:
On Tue, Jul 16, 2019 at 5:43 PM Stolte, Felix
wrote:
>
> They told us that "best practices" for ceph would be to deploy disks as
> Raid 0 consisting of one disk using a raid controller with a big writeback
> cache.
>
> Since this "best practice" is new to me, I would like to hear your opinion
>
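Regarding Kaspar's note above about running the SAS3008 in IT mode: the
crossflash is normally done with Broadcom's sas3flash utility. A rough sketch;
the firmware/BIOS file names are placeholders for the images matching your
exact card, and flashing the wrong image can brick the controller:

    # List adapters and note the controller index
    sas3flash -list
    # Flash the IT-mode firmware (and optionally the boot ROM) onto controller 0
    sas3flash -o -c 0 -f SAS9300_8i_IT.bin -b mptsas3.rom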
Hi guys,
our ceph cluster is performing way less than it could, based on the disks we
are using. We could narrow it down to the storage controller (LSI SAS3008 HBA)
in combination with an SAS expander. Yesterday we had a meeting with our
hardware reseller and sales representatives of the
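When chasing a controller/expander bottleneck like this, it can help to first
confirm what the HBA and the OS actually see. A sketch using the LSI utility
and standard tools; adapter index 0 is assumed:

    # Enumerate LSI SAS3 adapters, then dump attached drives, enclosures and expanders
    sas3ircu LIST
    sas3ircu 0 DISPLAY
    # List SCSI devices with their SAS transport addresses from the OS side
    lsscsi -t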