On 26.09.22 21:00, Frank Schilder wrote:
I wonder if it might be a good idea to collect such experience somewhere in the
ceph documentation, for example, a link under hardware recommendations -> solid
state drives in the docs. Are there legal implications with creating a list of
drives showing this performance death trap:
https://tracker.ceph.com/issues/55324 ?
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Mark Nelson
Sent: 25 July 2022 18:50
To: ceph-users@ceph.io
Subject: [ceph-users] Re: weird performance issue on ceph
> we didn't want to deal with having to situationally disable it for drives
> with buggy firmwares and some of the other associated problems with online
> discard. Having said th
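For context, BlueStore's online discard behaviour is gated by the `bdev_enable_discard` OSD option, which has historically defaulted to off. A minimal ceph.conf fragment making the setting explicit (a sketch; check the defaults for your release before relying on it) might look like:

```
[osd]
# Keep online (inline) discard disabled, avoiding the buggy-firmware
# interactions mentioned above; trims can be issued out-of-band instead.
bdev_enable_discard = false
```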
all PR for improving our RocksDB tunings/glue here:
https://github.com/ceph/ceph/pull/47221
Mark
On 7/25/22 12:48, Frank Schilder wrote:
Could it be related to this performance death trap:
https://tracker.ceph.com/issues/55324 ?
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_________
Hi,
All rbd features were added to ceph-csi last year [1].
You can add the object-map feature in your options like any other:
```
imageFeatures: layering,exclusive-lock,object-map,fast-diff,deep-flatten
mapOptions: ms_mode=prefer-crc
```
k
[1] https://github.com/ceph/ceph-csi/pull/2514
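For readers wondering where those two parameters go: in ceph-csi they belong in the RBD StorageClass `parameters` section. A minimal sketch, with placeholder clusterID, pool, and secret names that you would replace with your own values:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc               # hypothetical name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>        # placeholder: your Ceph cluster fsid
  pool: <rbd-pool>               # placeholder: your RBD pool
  imageFeatures: layering,exclusive-lock,object-map,fast-diff,deep-flatten
  mapOptions: ms_mode=prefer-crc
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
reclaimPolicy: Delete
```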
From: Mark Nelson
Sent: 25 July 2022 18:50
To: ceph-users@ceph.io
Subject: [ceph-users] Re: weird performance issue on ceph
Hi Zoltan,
We have a very similar setup with one of our upstream community performance
test clusters: 60 4TB PM983 drives spread across 10 nodes. We get similar
numbers to what you are initially seeing (scaled down to 60 drives), though
with somewhat lower random read IOPS (we tend to max o
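Random read IOPS comparisons like the one above are typically produced with fio. A minimal job-file sketch for a 4 KiB random-read test, with the target device, queue depth, and job count as placeholder assumptions to tune for your drives:

```
; hypothetical fio job -- adjust filename, iodepth, numjobs for your hardware
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
time_based=1
runtime=60
group_reporting=1

[randread-test]
filename=/dev/nvme0n1   ; placeholder: reads only, but use a scratch device
iodepth=32
numjobs=4
```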