Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com):
> Thank you
>
> But are the algorithms used during backfilling and during rebalancing (to
> decide where data have to be placed) different?
Yes, the balancer takes more factors into consideration. It also takes
into account all of [...]
Thank you
But are the algorithms used during backfilling and during rebalancing (to
decide where data have to be placed) different?
I.e. assuming that no new data are written and no data are deleted, if you
rely on the standard way (i.e. backfilling), when the data movement process
finishes [...]
Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com):
> Just for my education, why is letting the balancer move the PGs to the new
> OSDs (CERN approach) better than throttled backfilling?
1) Because you can pause the process at any given moment and get back to
HEALTH_OK again. 2) The [...]
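In practice the balancer-driven approach can look roughly like the sketch
below; the commands are standard Ceph CLI, but the misplaced ratio is only an
illustrative value, not something taken from this thread:

  # let the upmap balancer move PGs onto the new OSDs gradually
  ceph osd set-require-min-compat-client luminous   # upmap needs Luminous or newer clients
  ceph balancer mode upmap
  ceph config set mgr target_max_misplaced_ratio 0.01   # keep only ~1% of PGs misplaced at a time
  ceph balancer on
  # the process can be paused at any moment and the cluster settles back to HEALTH_OK
  ceph balancer off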
Just for my education, why is letting the balancer move the PGs to the new
OSDs (CERN approach) better than throttled backfilling?
Thanks, Massimo
On Sat, Jul 27, 2019 at 12:31 AM Stefan Kooman wrote:
> Quoting Peter Sabaini (pe...@sabaini.at):
> > What kind of commit/apply latency [...]
>>> We have been using:
>>>
>>> osd op queue = wpq
>>> osd op queue cut off = high
>>>
>>> It virtually eliminates the impact of backfills on our clusters. Our [...]
>
> It does better because it is a fair share queue and doesn't let recovery
> ops take priority over client ops at any point for any length of time. [...]
It does better because it is a fair share queue and doesn't let recovery
ops take priority over client ops at any point for any length of time. It
allows clients to have a much more predictable latency to the storage.
Sent from a mobile device, please excuse any typos.
On Sat, Aug 3, 2019, 1:10 PM Alex [...]
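A sketch of how those two queue settings can be applied, assuming a release
with the centralized config store (otherwise the same keys go into the [osd]
section of ceph.conf); the OSDs need a restart before the new queue takes
effect:

  ceph config set osd osd_op_queue wpq
  ceph config set osd osd_op_queue_cut_off high
  # restart the OSDs (e.g. one host at a time) so the new op queue is actually used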
On Fri, Aug 2, 2019 at 6:57 PM Robert LeBlanc wrote:
>
> On Fri, Jul 26, 2019 at 1:02 PM Peter Sabaini wrote:
>>
>> On 26.07.19 15:03, Stefan Kooman wrote:
>> > Quoting Peter Sabaini (pe...@sabaini.at):
>> >> What kind of commit/apply latency increases have you seen when adding a
>> >> large [...]
On Fri, Jul 26, 2019 at 1:02 PM Peter Sabaini wrote:
> On 26.07.19 15:03, Stefan Kooman wrote:
> > Quoting Peter Sabaini (pe...@sabaini.at):
> > > What kind of commit/apply latency increases have you seen when adding a
> > > large number of OSDs? I'm nervous how sensitive workloads might react [...]
From: ceph-users on behalf of Anthony D'Atri
Sent: Sunday, July 28, 2019 4:09 AM
To: ceph-users
Subject: Re: [ceph-users] How to add 100 new OSDs...
Paul Emmerich wrote:
> +1 on adding them all at the same time.
>
> All these methods that gradually increase the weight aren't really
> necessary in newer releases of Ceph.
Because the default backfill/recovery values are lower than they were in, say,
Dumpling?
Doubling (or more) the size of [...]
On 26.07.19 15:03, Stefan Kooman wrote:
> Quoting Peter Sabaini (pe...@sabaini.at):
>> What kind of commit/apply latency increases have you seen when adding a
>> large number of OSDs? I'm nervous how sensitive workloads might react
>> here, esp. with spinners.
>
> You mean when there is backfilling going on? [...]
Quoting Peter Sabaini (pe...@sabaini.at):
> What kind of commit/apply latency increases have you seen when adding a
> large number of OSDs? I'm nervous how sensitive workloads might react
> here, esp. with spinners.
You mean when there is backfilling going on? Instead of doing "a big
bang" you [...]
What kind of commit/apply latency increases have you seen when adding a
large number of OSDs? I'm nervous how sensitive workloads might react
here, esp. with spinners.
cheers,
peter.
On 24.07.19 20:58, Reed Dier wrote:
> Just chiming in to say that this too has been my preferred method for
> adding [large numbers of] OSDs. [...]
On 24/07/2019 20:06, Paul Emmerich wrote:
> +1 on adding them all at the same time.
> All these methods that gradually increase the weight aren't really
> necessary in newer releases of Ceph.
FWIW, we added a rack-full (9x60 = 540 OSDs) in one go to our production
cluster (then running Jewel) [...]
Hi Janne,
Thank you for correcting my mistake.
Maybe my first piece of advice was unclear. I meant that you should add OSDs
into one failure domain at a time, so that only one PG in each up set has to
remap at a time.
--
zhanrzh...@teamsun.com.cn
> On Thu, 25 Jul 2019 at 10:47, 展荣臻(信泰) wrote: [...]
On Thu, 25 Jul 2019 at 10:47, 展荣臻(信泰) wrote:
>
> 1. Adding OSDs in the same failure domain is to ensure that only one PG in
> the PG up set (as shown by ceph pg dump) has to remap.
> 2. Setting "osd_pool_default_min_size=1" is to ensure that objects can be
> read and written without interruption while PGs remap.
> Is this wrong?
>
How [...]
Date: 2019-07-25 15:01:37 (Thursday)
To: "zhanrzh...@teamsun.com.cn"
Cc: "xavier.trilla", ceph-users
Subject: Re: [ceph-users] How to add 100 new OSDs...
On Thu, 25 Jul 2019 at 04:36, zhanrzh...@teamsun.com.cn wrote:
I think you should set "osd_pool_default_min_size=1" before you add OSDs [...]
[...] weight is a good sanity check of the new hardware,
and gives a good indication of how to approach the rest of the backfill.
Cheers,
Tom
From: ceph-users On Behalf Of Paul Emmerich
Sent: 24 July 2019 20:06
To: Reed Dier
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to add 100 new OSDs...
On Thu, 25 Jul 2019 at 04:36, zhanrzh...@teamsun.com.cn
<zhanrzh...@teamsun.com.cn> wrote:
> I think you should set "osd_pool_default_min_size=1" before you add OSDs,
> and the OSDs that you add at a time should be in the same failure domain.
>
That sounds like weird or even bad advice?
What is [...]
+1 on that. We are going to add 384 OSDs next week to a 2K+ cluster. The
proposed solution really works well!
Kaspar
On 24 July 2019 at 21:06, Paul Emmerich wrote:
> +1 on adding them all at the same time.
> All these methods that gradually increase the weight aren't really
> necessary in newer [...]
I think you should set "osd_pool_default_min_size=1" before you add OSDs,
and the OSDs that you add at a time should be in the same failure domain.
Hi,
What would be the proper way to add 100 new OSDs to a cluster?
I have to add 100 new OSDs to our current cluster of more than 300 OSDs, and
I would like to know how you do it.
+1 on adding them all at the same time.
All these methods that gradually increase the weight aren't really
necessary in newer releases of Ceph.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
Just chiming in to say that this too has been my preferred method for adding
[large numbers of] OSDs.
Set the norebalance and nobackfill flags.
Create all the OSDs, and verify everything looks good.
Make sure my max_backfills, recovery_max_active are as expected.
Make sure everything has peered.
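As a rough sketch, that sequence could look like the following; the flags and
commands are standard Ceph CLI, but the backfill/recovery values are only
illustrative:

  ceph osd set norebalance
  ceph osd set nobackfill
  # ... create the new OSDs here, then check the layout and wait for peering ...
  ceph osd tree
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  ceph osd unset nobackfill
  ceph osd unset norebalance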
On 7/24/19 7:15 PM, Kevin Hrpcek wrote:
> I often add 50+ OSDs at a time and my cluster is all NLSAS. Here is what
> I do, you can obviously change the weight increase steps to what you are
> comfortable with. This has worked well for me and my workloads. I've
> sometimes seen peering take longer if I do steps too quickly [...]
I usually add 20 OSDs each time.
To limit the impact of backfilling, I set primary-affinity to 0 on those new
OSDs and adjust the backfilling configuration.
http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/#backfilling
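A sketch of that approach, assuming the 20 new OSDs got the (hypothetical)
ids 300-319 and using an illustrative backfill limit:

  # keep the new OSDs from acting as primaries while they fill up
  for id in $(seq 300 319); do ceph osd primary-affinity osd.$id 0; done
  # throttle backfill as per the link above
  ceph tell osd.* injectargs '--osd-max-backfills 1'
  # restore normal primary selection once backfilling has finished
  for id in $(seq 300 319); do ceph osd primary-affinity osd.$id 1; done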
On Thu, Jul 25, 2019 at 2:02 AM, Kevin Hrpcek wrote:
> I [...]
I change the crush weights. My 4 second sleep doesn't let peering finish for
each one before continuing. I'd test with some small steps to get an idea of
how much data remaps when increasing the weight by $x. I've found my cluster
is comfortable with +1 increases... also it takes a while to get to a [...]
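A sketch of that kind of stepped crush reweight, with hypothetical OSD ids
300-399, a target crush weight of 9.1, and the step size and sleep only as
illustrative values:

  # raise the crush weight of all new OSDs in +1 steps, pausing briefly between OSDs
  for w in 1 2 3 4 5 6 7 8 9.1; do
      for id in $(seq 300 399); do
          ceph osd crush reweight osd.$id $w
          sleep 4
      done
      # wait for backfill from this step to settle before the next increase
  done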
Hi Kevin,
Yeah, that makes a lot of sense, and looks even safer than adding OSDs one by
one. What do you change, the crush weight? Or the reweight? (I guess you change
the crush weight, am I right?)
Thanks!
On 24 Jul 2019, at 19:17, Kevin Hrpcek <kevin.hrp...@ssec.wisc.edu> wrote:
I often add 50+ OSDs at a time and my cluster is all NLSAS. Here is what I do;
you can obviously change the weight increase steps to what you are comfortable
with. This has worked well for me and my workloads. I've sometimes seen peering
take longer if I do steps too quickly, but I don't run any [...]
Hi,
What would be the proper way to add 100 new OSDs to a cluster?
I have to add 100 new OSDs to our current cluster of more than 300 OSDs, and I
would like to know how you do it.
Usually, we add them quite slowly. Our cluster is a pure SSD/NVMe one, and it
can handle plenty of load, but for the sake of [...]