> To: Konstantin Shalygin
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Balancing cluster with large disks - 10TB HHD
>> I would still like to have a log somewhere to grep and inspect what
>> balancer/upmap actually does - when in my cluster. Or some ceph commands
>> that deliver some monitoring capabilities .. any suggestions?
>
> Yes, on ceph-mgr log, when log level is DEBUG.
Tried the docs .. something
On 12/30/18 6:48 PM, Marc Roos wrote:
> You mean the values in the reweight column or the weight column? Because
> from the commands in this thread I am assuming the weight column. Does
> this mean that the upmap is handling disk sizes automatically?

Reweight, not weight. Weight is a weight of your OSD - by default its
capacity in TiB, as set in the CRUSH map.
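Not quoted from the thread - just a generic sketch of the two commands behind
those two columns (the osd id 12 and the weight value are placeholders):

$ sudo ceph osd df tree | head -n 3            # shows the WEIGHT and REWEIGHT columns
$ sudo ceph osd crush reweight osd.12 9.09560  # WEIGHT: CRUSH weight, normally the OSD size in TiB
$ sudo ceph osd reweight 12 1.0                # REWEIGHT: 0..1 override, 1.0 means no override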
>> 4. Revert all your reweights.
>
> Done

You mean the values in the reweight column or the weight column? Because
from the commands in this thread I am assuming the weight column. Does
this mean that the upmap is handling disk sizes automatically? Currently
I am using the balancer (turned off).
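For reference, the state of the balancer module can be checked and toggled
with the standard commands (generic, not part of Marc's mail):

$ sudo ceph balancer status   # active true/false, current mode, queued plans
$ sudo ceph balancer on       # let the module apply optimizations automatically
$ sudo ceph balancer off      # stop it again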
> I would still like to have a log somewhere to grep and inspect what
> balancer/upmap actually does - when in my cluster. Or some ceph commands
> that deliver some monitoring capabilities .. any suggestions?

Yes, on ceph-mgr log, when log level is DEBUG.
You can get your cluster upmaps via
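A sketch of the usual places to look, assuming a Luminous ceph-mgr; the mgr
id and the log path are packaging defaults and may differ on your hosts:

$ sudo ceph daemon mgr.<id> config set debug_mgr 4/5         # on the active mgr host
$ sudo grep -iE 'balancer|upmap' /var/log/ceph/ceph-mgr.<id>.log
$ sudo ceph osd dump | grep pg_upmap                         # upmap entries currently in the OSD map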
Hi .. Just an update - this looks awesome .. and in an 8x5 company,
Christmas is a good period to rebalance a cluster :-)
> I'll try it out again - last I tried it complained about older clients -
> it should be better now.

upmap is supported since kernel 4.13.
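Before switching the balancer to upmap mode it is worth checking which client
releases are actually connected; a generic sketch, not part of the original
reply:

$ sudo ceph features                                    # lists connected clients per release/feature set
$ sudo ceph osd set-require-min-compat-client luminous  # prerequisite for upmap; refuses if older clients are connected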
> Second - should the reweights be set back to 1 then?

Yes, also:
1. `ceph osd crush tunables optimal`
2. All your buckets should be straw2, but in case
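The reply is cut off in the archive at this point. For the steps that are
visible (tunables, straw2 buckets, and the "revert all your reweights" item
quoted further up), a generic sketch of the matching commands - osd 42 is a
placeholder, and the straw2 line only inspects the CRUSH map, it does not
convert anything:

$ sudo ceph osd crush tunables optimal
$ sudo ceph osd crush dump | grep '"alg"'    # every bucket should report "straw2"
$ sudo ceph osd reweight 42 1.0              # repeat for each OSD whose REWEIGHT is not 1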
> Have a look at this thread on the mailing list:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46506.html

Ok, done .. how do I see that it actually works?
Second - should the reweights be set back to 1 then?
Jesper
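A generic way to see whether the balancer is actually producing and executing
plans (standard commands, not part of Jesper's mail):

$ sudo ceph balancer status   # shows mode and whether it is active
$ sudo ceph balancer eval     # current distribution score - lower is better
$ sudo ceph -s                # running optimizations show up as misplaced objects / backfill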
On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote:

Thanks for the insight and links.

> As I can see you are on Luminous. Since Luminous the Balancer plugin is
> available [1], you should use it instead of in-place reweights, especially
> in upmap mode [2]

I'll try it out again - last I tried it complained about older clients -
it should be better now.
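For completeness, enabling the plugin in upmap mode boils down to the standard
module commands (a generic sketch, assuming the min-compat-client step shown
earlier has been done):

$ sudo ceph mgr module enable balancer   # usually enabled by default on Luminous
$ sudo ceph balancer mode upmap
$ sudo ceph balancer on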
> Please, paste your `ceph osd df tree` and `ceph osd dump | head -n 12`.

$ sudo ceph osd df tree
ID  CLASS    WEIGHT    REWEIGHT SIZE USE    AVAIL  %USE  VAR  PGS TYPE NAME
-8           639.98883        - 639T 327T   312T   51.24 1.00   - root default
-10          111.73999        - 111T 58509G 55915G 51.13 1.00   -     host bison
78  hdd_fast   0.90900  1.0
Hi.
We hit an OSD_FULL last week on our cluster - with an average utilization
of less than 50% .. thus hugely imbalanced. This has driven us to
go for adjusting PGs upwards and reweighting the OSDs more aggressively.
Question: What do people see as an "acceptable" variance across OSDs?
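For putting a number on the imbalance, `ceph osd df` already reports a VAR
column (each OSD's utilization relative to the cluster average) and a summary
line with MIN/MAX VAR and a standard deviation - a quick generic check, not
from the thread:

$ sudo ceph osd df | tail -n 2   # TOTAL line plus "MIN/MAX VAR: ...  STDDEV: ..."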