Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2019-01-02 Thread Thomas Byrne - UKRI STFC
> To: Konstantin Shalygin > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Balancing cluster with large disks - 10TB HDD > > >> I would still like to have a log somewhere to grep and inspect what >> balancer/upmap actually does - when in my cluster. Or some ceph

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-30 Thread jesper
>> I would still like to have a log somewhere to grep and inspect what >> balancer/upmap >> actually does - when in my cluster. Or some ceph commands that deliver >> some monitoring capabilities .. any suggestions? > Yes, on ceph-mgr log, when log level is DEBUG. Tried the docs .. something
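
For illustration, a minimal sketch of raising the ceph-mgr log level and grepping the log for balancer activity. The log path and the assumption that the mgr id equals the short hostname are mine, not from the thread; adjust for your deployment:

# On the host running the active mgr: raise its debug level via the admin socket
$ ceph daemon mgr.$(hostname -s) config set debug_mgr 4/5
# Then watch what the balancer module plans and applies
$ tail -f /var/log/ceph/ceph-mgr.$(hostname -s).log | grep -i balancer
# Remember to set debug_mgr back to its previous value afterwards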

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-30 Thread Konstantin Shalygin
On 12/30/18 6:48 PM, Marc Roos wrote: You mean the values in the reweight column or the weight column? Because from the commands in this thread I am assuming the weight column. Does this mean that the upmap is handling disk sizes automatically? Reweight, not weight. Weight is a weight of
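
For context, the two columns are set by two different commands; a short sketch (osd.78 and the values are only examples, loosely taken from the `ceph osd df tree` output later in this thread):

# WEIGHT column = CRUSH weight, normally the device capacity in TiB; rarely changed by hand
$ ceph osd crush reweight osd.78 0.90900
# REWEIGHT column = temporary 0..1 override; this is what the balancer/upmap makes unnecessary
$ ceph osd reweight 78 1.0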

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-30 Thread Marc Roos
>> 4. Revert all your reweights. > Done You mean the values in the reweight column or the weight column? Because from the commands in this thread I am assuming the weight column. Does this mean that the upmap is handling disk sizes automatically? Currently I am using the balancer (turned off)
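
A sketch of reverting every override reweight back to 1.0 in one go, assuming a bash shell and jq; check `ceph osd df` before and after, since reverting reweights will move data:

# Reset the REWEIGHT column to 1.0 for all OSDs
$ for id in $(ceph osd df --format json | jq -r '.nodes[].id'); do
    ceph osd reweight "$id" 1.0
  done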

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-29 Thread Konstantin Shalygin
I would still like to have a log somewhere to grep and inspect what balancer/upmap actually does - when in my cluster. Or some ceph commands that deliver some monitoring capabilities .. any suggestions? Yes, on ceph-mgr log, when log level is DEBUG. You can get your cluster upmaps via
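
The upmap entries the balancer injects are stored in the OSD map, so they can also be inspected without touching the mgr log; a minimal example:

# pg_upmap_items entries created by the balancer show up in the OSD map
$ ceph osd dump | grep upmap
# Count them to watch the balancer make progress over time
$ ceph osd dump | grep -c pg_upmap_items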

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-28 Thread jesper
Hi. .. Just an update - This looks awesome.. and in an 8x5 company - Christmas is a good period to rebalance a cluster :-) >> I'll try it out again - last I tried it complained about older clients - >> it should be better now. > upmap is supported since kernel 4.13. > >> Second - should the
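
For reference, a sketch of verifying client compatibility before switching to upmap (standard commands; upmap requires all clients to be Luminous-capable, which for kernel clients means 4.13 or newer as noted above):

# Show which feature releases the currently connected clients report
$ ceph features
# Only once no pre-luminous clients remain:
$ ceph osd set-require-min-compat-client luminous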

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-26 Thread Konstantin Shalygin
I'll try it out again - last I tried it complained about older clients - it should be better now. upmap is supported since kernel 4.13. Second - should the reweights be set back to 1 then? Yes, also: 1. `ceph osd crush tunables optimal` 2. All your buckets should be straw2, but in case
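
Both steps are standard commands; a short sketch (note that changing tunables or converting buckets to straw2 can trigger noticeable data movement, so plan for it):

# 1. Switch CRUSH tunables to the optimal profile for this release
$ ceph osd crush tunables optimal
# 2. Convert any remaining straw buckets to straw2
$ ceph osd crush set-all-straw-buckets-to-straw2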

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-26 Thread jesper
> Have a look at this thread on the mailing list: > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46506.html Ok, done.. how do I see that it actually works? Second - should the reweights be set back to 1 then? Jesper
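
One way to see whether the balancer is actually doing something, using the standard balancer module commands:

# Module state and current mode
$ ceph balancer status
# Distribution score; lower is better, so re-run later and compare
$ ceph balancer eval
# The VAR column should drift toward 1.00 as PGs are remapped
$ ceph osd df tree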

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-26 Thread Heðin Ejdesgaard Møller
On Wed, 2018-12-26 at 16:30 +0100, jes...@krogh.cc wrote: > > On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote: > > > Thanks for the insight and links. > > > > > > > As I can see

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-26 Thread jesper
> On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote: >> Thanks for the insight and links. >> >> > As I can see you are on Luminous. Since Luminous, the Balancer plugin is >> > available [1], you should use it in place of reweights, >>

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-26 Thread Heðin Ejdesgaard Møller
On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote: > Thanks for the insight and links. > > > As I can see you are on Luminous. Since Luminous, the Balancer plugin is > > available [1], you should use it in place of reweights, especially > >

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-26 Thread jesper
Thanks for the insight and links. > As I can see you are on Luminous. Since Luminous, the Balancer plugin is > available [1], you should use it in place of reweights, especially > in upmap mode [2]. I'll try it out again - last I tried it complained about older clients - it should be better
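
For reference, a minimal enable sequence for the balancer module in upmap mode, assuming Luminous or later ([1] and [2] are the documentation links referenced above):

# Load the balancer mgr module if it is not already enabled
$ ceph mgr module enable balancer
# Pick upmap mode, then let it run automatically
$ ceph balancer mode upmap
$ ceph balancer on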

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread Konstantin Shalygin
$ sudo ceph osd df tree
ID  CLASS     WEIGHT     REWEIGHT  SIZE  USE     AVAIL   %USE   VAR   PGS  TYPE NAME
 -8           639.98883  -         639T  327T    312T    51.24  1.00  -    root default
-10           111.73999  -         111T  58509G  55915G  51.13  1.00  -    host bison
 78 hdd_fast  0.90900    1.0

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread jesper
> Please, paste your `ceph osd df tree` and `ceph osd dump | head -n 12`.
$ sudo ceph osd df tree
ID  CLASS  WEIGHT     REWEIGHT  SIZE  USE   AVAIL  %USE   VAR   PGS  TYPE NAME
 -8         639.98883  -        639T  327T  312T   51.24  1.00  -    root default
-10         111.73999  -        111T

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread Konstantin Shalygin
We hit an OSD_FULL last week on our cluster - with an average utilization of less than 50% .. thus hugely imbalanced. This has driven us to go for adjusting PGs upwards and reweighting the OSDs more aggressively. Question: What do people see as an "acceptable" variance across OSDs? x N

[ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread jesper
Hi. We hit an OSD_FULL last week on our cluster - with an average utilization of less than 50% .. thus hugely imbalanced. This has driven us to go for adjusting PGs upwards and reweighting the OSDs more aggressively. Question: What do people see as an "acceptable" variance across OSDs? x
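
For illustration, one quick way to put a number on that variance, assuming jq is available; the `utilization` field name matches the Luminous-era JSON output of `ceph osd df`, so verify it on your release:

# Min/max/average %USE across all OSDs (the VAR column is each OSD's use relative to the mean)
$ ceph osd df --format json | jq '[.nodes[].utilization] | {min: min, max: max, avg: (add/length)}'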