Re: [ceph-users] Data distribution question

2019-05-02 Thread Shain Miley
Just to follow up on this: I ended up enabling the balancer module in upmap mode. This did resolve the short-term issue and evened things out a bit...but things are still far from uniform. It seems like the balancer option is an ongoing process that continues to run over time...so maybe
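Putting together the steps given later in this thread, the full sequence to enable the upmap balancer on Luminous looks roughly like this. A sketch, so check the docs for your exact release:

# ceph osd set-require-min-compat-client luminous
# ceph mgr module enable balancer
# ceph balancer mode upmap
# ceph balancer on
# ceph balancer status

The last command reports whether the balancer is active and which mode it is running in.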

Re: [ceph-users] Data distribution question

2019-04-30 Thread Dan van der Ster
On Tue, Apr 30, 2019 at 9:01 PM Igor Podlesny wrote:
>
> On Wed, 1 May 2019 at 01:26, Igor Podlesny wrote:
> > On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> > >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
> > >>
> > >> mode upmap

Re: [ceph-users] Data distribution question

2019-04-30 Thread Igor Podlesny
On Wed, 1 May 2019 at 01:58, Dan van der Ster wrote:
> On Tue, Apr 30, 2019 at 8:26 PM Igor Podlesny wrote:
[...]
> All of the clients need to be luminous or newer:
>
> # ceph osd set-require-min-compat-client luminous
>
> You need to enable the module:
>
> # ceph mgr module enable balancer

Re: [ceph-users] Data distribution question

2019-04-30 Thread Igor Podlesny
On Wed, 1 May 2019 at 01:26, Igor Podlesny wrote:
> On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
> >>
> >> mode upmap ?
>
> yes, mgr balancer, mode upmap.

Also -- do your CEPHs have

Re: [ceph-users] Data distribution question

2019-04-30 Thread Dan van der Ster
On Tue, Apr 30, 2019 at 8:26 PM Igor Podlesny wrote:
>
> On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
> >>
> >> mode upmap ?
>
> yes, mgr balancer, mode upmap.
>
> I see. Was it a

Re: [ceph-users] Data distribution question

2019-04-30 Thread Igor Podlesny
On Wed, 1 May 2019 at 01:26, Jack wrote:
> If those pools are useless, you can:
> - drop them

As Dan pointed out, that's unlikely to have any effect. The thing is, imbalance is a "property" of a pool -- most often, I'd suppose, of the most loaded one (or of a few of the most loaded ones). Not that

Re: [ceph-users] Data distribution question

2019-04-30 Thread Igor Podlesny
On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
>> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
>>
>> mode upmap ?
>
> yes, mgr balancer, mode upmap.

I see. Was it a matter of just:
1) ceph balancer mode upmap
2) ceph balancer on
or were

Re: [ceph-users] Data distribution question

2019-04-30 Thread Jack
You have a lot of useless PGs, yet they have the same "weight" as the useful ones. If those pools are useless, you can:
- drop them
- raise npr_archive's pg_num using the freed PGs
As npr_archive owns 97% of your data, it should get 97% of your PGs (which is ~8000). The balancer module is still quite
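For context, the ~8000 figure implies roughly 8000 / 0.97, i.e. about 8250 PGs cluster-wide. One way to total pg_num across all pools with the stock ceph CLI (a sketch; 'ceph osd pool get' prints lines like "pg_num: 64", so awk sums the second field):

# ceph osd pool ls | while read p; do ceph osd pool get "$p" pg_num; done | awk '{sum += $2} END {print sum}'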

Re: [ceph-users] Data distribution question

2019-04-30 Thread Dan van der Ster
Removing pools won't make a difference. Read up to slide 22 here: https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer .. Dan (Apologies for terseness, I'm mobile) On Tue, 30 Apr 2019, 20:02 Shain Miley, wrote: > Here is the

Re: [ceph-users] Data distribution question

2019-04-30 Thread Shain Miley
Here is the per pool pg_num info:
'data' pg_num 64
'metadata' pg_num 64
'rbd' pg_num 64
'npr_archive' pg_num 6775
'.rgw.root' pg_num 64
'.rgw.control' pg_num 64
'.rgw' pg_num 64
'.rgw.gc' pg_num 64
'.users.uid' pg_num 64
'.users.email' pg_num 64
'.users' pg_num 64
'.usage' pg_num 64
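A listing like this comes straight from the cluster: 'ceph osd pool ls detail' prints pg_num (and pgp_num) for every pool in one command:

# ceph osd pool ls detail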

Re: [ceph-users] Data distribution question

2019-04-30 Thread Dan van der Ster
On Tue, 30 Apr 2019, 19:32 Igor Podlesny, wrote:
> On Wed, 1 May 2019 at 00:24, Dan van der Ster wrote:
> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
> >
> > .. Dan
>
> mode upmap ?

yes, mgr balancer, mode upmap.

.. Dan

> --
> End of

Re: [ceph-users] Data distribution question

2019-04-30 Thread Igor Podlesny
On Wed, 1 May 2019 at 00:24, Dan van der Ster wrote:
> The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
>
> .. Dan

mode upmap ?

--
End of message. Next message?

Re: [ceph-users] Data distribution question

2019-04-30 Thread Dan van der Ster
The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.

.. Dan

On Tue, 30 Apr 2019, 19:22 Kenneth Van Alstyne, wrote:
> Unfortunately it looks like he's still on Luminous, but if upgrading is an option, the options are indeed significantly better. If I recall
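To check how uniform the distribution actually is, 'ceph osd df' ends with a MIN/MAX VAR and STDDEV summary across OSDs, and, with the balancer module enabled, 'ceph balancer eval' prints the current distribution score (lower means more even):

# ceph osd df
# ceph balancer eval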

Re: [ceph-users] Data distribution question

2019-04-30 Thread Kenneth Van Alstyne
Unfortunately it looks like he's still on Luminous, but if upgrading is an option, the options are indeed significantly better. If I recall correctly, at least the balancer module is available in Luminous.

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC

Re: [ceph-users] Data distribution question

2019-04-30 Thread Jack
Hi,

I see that you are using RGW. RGW comes with many pools, yet most of them are used for metadata and configuration; those do not store much data. Such pools do not need more than a couple of PGs each (I use pg_num = 8). You need to allocate your PGs to the pool that actually stores the data. Please do
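To see which pools actually hold the data before moving PGs around, per-pool usage is visible with:

# ceph df

The POOLS section lists USED and %USED per pool, which makes a figure like the 97% mentioned earlier easy to verify.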

Re: [ceph-users] Data distribution question

2019-04-30 Thread Kenneth Van Alstyne
Shain: Have you looked into doing a "ceph osd reweight-by-utilization" by chance? I've found that data distribution is rarely perfect and on aging clusters, I always have to do this periodically.

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled
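For reference, there is a dry-run variant, so a typical invocation (a sketch) is to test first and then apply; the optional threshold argument, here 120, means only OSDs above 120% of mean utilization get reweighted down:

# ceph osd test-reweight-by-utilization 120
# ceph osd reweight-by-utilization 120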