[ceph-users] Cephfs metadata fix tool

2019-12-07 Thread Robert LeBlanc
Our Jewel cluster is exhibiting some issues similar to the one in this thread [0], and it was indicated that a tool would need to be written to fix that kind of corruption. Has the tool been written? How would I go about repairing these 16 EB directories that won't delete? Thank you, Robert LeBlanc
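For context, the impossible ~16 EB figure is what a negative directory statistic looks like once it lands in an unsigned 64-bit counter: the value wraps around to just under 2^64 bytes (~16 EiB). A minimal sketch of the arithmetic; the 4 KiB drift is illustrative, not a value from this cluster:

    # Sketch: why a corrupted CephFS directory statistic shows up as ~16 EB.
    # A small negative byte count wraps around in an unsigned 64-bit field.
    def as_uint64(value: int) -> int:
        """Interpret a (possibly negative) integer as an unsigned 64-bit value."""
        return value & (2**64 - 1)

    wrapped = as_uint64(-4096)        # accounting drifted 4 KiB below zero
    print(wrapped)                    # 18446744073709547520
    print(wrapped / 2**60)            # ~16.0, i.e. reported as roughly 16 EiB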

[ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Philippe D'Anjou
I never had these issues with Luminous, not once; since Nautilus this has been a constant headache. My issue is that I have OSDs that are over 85% full while others are at 63%, and that every time I do a rebalance or add new disks Ceph moves PGs onto near-full OSDs and almost causes pool failures.
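One way to quantify that spread is to parse `ceph osd df --format json`. The sketch below assumes the Nautilus JSON layout (a "nodes" list with "name", "utilization" and "pgs" fields); verify the field names against the actual output on your cluster:

    # Sketch: summarise per-OSD utilization spread from `ceph osd df`.
    # JSON field names are assumed from Nautilus output; check locally.
    import json
    import subprocess

    out = subprocess.run(["ceph", "osd", "df", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    osds = json.loads(out)["nodes"]

    usage = {o["name"]: o["utilization"] for o in osds}
    print("min %USE:", round(min(usage.values()), 2))
    print("max %USE:", round(max(usage.values()), 2))
    for name, pct in sorted(usage.items(), key=lambda kv: -kv[1])[:5]:
        flag = "over 85%, near full" if pct >= 85 else "ok"
        print(f"{name}: {pct:.2f}%  ({flag})")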

Re: [ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Wido den Hollander
On 12/7/19 3:39 PM, Philippe D'Anjou wrote: > @Wido Den Hollander  > > First of all the docs say: "In most cases, this distribution is > “perfect,” which is an equal number of PGs on each OSD (+/-1 PG, since they > might not divide evenly)." > Either this is just false information or very badly

[ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Philippe D'Anjou
@Wido Den Hollander  First of all the docs say: "In most cases, this distribution is “perfect,” which is an equal number of PGs on each OSD (+/-1 PG, since they might not divide evenly)." Either this is just false information or very badly stated. I increased PGs and see no difference. I pointed

Re: [ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Wido den Hollander
On 12/7/19 1:42 PM, Philippe D'Anjou wrote: > @Wido Den Hollander  > > That doesn't explain why it's between 76 and 92 PGs, that's far from equal. The balancer balances PGs so that all OSDs have almost equal data usage. It doesn't balance so that all OSDs have an equal number of PGs.
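A toy illustration of that distinction: two OSDs can carry nearly identical bytes while holding quite different PG counts, because individual PG sizes are not uniform. The numbers below are made up for the example:

    # Sketch: equal data usage does not imply equal PG counts.
    # PG sizes are invented; real PGs vary in size with object placement.
    osd_a_pg_sizes_gib = [10.0] * 92     # 92 smaller PGs
    osd_b_pg_sizes_gib = [12.1] * 76     # 76 larger PGs

    print(sum(osd_a_pg_sizes_gib))       # 920.0 GiB
    print(sum(osd_b_pg_sizes_gib))       # ~919.6 GiB -> nearly identical usage
    print(len(osd_a_pg_sizes_gib), len(osd_b_pg_sizes_gib))   # 92 vs 76 PGs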

[ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Philippe D'Anjou
@Wido Den Hollander  That doesn't explain why it's between 76 and 92 PGs, that's far from equal. Raising PGs to 100 is an old recommendation anyway; anything 60+ should be fine. Not an excuse for distribution failure in this case. I am expecting more or less equal PGs/OSD
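For a rough expectation of PGs per OSD, divide the pool's total PG replicas across the OSDs backing it, weighted by CRUSH weight. The pg_num, replica size and OSD count below are placeholders, not values taken from this cluster:

    # Sketch: back-of-the-envelope expected PGs per OSD.
    # All inputs are placeholders for illustration only.
    pg_num = 1024                     # PGs in the pool
    size = 3                          # replication factor
    osd_weights = [3.49219] * 36      # CRUSH weights of OSDs backing the pool

    total_weight = sum(osd_weights)
    for i, w in enumerate(osd_weights[:3]):
        expected = pg_num * size * w / total_weight
        print(f"osd.{i}: ~{expected:.1f} PGs expected")   # ~85.3 each here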

Re: [ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Wido den Hollander
On 12/7/19 11:42 AM, Philippe D'Anjou wrote: > Hi, > the docs say the upmap mode is trying to achieve perfect distribution as > to have an equal number of PGs per OSD. > This is what I got (v14.2.4): > >   0  ssd  3.49219  1.0  3.5 TiB  794 GiB  753 GiB  38 GiB  3.4 GiB  2.7 TiB  22.20  0.32  82  up

[ceph-users] PG Balancer Upmap mode not working

2019-12-07 Thread Philippe D'Anjou
Hi, the docs say the upmap mode is trying to achieve perfect distribution, as in an equal number of PGs per OSD. This is what I got (v14.2.4):
  0  ssd  3.49219  1.0  3.5 TiB  794 GiB  753 GiB  38 GiB  3.4 GiB  2.7 TiB  22.20  0.32  82  up
  1  ssd  3.49219  1.0  3.5 TiB  800 GiB  751 GiB  45 GiB
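Reading the quoted output: VAR is an OSD's %USE relative to the cluster-wide average, so a %USE of 22.20 at VAR 0.32 implies an average utilization near 69%, i.e. this OSD sits well below the mean while others must sit well above it:

    # Sketch: what the VAR column on the quoted `ceph osd df` line implies.
    # VAR = this OSD's %USE divided by the cluster-wide average %USE.
    osd0_use = 22.20      # %USE reported for osd.0
    osd0_var = 0.32       # VAR reported for osd.0

    cluster_avg = osd0_use / osd0_var
    print(f"implied cluster average utilization: ~{cluster_avg:.1f}%")   # ~69.4%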