[ceph-users] Re: Space leak in Bluestore

2020-03-27 Thread vitalif
Update on my issue: it seems it was caused by broken compression, which one of the 14.2.x releases (Ubuntu builds) probably had. My OSD versions were mixed: five OSDs were 14.2.7, one was 14.2.4, and the other six were 14.2.8. I moved the same PG several times more. Space usage dropped when the PG was …
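
One way to confirm a version mix like this is the cluster's own versions report (a sketch, assuming an admin node with a working keyring):

    # Summarize which release each running daemon reports; a mix of
    # 14.2.x versions here matches the situation described above.
    ceph versions

    # Or ask every OSD directly:
    ceph tell osd.* version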

[ceph-users] Re: Space leak in Bluestore

2020-03-26 Thread vitalif
Hi, the cluster is all-flash (NVMe), so the removal is fast, and the leak is in fact pretty noticeable, even on Prometheus graphs. I've also logged raw space usage from `ceph -f json df`: 1) before the PG rebalance started, the space usage was 32724002664448 bytes; 2) just before the rebalance finished it …
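
A minimal sketch of that kind of logging (the exact JSON field names vary by release, so treat `total_used_raw_bytes` as an assumption and check it against your own `ceph df -f json` output; jq is assumed to be installed):

    # Append a timestamped raw-usage sample every 60 seconds.
    while true; do
        echo "$(date -u +%FT%TZ) $(ceph df -f json | jq '.stats.total_used_raw_bytes')"
        sleep 60
    done >> raw_usage.log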

[ceph-users] Re: Space leak in Bluestore

2020-03-26 Thread Igor Fedotov
Hi Vitaliy, just a guess to verify: a while ago I observed a very long removal of a (pretty large) pool. It took several days to complete. The DB was on a spinner, which was one driver of this slow behavior. Another was the PG removal design, which enumerates up to 30 entries max to fill a single …
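
Whether an OSD's DB sits on rotational media can be read from the OSD metadata (a sketch; `bluefs_db_rotational` is the field name I'd expect here, so verify it against your own output):

    # "1" means the BlueFS DB device is rotational (a spinner), "0" means flash.
    ceph osd metadata 0 | jq -r '.bluefs_db_rotational'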

[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread Виталий Филиппов
Hi Igor, I think so because: 1) space usage increases after each rebalance, even when the same PG is moved twice (!); 2) I have used 4k min_alloc_size from the beginning. One crazy hypothesis is that maybe Ceph allocates space for uncompressed objects, then compresses them and leaks (uncompressed-compre …
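
That hypothesis can be cross-checked against BlueStore's per-OSD perf counters (a sketch, assuming access to the OSD admin socket; the counter names below are as I'd expect in Nautilus builds, so verify them in your own perf dump):

    # Compare allocated vs. stored bytes, and compressed-original vs.
    # compressed-allocated bytes; a gap that grows with every PG move
    # would point at a leak on the allocation side.
    ceph daemon osd.0 perf dump | jq '.bluestore | {bluestore_allocated,
        bluestore_stored, bluestore_compressed,
        bluestore_compressed_allocated, bluestore_compressed_original}'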

[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread Igor Fedotov
Bluestore fsck/repair detect and fix leaks at the Bluestore level, but I doubt your issue is there. To be honest, I don't understand from the overview why you think there are any leaks at all. Not sure whether this is relevant, but from my experience space "leaks" are sometimes caused by …
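
For reference, the fsck/repair mentioned here runs offline against a stopped OSD (a sketch; the path follows the default layout):

    # Stop the OSD first, then:
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0

    # --deep also reads object data and verifies checksums;
    # repair fixes what fsck finds:
    ceph-bluestore-tool fsck --deep --path /var/lib/ceph/osd/ceph-0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0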

[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread vitalif
I have a question regarding this problem: is it possible to rebuild Bluestore allocation metadata? I could try it to test whether it's an allocator problem... Hi. I'm experiencing some kind of a space leak in Bluestore. I use EC, compression and snapshots. First I thought that the leak was caused …
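
On the allocator question, ceph-bluestore-tool can at least dump and score the free-space metadata (a sketch; availability of these subcommands depends on the release, so check `ceph-bluestore-tool help` first):

    # Dump the free extents as recorded by the allocator:
    ceph-bluestore-tool free-dump --path /var/lib/ceph/osd/ceph-0

    # Print a fragmentation estimate for the free space
    # (higher means more fragmented):
    ceph-bluestore-tool free-score --path /var/lib/ceph/osd/ceph-0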

[ceph-users] Re: Space leak in Bluestore

2020-03-24 Thread Mark Nelson
FWIW, Igor has been doing some great work on improving performance with the 4K min_alloc_size. He gave a presentation on it at a recent weekly performance meeting, and it's looking really good. On HDDs I think he was seeing up to 2x faster 8K-128K random writes at the expense of up to a 20% se …
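
For context, min_alloc_size is set per device class and is baked in when an OSD is created, so changing it only affects OSDs deployed afterwards (a sketch, assuming the centralized config database):

    # New HDD OSDs created after this will use a 4 KiB allocation unit:
    ceph config set osd bluestore_min_alloc_size_hdd 4096

    # Show the currently configured value; note this can differ from the
    # value an existing OSD was actually built with at mkfs time.
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd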

[ceph-users] Re: Space leak in Bluestore

2020-03-24 Thread vitalif
Hi Steve, thanks, it's an interesting discussion; however, I don't think it's the same problem, because in my case Bluestore eats additional space during rebalance, and it doesn't seem that Ceph does small overwrites during rebalance. As I understand it, it does the opposite: it reads and wri …

[ceph-users] Re: Space leak in Bluestore

2020-03-24 Thread Steven Pine
Hi Vitaliy, you may be coming across the EC space amplification issue, https://tracker.ceph.com/issues/44213. I am not aware of any recent updates to resolve this issue. Sincerely, On Tue, Mar 24, 2020 at 12:53 PM wrote: > Hi. > > I'm experiencing some kind of a space leak in Bluestore. I use …
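
As a rough illustration of the amplification in that tracker (a sketch with assumed numbers: a k=4, m=2 EC profile and the 64 KiB HDD default min_alloc_size):

    # A 16 KiB client write is split into 4 KiB chunks across 6 shards,
    # but each shard allocates a full 64 KiB unit:
    echo $(( 6 * 64 * 1024 ))   # 393216 bytes allocated
    echo $(( 16 * 1024 ))       # 16384 bytes of user data (~24x amplification)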