(resending) Sounds like a bug. Can you open a ceph tracker issue?
Thanks,
Yehuda

On Mon, Jun 18, 2018 at 7:24 AM, Sander van Schie / True
<sander.vansc...@true.nl> wrote:
> While Ceph was resharding buckets over and over again, the maximum available
> storage as reported by 'ceph df' also decreased by about 20%, while usage
> stayed the same. We have yet to find out where the missing storage went. The
> decrease stopped once we disabled resharding.
>
> Any help would be greatly appreciated.
>
> Thanks,
>
> Sander
>
> ________________________________________
> From: Sander van Schie / True
> Sent: Friday, June 15, 2018 14:19
> To: ceph-users@lists.ceph.com
> Subject: RGW Dynamic bucket index resharding keeps resharding all buckets
>
> Hello,
>
> We're running into some problems with dynamic bucket index resharding. After
> an upgrade from Ceph 12.2.2 to 12.2.5, which fixed a resharding issue when
> using tenants (which we do), the cluster was busy resharding for 2 days
> straight, resharding the same buckets over and over again.
>
> After disabling it and re-enabling it a while later, it resharded all buckets
> again and then kept quiet for a bit. Later on it started resharding buckets
> over and over again, even buckets which didn't have any data added in the
> meantime. In the reshard list it always says 'old_num_shards: 1' for every
> bucket, even though I can confirm with 'bucket stats' that the desired number
> of shards is already present. It looks like the background process which
> scans buckets doesn't properly recognize the number of shards a bucket
> currently has. When I manually add a reshard job, it does properly recognize
> the current number of shards.
>
> On a side note, we had two buckets in the reshard list which were removed a
> long while ago. We were unable to cancel the reshard jobs for those buckets.
> After recreating the users and buckets we were able to remove them from the
> list, so they are no longer present. Probably not relevant, but you
> never know.
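For anyone hitting the same behaviour: dynamic resharding can be switched off on Luminous with the RGW option below until the underlying bug is resolved. This is a minimal sketch; the section name depends on your RGW instance id (shown here as a hypothetical client.rgw.gateway1), and the option can also go under a broader [client] or [global] section.

```ini
# ceph.conf on the RGW host; restart the radosgw process afterwards.
# Section name is an example -- match it to your rgw instance id.
[client.rgw.gateway1]
rgw_dynamic_resharding = false
```

With resharding disabled, any pending jobs can still be inspected with 'radosgw-admin reshard list' and cancelled per bucket with 'radosgw-admin reshard cancel --bucket=<name>'.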
>
> Are we missing something, or are we running into a bug?
>
> Thanks,
>
> Sander
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com