Hi Luis,

There are currently open issues with space reclamation after dynamic
bucket index resharding, especially http://tracker.ceph.com/issues/34307

Changes are being worked on to address this, and to permit
administratively reclaiming space.
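For anyone who needs to clean up by hand in the meantime, here is a
rough sketch. Note that the reshard stale-instances subcommands are an
assumption on my part and only exist in builds that carry the cleanup
work, so check radosgw-admin's help output first:

    # Assumed subcommands; only present in builds that include the
    # stale-instance cleanup work. Verify with: radosgw-admin reshard --help
    # List bucket index instances left behind by dynamic resharding:
    radosgw-admin reshard stale-instances list
    # Remove them (avoid this on multisite configurations):
    radosgw-admin reshard stale-instances rm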

Matt

On Tue, Oct 9, 2018 at 5:50 AM, Luis Periquito <periqu...@gmail.com> wrote:
> Hi all,
>
> I have several clusters, all running Luminous (12.2.7), providing an
> S3 interface. All of them have dynamic resharding enabled, and it is
> working.
>
> One of the newer clusters is starting to give warnings about the
> space used by the OMAP directory. The default.rgw.buckets.index pool
> is replicated with 3x copies of the data.
>
> I created a new CRUSH ruleset to use only a few well-known SSDs, and
> the OMAP directory size changed as expected: if I set an OSD out and
> then tell it to compact, the size of the OMAP shrinks; if I set the
> OSD back in, the OMAP grows back to its previous size. While the
> backfill is running we see loads of key recoveries. (Roughly the
> sequence sketched below.)
>
> Total physical space used for OMAP on the OSDs that hold it is ~1TB,
> so given the 3x replication that is ~330G before replication.
>
> The data size for default.rgw.buckets.data is just under 300G. There
> is one bucket that has ~1.7M objects and 22 shards (see the stats
> sketch below).
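>
> Those numbers come from the admin tool; the bucket name below is a
> placeholder:
>
>     # placeholder bucket name; shows num_objects and per-bucket usage
>     radosgw-admin bucket stats --bucket=mybucket
>     # shows per-shard fill status against the objects-per-shard limit
>     radosgw-admin bucket limit check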
>
> After deleting that bucket the size of the database didn't change,
> even after running the gc process and telling the OSDs to compact
> their databases (roughly what is sketched below).
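>
> For reference, roughly what I ran. The marker is a placeholder; as I
> understand it, index objects are named .dir.<bucket marker>.<shard>:
>
>     # run a garbage-collection pass
>     radosgw-admin gc process
>     # check whether the deleted bucket's index objects still exist
>     rados -p default.rgw.buckets.index ls | grep 1234abcd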
>
> This is not happening on older clusters, i.e. those originally
> created with Hammer. Could this be a bug?
>
> I looked at getting all the OMAP keys and their sizes
> (https://ceph.com/geen-categorie/get-omap-keyvalue-size/), and they
> add up to close to the value I expected them to take, based on the
> physical storage.
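>
> A minimal sketch of that approach, counting omap keys per index
> object (key sizes would need listomapvals as well):
>
>     pool=default.rgw.buckets.index
>     for obj in $(rados -p "$pool" ls); do
>       echo "$obj $(rados -p "$pool" listomapkeys "$obj" | wc -l)"
>     done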
>
> Any ideas where to look next?
>
> Thanks for all the help.



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
