We would be a big user of this. We delete large buckets often and it takes
forever.

Though didn't I read that 'object expiration' support is on the near-term
RGW roadmap? That may do what we want. We're creating thousands of objects
a day, and thousands of objects a day will be expiring, so RGW will need to
handle that volume.


-Ben

On Wed, Mar 16, 2016 at 9:40 AM, Yehuda Sadeh-Weinraub <[email protected]>
wrote:

> On Tue, Mar 15, 2016 at 11:36 PM, Pavan Rallabhandi
> <[email protected]> wrote:
> > Hi,
> >
> > I find this has been discussed here before, but I couldn't find any
> > solution, hence the mail. In RGW, for a bucket holding objects on the
> > order of millions, deleting the bucket (via radosgw-admin) can take
> > forever. I understand that gc (and its parameters) would reclaim the
> > space eventually, but I am looking more at the bucket deletion options
> > that could possibly speed up the operation.
> >
> > I realize rgw_remove_bucket() currently does it 1000 objects at a
> > time, serially. I wanted to know if there is a reason (that I am
> > possibly missing, and that was discussed) for it being left that way;
> > otherwise I was considering a patch to improve it.
> >
>
> There is no real reason. You might want to have a version of that
> command that doesn't schedule the removal to gc, but rather removes
> all the object parts by itself; otherwise you're just going to flood
> the gc. You'll need to iterate through all the objects, and for each
> object remove all of its rados objects (starting with the tail, then
> the head). Removal of each rados object can be done asynchronously,
> but you'll need to throttle the operations rather than send
> everything to the osds at once (which would be impossible anyway, as
> the objecter will throttle the requests, leading to high memory
> consumption).
>
> Thanks,
> Yehuda
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
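The approach Yehuda describes could be sketched roughly as below. This is a
minimal illustration only: `delete_rados_object()` and the `(head, [tails])`
object layout are hypothetical stand-ins for the real RGW/RADOS calls, and a
semaphore plays the role of the throttle on in-flight deletes.

```python
# Sketch: throttled bulk delete, removing each object's tail rados
# objects before its head, with a cap on concurrent operations.
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 32          # throttle: cap on concurrent delete ops

deleted = []                # stub record of completed deletes
_lock = threading.Lock()

def delete_rados_object(name):
    # Hypothetical stand-in for an actual RADOS object removal.
    with _lock:
        deleted.append(name)

def delete_bucket_objects(objects):
    """objects: iterable of (head_name, [tail_names]) pairs."""
    sem = threading.Semaphore(MAX_IN_FLIGHT)

    def _delete(name):
        try:
            delete_rados_object(name)
        finally:
            sem.release()               # free a slot for the next delete

    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        for head, tails in objects:
            futures = []
            for part in tails:
                sem.acquire()           # block if too many are in flight
                futures.append(pool.submit(_delete, part))
            for f in futures:
                f.result()              # all tails removed before the head
            delete_rados_object(head)   # head last, per the tail-first order

objs = [("obj1", ["obj1.tail0", "obj1.tail1"]),
        ("obj2", ["obj2.tail0"])]
delete_bucket_objects(objs)
```

The semaphore is what keeps the client from dumping every request on the
osds at once; without it, the objecter's own throttling would back requests
up in memory, as noted above.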
