On Wed, Nov 26, 2014 at 3:09 PM, b <[email protected]> wrote:
> On 2014-11-27 09:38, Yehuda Sadeh wrote:
>>
>> On Wed, Nov 26, 2014 at 2:32 PM, b <[email protected]> wrote:
>>>
>>> I've been deleting a bucket which originally had 60TB of data in it; with
>>> our cluster using only 2x replication, the total usage was 120TB.
>>>
>>> I've been deleting the objects slowly using S3 Browser, and I can see the
>>> bucket usage is now down to around 2.5TB (5TB with replication), but the
>>> usage in the cluster has not changed.
>>>
>>> I've looked at garbage collection (radosgw-admin gc list --include all)
>>> and
>>> it just reports square brackets "[]"
>>>
>>> I've run radosgw-admin temp remove --date=2014-11-20, and it doesn't
>>> appear
>>> to have any effect.
>>>
>>> Is there a way to check where this space is being consumed?
>>>
>>> Running 'ceph df' the USED space in the buckets pool is not showing any
>>> of
>>> the 57TB that should have been freed up from the deletion so far.
>>>
>>> Running 'radosgw-admin bucket stats | jshon | grep size_kb_actual' and
>>> adding up all the buckets usage, this shows that the space has been freed
>>> from the bucket, but the cluster is all sorts of messed up.
>>>
>>>
>>> ANY IDEAS? What can I look at?
>>
>>
>> Can you run 'radosgw-admin gc list --include-all'?
>>
>> Yehuda
>
>
> I've done it before, and it just returns square brackets [] (see below)
>
> radosgw-admin gc list --include-all
> []

Do you know which of the rados pools have all that extra data? Try to
list that pool's objects, verify that there are no surprises there
(e.g., use 'rados -p <pool> ls').
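A concrete sketch of that check, assuming the default rgw data pool name
'.rgw.buckets' on a cluster of this vintage (substitute whatever pool
'ceph df' shows holding the bulk of the space):

```shell
# Show per-pool usage to spot which pool still holds the ~57TB.
ceph df
rados df

# '.rgw.buckets' is the default rgw data pool name here; adjust to match
# the pool identified above.
rados -p .rgw.buckets ls | wc -l        # total object count in the pool
rados -p .rgw.buckets ls | head -n 20   # spot-check object names for leftovers
```

If the object count is far larger than what the remaining bucket contents
justify, the deleted objects were likely never reclaimed from the pool.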

Yehuda
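For completeness, the per-bucket tally described earlier ("adding up all the
buckets usage") can be scripted rather than grepped. A minimal sketch,
assuming stats were exported with 'radosgw-admin bucket stats --format=json'
and that each bucket entry nests size_kb_actual under a per-class "usage"
map (structure assumed from the thread's grep target; verify against your
own output):

```python
import json

def total_size_kb_actual(stats_json):
    """Sum size_kb_actual across all buckets in bucket-stats JSON output."""
    buckets = json.loads(stats_json)
    total = 0
    for b in buckets:
        # Assumed layout: each bucket has a "usage" map keyed by storage
        # class (e.g. "rgw.main"), with size_kb_actual per class.
        for usage in b.get("usage", {}).values():
            total += usage.get("size_kb_actual", 0)
    return total

# Fabricated sample illustrating the assumed structure, not real output:
sample = json.dumps([
    {"bucket": "a", "usage": {"rgw.main": {"size_kb_actual": 1024}}},
    {"bucket": "b", "usage": {"rgw.main": {"size_kb_actual": 2048}}},
])
print(total_size_kb_actual(sample))  # 3072
```

Comparing this total against the USED column of 'ceph df' for the rgw data
pool (times the replication factor) quantifies how much space is unaccounted
for.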
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com