pipermail/ceph-users-ceph.com/2018-October/030791.html
> https://tracker.ceph.com/issues/37942
>
> For now we are cancelling these resharding jobs since they seem to be
> causing performance issues with the cluster, but this is an untenable
> solution. Does anyone know what is causing t
you using lifecycle? And
garbage collection is another background task.
And just to be clear -- sometimes all 3 of your rados gateways are
simultaneously in this state?
But the call graph would be incredibly helpful.
Thank you,
Eric
--
J. Eric Ivancich
he/him/his
Red Hat
ome help from Canonical, who I think are working on
> patches).
>
There was a recently merged PR that addressed bucket deletion with
missing shadow objects:
https://tracker.ceph.com/issues/40590
Thank you for reporting your experience w/ rgw,
Eric
--
J. Eric Ivancich
:
http://tracker.ceph.com/issues/40526
Eric
--
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
/github.com/ceph/ceph/pull/28724
Eric
--
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
04
> gc.26: 42
> gc.28: 111292
> gc.17: 111314
> gc.12: 111534
> gc.31: 111956
Casey Bodley mentioned to me that he's seen similar behavior to what
you're describing when RGWs are upgraded but not all OSDs are upgraded
as well. Is it possible that the OSDs hosting gc.13, gc.10, and so
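For context on those per-shard counts: deletion records are spread across the gc.N objects (rgw_gc_max_objs, 32 by default). The sketch below models that spread with a simple hash; the hash choice is an illustrative assumption, not RGW's actual mechanism. The point is that a uniform spread would put roughly equal counts on every shard, so counts like gc.26: 42 next to gc.28: 111292 point at shards not being drained rather than at skewed placement.

```python
import hashlib

NUM_GC_SHARDS = 32  # rgw_gc_max_objs default

def gc_shard(tag: str, num_shards: int = NUM_GC_SHARDS) -> int:
    # Map a GC tag to one of the gc.N omap objects.
    # md5 here is purely illustrative; RGW uses its own hashing.
    return int(hashlib.md5(tag.encode()).hexdigest(), 16) % num_shards

counts = [0] * NUM_GC_SHARDS
for i in range(10000):
    counts[gc_shard(f"obj-tag-{i}")] += 1

# A uniform hash gives each shard roughly 10000 / 32 entries, so
# order-of-magnitude differences between shards indicate stalled
# processing, not placement skew.
print(min(counts), max(counts))
```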
On 6/4/19 7:37 AM, Wido den Hollander wrote:
> I've set up a temporary machine next to the 13.2.5 cluster with the
> 13.2.6 packages from Shaman.
>
> On that machine I'm running:
>
> $ radosgw-admin gc process
>
> That seems to work as intended! So the PR seems to have fixed it.
>
> Should be
Hi Wido,
When you run `radosgw-admin gc list`, I assume you are *not* using the
"--include-all" flag, right? If you're not using that flag, then
everything listed should be expired and be ready for clean-up. If after
running `radosgw-admin gc process` the same entries appear in
`radosgw-admin gc
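To pin down the list/expiry distinction being described, here is a toy model of the semantics (the class and field names are invented for illustration, not rgw internals): without `--include-all`, only entries whose deferral time has passed are shown, so anything that appears in the default listing should be eligible for `gc process`.

```python
import time
from dataclasses import dataclass

@dataclass
class GcEntry:
    tag: str
    expiration: float  # unix time after which the entry may be processed

def gc_list(entries, include_all=False, now=None):
    """Mimic `radosgw-admin gc list` semantics: hide unexpired
    entries unless include_all is requested."""
    now = time.time() if now is None else now
    return [e for e in entries if include_all or e.expiration <= now]

entries = [GcEntry("a", 100.0), GcEntry("b", 900.0)]
print([e.tag for e in gc_list(entries, now=500.0)])                    # ['a']
print([e.tag for e in gc_list(entries, include_all=True, now=500.0)])  # ['a', 'b']
```

If the same expired entries keep reappearing after `gc process`, the processing step itself is failing, not the listing.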
Hi Manuel,
My response is interleaved below.
On 5/8/19 3:17 PM, EDH - Manuel Rios Fernandez wrote:
> Eric,
>
> Yes we do :
>
> time s3cmd ls s3://[BUCKET]/ --no-ssl, and we get nearly 2 min 30 s to
> list the bucket.
We're adding an --allow-unordered option to `radosgw-admin bucket list`.
> show more 0B. Is it correct?
I am having difficulty understanding that sentence. Would you be so kind as to
rewrite it? I don’t want to create confusion by guessing.
Eric
> Thanks for your response.
>
>
> -Original Message-
> From: J. Eric Ivancich
> Sent: Wednesday
Hi Manuel,
My response is interleaved.
On 5/7/19 7:32 PM, EDH - Manuel Rios Fernandez wrote:
> Hi Eric,
>
> This looks like something the software developer must do, not something the
> storage provider must allow, no?
True -- so you're using `radosgw-admin bucket list --bucket=XYZ` to list
On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote:
> Hi Casey
>
> ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
> (stable)
>
> Is resharding something that prevents our customers from listing the index?
>
> Regards
Listing of buckets with a large number of objects is notoriously
So I do not think the mclock_client queue works the way you're hoping it does. For
categorization purposes it joins the operation class and the client identifier,
with the intent that operations will be executed among clients more evenly
(i.e., it won't favor one client over another).
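A toy model of that categorization (the queue mechanics are deliberately simplified to round-robin; dmclock's actual tag arithmetic is omitted): operations are bucketed by (op_class, client_id), and the scheduler dequeues across client buckets evenly, so one busy client cannot starve another within the same class.

```python
from collections import defaultdict, deque

class MclockClientQueue:
    """Toy fair queue keyed by (op_class, client_id).
    Round-robin stand-in for dmclock, not the real scheduler."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def enqueue(self, op_class, client_id, op):
        self.queues[(op_class, client_id)].append(op)

    def drain(self):
        # Cycle over (class, client) buckets so a client with a deep
        # backlog cannot monopolize the queue.
        order = []
        while any(self.queues.values()):
            for key in list(self.queues):
                if self.queues[key]:
                    order.append(self.queues[key].popleft())
        return order

q = MclockClientQueue()
for i in range(3):
    q.enqueue("rgw", "client-a", f"a{i}")
q.enqueue("rgw", "client-b", "b0")
print(q.drain())  # ['a0', 'b0', 'a1', 'a2']: client-b is served before
                  # client-a's backlog finishes
```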
However, it
tadata list bucket.instance | jq -r '.[]' | sort
>
>
>
> Give that a try and see if you see the same problem. It seems that once
> you remove the old bucket instances the omap dbs don't reduce in size
> until you compact them.
>
>
>
> Bryan
>
>
>
> *From
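The behavior Bryan describes, omap databases not shrinking until compacted, matches how log-structured stores handle deletes. A toy model (not RocksDB, just the shape of the problem): removing a key appends a tombstone record, so deletions initially grow the store; space is reclaimed only when compaction folds the log down to live keys.

```python
class ToyLsmOmap:
    """Toy append-only store: deletes write tombstones, and only
    compaction reclaims the space they occupy."""
    def __init__(self):
        self.log = []  # (key, value_or_None); None marks a tombstone

    def put(self, key, value):
        self.log.append((key, value))

    def delete(self, key):
        self.log.append((key, None))  # the log grows on delete

    def size(self):
        return len(self.log)

    def compact(self):
        # Fold the log to its latest live values, dropping tombstones.
        live = {}
        for key, value in self.log:
            live[key] = value
        self.log = [(k, v) for k, v in live.items() if v is not None]

db = ToyLsmOmap()
for i in range(100):
    db.put(f"bucket.instance:{i}", "metadata")
for i in range(100):
    db.delete(f"bucket.instance:{i}")
print(db.size())   # 200: deleting made the store bigger, not smaller
db.compact()
print(db.size())   # 0: space reclaimed only after compaction
```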
On 11/29/18 6:58 PM, Bryan Stillwell wrote:
> Wido,
>
> I've been looking into this large omap objects problem on a couple of our
> clusters today and came across your script during my research.
>
> The script has been running for a few hours now and I'm already over 100,000
> 'orphaned'
On 12/17/18 9:18 AM, Josef Zelenka wrote:
> Hi everyone, I'm running a Luminous 12.2.5 cluster with 6 hosts on
> Ubuntu 16.04 - 12 HDDs for data each, plus 2 SSD metadata OSDs (three
> nodes have an additional SSD I added to have more space to rebalance the
> metadata). Currently, the cluster is
I did make an inquiry and someone here does have some experience w/ the
mc command -- minio client. We're curious how "ls -r" is implemented
under mc. Does it need to get a full listing and then do some path
parsing to produce nice output? If so, it may be playing a role in the
delay as well.
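We don't know mc's internals, but if it does fetch the flat key list and post-process it client-side, the step would look roughly like this (an assumption about mc's behavior, not its actual code): S3 keys are flat strings, so a directory-style view has to be derived by splitting on the "/" delimiter, and a recursive "ls -r" has to do this over the full listing.

```python
def directory_view(keys, prefix=""):
    """Derive one 'directory' level from flat S3 keys, the way a
    client emulating `ls` must: group on the '/' delimiter."""
    dirs, files = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            dirs.add(rest.split("/", 1)[0] + "/")  # first path component
        else:
            files.append(rest)
    return sorted(dirs) + sorted(files)

keys = ["logs/2019/a.log", "logs/2019/b.log", "logs/readme", "top.txt"]
print(directory_view(keys))           # ['logs/', 'top.txt']
print(directory_view(keys, "logs/"))  # ['2019/', 'readme']
```

Done over hundreds of thousands of keys, that client-side pass adds to the wall-clock time on top of the listing itself.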
The numbers you're reporting strike me as surprising as well. Which version are
you running?
In case you're not aware, listing of buckets is not a very efficient operation
given that the listing is required to return with objects in lexical order.
They are distributed across the shards via a
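That ordering requirement can be sketched as follows (toy shards, not RGW's bucket index format): because keys are spread across shards, a lexically ordered listing must consult every shard and merge before the first page can be returned, while an unordered listing (what `--allow-unordered` permits) can drain shards one at a time.

```python
import heapq

# Each bucket index shard keeps its own keys in lexical order.
shards = [
    ["apple", "mango", "zebra"],
    ["banana", "pear"],
    ["cherry", "kiwi", "plum"],
]

def list_ordered(shards):
    # Lexical order across the whole bucket: k-way merge over ALL shards.
    return list(heapq.merge(*shards))

def list_unordered(shards):
    # Unordered: return keys shard by shard, no cross-shard merge needed.
    return [key for shard in shards for key in shard]

print(list_ordered(shards)[:4])    # ['apple', 'banana', 'cherry', 'kiwi']
print(list_unordered(shards)[:4])  # ['apple', 'mango', 'zebra', 'banana']
```

The merge cost grows with the shard count, which is why heavily resharded buckets list slowly in ordered mode.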