Hi ceph users,
We have a few clusters with quincy 17.2.6 and we are preparing to migrate from
ceph-deploy to cephadm for better management.
We are using Ubuntu20 with latest updates (latest openssh).
While testing the migration to cephadm on a test cluster with octopus (v16
latest) we had no issues.
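For anyone following along, the conversion path we are testing follows the documented legacy-adoption procedure, roughly as below. This is a sketch, not our exact runbook; HOSTNAME and the OSD id are placeholders.

```shell
# Enable the orchestrator backend before adopting daemons.
ceph mgr module enable cephadm
ceph orch set backend cephadm

# Verify each host is ready for cephadm.
cephadm check-host

# Adopt the legacy (ceph-deploy style) daemons into containers, one by one.
cephadm adopt --style legacy --name mon.HOSTNAME
cephadm adopt --style legacy --name mgr.HOSTNAME
cephadm adopt --style legacy --name osd.0
```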
Enabling debug LC will run lifecycle processing more often, but please mind
that it might not respect the expiration time you set: by design, it treats
the configured interval as one day.
So, if it runs more often, you will end up removing objects sooner than
365 days (as an example) if set to do so.
Please test u
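As a back-of-the-envelope illustration of the semantics above (this is not Ceph code; the function name is made up), the debug interval simply replaces the 24-hour day in the expiration math:

```python
# Illustration only (not Ceph code): with rgw_lc_debug_interval set, RGW
# treats that many seconds as one "day" when evaluating LC expiration.
def effective_expiration_seconds(expiration_days, debug_interval_seconds=None):
    one_day = 24 * 60 * 60
    day_length = debug_interval_seconds or one_day
    return expiration_days * day_length

# Normal operation: a 365-day rule fires after a real year of seconds.
print(effective_expiration_seconds(365))      # 31536000
# With rgw_lc_debug_interval=10, the same rule fires after ~1 hour.
print(effective_expiration_seconds(365, 10))  # 3650
```

This is why a 365-day rule combined with a low debug interval can delete objects within the hour.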
Hi!
All restarted as required in upgrade plan in the proper order, all software
was upgraded on all nodes. We are on Ubuntu 18 (all nodes).
"ceph versions" output shows all is on "16.2.9".
Thank you!
--
Paul Jurco
On Wed, Aug 10, 2022 at 5:43 PM Eneko Lacunza wrote:
We upgraded to 16.2.8 and, 2 days after, to 16.2.9 on the cluster with
crashes.
6 segfaults are on 2 TB disks, 8 are on 1 TB disks. The 2 TB disks are newer
(below 2 years old).
Could this be related to hardware?
Thank you!
--
Paul Jurco
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
How do we properly ask for an investigation of this bug? It looks like it is
not fixed.
--
Paul
On Wed, Sep 8, 2021 at 9:07 AM Paul JURCO wrote:
Hi!
I have upgraded to 15.2.14 in order to be able to delete an old bucket
stuck at:
2021-09-08T08:47:15.216+0300 7f96ddfe7080 0 abort_bucket_multiparts WARNING : aborted 34333 incomplete multipart uploads
2021-09-08T08:47:17.012+0300 7f96ddfe7080 0 abort_bucket_multiparts WARNING : aborted 343
> <LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Rule>
>     <ID>Incomplete Multipart Uploads</ID>
>     <Status>Enabled</Status>
>     <AbortIncompleteMultipartUpload>
>       <DaysAfterInitiation>1</DaysAfterInitiation>
>     </AbortIncompleteMultipartUpload>
>   </Rule>
> </LifecycleConfiguration>
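For reference, a lifecycle policy like the one quoted can be applied and read back with s3cmd; BUCKET and the local file name are placeholders:

```shell
# Upload a lifecycle policy from a local XML file to the bucket.
s3cmd setlifecycle lifecycle.xml s3://BUCKET
# Read it back to confirm RGW stored it.
s3cmd getlifecycle s3://BUCKET
```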
We have set rgw_lc_debug_interval to something low and executed lc process,
but it ignored this bucket completely, as I can see in the logs.
Any suggestion is welcome, as I bet we have other buckets in the same
situation.
Thank you!
Paul
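One way to check other buckets for the same symptom (BUCKET is a placeholder) is to list in-progress multipart uploads from the client side and the lifecycle state from the RGW side:

```shell
# Client side: long-lived entries here are incomplete multipart uploads
# that the LC rule should eventually abort.
s3cmd multipart s3://BUCKET

# RGW side: per-bucket lifecycle status, and a manual LC pass.
radosgw-admin lc list
radosgw-admin lc process
```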
On Mon, Jul 26, 2021 at 2:59 PM Paul JURCO wrote:
> Hi Vidushi,
> aws s
> create a delete-marker for every object and move the
> object version from current to non-current, thereby reflecting the same
> number of objects in bucket stats output ].
>
> Vidushi
>
> On Mon, Jul 26, 2021 at 4:55 PM Paul JURCO wrote:
Hi!
I need some help understanding LC processing.
On recent Octopus releases (tested with 15.2.13 and 15.2.8) we have at least
one bucket whose files are not removed when they expire.
The size of the bucket reported with radosgw-admin compared with the one
obtained with s3cmd is
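The comparison being described can be reproduced as follows (BUCKET is a placeholder). radosgw-admin reports RGW's internal accounting, which also counts parts of incomplete multipart uploads, while s3cmd only sees completed, visible objects:

```shell
# RGW-side accounting (includes leftover multipart parts).
radosgw-admin bucket stats --bucket=BUCKET
# Client-side view (completed objects only).
s3cmd du s3://BUCKET
```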