[ceph-users] cephadm, cannot use ECDSA key with quincy

2023-10-10 Thread paul.jurco
Hi ceph users, We have a few clusters with quincy 17.2.6 and we are preparing to migrate from ceph-deploy to cephadm for better management. We are using Ubuntu 20 with the latest updates (latest openssh). While testing the migration to cephadm on a test cluster with octopus (v16 latest) we had no issu…
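
For reference, cephadm keeps its SSH identity in the mgr, and a custom key pair can be supplied with the commands below (a minimal sketch; the file names and hostname are placeholders, and whether an ECDSA key is accepted here is exactly what this thread is about):

    ceph cephadm generate-key                    # let cephadm generate its own key pair, or...
    ceph cephadm set-priv-key -i /root/ceph.pem  # ...import an existing private key
    ceph cephadm set-pub-key -i /root/ceph.pub   # and the matching public key
    ceph cephadm get-pub-key >> /root/.ssh/authorized_keys   # authorize the key on each host
    ceph cephadm check-host myhost               # verify cephadm can reach the host over SSH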

[ceph-users] Re: cephadm, cannot use ECDSA key with quincy

2023-10-10 Thread Paul JURCO
On Sat, Oct 7, 2023 at 12:03 PM Paul JURCO wrote: > Resent due to moderation when using the web interface. > > Hi ceph users, > We have a few clusters with quincy 17.2.6 and we are preparing to migrate > from ceph-deploy to cephadm for better management. > We are using Ubuntu 20 w…

[ceph-users] cephadm, cannot use ECDSA key with quincy

2023-10-07 Thread Paul JURCO
Resent due to moderation when using the web interface. Hi ceph users, We have a few clusters with quincy 17.2.6 and we are preparing to migrate from ceph-deploy to cephadm for better management. We are using Ubuntu 20 with the latest updates (latest openssh). While testing the migration to cephadm on a tes…

[ceph-users] Re: Workload that delete 100 M object daily via lifecycle

2023-07-20 Thread Paul JURCO
Enabling debug LC will make the LC run more often, but please keep in mind that it might not respect the expiration time you set. By design it treats the configured interval as one day, so if it runs more often you will end up removing objects sooner than 365 days (as an example) if the rule is set to do so. Please test u…
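
For context, rgw_lc_debug_interval shrinks what RGW considers a "day" for lifecycle purposes, which is why expiration can fire early; a sketch of how it is typically used for testing (the 600-second value and the client.rgw config target are examples only):

    ceph config set client.rgw rgw_lc_debug_interval 600   # treat 600 seconds as one LC day (testing only)
    radosgw-admin lc list                                   # show lifecycle status per bucket
    radosgw-admin lc process                                # run lifecycle processing manually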

[ceph-users] Re: 16.2.9 High rate of Segmentation fault on ceph-osd processes

2022-08-10 Thread Paul JURCO
Hi! Everything was restarted in the proper order as required by the upgrade plan, and all software was upgraded on all nodes. We are on Ubuntu 18 (all nodes). "ceph versions" output shows everything is on "16.2.9". Thank you! -- Paul Jurco On Wed, Aug 10, 2022 at 5:43 PM Eneko Lacunza wrote: > …

[ceph-users] 16.2.9 High rate of Segmentation fault on ceph-osd processes

2022-08-10 Thread Paul JURCO
…2.8, and two days after that to 16.2.9 on the cluster with crashes. 6 segfaults are on 2 TB disks, 8 are on 1 TB disks. The 2 TB disks are newer (under 2 years old). Could this be related to hardware? Thank you! -- Paul Jurco
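
For readers hitting similar segfaults, the crash module is the usual place to start; a sketch of the commands commonly used to collect the backtraces (the crash ID is a placeholder):

    ceph crash ls                 # list recorded daemon crashes
    ceph crash info <crash-id>    # backtrace and metadata for a single crash
    ceph versions                 # confirm every daemon runs the same release
    ceph crash archive-all        # silence the health warning once the crashes are triaged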

[ceph-users] Re: Octopus: Cannot delete bucket

2021-09-13 Thread Paul JURCO
How can I properly ask for an investigation of this bug? It looks like it is not fixed. -- Paul On Wed, Sep 8, 2021 at 9:07 AM Paul JURCO wrote: > Hi! > I have upgraded to 15.2.14 in order to be able to delete an old bucket > stuck at: > > > 2021-09-08T08:47:15.216+03…

[ceph-users] Octopus: Cannot delete bucket

2021-09-07 Thread Paul JURCO
Hi! I have upgraded to 15.2.14 in order to be able to delete an old bucket stuck at:
2021-09-08T08:47:15.216+0300 7f96ddfe7080 0 abort_bucket_multiparts WARNING : aborted 34333 incomplete multipart uploads
2021-09-08T08:47:17.012+0300 7f96ddfe7080 0 abort_bucket_multiparts WARNING : aborted 343…
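
The deletion attempted here is normally done with radosgw-admin; a sketch of the usual commands for a bucket stuck on incomplete multipart uploads (the bucket name is a placeholder):

    radosgw-admin bucket check --bucket=mybucket --fix --check-objects   # repair the bucket index first, if needed
    radosgw-admin bucket rm --bucket=mybucket --purge-objects            # delete the bucket together with its objects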

[ceph-users] Re: [Suspicious newsletter] Re: RGW: LC not deleting expired files

2021-07-29 Thread Paul JURCO
> [quoted S3 lifecycle configuration XML; the tags were stripped by the archive. What survives: namespace …amazonaws.com/doc/2006-03-01/, a rule named "Incomplete Multipart Uploads", Status "Enabled", and the value 1] …

[ceph-users] Re: RGW: LC not deleting expired files

2021-07-29 Thread Paul JURCO
…We have set rgw_lc_debug_interval to something low and executed lc process, but it ignored this bucket completely, as I can see in the logs. Any suggestion is welcome, as I bet we have other buckets in the same situation. Thank you! Paul On Mon, Jul 26, 2021 at 2:59 PM Paul JURCO wrote: > Hi Vidushi, > aws s…

[ceph-users] Re: RGW: LC not deleting expired files

2021-07-26 Thread Paul JURCO
…create a delete-marker for every object and move the > object version from current to non-current, thereby reflecting the same > number of objects in the bucket stats output ]. > > Vidushi > > On Mon, Jul 26, 2021 at 4:55 PM Paul JURCO wrote: > >> Hi! >> I need some help…

[ceph-users] RGW: LC not deleting expired files

2021-07-26 Thread Paul JURCO
Hi! I need some help understanding LC processing. On the latest versions of octopus installed (tested with 15.2.13 and 15.2.8) we have at least one bucket whose files are not removed when they expire. The size of the bucket reported by radosgw-admin compared with the one obtained with s3cmd is…
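
A sketch of the comparison being described, i.e. how the RGW-side and S3-side views of the bucket and its lifecycle are usually checked (the bucket name is a placeholder):

    radosgw-admin bucket stats --bucket=mybucket    # size and object count as RGW sees them
    s3cmd du s3://mybucket                          # size as seen through the S3 API
    radosgw-admin lc list                           # per-bucket lifecycle processing status
    s3cmd getlifecycle s3://mybucket                # show the lifecycle policy applied to the bucket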