[ceph-users] Re: Debian/bullseye build for reef

2023-09-04 Thread Matthew Vernon
Hi, On 21/08/2023 17:16, Josh Durgin wrote: We weren't targeting bullseye; once we discovered the compiler version problem, the focus shifted to bookworm. If anyone would like to help maintain the Debian builds, or look into these issues, it would be welcome:

[ceph-users] Re: Permissions of the .snap directory do not inherit ACLs in 17.2.6

2023-09-04 Thread MARTEL Arnaud
Hi Eugen, We have a lot of shared directories in CephFS, and each directory has a specific ACL granting access to several groups (for read and/or read/write access). Here are the complete steps to reproduce the problem in 17.2.6 with only one group, GIPSI, in the ACL: # mkdir /mnt/ceph/test #
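[For context, a minimal reproduction sketch along the lines of the truncated steps above, assuming a CephFS mount at /mnt/ceph, a group named GIPSI, and snapshots enabled on the directory; the setfacl/getfacl invocations are my assumption of how the ACL was applied, not the exact commands from the original mail:]

  # create the shared directory and grant the group access via ACL
  mkdir /mnt/ceph/test
  setfacl -m g:GIPSI:rwx /mnt/ceph/test          # access ACL
  setfacl -d -m g:GIPSI:rwx /mnt/ceph/test       # default (inherited) ACL

  # take a snapshot, then compare the ACLs of the directory and its .snap entries
  mkdir /mnt/ceph/test/.snap/snap1
  getfacl /mnt/ceph/test
  getfacl /mnt/ceph/test/.snap
  getfacl /mnt/ceph/test/.snap/snap1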

[ceph-users] Re: rgw replication sync issue

2023-09-04 Thread Eugen Block
Did you try to rewrite the objects to see if at least those two errors resolve? Do you have any logs from the RGWs from when the sync stopped working? You write that the bandwidth usage just dropped but is still > 0; does that mean some buckets are still syncing? Can you see a pattern if the failing
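[As a rough illustration of the checks being asked about, a sketch of the usual radosgw-admin status/error commands, run on the zone that is behind; bucket and object names are placeholders:]

  # overall multisite sync state
  radosgw-admin sync status

  # per-bucket view and the accumulated sync errors
  radosgw-admin bucket sync status --bucket=<bucket>
  radosgw-admin sync error list

  # one way to rewrite an object so it gets picked up again
  radosgw-admin object rewrite --bucket=<bucket> --object=<key>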

[ceph-users] Re: Permissions of the .snap directory do not inherit ACLs in 17.2.6

2023-09-04 Thread Eugen Block
I'm wondering if I did something wrong or if I'm missing something. I tried to reproduce the steps described in the bug you mentioned, and from Nautilus to Reef (I have a couple of test clusters) getfacl always shows the same output for the .snap directory: $ getfacl

[ceph-users] RGW Lua - writable response header/field

2023-09-04 Thread Ondřej Kukla
Hello, We have an RGW setup with a bunch of Nginx instances in front of the RGWs acting as a LB. I’m currently working on some metrics and log analysis from the LB logs. At the moment I’m looking at possibilities to recognise the type of S3 request on the LB. I know that matching the format shouldn’t be
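[For reference, a sketch of how one might surface the request type from RGW's Lua scripting, assuming the documented postrequest context and the Request.RGWOp / Request.Bucket.Name fields plus RGWDebugLog; whether a writable response header exists is exactly the open question here, so this only logs the op on the RGW side:]

  cat > log_op.lua <<'EOF'
  -- postrequest script: log the RGW operation type for each request
  if Request.RGWOp then
    RGWDebugLog("op=" .. Request.RGWOp .. " bucket=" .. (Request.Bucket and Request.Bucket.Name or "-"))
  end
  EOF

  # upload the script into the postrequest context
  radosgw-admin script put --infile=log_op.lua --context=postrequest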

[ceph-users] Is it possible (or meaningful) to revive old OSDs?

2023-09-04 Thread ceph-mail
Hello, I have a ten-node cluster with about 150 OSDs. One node went down a while back, several months ago. The OSDs on that node have been marked down and out ever since. I am now in a position to return the node to the cluster, with all the OS and OSD disks intact. When I boot up the now-working node,
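[Not part of the original mail, but a sketch of the two usual options for such a node; the OSD id and device are placeholders, and whether rejoining is worthwhile after months mostly depends on how far the cluster has moved on, since the old OSDs will have to backfill either way:]

  # option 1: let the old OSDs rejoin and backfill
  systemctl start ceph-osd@<id>          # or the cephadm/container unit
  ceph osd in <id>
  ceph -s                                # watch recovery/backfill progress

  # option 2: treat the disks as empty and redeploy the OSDs
  ceph osd purge <id> --yes-i-really-mean-it
  ceph-volume lvm zap --destroy /dev/<device>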

[ceph-users] Re: Is it safe to add different OS but same ceph version to the existing cluster?

2023-09-04 Thread Szabo, Istvan (Agoda)
Hi, I've added Ubuntu 20.04 nodes to my bare-metal Ceph Octopus 15.2.17 cluster next to the CentOS 8 nodes, and I see something interesting regarding disk usage: it is higher on Ubuntu than on CentOS, while the CPU usage is lower (on this picture you can see 4 nodes, each column
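[As an aside, a sketch of one way to compare what the OSDs on the two OS flavours report; the distro/kernel fields below are what OSD metadata normally exposes, and jq is assumed to be available:]

  # per-OSD utilisation grouped by node
  ceph osd df tree

  # compare distro/kernel and objectstore details between an Ubuntu and a CentOS OSD
  ceph osd metadata <osd-id> | jq '{hostname, distro, distro_version, kernel_version, osd_objectstore}'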