[ceph-users] Re: RadosGW ignoring HTTP_X_FORWARDED_FOR header

2023-06-26 Thread yosr . kchaou96
Hello, the issue has been found. RGW was not supposed to show the originator's IP in the logs, and my test scenario was not correct: my client was missing some permissions, which is why I got access denied. Yosr

[ceph-users] Re: RadosGW ignoring HTTP_X_FORWARDED_FOR header

2023-06-26 Thread yosr . kchaou96
Hello Christian, thanks for your reply. Running RGW in debug mode shows the correct value of the HTTP_X_FORWARDED_FOR header. My test scenario is to execute a GET request on a bucket on which a policy has been applied, so it should accept requests only from a specific IP. And no
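For readers following along: a bucket policy that restricts access by source address typically uses the aws:SourceIp condition. The snippet below is only a minimal sketch; the bucket name and CIDR are placeholders, not the ones from this setup.

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket/*"],
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.10/32"}}
      }
    ]
  }

aws:SourceIp is evaluated against the address RGW actually sees for the connection, which is exactly why the forwarded header matters when a proxy sits in front.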

[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Frank Schilder
Just in case, maybe this blog post contains some useful hints: https://blog.noc.grnet.gr/2016/10/18/surviving-a-ceph-cluster-outage-the-hard-way/ It's on a rather old Ceph version, but the operations on objects might still be relevant. It requires that at least 1 OSD has a valid copy, though. Yo

[ceph-users] RGW multisite logs (data, md, bilog) not being trimmed automatically?

2023-06-26 Thread Christian Rohmann
Hey ceph-users, I am running two (now) Quincy clusters doing RGW multi-site replication, with only one actually being written to by clients. The other site is intended simply as a remote copy. On the primary cluster I am observing an ever-growing (in objects and bytes) "sitea.rgw.log" pool, not s
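For anyone investigating the same thing, the state of those logs can be inspected with the standard tooling; a rough sketch (the pool name is taken from the message above):

  radosgw-admin sync status
  radosgw-admin datalog status
  radosgw-admin mdlog status
  rados -p sitea.rgw.log ls | head

Manual trim subcommands (radosgw-admin datalog trim / mdlog trim / bilog trim) do exist, but they take shard and marker arguments, so check the documentation for the exact invocation before running them.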

[ceph-users] Re: cephfs - unable to create new subvolume

2023-06-26 Thread karon karon
Hi, one last clarification: if I create a new FS I can create subvolumes; I would like to be able to fix the existing FS. Thank you for your help. On Fri, 23 Jun 2023 at 10:55, karon karon wrote: > Hello, > > I have recently been using cephfs, version 17.2.6 > I have a pool named "*data*" and a fs "
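For comparison, the commands involved look roughly like this (FS, group and subvolume names are examples, not the ones from the affected cluster):

  ceph fs subvolumegroup create myfs mygroup
  ceph fs subvolume create myfs mysubvol --group_name mygroup
  ceph fs subvolume ls myfs --group_name mygroup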

[ceph-users] Re: RadosGW ignoring HTTP_X_FORWARDED_FOR header

2023-06-26 Thread Christian Rohmann
Hello Yosr, On 26/06/2023 11:41, Yosr Kchaou wrote: We are facing an issue with getting the right value for the header HTTP_X_FORWARDED_FOR when getting client requests. We need this value to do the source IP check validation. [...] Currently, RGW sees that all requests come from 127.0.0.1. So

[ceph-users] Re: ceph.conf and two different ceph clusters

2023-06-26 Thread Wesley Dillingham
You need to use the --id and --cluster options of the rbd command and maintain a .conf file for each cluster: /etc/ceph/clusterA.conf, /etc/ceph/clusterB.conf, /etc/ceph/clusterA.client.userA.keyring, /etc/ceph/clusterB.client.userB.keyring. Now use the rbd commands as such: rbd --id userA --cluste
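To spell the truncated example out a bit, the full form is presumably along these lines (cluster and user names are the placeholders from this thread, the pool name is an example):

  rbd --cluster clusterA --id userA ls mypool
  rbd --cluster clusterB --id userB info mypool/rbd-vol-blue

  # equivalently, point directly at a conf file:
  rbd -c /etc/ceph/clusterB.conf --id userB ls mypool

The --cluster name is what makes the tools read /etc/ceph/<cluster>.conf and the matching <cluster>.client.<id>.keyring, so the file naming above matters.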

[ceph-users] ceph.conf and two different ceph clusters

2023-06-26 Thread garcetto
Good afternoon, how can I configure the ceph.conf file on a generic RBD client so that it uses two different Ceph clusters to access different volumes on them? ceph-cluster-left --> rbd-vol-green, ceph-cluster-right --> rbd-vol-blue. Thank you.

[ceph-users] Re: cephadm and remoto package

2023-06-26 Thread Florian Haas
Hi Shashi, I just ran into this myself, and I thought I'd share the solution/workaround that I applied. On 15/05/2023 22:08, Shashi Dahal wrote: Hi, I followed this documentation: https://docs.ceph.com/en/pacific/cephadm/adoption/ This is the error I get when trying to enable cephadm. ceph

[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Frank Schilder
Hi Jorge, neither do I. You will need to wait for help on the list or try to figure something out with the docs. Please be patient, a mark-unfound-lost is only needed if everything else has been tried and failed. Until then, clients that don't access the broken object should work fine. Best r

[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Jorge JP
Hello Frank, Thank you. I ran the following command: ceph pg 32.15c list_unfound I located the object but I don't know how to solve this problem. { "num_missing": 1, "num_unfound": 1, "objects": [ { "oid": { "oid": "rbd_data.aedf52e8a44410.021f
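As an aside, the rbd_data.<id>.<offset> name can be mapped back to the RBD image it belongs to, since <id> matches the image's block_name_prefix; something like this (pool name is a placeholder) narrows it down:

  for img in $(rbd ls mypool); do
      rbd info mypool/$img | grep -q aedf52e8a44410 && echo "$img"
  done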

[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Frank Schilder
I don't think pg repair will work. It looks like a size 2 (min_size 1) replicated pool where both OSDs seem to have accepted writes while the other was down, and now the PG can't decide which is the true latest version. Using size 2 / min_size 1 comes with manual labor. As far as I can tell, you will need to figu
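If it really cannot be recovered any other way, the last-resort commands are along these lines (shown with the PG id from this thread; revert rolls the object back to a previous version, delete forgets it entirely, so treat both as destructive):

  ceph pg 32.15c query
  ceph pg 32.15c list_unfound
  ceph pg 32.15c mark_unfound_lost revert
  # or, if no prior version is usable:
  ceph pg 32.15c mark_unfound_lost delete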

[ceph-users] RadosGW ignoring HTTP_X_FORWARDED_FOR header

2023-06-26 Thread Yosr Kchaou
Hello, We are working on setting up an nginx sidecar container running alongside a RadosGW container inside the same Kubernetes pod. We are facing an issue with getting the right value for the header HTTP_X_FORWARDED_FOR when getting client requests. We need this value to do the source IP check valid
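For reference, the combination usually needed here is nginx forwarding the client address and RGW being told which variable to read it from. Only a sketch, assuming RGW listens on 127.0.0.1:8000 inside the pod:

  # nginx side
  location / {
      proxy_pass http://127.0.0.1:8000;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

  # RGW side (ceph.conf, in the rgw client section)
  rgw remote addr param = HTTP_X_FORWARDED_FOR

As far as I understand, that option only changes which variable RGW treats as the client address (for logging and, with it, the source-IP checks), so both sides need to be in place.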

[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Jorge JP
Hello Stefan, I ran this command yesterday but the status has not changed. Other PGs with status "inconsistent" were repaired after a day, but in this case it does not work. instructing pg 32.15c on osd.49 to repair Normally the PG would change to repairing, but it did not. From: Ste

[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Stefan Kooman
On 6/26/23 08:38, Jorge JP wrote: Hello, after a deep-scrub my cluster showed this error: HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy: 2/77158878 objects degraded (0.000%), 1 pg degraded
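For the inconsistent half of that report, the usual inspection sequence is roughly the following (using the PG id mentioned elsewhere in this thread):

  rados list-inconsistent-obj 32.15c --format=json-pretty
  ceph pg deep-scrub 32.15c
  ceph pg repair 32.15c

list-inconsistent-obj only returns data while the results of the last scrub are still available, so a fresh deep-scrub may be needed first.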

[ceph-users] Re: radosgw hang under pressure

2023-06-26 Thread Rok Jaklič
From https://swamireddy.wordpress.com/2019/10/23/ceph-sharding-the-rados-gateway-bucket-index/ "Since the index is stored in a single RADOS object, only a single operation can be done on it at any given time. When the number of objects increases, the index stored in the RADOS object grows. Since a
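The standard response to an oversized index is resharding; a rough sketch (bucket name and shard count are examples):

  radosgw-admin bucket limit check
  radosgw-admin bucket stats --bucket=mybucket
  radosgw-admin reshard add --bucket=mybucket --num-shards=101
  radosgw-admin reshard process

On recent releases dynamic resharding normally handles this automatically, but a radosgw hanging under load is a reason to check whether a bucket has outgrown its current shard count.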

[ceph-users] Bluestore compression - Which algo to choose? Zstd really still that bad?

2023-06-26 Thread Christian Rohmann
Hey ceph-users, we've been using the default "snappy" to have Ceph compress data on certain pools - namely backups / copies of volumes of a VM environment. So it's write once, and no random access. I am now wondering if switching to another algorithm (the options are snappy, zlib, lz4, or zstd) would impr
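For what it's worth, compression is tunable per pool, so a different algorithm is easy to trial on one backup pool first; a sketch (pool name is an example, values are starting points rather than a recommendation):

  ceph osd pool set backups compression_algorithm zstd
  ceph osd pool set backups compression_mode aggressive
  ceph osd pool set backups compression_required_ratio 0.875

Note that changing the algorithm only affects newly written data; existing objects stay compressed (or not) as they were.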