[ceph-users] Re: Mysterious Space-Eating Monster

2024-05-06 Thread duluxoz
Thanks Sake, that recovered just under 4 GB of space for us. Sorry about the delay getting back to you (been *really* busy) :-) Cheers, Dulux-Oz

[ceph-users] Re: MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Xiubo Li
Possibly, since we have only seen this in Ceph 17. If you can reproduce it, please provide the MDS debug logs; with those we can quickly find the root cause. Thanks - Xiubo On 5/7/24 12:19, Robert Sander wrote: Hi, would an update to 18.2 help? Regards

[ceph-users] Re: MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Robert Sander
Hi, would an update to 18.2 help? Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin

[ceph-users] Re: MDS crashes shortly after starting

2024-05-06 Thread Xiubo Li
This is the same issue as https://tracker.ceph.com/issues/60986, which Robert Sander also reported. On 5/6/24 05:11, E Taka wrote: Hi all, we have a serious problem with CephFS. A few days ago, the CephFS file systems became inaccessible, with the message MDS_DAMAGE: 1 mds daemon damaged The

[ceph-users] Re: MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Xiubo Li
This is a known issue; please see https://tracker.ceph.com/issues/60986. If you can reproduce it, please enable the MDS debug logs, which will help debug it faster: debug_mds = 25, debug_ms = 1. Thanks - Xiubo On 5/7/24 00:26, Robert Sander wrote: Hi, a 17.2.7 cluster with two
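For reference, a minimal sketch of how the requested debug levels could be raised and later reverted on a cephadm-managed cluster; the daemon name `mds.a` is a placeholder, not from the original thread:

```shell
# Raise debug verbosity for all MDS daemons (values as requested above)
ceph config set mds debug_mds 25
ceph config set mds debug_ms 1

# Alternatively, inject into one running daemon ("mds.a" is a placeholder)
ceph tell mds.a config set debug_mds 25
ceph tell mds.a config set debug_ms 1

# Revert to defaults after capturing the crash logs
ceph config rm mds debug_mds
ceph config rm mds debug_ms
```

These commands need a live cluster with admin credentials; the `ceph tell` variant only affects the running process and does not persist across restarts.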

[ceph-users] Reef: Dashboard: Object Gateway Graphs have no Data

2024-05-06 Thread Dave Hall
Hello. We're running a containerized deployment of Reef with a focus on RGW. We noticed that while the Grafana graphs for other categories - OSDs, Pools, etc - have data, the graphs for the Object Gateway category are empty. I did some looking last week and found reference to something about

[ceph-users] MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Robert Sander
Hi, a 17.2.7 cluster with two filesystems suddenly has non-working MDSs:

# ceph -s
  cluster:
    id:     f54eea86-265a-11eb-a5d0-457857ba5742
    health: HEALTH_ERR
            22 failed cephadm daemon(s)
            2 filesystems are degraded
            1 mds daemon damaged

[ceph-users] Re: Luminous OSDs failing with FAILED assert(clone_size.count(clone))

2024-05-06 Thread Rabellino Sergio
Sorry, I made a small mistake: our release is Mimic, as the logged error obviously states, and all the Ceph components are aligned to Mimic. On 06/05/2024 10:04, sergio.rabell...@unito.it wrote: Dear Ceph users, I'm pretty new on this list, but I've been using Ceph with satisfaction

[ceph-users] CLT meeting notes May 6th 2024

2024-05-06 Thread Adam King
- DigitalOcean credits
  - things to ask
    - what would promotional material require
    - how much are credits worth
  - Neha to ask
- 19.1.0 centos9 container status
  - close to being ready
  - will be building centos 8 and 9 containers simultaneously
  - should test

[ceph-users] Luminous OSDs failing with FAILED assert(clone_size.count(clone))

2024-05-06 Thread sergio . rabellino
Dear Ceph users, I'm pretty new on this list, but I've been using Ceph with satisfaction since 2020. Over the years I've solved the problems I faced by consulting the list archive, but now we're stuck with a problem that seems to have no answer. After a power failure, we have a bunch of OSDs that during

[ceph-users] Off-Site monitor node over VPN

2024-05-06 Thread Stefan Pinter
Hi! I hope someone can help us out here :) We need to move from 3 datacenters to 2 datacenters (+ 1 small server room reachable via layer-3 VPN). Right now we have a ceph-mon in each datacenter, which is fine. But we have to move and will only have 2 datacenters in the future (that are connected, so

[ceph-users] Re: radosgw sync non-existent bucket ceph reef 18.2.2

2024-05-06 Thread Konstantin Larin
Hello Christopher, We had something similar on Pacific multi-site. The problem was in leftover bucket metadata in our case, and was solved by "radosgw-admin metadata list ..." and "radosgw-admin metadata rm ..." on master, for a non-existent bucket. Best regards, Konstantin On Tue, 2024-04-30
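A sketch of the cleanup Konstantin describes, run on the master zone; the bucket name "old-bucket" is a placeholder for whatever stale entry turns up:

```shell
# List all bucket metadata entries to spot leftovers
radosgw-admin metadata list bucket

# Inspect the suspicious entry before touching it ("old-bucket" is a placeholder)
radosgw-admin metadata get bucket:old-bucket

# Remove the stale metadata entry for the non-existent bucket
radosgw-admin metadata rm bucket:old-bucket
```

Inspecting with `metadata get` first is worthwhile, since `metadata rm` deletes the entry outright and there is no undo.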

[ceph-users] Re: Unable to add new OSDs

2024-05-06 Thread Michael Baer
Thanks for the help! I wanted to give an update on the resolution of the issues I was having. I didn't realize that I had created several competing OSD specifications via the dashboard. By cleaning that up, OSD creation is now working as expected. -Mike > On Tue, 23 Apr 2024 00:06:19 -,
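For anyone hitting the same symptom, a sketch of how competing OSD specs might be found and removed with the orchestrator; the spec name "osd.dashboard-1" is a placeholder:

```shell
# Export every OSD service specification to spot overlapping definitions
ceph orch ls osd --export

# Remove a redundant spec by its service name ("osd.dashboard-1" is a placeholder);
# OSDs already created by that spec keep running
ceph orch rm osd.dashboard-1
```

Removing the spec only stops cephadm from applying it in the future; it does not destroy existing OSDs.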

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-05-06 Thread Marc
> Hello! Any news?
Yes, it will be around 18° today, and Israel was heckled at the EU song contest ...
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io