[ceph-users] Re: Help with deep scrub warnings

2024-05-24 Thread Sascha Lucas
Hi, just for the archives: On Tue, 5 Mar 2024, Anthony D'Atri wrote: * Try applying the settings to global so that mons/mgrs get them. Setting osd_deep_scrub_interval at global instead of at osd immediately turns health to OK and removes the false warning about PGs not scrubbed in time. HTH,
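
A minimal sketch of that change using the stock ceph config CLI; the two-week value is only illustrative, the point is setting the option at the global level:

  # Apply the interval globally so mons/mgrs evaluate the same value as the OSDs
  ceph config set global osd_deep_scrub_interval 1209600
  # Drop any osd-level override that would otherwise shadow the global setting
  ceph config rm osd osd_deep_scrub_interval
  # The scrub warning should clear once the mons re-evaluate the PG scrub stamps
  ceph health detail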

[ceph-users] Re: Permanent KeyError: 'TYPE' ->17.2.7: return self.blkid_api['TYPE'] == 'part'

2023-11-10 Thread Sascha Lucas
Hi, On Wed, 8 Nov 2023, Sascha Lucas wrote: On Tue, 7 Nov 2023, Harry G Coin wrote: "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 482, in is_partition /usr/bin/docker: stderr return self.blkid_api['TYPE'] == 'part' /usr/bin/docker: stderr KeyError: 'TYPE'
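
A hedged way to poke at this by hand, assuming the failure comes from a device whose blkid probe simply returns no TYPE key (the device path is a placeholder):

  # Low-level probe; a disk without a recognised signature prints no TYPE=... pair
  blkid -p /dev/sdX
  # Re-run the containerized ceph-volume scan that presumably produces the traceback
  cephadm shell -- ceph-volume inventory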

[ceph-users] Re: Permanent KeyError: 'TYPE' ->17.2.7: return self.blkid_api['TYPE'] == 'part'

2023-11-08 Thread Sascha Lucas
Hi, On Tue, 7 Nov 2023, Harry G Coin wrote: These repeat for every host, only after upgrading from the previous Quincy release to 17.2.7. As a result, the cluster is permanently in a warning state and never reports healthy. I'm hitting this error, too. "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py",

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-14 Thread Sascha Lucas
Hi Venky, On Wed, 14 Dec 2022, Venky Shankar wrote: On Tue, Dec 13, 2022 at 6:43 PM Sascha Lucas wrote: Just an update: "scrub / recursive,repair" does not uncover additional errors, but it also does not fix the single dirfrag error. File system scrub does not clear entries from
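
For reference, a sketch of the commands discussed in this thread, assuming the filesystem name "disklib" seen in the MDS log below and rank 0:

  # Recursive repair scrub from the root of the filesystem
  ceph tell mds.disklib:0 scrub start / recursive,repair
  # Check progress
  ceph tell mds.disklib:0 scrub status
  # Scrub does not drop entries from the damage table; once an entry is
  # confirmed repaired it has to be removed explicitly by its id
  ceph tell mds.disklib:0 damage rm <damage_id>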

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-13 Thread Sascha Lucas
Hi William, On Mon, 12 Dec 2022, William Edwards wrote: On 12 Dec 2022 at 22:47, Sascha Lucas wrote the following: Ceph "servers" like MONs, OSDs, MDSs etc. are all 17.2.5/cephadm/podman. The filesystem kernel clients are co-located on the same hosts running th

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-13 Thread Sascha Lucas
Hi, On Mon, 12 Dec 2022, Sascha Lucas wrote: On Mon, 12 Dec 2022, Gregory Farnum wrote: Yes, we’d very much like to understand this. What versions of the server and kernel client are you using? What platform stack — I see it looks like you are using CephFS through the volumes interface

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread Sascha Lucas
Hi Greg, On Mon, 12 Dec 2022, Gregory Farnum wrote: On Mon, Dec 12, 2022 at 12:10 PM Sascha Lucas wrote: A follow-up to [2] also mentioned random meta-data corruption: "We have 4 clusters (all running the same version) and have experienced meta-data corruption on the majority of

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread Sascha Lucas
Hi Dhairya, On Mon, 12 Dec 2022, Dhairya Parmar wrote: You might want to look at [1] for this; also I found a relevant thread [2] that could be helpful. Thanks a lot. I already found [1,2], too. But I did not consider it, because I felt I was not facing a "disaster"? Nothing seems broken nor

[ceph-users] MDS_DAMAGE dir_frag

2022-12-12 Thread Sascha Lucas
Hi, without any outage/disaster, CephFS (17.2.5/cephadm) reports damaged metadata: [root@ceph106 ~]# zcat /var/log/ceph/3cacfa58-55cf-11ed-abaf-5cba2c03dec0/ceph-mds.disklib.ceph106.kbzjbg.log-20221211.gz 2022-12-10T10:12:35.161+ 7fa46779d700 1 mds.disklib.ceph106.kbzjbg Updating MDS
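
A short sketch of how the reported damage can be inspected, with the same assumed filesystem name and rank as above:

  # Cluster-level view of the MDS_DAMAGE warning
  ceph health detail
  # Per-MDS damage table (dir_frag entries and their ids), printed as JSON
  ceph tell mds.disklib:0 damage ls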