[ceph-users] Re: How can I use not-replicated pool (replication 1 or raid-0)

2023-05-05 Thread mhnx
Hello Frank. >If your only tool is a hammer ... >Sometimes it's worth looking around. You are absolutely right! But I have limitations: my customer is a startup and they want to create a hybrid system with their current hardware for all their needs. That's why I'm spending time to find a work
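For context on the thread's subject, a size-1 (unreplicated) pool can be created roughly as follows. This is a hedged sketch, not from the thread itself: the pool name "scratch" and the PG count are illustrative, and a size-1 pool loses data permanently if any backing OSD fails, which is why Ceph requires the explicit overrides shown.

```shell
# Create a pool, then drop its replica count to 1 (no redundancy).
# mon_allow_pool_size_one must be enabled first; Ceph also demands an
# explicit confirmation flag because this removes all data protection.
ceph osd pool create scratch 64 64
ceph config set global mon_allow_pool_size_one true
ceph osd pool set scratch size 1 --yes-i-really-mean-it
```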

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-05 Thread Yuri Weinstein
I got verbal approvals for the listed PRs:
https://github.com/ceph/ceph/pull/51232 -- Venky approved
https://github.com/ceph/ceph/pull/51344 -- Venky approved
https://github.com/ceph/ceph/pull/51200 -- Casey approved
https://github.com/ceph/ceph/pull/50894 -- Radek approved
Suites rados and fs

[ceph-users] Re: Unable to restart mds - mds crashes almost immediately after finishing recovery

2023-05-05 Thread Emmanuel Jaep
Hi, thanks for the pointer. I'll definitely look into upgrading our cluster and patching it. As a temporary fix: as shown three lines from the end of the dump, the client 'client.96913903:2156912' was causing the crash. When we evicted it, connected to the machine running this client, and rebooted it, the
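The eviction step described above can be sketched with the standard CephFS admin commands. This is an illustrative sketch, not taken from the thread; the MDS rank (`mds.0`) is an assumption, and the numeric id is the client id quoted in the message (the part before the colon).

```shell
# List sessions to find the offending client, then evict it by id.
ceph tell mds.0 client ls                  # inspect connected clients
ceph tell mds.0 client evict id=96913903   # evict the misbehaving client
```

After eviction the client host typically needs a remount (or, as here, a reboot) before it can reconnect cleanly.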

[ceph-users] osd pause

2023-05-05 Thread Thomas Bennett
Hi, FYI - This might be pedantic, but there does not seem to be any difference between using these two sets of commands:
- ceph osd pause / ceph osd unpause
- ceph osd set pause / ceph osd unset pause
I can see that they both set/unset the pauserd,pausewr flags, but since they don't
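The equivalence described above can be checked directly on a cluster; this sketch assumes only that the flags are visible in `ceph osd dump` output, as the message states.

```shell
# Both forms toggle the same cluster-wide flags; "ceph osd pause" is
# effectively shorthand for "ceph osd set pause".
ceph osd set pause             # or: ceph osd pause
ceph osd dump | grep flags     # flags line should now include pauserd,pausewr
ceph osd unset pause           # or: ceph osd unpause
```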

[ceph-users] Re: rbd map: corrupt full osdmap (-22) when

2023-05-05 Thread Kamil Madac
Ilya, thanks for the clarification.
On Thu, May 4, 2023 at 1:12 PM Ilya Dryomov wrote:
> On Thu, May 4, 2023 at 11:27 AM Kamil Madac wrote:
> > Thanks for the info.
> > As a solution we used rbd-nbd which works fine without any issues. If we will have time we will also try to disable
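The rbd-nbd workaround mentioned above maps the image through the userspace NBD client instead of the kernel `rbd` driver, sidestepping the kernel's osdmap decoding. A minimal sketch, with pool and image names as illustrative assumptions:

```shell
# Map an RBD image via the NBD userspace client rather than "rbd map".
rbd-nbd map mypool/myimage     # prints the /dev/nbdX device on success
# ... use the block device ...
rbd-nbd unmap /dev/nbd0        # unmap when done (device name from the map step)
```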

[ceph-users] Re: pg deep-scrub issue

2023-05-05 Thread Eugen Block
Hi, please share more details about your cluster, like:
- ceph -s
- ceph osd df tree
- ceph pg ls-by-pool | head
If the client load is not too high you could increase the osd_max_scrubs config from 1 to 3 and see if anything improves (what is the current value?). If the client load is high during
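The osd_max_scrubs change suggested above can be applied cluster-wide via the config database; this is a sketch of that suggestion, and the `<pgid>` placeholder is an assumption standing in for an actual stuck PG id.

```shell
# Check the current scrub concurrency limit, then raise it as suggested.
ceph config get osd osd_max_scrubs     # default is 1
ceph config set osd osd_max_scrubs 3
# Optionally nudge a specific PG that is overdue:
ceph pg deep-scrub <pgid>
```

It is usually worth reverting the setting to its default once the scrub backlog clears, since extra scrubs compete with client I/O.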

[ceph-users] Re: 17.2.6 fs 'ls' ok, but 'cat' 'operation not permitted' puzzle

2023-05-05 Thread Zac Dover
Eugen, TL;DR: https://github.com/ceph/ceph/pull/51359 Thanks, Eugen. Thanks, Harry G Coin. Longer version: Thanks for bringing this to my attention. I've added Harry G Coin's excellent procedure to the Troubleshooting page in the CephFS documentation. The PR that contains the commit that

[ceph-users] Re: Unable to restart mds - mds crashes almost immediately after finishing recovery

2023-05-05 Thread Dhairya Parmar
Apart from the PR mentioned by Xiubo, #49691 also contains a good fix for this issue. - Dhairya
On Fri, May 5, 2023 at 6:32 AM Xiubo Li wrote:
> Hi Emmanuel,
> This should be one known issue as https://tracker.ceph.com/issues/58392 and there is one fix