[ceph-users] Re: Extremely need help. OpenShift cluster is down :c

2023-02-15 Thread Eugen Block
On the MDS host you can see all cephadm daemons with 'cephadm ls'; with 'cephadm logs --name mds.<name>' you get the logs. Quoting kreept.s...@gmail.com: Sorry, don't know where to find the MDS logs. I just found some logs in /var/log/ceph/ceph-volume.log from the MDS pod, and here it is (just a piece): …
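
A minimal sketch of those two commands (the daemon name below is a placeholder; copy the real one from the 'cephadm ls' output):

  cephadm ls                                 # list every cephadm-managed daemon on this host
  cephadm logs --name mds.myfs.node1.xyzabc  # print that daemon's journald log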

[ceph-users] Re: [EXTERNAL] Re: Renaming a ceph node

2023-02-15 Thread Rice, Christian
Hi all, so I used the rename-bucket option this morning for OSD node renames, and it was a success. It works great even on Luminous. I looked at the swap-bucket command and I felt it was leaning toward real data migration from old OSDs to new OSDs, and I was a bit timid because there wasn’t a …
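
For reference, the command being contrasted there, as a sketch (hostnames are placeholders; depending on the release it may ask for --yes-i-really-mean-it):

  ceph osd crush swap-bucket oldhost newhost  # exchanges the two CRUSH buckets in place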

[ceph-users] Re: ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-15 Thread William Konitzer
Hi Dan, I appreciate the quick response. In that case, would something like this be better, or is it overkill?
1. ceph osd add-noout osd.x  # prevent the OSD from being marked out
2. ceph osd add-noin osd.x  # prevent the OSD from being marked back in automatically
3. kubectl -n rook-ceph scale deployment rook-ceph-osd--* …
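
A sketch of the matching cleanup once maintenance is done (the OSD id and deployment name are placeholders; Rook deployments are typically named rook-ceph-osd-<id>):

  kubectl -n rook-ceph scale deployment rook-ceph-osd-3 --replicas=1
  ceph osd rm-noin osd.3   # clear the per-OSD noin flag
  ceph osd rm-noout osd.3  # clear the per-OSD noout flag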

[ceph-users] Re: ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-15 Thread Dan van der Ster
Sorry -- Let me rewrite that second paragraph without overloading the term "rebalancing", which I recognize is confusing. ... In your case, where you want to perform a quick firmware update on the drive, you should just use noout. Without noout, the OSD will be marked out after 5 minutes and …

[ceph-users] Re: ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-15 Thread Dan van der Ster
Hi Will, There are some misconceptions in your mail. 1. "noout" is a flag used to prevent the down -> out transition after an OSD has been down for several minutes (default 5 minutes). 2. "norebalance" is a flag used to prevent objects from being backfilled to a different OSD if the PG is not …
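
Both flags as they are set and cleared cluster-wide, as a minimal sketch (the per-OSD add-noout variant from the question also exists):

  ceph osd set noout        # suppress the down -> out transition
  ceph osd set norebalance  # suppress backfill of PGs to other OSDs
  # ... perform maintenance ...
  ceph osd unset norebalance
  ceph osd unset noout

The down -> out delay itself is governed by the mon_osd_down_out_interval option.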

[ceph-users] Re: Extremely need help. OpenShift cluster is down :c

2023-02-15 Thread kreept . sama
I forgot one more thing. Now when a pod tries to mount the PVC I have this issue: mssql-mssql-linux-698474b5d8-cpn6m MountVolume.MountDevice failed for volume "pvc-6b6ea6e8-ca60-4082-8f72-3a369aa99435" : rpc error: code = Internal desc = rados: ret=-30, Read-only file system: "error in setxattr"
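
ret=-30 is EROFS (read-only filesystem), which for a CephFS-backed PVC usually points at the filesystem or cluster state rather than the pod itself. Generic health checks worth running (a sketch, not from the original message):

  ceph status
  ceph health detail
  ceph fs status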

[ceph-users] Re: Extremely need help. OpenShift cluster is down :c

2023-02-15 Thread kreept . sama
Sorry, don't know where to find the MDS logs. I just found some logs in /var/log/ceph/ceph-volume.log from the MDS pod, and here it is (just a piece): ... [2023-02-15 12:09:07,460][ceph_volume.main][INFO ] Running command: ceph-volume inventory --format json /dev/sda3 [2023-02-15 …

[ceph-users] ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-15 Thread wkonitzer
Hi, We have a discussion going on about which is the correct flag to use for some maintenance on an OSD: should it be "noout" or "norebalance"? This was sparked because we need to take an OSD out of service for a short while to upgrade its firmware. One school of thought is: - "ceph …

[ceph-users] Re: clt meeting summary [15/02/2023]

2023-02-15 Thread Laura Flores
I would be interested in helping catalogue errors and fixes we experience in the lab. Do we have a preferred platform for this cheatsheet? On Wed, Feb 15, 2023 at 11:54 AM Nizamudeen A wrote: > Hi all, > > today's topics were: > > - Labs: > - Keeping a catalog > - Have a …

[ceph-users] User + Dev monthly meeting happening tomorrow, Feb. 16th!

2023-02-15 Thread Laura Flores
Hi Ceph Users, The User + Dev monthly meeting is coming up tomorrow, Thursday, Feb. 16th at 3:00 PM UTC. Please add any topics you'd like to discuss to the agenda: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes See you there,

[ceph-users] Re: Announcing go-ceph v0.17.0

2023-02-15 Thread Sven Anderson
We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.20.0 Changes include additions to the rbd, rgw, and cephfs packages. More details are available at the …

[ceph-users] clt meeting summary [15/02/2023]

2023-02-15 Thread Nizamudeen A
Hi all, today's topics were:
- Labs:
  - Keeping a catalog
  - Have a dedicated group to debug/work through the issues.
  - Looking for interested parties who would like to contribute to lab maintenance tasks
- Poll for meeting time, looking for a central person …

[ceph-users] Ceph (cephadm) Quincy: can't add OSD from remote nodes.

2023-02-15 Thread Anton Chivkunov
Hello! I'm stuck with a problem while trying to create a cluster of 3 nodes (AWS EC2 instances):

fa11 ~ # ceph orch host ls
HOST  ADDR           LABELS  STATUS
fa11  172.16.24.67   _admin
fa12  172.16.23.159  _admin
fa13  172.16.25.119  _admin
3 hosts in cluster

Each of them has 2 disks (all …
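
With cephadm, OSDs on remote hosts are normally created through the orchestrator once the devices show as available, e.g. (the device path is a placeholder):

  ceph orch device ls                          # devices must be listed as available
  ceph orch daemon add osd fa12:/dev/xvdb      # add one specific device
  ceph orch apply osd --all-available-devices  # or consume every eligible disk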

[ceph-users] Re: PSA: Potential problems in a recent kernel?

2023-02-15 Thread Dmitrii Ermakov
Hi Matthew, I can confirm that there is something wrong with kernel 6.0.18-200.fc36 (we have it on our OKD 4 nodes). When we upgraded OKD (OpenShift 4 upstream with Fedora CoreOS 36) to 4.11.0-0.okd-2023-01-14-152430, it upgraded the kernel from 6.0.10-200.fc36 to 6.0.18-200.fc36. We use Rook-Ceph …

[ceph-users] Re: Very slow snaptrim operations blocking client I/O

2023-02-15 Thread Victor Rodriguez
An update on this for the record: to fully solve this I've had to destroy each OSD and create them again, one by one. I could have done it one host at a time, but I preferred to stay on the safe side in case something else went wrong. The values for num_pgmeta_omap (which I don't know …
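
The generic destroy-and-recreate cycle for a single OSD, as a sketch (the OSD id and device are placeholders, and unit names differ on cephadm deployments; not necessarily the exact procedure used here):

  ceph osd out 12
  ceph osd safe-to-destroy osd.12             # wait until this reports it is safe
  systemctl stop ceph-osd@12
  ceph osd destroy 12 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm create --osd-id 12 --data /dev/sdX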

[ceph-users] Swift Public Access URL returns "NoSuchBucket" when rgw_swift_account_in_url is True

2023-02-15 Thread Beaman, Joshua
Greetings, Regarding: https://tracker.ceph.com/issues/58019 And: https://github.com/ceph/ceph/pull/47341 We have completed upgrading our production Pacific clusters to 16.2.11 and are still experiencing this bug. Can anyone confirm whether this backport was included in 16.2.11? Could there be …
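
The option in question, for context (a sketch; with it enabled, RGW expects the account in the Swift path, e.g. /swift/v1/AUTH_<tenant>/<container>):

  ceph config set client.rgw rgw_swift_account_in_url true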

[ceph-users] Re: Missing object in bucket list

2023-02-15 Thread mahnoosh shahidi
Thanks for your reply. I believe this is exactly our case. Best Regards, Mahnoosh On Tue, Feb 14, 2023 at 9:25 PM J. Eric Ivancich wrote: > A bug was reported recently where, if an object put occurs while bucket > resharding is finishing up, it would write to the old bucket shard rather > than …
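
Commands commonly used to inspect a bucket's reshard state when chasing this kind of mismatch (a sketch, not taken from the thread; the bucket name is a placeholder):

  radosgw-admin reshard status --bucket=mybucket
  radosgw-admin bucket stats --bucket=mybucket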

[ceph-users] Re: Renaming a ceph node

2023-02-15 Thread Eugen Block
Hi, I haven't done this in a production cluster yet, only in small test clusters without data. But there's a rename-bucket command: ceph osd crush rename-bucket <srcname> <dstname> -- rename bucket <srcname> to <dstname>. It should do exactly that: just rename the bucket.
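
A minimal usage example (hostnames are placeholders):

  ceph osd crush rename-bucket oldnodename newnodename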