[ceph-users] Re: osd cannot get osdmap

2023-09-14 Thread Stefan Kooman
On 14-09-2023 17:32, Nathan Gleason wrote: Hello, We had a network hiccup with a Ceph cluster and it made several of our osds go out/down. After the network was fixed the osds remain down. We have restarted them in numerous ways and they won’t come up. The logs for the down osds just

[ceph-users] Re: Questions about PG auto-scaling and node addition

2023-09-14 Thread Kai Stian Olstad
On Wed, Sep 13, 2023 at 04:33:32PM +0200, Christophe BAILLON wrote: We have a cluster with 21 nodes, each having 12 x 18TB, and 2 NVMe for db/wal. We need to add more nodes. The last time we did this, the PGs remained at 1024, so the number of PGs per OSD decreased. Currently, we are at 43 PGs
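
A hedged sketch of the commands involved (the pool name and target pg_num below are placeholders, not taken from the thread):

  # What the autoscaler currently recommends, and PGs per OSD today
  ceph osd pool autoscale-status
  ceph osd df

  # Raise pg_num on the data pool after the new nodes are in
  # (pick a power of two that lands near ~100 PGs per OSD)
  ceph osd pool set cephfs_data pg_num 2048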

[ceph-users] ceph orchestator pulls strange images from docker.io

2023-09-14 Thread Boris Behrens
Hi, I am currently trying to adopt our stage cluster, and some hosts just pull strange images. root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
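
As a hedged sketch of keeping adoption on a known image (the registry and tag below are assumptions, not from the thread):

  # Pin the container image cephadm should use before adopting daemons
  ceph config set global container_image quay.io/ceph/ceph:v16.2.14

  # Confirm the setting and what the orchestrator is actually running
  ceph config get global container_image
  ceph orch ps

  # cephadm adopt can also be pointed at an explicit image
  cephadm --image quay.io/ceph/ceph:v16.2.14 adopt --style legacy --name mon.$(hostname -s)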

[ceph-users] osd cannot get osdmap

2023-09-14 Thread Nathan Gleason
Hello, We had a network hiccup with a Ceph cluster and it made several of our osds go out/down. After the network was fixed the osds remain down. We have restarted them in numerous ways and they won’t come up. The logs for the down osds just repeat this line over and over "tick checking mon
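
Not from the thread, but a hedged checklist for OSDs stuck in that loop (daemon ids and addresses are placeholders):

  # Cluster view and which OSDs are down
  ceph -s
  ceph osd tree down

  # Mon addresses, then verify the OSD host can actually reach them
  ceph mon dump
  nc -zv <mon-ip> 3300

  # Ask the stuck daemon directly what it is doing (run on the OSD's host)
  ceph daemon osd.12 status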

[ceph-users] Re: CEPH zero iops after upgrade to Reef and manual read balancer

2023-09-14 Thread Josh Salomon
Hi Mosharaf - I will check it, but I can assure you that this error is a CLI error and the command has not impacted the system or the data. I have no clue what happened - I am sure I tested this scenario. The command syntax is 'ceph osd rm-pg-upmap-primary <pgid>'; the error you get is because you did not
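
For reference, a hedged example of the syntax (the pgid and osd id are made up):

  # Remove the primary upmap for a single PG
  ceph osd rm-pg-upmap-primary 2.1a

  # The counterpart that creates such a mapping, for comparison
  ceph osd pg-upmap-primary 2.1a 5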

[ceph-users] Re: Not able to find a standardized restoration procedure for subvolume snapshots.

2023-09-14 Thread Kushagr Gupta
Hi Team, Any update on this? Thanks and Regards, Kushagra Gupta On Tue, Sep 5, 2023 at 10:51 AM Kushagr Gupta wrote: > *Ceph-version*: Quincy > *OS*: Centos 8 stream > > *Issue*: Not able to find a standardized restoration procedure for > subvolume snapshots. > > *Description:* > Hi team, >
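
A hedged sketch of the snapshot-clone route that is commonly used for this (volume, subvolume, snapshot and clone names are placeholders):

  # List snapshots of the subvolume
  ceph fs subvolume snapshot ls cephfs subvol01

  # Clone a snapshot into a new subvolume, then follow the clone's progress
  ceph fs subvolume snapshot clone cephfs subvol01 snap01 subvol01_restored
  ceph fs clone status cephfs subvol01_restored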

[ceph-users] Re: CEPH zero iops after upgrade to Reef and manual read balancer

2023-09-14 Thread Josh Salomon
Hi Mosharaf, If you undo the read balancing commands (using the command 'ceph osd rm-pg-upmap-primary' on all pgs in the pool) do you see improvements in the performance? Regards, Josh On Thu, Sep 14, 2023 at 12:35 AM Laura Flores wrote: > Hi Mosharaf, > > Can you please create a tracker
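
A hedged sketch of looping that removal over every PG in one pool (the pool name is a placeholder, and the jq filter assumes the usual 'pg_stats'/'pgid' field names in the JSON output):

  pool=mypool
  for pgid in $(ceph pg ls-by-pool "$pool" -f json | jq -r '.pg_stats[].pgid'); do
      # drop the pg-upmap-primary entry for this PG, if any
      ceph osd rm-pg-upmap-primary "$pgid"
  done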

[ceph-users] What is causing *.rgw.log pool to fill up / not be expired (Re: RGW multisite logs (data, md, bilog) not being trimmed automatically?)

2023-09-14 Thread Christian Rohmann
I am unfortunately still observing this issue of the RADOS pool "*.rgw.log" filling up with more and more objects: On 26.06.23 18:18, Christian Rohmann wrote: On the primary cluster I am observing an ever growing (objects and bytes) "sitea.rgw.log" pool, not so on the remote "siteb.rgw.log"
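
A hedged set of inspection commands for a case like this (the pool name comes from the thread; the object-name grouping is only a rough heuristic):

  # Pool size and which object name prefixes are piling up
  rados df | grep rgw.log
  rados -p sitea.rgw.log ls | cut -d. -f1 | sort | uniq -c | sort -rn | head

  # Multisite sync state and per-log status
  radosgw-admin sync status
  radosgw-admin datalog status
  radosgw-admin mdlog status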

[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-14 Thread sharathvuthpala
We are using ceph version 16.2.10-172.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable).

[ceph-users] Re: CEPH zero iops after upgrade to Reef and manual read balancer

2023-09-14 Thread Mosharaf Hossain
Hello Josh Thank you for your reply to us. After running the command on the cluster I got the following error. We are concerned about user data. Could you kindly confirm this command will not affect any user data? root@ceph-node1:/# ceph osd rm-pg-upmap-primary Traceback (most recent call last):

[ceph-users] Re: Not able to find a standardized restoration procedure for subvolume snapshots.

2023-09-14 Thread Lokendra Rathour
Hi Team, Facing a similar situation, any help would be appreciated. Thanks once again for the support. -Lokendra On Tue, Sep 5, 2023 at 10:51 AM Kushagr Gupta wrote: > *Ceph-version*: Quincy > *OS*: Centos 8 stream > > *Issue*: Not able to find a standardized restoration procedure for >

[ceph-users] Re: Ceph services failing to start after OS upgrade

2023-09-14 Thread Fabian Grünbichler
On September 13, 2023 7:50 pm, Robert Sander wrote: > On 12.09.23 14:51, hansen.r...@live.com.au wrote: > >> I have a ceph cluster running on my proxmox system and it all seemed to >> upgrade successfully however after the reboot my ceph-mon and my ceph-osd >> services are failing to start or
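
Not from the thread, but the usual first diagnostics on a systemd host look roughly like this (unit names depend on the host name and daemon ids):

  # Which ceph units exist and which ones failed
  systemctl list-units 'ceph*' --all

  # Status and boot log of a failing mon/osd unit
  systemctl status ceph-mon@$(hostname -s)
  journalctl -b -u ceph-mon@$(hostname -s)
  journalctl -b -u ceph-osd@0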

[ceph-users] Re: 6.5 CephFS client - ceph_cap_reclaim_work [ceph] / ceph_con_workfn [libceph] hogged CPU

2023-09-14 Thread Stefan Kooman
On 14-09-2023 03:27, Xiubo Li wrote: < - snip --> Hi Stefan, Yeah, as I remember I have seen something like this only once before in the cephfs qa tests, together with other issues, but I just thought it wasn't the root cause so I didn't spend time on it. Just went through the

[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-14 Thread Sake
Which version do you use? Quincy currently has incorrect values for its new IOPS scheduler; this will be fixed in the next release (hopefully soon). But there are workarounds, please check the mailing list about this, I'm in a hurry so can't point directly to the correct post. Best regards, Sake On
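
A hedged sketch of the usual workaround (the osd id and override value are placeholders; use the _ssd variant for flash devices):

  # What capacity did mclock record for this OSD?
  ceph config show osd.0 osd_mclock_max_capacity_iops_hdd

  # Override it with a sane value if the measured number is clearly wrong
  ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 350

  # Or prioritise recovery over client IO while backfill catches up
  ceph config set osd osd_mclock_profile high_recovery_ops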