On 14-09-2023 17:32, Nathan Gleason wrote:
Hello,
We had a network hiccup with a Ceph cluster and it made several of our OSDs go
out/down. After the network was fixed, the OSDs remained down. We have restarted
them in numerous ways and they won't come up.
The logs for the down OSDs just
On Wed, Sep 13, 2023 at 04:33:32PM +0200, Christophe BAILLON wrote:
We have a cluster with 21 nodes, each having 12 x 18TB, and 2 NVMe for db/wal.
We need to add more nodes.
The last time we did this, the PGs remained at 1024, so the number of PGs per
OSD decreased.
Currently, we are at 43 PGs per OSD
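A minimal sketch of how one might check and then raise the PG count after adding
nodes; the pool name "data" and the target of 2048 are placeholders that depend
on your OSD count and replica size:

  # Show per-OSD PG counts and the autoscaler's recommendations
  ceph osd df
  ceph osd pool autoscale-status
  # Raise pg_num on the pool (pgp_num follows automatically on Nautilus and later)
  ceph osd pool set data pg_num 2048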
Hi,
I am currently trying to adopt our staging cluster, but some hosts just pull
strange images.
root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
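If the problem is cephadm resolving the wrong default image during adoption, a
hedged sketch of pinning the image explicitly (the image tag and daemon name are
placeholders):

  # Pin the image for a single adoption run
  cephadm --image quay.io/ceph/ceph:v17.2.6 adopt --style legacy --name mon.host1
  # Or set the default image the orchestrator uses
  ceph config set global container_image quay.io/ceph/ceph:v17.2.6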
Hello,
We had a network hiccup with a Ceph cluster and it made several of our OSDs go
out/down. After the network was fixed, the OSDs remained down. We have restarted
them in numerous ways and they won't come up.
The logs for the down OSDs just repeat this line over and over: "tick checking
mon
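That message suggests the OSDs are stuck waiting to reach a monitor. A sketch of
basic checks, with placeholder host and OSD id:

  # Confirm the mon quorum and the addresses the OSDs must reach
  ceph -s
  ceph mon dump
  # From an affected OSD host, test the mon ports (v2: 3300, v1: 6789)
  nc -vz mon-host 3300
  # Watch the OSD retry in real time
  journalctl -u ceph-osd@0 -f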
Hi Mosharaf - I will check it, but I can assure you that this error is a CLI
error and the command has not impacted the system or the data. I have no clue
what happened; I am sure I tested this scenario.
The command syntax is
ceph osd rm-pg-upmap-primary
the error you get is because you did not
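Assuming the cut-off text refers to a missing PG id argument (an assumption here;
the quoted explanation is truncated), the full invocation would look like this,
with "1.0" as a placeholder PG id:

  # Remove the primary mapping for a single PG
  ceph osd rm-pg-upmap-primary 1.0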
Hi Team,
Any update on this?
Thanks and Regards,
Kushagra Gupta
On Tue, Sep 5, 2023 at 10:51 AM Kushagr Gupta
wrote:
> *Ceph-version*: Quincy
> *OS*: CentOS 8 Stream
>
> *Issue*: Not able to find a standardized restoration procedure for
> subvolume snapshots.
>
> *Description:*
> Hi team,
>
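As far as I know there is no single "restore" command for subvolume snapshots;
the usual route is to clone the snapshot into a new subvolume. A sketch with
placeholder names ("cephfs", "subvol1", "snap1", "subvol1_restored"):

  # Clone the snapshot into a fresh subvolume
  ceph fs subvolume snapshot clone cephfs subvol1 snap1 subvol1_restored
  # Poll until the clone state reports "complete"
  ceph fs clone status cephfs subvol1_restored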
Hi Mosharaf,
If you undo the read balancing commands (by running 'ceph osd
rm-pg-upmap-primary' on all PGs in the pool), do you see improvements in
performance?
Regards,
Josh
On Thu, Sep 14, 2023 at 12:35 AM Laura Flores wrote:
> Hi Mosharaf,
>
> Can you please create a tracker
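A sketch of what removing the mapping for every PG in a pool could look like in
practice (pool name "mypool" is a placeholder; requires jq):

  # Drop the primary mapping for each PG in the pool
  for pg in $(ceph pg ls-by-pool mypool -f json | jq -r '.pg_stats[].pgid'); do
      ceph osd rm-pg-upmap-primary "$pg"
  done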
I am unfortunately still observing this issue of the RADOS pool
"*.rgw.log" filling up with more and more objects:
On 26.06.23 18:18, Christian Rohmann wrote:
On the primary cluster I am observing an ever-growing (objects and
bytes) "sitea.rgw.log" pool, not so on the remote "siteb.rgw.log"
We are using ceph version 16.2.10-172.el8cp
(00a157ecd158911ece116ae43095de793ed9f389) pacific (stable).
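A few commands that may help narrow down which objects are accumulating (a
sketch, reusing the "sitea.rgw.log" pool name from above):

  # Object counts and sizes per pool
  rados df | grep rgw.log
  # Sample object names; prefixes usually hint at the subsystem (gc, lc, sync, ...)
  rados -p sitea.rgw.log ls | head -50
  # With multisite, also check for stuck sync errors
  radosgw-admin sync error list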
Hello Josh
Thank you for your reply.
After running the command on the cluster, I got the following error. We are
concerned about user data. Could you kindly confirm that this command will not
affect any user data?
root@ceph-node1:/# ceph osd rm-pg-upmap-primary
Traceback (most recent call last):
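For what it is worth, pg-upmap-primary entries only change which OSD acts as
primary for a PG; they do not move or rewrite data. The existing entries can be
listed before touching them (a sketch; the grep pattern assumes the osdmap
prints them under that name):

  # List primary mappings currently recorded in the osdmap
  ceph osd dump | grep pg_upmap_primary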
Hi Team,
We are facing a similar situation; any help would be appreciated.
Thanks once again for the support.
-Lokendra
On Tue, Sep 5, 2023 at 10:51 AM Kushagr Gupta
wrote:
> *Ceph-version*: Quincy
> *OS*: CentOS 8 Stream
>
> *Issue*: Not able to find a standardized restoration procedure for
>
On September 13, 2023 7:50 pm, Robert Sander wrote:
> On 12.09.23 14:51, hansen.r...@live.com.au wrote:
>
>> I have a Ceph cluster running on my Proxmox system and it all seemed to
>> upgrade successfully; however, after the reboot my ceph-mon and my ceph-osd
>> services are failing to start or
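A sketch of first checks when the units fail after a reboot (unit instance names
are placeholders; on Proxmox the mon unit is usually ceph-mon@<hostname>):

  # See why the units failed
  systemctl status ceph-mon@pve1
  journalctl -b -u ceph-mon@pve1 --no-pager | tail -50
  # OSD units are keyed by OSD id
  systemctl status ceph-osd@0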
On 14-09-2023 03:27, Xiubo Li wrote:
<-- snip -->
Hi Stefan,
Yeah, as I remember, I have seen something like this before only once in the
cephfs qa tests, together with other issues, but I thought it wasn't the root
cause so I didn't spend time on it.
Just went through the
Which version do you use? Quincy currently has incorrect values for its new IOPS
scheduler; this will be fixed in the next release (hopefully soon). But there
are workarounds; please check the mailing list about this. I'm in a hurry, so I
can't point directly to the correct post. Best regards, Sake
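As far as I recall, the workarounds discussed on the list were either overriding
the mis-measured IOPS capacity or switching back to the wpq scheduler; the value
below is a placeholder, and OSDs need a restart for the scheduler change:

  # Option 1: override the auto-measured IOPS capacity for HDD OSDs
  ceph config set osd osd_mclock_max_capacity_iops_hdd 350
  # Option 2: fall back to the previous scheduler
  ceph config set osd osd_op_queue wpq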