[ceph-users] Re: ceph status not showing correct monitor services

2024-04-02 Thread Eugen Block
You can add a mon manually to the monmap, but that requires downtime for the mons. Here's an example [1] of how to modify the monmap (including a network change, which you don't need, of course). But that would be my last resort; first I would try to find out why the MON fails to join the
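For reference, offline monmap surgery along the lines Eugen describes typically looks something like the sketch below. All mon names, addresses, and paths are illustrative, not from the thread, and the mons must be stopped before injecting:

```shell
# Sketch of manual monmap editing (requires mon downtime).
ceph mon getmap -o /tmp/monmap                        # dump the current monmap
monmaptool --print /tmp/monmap                        # inspect current members
monmaptool --add mon4 192.168.1.14:6789 /tmp/monmap   # add the missing mon (example name/addr)
systemctl stop ceph-mon@mon1                          # stop the mon daemon
ceph-mon -i mon1 --inject-monmap /tmp/monmap          # inject the edited map
systemctl start ceph-mon@mon1
```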

[ceph-users] Re: Drained A Single Node Host On Accident

2024-04-02 Thread Eugen Block
Hi, without knowing the whole story, to cancel OSD removal you can run this command: ceph orch osd rm stop Regards, Eugen Quoting "adam.ther": Hello, I have a single-node host with a VM as a backup MON, MGR, etc. This has caused all OSDs to be pending as 'deleting'. Can I safely
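The `ceph orch osd rm stop` command takes the OSD id(s) whose removal should be cancelled; a quick sketch (OSD id 3 is illustrative):

```shell
ceph orch osd rm status   # see which OSDs are currently queued for removal
ceph orch osd rm stop 3   # cancel the scheduled removal of osd.3
ceph orch osd rm status   # confirm the removal queue is empty again
```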

[ceph-users] Re: cephfs inode backtrace information

2024-04-02 Thread Loïc Tortay
On 29/03/2024 04:18, Niklas Hambüchen wrote: Hi Loïc, I'm surprised by that high storage amount: my "default" pool uses only ~512 bytes per file, not ~32 KiB like in your pool. That's a 64x difference! (See also my other response to the original post I just sent.) I'm using Ceph 16.2.1.

[ceph-users] Re: Replace block drives of combined NVME+HDD OSDs

2024-04-02 Thread Zakhar Kirpichenko
Thank you, Eugen. It was actually very straightforward. I'm happy to report back that there were no issues with removing and zapping the OSDs whose data devices were unavailable. I had to manually remove stale dm entries, but that was it. /Z On Tue, 2 Apr 2024 at 11:00, Eugen Block wrote: >

[ceph-users] Pacific 16.2.15 `osd noin`

2024-04-02 Thread Zakhar Kirpichenko
Hi, I'm adding a few OSDs to an existing cluster, the cluster is running with `osd noout,noin`: cluster: id: 3f50555a-ae2a-11eb-a2fc-ffde44714d86 health: HEALTH_WARN noout,noin flag(s) set Specifically `noin` is documented as "prevents booting OSDs from being marked
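For context, the flags in question are toggled like this (OSD id 12 is illustrative; whether `noin` also applies to brand-new OSDs, rather than only rebooting ones, is exactly what the thread is asking):

```shell
ceph osd set noin     # booting OSDs will not be marked "in" automatically
ceph osd unset noin   # restore normal behaviour
ceph osd in 12        # manually mark osd.12 "in" while the flag is set
```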

[ceph-users] Re: Questions about rbd flatten command

2024-04-02 Thread Henry lol
Yes, they do. Actually, the read/write ops will be skipped as you said. Also, is it possible to limit the max network throughput per flatten operation or image? I want to avoid the scenario where the flatten operation consumes network throughput fully.
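I'm not aware of a dedicated bandwidth limit for flatten, but the number of objects a management operation touches in parallel can be lowered, which indirectly throttles it. A hedged sketch, assuming your rbd build accepts config overrides on the command line (pool/image names are illustrative):

```shell
# rbd_concurrent_management_ops controls how many objects a management
# operation (flatten, remove, ...) processes in parallel; lowering it
# reduces the instantaneous network load of a flatten.
rbd flatten mypool/myclone --rbd-concurrent-management-ops 2
```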

[ceph-users] Re: Replace block drives of combined NVME+HDD OSDs

2024-04-02 Thread Eugen Block
Nice, thanks for the info. Quoting Zakhar Kirpichenko: Thank you, Eugen. It was actually very straightforward. I'm happy to report back that there were no issues with removing and zapping the OSDs whose data devices were unavailable. I had to manually remove stale dm entries, but that was

[ceph-users] Re: Questions about rbd flatten command

2024-04-02 Thread Anthony D'Atri
Do these RBD volumes have a full feature set? I would think that fast-diff and objectmap would speed this. > On Apr 2, 2024, at 00:36, Henry lol wrote: > > I'm not sure, but it seems that read and write operations are > performed for all objects in rbd. > If so, is there any method to apply
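For what it's worth, checking and enabling the features Anthony mentions looks roughly like this (pool/image names are illustrative):

```shell
rbd info mypool/myimage                                  # list the enabled features
rbd feature enable mypool/myimage object-map fast-diff   # fast-diff requires object-map
rbd object-map rebuild mypool/myimage                    # populate the map for existing data
```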

[ceph-users] Re: Replace block drives of combined NVME+HDD OSDs

2024-04-02 Thread Eugen Block
Hi, here's the link to the docs [1] on how to replace OSDs. ceph orch osd rm --replace --zap [--force] This should zap both the data drive and the DB LV (yes, its data is useless without the data drive); not sure how it will handle the case where the data drive isn't accessible, though. One thing I'm not
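The full form of the command, with a hypothetical OSD id filled in:

```shell
ceph orch osd rm 30 --replace --zap   # drain osd.30, mark it "destroyed", zap its devices
ceph orch osd rm status               # watch the drain/removal progress
```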

[ceph-users] CEPH Quincy installation with multipathd enabled

2024-04-02 Thread youssef . khristo
Greetings community, we have a setup comprising six servers running CentOS 8 Minimal Installation with Ceph Quincy version 18.2.2, supported by 20 Gbps fiber-optic NICs and dual Intel Xeon processors; we bootstrapped the installation on the first node, then expanded to the others using the

[ceph-users] Re: cephadm: daemon osd.x on yyy is in error state

2024-04-02 Thread service . plant
Probably `ceph mgr fail` will help.
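For completeness, the suggested failover (a standby mgr takes over, and cephadm then re-evaluates daemon state):

```shell
ceph mgr fail   # fail the active mgr over to a standby
ceph orch ps    # re-check reported daemon states after the failover
```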

[ceph-users] "ceph orch daemon add osd" deploys broken OSD

2024-04-02 Thread service . plant
Hi everybody, I've run into a situation where I cannot redeploy an OSD on a new disk. I need to replace osd.30 because the disk keeps reporting I/O problems. I run `ceph orch daemon osd.30 --replace`, then I zap the DB: ``` root@server-2:/# ceph-volume lvm zap /dev/ceph-db/db-88 --> Zapping:

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-02 Thread Adiga, Anantha
Hi Eugen, Currently there are only three nodes, but I can add a node to the cluster and check it out. I will take a look at the mon logs Thank you, Anantha -Original Message- From: Eugen Block Sent: Tuesday, April 2, 2024 12:19 AM To: Adiga, Anantha Cc: ceph-users@ceph.io

[ceph-users] Multi-MDS

2024-04-02 Thread quag...@bol.com.br
Hello, I did the configuration to activate multi-MDS in Ceph. The parameters I entered looked like this: 3 active, 1 standby. I also applied the distributed-pinning configuration at the root of the storage's mounted directory: setfattr -n ceph.dir.pin.distributed -v 1 / This
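The configuration described above would roughly correspond to the following (the filesystem name and mount point are illustrative, not from the thread):

```shell
ceph fs set cephfs max_mds 3                            # three active MDS ranks
ceph fs set cephfs standby_count_wanted 1               # keep one standby
setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs   # distributed pinning at the mount root
getfattr -n ceph.dir.pin.distributed /mnt/cephfs        # verify the xattr took effect
```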

[ceph-users] cephadm shell version not consistent across monitors

2024-04-02 Thread J-P Methot
Hi, We are still running ceph Pacific with cephadm and we have run into a peculiar issue. When we run the `cephadm shell` command on monitor1, the container we get runs ceph 16.2.9. However, when we run the same command on monitor2, the container runs 16.2.15, which is the current version of

[ceph-users] Re: cephadm shell version not consistent across monitors

2024-04-02 Thread Adam King
From what I can see with the most recent cephadm binary on pacific, unless you have the CEPHADM_IMAGE env variable set, it does a `podman images --filter label=ceph=True --filter dangling=false` (or docker) and takes the first image in the list. It seems to be getting sorted by creation time by
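A way to pin the shell to a specific image rather than relying on that image-list heuristic (the image tag is illustrative):

```shell
cephadm --image quay.io/ceph/ceph:v16.2.15 shell
# or persistently, via the env variable Adam mentions:
export CEPHADM_IMAGE=quay.io/ceph/ceph:v16.2.15
cephadm shell
```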

[ceph-users] Re: Pacific Bug?

2024-04-02 Thread Adam King
https://tracker.ceph.com/issues/64428 should be it. Backports are done for quincy, reef, and squid and the patch will be present in the next release for each of those versions. There isn't a pacific backport as, afaik, there are no more pacific releases planned. On Fri, Mar 29, 2024 at 6:03 PM

[ceph-users] Re: Failed adding back a node

2024-04-02 Thread Alex
Hi Adam. Re-deploying didn't work, but `ceph config dump` showed one of the container_images specified 16.2.10-160. After we removed that var, it instantly redeployed the OSDs. Thanks again for your help.
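The kind of check-and-remove sequence Alex describes (the config section shown is an illustrative example of where a pinned image override might live):

```shell
ceph config dump | grep container_image   # look for pinned image overrides
ceph config rm osd container_image        # drop the stale override (section is an example)
```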

[ceph-users] Re: RGW Data Loss Bug in Octopus 15.2.0 through 15.2.6

2024-04-02 Thread xu chenhui
Jonas Nemeiksis wrote: > Hello, > Maybe your issue is related to this: https://tracker.ceph.com/issues/63642 > On Wed, Mar 27, 2024 at 7:31 PM xu chenhui xuchenhuig(a)gmail.com wrote: > > Hi, Eric Ivancich > > I have a similar problem in ceph version 16.2.5. Has this problem

[ceph-users] Re: Are we logging IRC channels?

2024-04-02 Thread Alvaro Soto
I'll start working on the needed configurations and let you know. On Sat, Mar 23, 2024, 12:09 PM Anthony D'Atri wrote: > I fear this will raise controversy, but in 2024 what’s the value in > perpetuating an interface from early 1980s BITnet batch operating systems? > > > On Mar 23, 2024, at

[ceph-users] Re: ceph status not showing correct monitor services

2024-04-02 Thread Adiga, Anantha
Hi Eugen, I noticed this in the config dump: why is only "mon.a001s016" listed? And this is the one that is not listed in "ceph -s": mon advanced auth_allow_insecure_global_id_reclaim false

[ceph-users] RBD image metric

2024-04-02 Thread Szabo, Istvan (Agoda)
Hi, I'm trying to pull some metrics out of Ceph about RBD image sizes, but I haven't found anything beyond pool-related metrics. Is there any metric about images, or do I need to collect it myself with some third-party tool? Thank you
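One option for per-image sizes is `rbd du`, whose output an external script can feed into a monitoring system (pool/image names are illustrative):

```shell
rbd du mypool                 # provisioned vs. used size for every image in the pool
rbd du --format json mypool   # machine-readable, easy to feed a collector
rbd info mypool/myimage       # per-image detail, including provisioned size
```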