You can add a mon manually to the monmap, but that requires downtime
of the mons. Here's an example [1] of how to modify the monmap
(including a network change, which you don't need, of course). But
that would be my last resort; first I would try to find out why the
MON fails to join the quorum.
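For reference, a minimal sketch of the monmap-editing procedure, assuming a classic (non-cephadm) deployment; the mon names and address below are placeholders:
```
systemctl stop ceph-mon@mon1        # monmap edits require the mons to be down
ceph-mon -i mon1 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap      # inspect the current map
monmaptool --add mon4 192.168.1.14:6789 /tmp/monmap
ceph-mon -i mon1 --inject-monmap /tmp/monmap   # repeat the inject for every mon
systemctl start ceph-mon@mon1
```
On a cephadm cluster the same steps would have to be run inside the mon containers.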
Hi,
without knowing the whole story, to cancel OSD removal you can run
this command:
ceph orch osd rm stop <osd_id>
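A quick sketch of how that is typically used (the OSD IDs here are placeholders):
```
ceph orch osd rm status      # list removals currently queued or draining
ceph orch osd rm stop 3 7    # cancel the scheduled removal of OSDs 3 and 7
```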
Regards,
Eugen
Quote from "adam.ther":
Hello,
I have a single node host with a VM as a backup MON, MGR, etc.
This has caused all OSDs to be pending as 'deleting'. Can I safely cancel
the removal?
Hi,
here's the link to the docs [1] on how to replace OSDs:
ceph orch osd rm <osd_id> --replace --zap [--force]
This should zap both the data drive and the DB LV (yes, its data is
useless without the data drive); I'm not sure how it will handle the case
where the data drive isn't accessible, though.
One thing I'm not sure
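A hedged sketch of the full replacement flow described above (the OSD ID is a placeholder):
```
ceph orch osd rm 12 --replace --zap   # drain, mark 'destroyed', zap data + DB LV
ceph orch osd rm status               # watch the drain progress
ceph orch device ls --refresh         # after swapping the disk, let the orchestrator pick it up
```
With --replace the OSD ID stays reserved, so a matching drivegroup spec can redeploy the new disk under the same ID.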
On 29/03/2024 04:18, Niklas Hambüchen wrote:
Hi Loïc, I'm surprised by that high storage amount: my "default" pool uses
only ~512 bytes per file, not ~32 KiB like in your pool. That's a 64x
difference! (See also my other response to the original post I just sent.)
I'm using Ceph 16.2.1.
Greetings community,
we have a setup comprising six servers running a CentOS 8 Minimal
Installation with Ceph Reef version 18.2.2, backed by 20 Gbps fiber-optic
NICs and dual Intel Xeon processors. We bootstrapped the installation on
the first node, then expanded to the others using cephadm
Hi,
I'm adding a few OSDs to an existing cluster; the cluster is running with
`osd noout,noin` set:
  cluster:
    id:     3f50555a-ae2a-11eb-a2fc-ffde44714d86
    health: HEALTH_WARN
            noout,noin flag(s) set
Specifically, `noin` is documented as "prevents booting OSDs from being
marked in".
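A minimal sketch of handling that once the new OSDs are up (the OSD ID is a placeholder):
```
ceph osd unset noin   # allow the freshly booted OSDs to be marked in
ceph osd in 42        # or mark an individual OSD in by hand
ceph osd set noin     # optionally re-set the flag afterwards
```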
Thank you, Eugen.
It was actually very straightforward. I'm happy to report back that there
were no issues with removing and zapping the OSDs whose data devices were
unavailable. I had to manually remove stale dm entries, but that was it.
/Z
On Tue, 2 Apr 2024 at 11:00, Eugen Block wrote:
> Hi
Do these RBD volumes have the full feature set? I would think that
fast-diff and object-map would speed this up.
> On Apr 2, 2024, at 00:36, Henry lol wrote:
>
> I'm not sure, but it seems that read and write operations are
> performed for all objects in rbd.
> If so, is there any method to apply QoS
Yes, they do.
Actually, the read/write ops will be skipped as you said.
Also, is it possible to limit the maximum network throughput per flatten
operation or per image?
I want to avoid the scenario where a flatten operation consumes the full
network throughput.
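As far as I know there is no per-flatten throttle, but librbd exposes per-image QoS limits that may also throttle a client-driven flatten (I haven't verified that). A hedged sketch with placeholder pool/image names:
```
# object-map/fast-diff let flatten skip objects that don't exist in the child
rbd feature enable mypool/myimage object-map fast-diff

# per-image librbd QoS: cap the image's client I/O at ~100 MiB/s
rbd config image set mypool/myimage rbd_qos_bps_limit 104857600
```
If the features are enabled on an existing image, an `rbd object-map rebuild mypool/myimage` may be needed afterwards.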
Nice, thanks for the info.
Quote from Zakhar Kirpichenko:
Thank you, Eugen.
It was actually very straightforward. I'm happy to report back that there
were no issues with removing and zapping the OSDs whose data devices were
unavailable. I had to manually remove stale dm entries, but that was it.
Hi Eugen,
Currently there are only three nodes, but I can add a node to the cluster and
check it out. I will take a look at the mon logs
Thank you,
Anantha
-----Original Message-----
From: Eugen Block
Sent: Tuesday, April 2, 2024 12:19 AM
To: Adiga, Anantha
Cc: ceph-users@ceph.io
Subject:
Hi everybody.
I've faced a situation where I cannot redeploy an OSD on a new disk.
I need to replace osd.30 because the disk keeps reporting I/O problems.
I run `ceph orch osd rm 30 --replace`.
Then I zap the DB:
```
root@server-2:/# ceph-volume lvm zap /dev/ceph-db/db-88
--> Zapping: /dev/ceph-db/db-88
```
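For what it's worth, a hedged sketch of a replacement flow where the DB LV is zapped so it can be reused (the ID and path are placeholders):
```
ceph orch osd rm 30 --replace --zap               # keeps the OSD ID reserved for the new disk
ceph-volume lvm zap --destroy /dev/ceph-db/db-88  # if zapping by hand, --destroy also clears the LV tags
```
Without --destroy, leftover LVM tags on the DB LV can prevent the orchestrator from reusing it.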
Hello,
I did the configuration to activate multi-MDS in Ceph. The parameters I
entered looked like this:
3 active
1 standby
I also placed the distributed pinning configuration at the root of the
mounted dir of the storage:
setfattr -n ceph.dir.pin.distributed -v 1 /
This configuration
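A minimal sketch of what that multi-MDS configuration typically looks like (the filesystem name and mount point are placeholders):
```
ceph fs set cephfs max_mds 3               # three active MDS ranks
ceph fs set cephfs standby_count_wanted 1  # keep one standby
setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs  # spread the root's subtrees across ranks
```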
probably `ceph mgr fail` will help.
Hi,
We are still running Ceph Pacific with cephadm and we have run into a
peculiar issue. When we run the `cephadm shell` command on monitor1, the
container we get runs Ceph 16.2.9. However, when we run the same command
on monitor2, the container runs 16.2.15, which is the current version of Pacific.
From what I can see with the most recent cephadm binary on Pacific, unless
you have the CEPHADM_IMAGE env variable set, it does a `podman images
--filter label=ceph=True --filter dangling=false` (or docker) and takes the
first image in the list. It seems to be getting sorted by creation time by
default.
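If a deterministic shell image is wanted, pinning it explicitly sidesteps that sorting; a sketch (the image tag is an example):
```
cephadm shell --image quay.io/ceph/ceph:v16.2.15   # one-off: pick the image explicitly

export CEPHADM_IMAGE=quay.io/ceph/ceph:v16.2.15    # or pin it for every cephadm call
cephadm shell
```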
https://tracker.ceph.com/issues/64428 should be it. Backports are done for
Quincy, Reef, and Squid, and the patch will be present in the next release
of each of those versions. There isn't a Pacific backport as, afaik, there
are no more Pacific releases planned.
On Fri, Mar 29, 2024 at 6:03 PM Ale
Hi Adam.
Re-deploying didn't work, but `ceph config dump` showed that one of the
container_image settings specified 16.2.10-160.
After we removed that setting, it instantly redeployed the OSDs.
Thanks again for your help.
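A hedged sketch of the check-and-remove steps described above (the `osd` section is an example; the override could live at any scope):
```
ceph config dump | grep container_image   # find stale image pins
ceph config rm osd container_image        # drop the per-section override
```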
I'll start working on the needed configurations and let you know.
On Sat, Mar 23, 2024, 12:09 PM Anthony D'Atri wrote:
> I fear this will raise controversy, but in 2024 what’s the value in
> perpetuating an interface from early 1980s BITnet batch operating systems?
>
> > On Mar 23, 2024, at 5:45
Hi Eugen,
Noticed this in the config dump: why is only "mon.a001s016" listed? And
this is the one that is not listed in "ceph -s":
mon    advanced    auth_allow_insecure_global_id_reclaim    false
Jonas Nemeiksis wrote:
> Hello,
>
> Maybe your issue is related to this: https://tracker.ceph.com/issues/63642
>
>
>
> On Wed, Mar 27, 2024 at 7:31 PM xu chenhui
> wrote:
>
> > Hi, Eric Ivancich
> > I have a similar problem in ceph version 16.2.5. Has this problem
Hi,
Trying to pull some metrics out of Ceph about RBD image sizes, but I
haven't found anything, only pool-related metrics.
I wonder, is there any metric about images, or do I need to collect it
myself with some third-party tool?
Thank you
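For an ad-hoc view (not a built-in exporter metric), per-image usage can be pulled with `rbd du`; a sketch with placeholder names:
```
rbd du -p mypool                      # provisioned vs. actual usage, per image
rbd du mypool/myimage --format json   # machine-readable, e.g. for a custom exporter
```
`rbd du` is fast when the images have fast-diff enabled; otherwise it has to scan objects.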
Hey Garcetto,
On 29.03.24 4:13 PM, garcetto wrote:
I am trying to set bucket policies to allow different users to access the
same bucket with different permissions, BUT it seems that is not yet
supported. Am I wrong?
https://docs.ceph.com/en/reef/radosgw/bucketpolicy/#limitations
"We do not