Hi,
I have a reef cluster 18.2.2 on Rocky 8.9. This cluster has been upgraded from
pacific->quincy->reef over the past few years. It is a multi-site setup with one
other cluster that works fine with S3/radosgw on both sides, with proper
bidirectional data replication.
On one of the master cluster's
Oh I'm sorry, Peter, I don't know why I wrote Karl. I apologize.
Quoting Eugen Block:
Hi Karl,
I must admit that I haven't dealt with raw OSDs yet. We've usually
been working with LVM-based clusters (some of the customers used
SUSE's product SES) and in SES there was a recommendation to switch to
LVM before adopting with cephadm. So we usually did a rebuild of all
OSDs bef
Hi,
If I may, I would try something like this, but I haven't tested it, so
please take it with a grain of salt...
1. I would reinstall the operating system in this case...
Since the root filesystem is accessible but the OS is not bootable, the
most straightforward approach would be to perform a
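If the OSDs were originally deployed (or adopted) by cephadm, a rough, untested
sketch of re-activating them after the reinstall might be (the hostname is a
placeholder, not from this thread):

# re-add the freshly reinstalled host to the orchestrator
ceph orch host add <hostname>
# ask cephadm to re-activate the existing OSDs it finds on that host
ceph cephadm osd activate <hostname>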
Thanks Eugen and others for the advice. These are not, however, LVM-based
OSDs. I can get a list of what is out there with:
cephadm ceph-volume raw list
and tried
cephadm ceph-volume raw activate
but it tells me I need to manually run activate.
I was able to find the correct data disks with fo
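For what it's worth, an untested sketch of what the manual raw activation might
look like; the device, OSD id and uuid are placeholders to be taken from the
`raw list` JSON output, and the exact flags (and whether a `--` separator is
needed to pass them through cephadm) should be checked against your ceph-volume
version:

# list raw OSDs and note osd_id, osd_uuid and device for each
cephadm ceph-volume raw list
# activate a single OSD manually (placeholder values)
cephadm ceph-volume -- raw activate --device /dev/sdX --osd-id 12 --osd-uuid <uuid> --no-systemd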
Sorry Frank, I typed the wrong name.
On Tue, Apr 30, 2024, 8:51 AM Mary Zhang wrote:
> Sounds good. Thank you Kevin and have a nice day!
>
> Best Regards,
> Mary
>
> On Tue, Apr 30, 2024, 8:21 AM Frank Schilder wrote:
>
>> I think you are panicking way too much. Chances are that you will never
Sounds good. Thank you Kevin and have a nice day!
Best Regards,
Mary
On Tue, Apr 30, 2024, 8:21 AM Frank Schilder wrote:
> I think you are panicking way too much. Chances are that you will never
> need that command, so don't get fussed out by an old post.
>
> Just follow what I wrote and, in th
I think you are panicking way too much. Chances are that you will never need
that command, so don't get fussed out by an old post.
Just follow what I wrote and, in the extremely rare case that recovery does not
complete due to missing information, send an e-mail to this list and state that
you
Thank you Frank for sharing such valuable experience! I really appreciate
it.
We observe similar timelines: it took more than 1 week to drain our OSD.
Regarding exporting PGs from the failed disk and injecting them back into the
cluster, do you have any documentation? I found this online: Ceph.io — Incomplete PGs
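For context, the export/import workflow described in such articles typically
relies on ceph-objectstore-tool run against stopped OSDs; a rough sketch with
placeholder OSD ids, PG id and a bare-metal data path (not from this thread):

# on the failed/old OSD's host, with that OSD stopped
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --pgid 3.1f --op export --file /tmp/pg.3.1f.export
# on a healthy OSD (also stopped), import the PG, then restart the OSD
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-45 \
    --op import --file /tmp/pg.3.1f.export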
Hi Robert
On Mon, Apr 29, 2024 at 8:06 AM Robert Sander
wrote:
> On 4/29/24 08:50, Alwin Antreich wrote:
>
> > well it says it in the article.
> >
> > The upcoming Squid release serves as a testament to how the Ceph
> > project continues to deliver innovative features to users without
>
Hi Götz,
You can change the value of osd_max_backfills (for all OSDs or specific
ones) using `ceph config`, but you need to
enable osd_mclock_override_recovery_settings first. See
https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
for
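An untested sketch of the commands involved (the values are only illustrative):

# allow overriding the mclock-managed recovery/backfill limits
ceph config set osd osd_mclock_override_recovery_settings true
# raise the limit for all OSDs ...
ceph config set osd osd_max_backfills 3
# ... or for a single OSD
ceph config set osd.17 osd_max_backfills 5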
Hello Community,
is there a guide / documentation how to configure spdk with cephadm (running in
containers) in reef?
BR
Hi all,
I second Eugen's recommendation. We have a cluster with large HDD OSDs where
we see the following timings:
- drain an OSD: 2 weeks.
- down an OSD and let cluster recover: 6 hours.
The drain OSD procedure is - in my experience - a complete waste of time,
actually puts your cluster at
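For reference, a rough and untested sketch of the two approaches being compared
(the OSD id is a placeholder):

# drain: the orchestrator gradually moves data off the OSD before removing it
ceph orch osd rm 12 --zap
# down-and-recover: take the OSD out and let backfill restore redundancy
ceph osd out 12
# once the data is fully recovered elsewhere, remove the OSD
ceph osd purge 12 --yes-i-really-mean-it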
On 30-04-2024 11:22, ronny.lippold wrote:
hi stefan ... you are the hero of the month ;)
:p.
I don't know why I did not find your bug report.
I have the exact same problem and resolved the HEALTH warning only with "ceph
osd force_healthy_stretch_mode --yes-i-really-mean-it".
I will comment on the rep