v14.2.4
Following issue:
PG_DEGRADED_FULL Degraded data redundancy (low space): 1 pg backfill_toofull
pg 1.285 is active+remapped+backfill_toofull, acting [118,94,84]
BUT ('ceph osd df' for osd.118):
ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META   AVAIL   %USE  VAR  PGS STATUS
118 hdd   9.09470 0.8      9.1 TiB 7.4 TiB 7.4 TiB 12 KiB 19 GiB 1.7 TiB 81.53 1.16 38  up
Even with adjusted
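For context, a PG goes backfill_toofull when its backfill target OSD sits above the cluster's backfillfull ratio; with osd.118 at 81.53% used, a first pass might look like the sketch below (the ratio and reweight values are illustrative only, not recommendations):
# ceph osd df tree                      # spot which OSDs are near-full
# ceph osd dump | grep full_ratio       # current full/backfillfull/nearfull ratios
# ceph osd set-backfillfull-ratio 0.90  # temporarily allow backfill onto fuller OSDs
# ceph osd reweight 118 0.75            # or push data off the overfull OSD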
Did you reinstall the mons as well? If not, check whether you've removed that
OSD's auth entry (ceph auth ls; sketch below the quote).
On Fri, Nov 8, 2019, 19:27 nokia ceph wrote:
> Hi,
>
> The fifth node in the cluster was affected by a hardware failure, and hence
> the node was replaced in the Ceph cluster. But we were not able to replace
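For reference, checking and clearing a stale OSD auth entry as suggested above might go like this (a sketch; osd.0 is just an example id):
# ceph auth ls          # list every registered key, including the osd.* entries
# ceph auth get osd.0   # inspect one OSD's key and caps
# ceph auth del osd.0   # drop a stale entry so the rebuilt OSD can re-register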
Can you post your 'ceph osd tree' output on pastebin?
And do you mean the OSDs reporting the fsid mismatch are from the old, removed nodes?
On Fri, Nov 8, 2019 at 10:21 PM nokia ceph wrote:
>
> Hi,
>
> The fifth node in the cluster was affected by a hardware failure, and hence the
> node was replaced in the Ceph cluster. But we were no
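The commands that answer both questions above are roughly these (using osd.112, which appears later in the thread, as an example id):
# ceph osd tree      # full CRUSH hierarchy with the up/down state of every OSD
# ceph osd find 112  # the address and CRUSH location recorded for one OSD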
Hi,
The fifth node in the cluster was affected by a hardware failure, and hence
the node was replaced in the Ceph cluster. But we were not able to replace
it properly, and hence we uninstalled Ceph on all the nodes, deleted the
pools, zapped the OSDs, and recreated them as a new Ceph cluster
I saw many lines like this:
mon.cn1@0(leader).osd e1805 preprocess_boot from osd.112 v2:10.50.11.45:6822/158344 clashes with existing osd: different fsid (ours: 85908622-31bd-4728-9be3-f1f6ca44ed98 ; theirs: 127fdc44-c17e-42ee-bcd4-d577c0ef4479)
The OSD boot will be ignored if the fsid mismatches.
wha
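One way to see the mismatch the monitor is complaining about is to compare the uuid the mons recorded against the fsid the on-disk OSD carries, e.g. (assuming default data paths):
# ceph osd dump | grep '^osd.112 '     # the last field is the uuid the mons expect
# cat /var/lib/ceph/osd/ceph-112/fsid  # the fsid this OSD's data dir actually has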
Is osd.0 still in the down state after the restart? If so, maybe the
problem is in the mon.
Can you set the leader mon's debug_mon=20, restart one of the down-state
OSDs, and then attach the mon log file (example commands below the quote)?
On Fri, Nov 8, 2019 at 6:38 PM nokia ceph wrote:
>
> Hi,
>
> Below is the status of the OSD after restart.
>
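Spelled out, the debug steps suggested above could look like this (a sketch, using mon.cn1 from the quoted log and the default log path):
# ceph tell mon.cn1 injectargs '--debug_mon 20/20'  # raise mon debug level at runtime
# systemctl restart ceph-osd@0                      # restart one of the down OSDs
# less /var/log/ceph/ceph-mon.cn1.log               # the mon log file to attach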
Hi,
Below is the status of the OSD after restart.
# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon osd.0
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
Drop-In: /etc/systemd/system/ceph-osd@.s
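If the systemd unit reports active while the OSD stays down in the map, the next stop is usually the daemon's own output, e.g. (assuming default log locations):
# journalctl -u ceph-osd@0 -n 100           # recent messages from the unit
# tail -n 100 /var/log/ceph/ceph-osd.0.log  # the OSD's own log file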
Try restarting some of the down OSDs in 'ceph osd tree' and see what
happens (example below the quote).
On Fri, Nov 8, 2019 at 6:24 PM nokia ceph wrote:
>
> Adding my official mail id
>
> ---------- Forwarded message ---------
> From: nokia ceph
> Date: Fri, Nov 8, 2019 at 3:57 PM
> Subject: OSD's not coming up in Nautilus
> To:
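In command form, that suggestion is roughly the following (a sketch; osd.0 stands in for whichever OSD shows as down):
# ceph osd tree down            # list only the OSDs currently marked down
# systemctl restart ceph-osd@0  # restart one of them
# ceph -s                       # watch whether it rejoins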
Adding my official mail id
---------- Forwarded message ---------
From: nokia ceph
Date: Fri, Nov 8, 2019 at 3:57 PM
Subject: OSD's not coming up in Nautilus
To: Ceph Users
Hi Team,
There is one 5-node Ceph cluster which we have upgraded from Luminous to
Nautilus, and everything was going well
Hi Team,
There is one 5-node Ceph cluster which we have upgraded from Luminous to
Nautilus, and everything was going well until yesterday, when we noticed that
the Ceph OSDs are marked down and not recognized by the monitors as
running, even though the OSD processes are running.
We noticed that the
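When the OSD processes run but the mons still mark them down, comparing the two views is a common first step, e.g. (a sketch, using osd.0 as an example):
# ceph health detail              # the cluster's view of which OSDs are down
# ceph daemon osd.0 status        # the daemon's own view, via its local admin socket
# ceph osd dump | grep '^osd.0 '  # the address and uuid the mons have recorded for it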
Hello everyone,
Does anybody know when the next stable update for Ceph Mimic is coming out?
I have a really nasty bug I hope will be fixed in the patch.
The bug is OSDs going down on snaptrim; I reported it 9 months ago, on
version 13.2.4. Tracker: BUG #38124
When I reported it, a few months later 13.2.6
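For anyone hitting the same thing before a fix ships, the generic mitigations for snaptrim trouble are to pause or throttle trimming, along these lines; whether they help with #38124 specifically, I can't say (the sleep value is illustrative):
# ceph osd set nosnaptrim                               # pause snap trimming cluster-wide
# ceph tell osd.* injectargs '--osd_snap_trim_sleep 2'  # or just throttle it
# ceph osd unset nosnaptrim                             # resume once things are stable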