Folks,

I have deployed a 15-node OSD cluster using cephadm and have encountered a duplicate
OSD on one of the nodes. I am not sure how to clean that up.

root@datastorn1:~# ceph health
HEALTH_WARN 1 failed cephadm daemon(s); 1 pool(s) have no replicas configured

osd.3 is duplicated on two nodes. I would like to remove it from datastorn4,
but I'm not sure how. In the ceph osd tree output I am not seeing any
duplicate.

root@datastorn1:~# ceph orch ps | grep osd.3
osd.3    datastorn4    stopped         7m ago   3w        -    42.6G  <unknown>  <unknown>     <unknown>
osd.3    datastorn5    running (3w)    7m ago   3w    2584M    42.6G  17.2.3     0912465dcea5  d139f8a1234b
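
Is it safe to just remove the stopped daemon from datastorn4? This is roughly
what I had in mind, but I am not sure whether the orchestrator command would
only touch the stopped copy on datastorn4 or also the running osd.3 on
datastorn5, and the host-level cephadm variant is a guess on my part:

# option 1: via the orchestrator, from the admin node
root@datastorn1:~# ceph orch daemon rm osd.3 --force

# option 2: directly on datastorn4, removing only that host's copy
# (<cluster-fsid> would be the fsid reported by "ceph fsid")
root@datastorn4:~# cephadm rm-daemon --name osd.3 --fsid <cluster-fsid> --force

Can anyone confirm which of these (if either) is the right way to clean this
up without affecting the healthy osd.3 on datastorn5?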


I am also seeing the following messages repeated in the mgr log:

2022-10-21T09:10:45.226872+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098186 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
2022-10-21T09:11:46.254979+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098221 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
2022-10-21T09:12:53.009252+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098256 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
2022-10-21T09:13:59.283251+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098293 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5