Thanks Anthony,
That did the trick! All OSDs are up now.

But I am getting one warning message:

ceph status
  cluster:
    id:     1aa23d18-6eb0-11f0-823c-cc28aa448c14
    health: HEALTH_WARN
            Failed to place 1 daemon(s)
            1 failed cephadm daemon(s)

The health check reports the following:

ceph health detail
HEALTH_WARN Failed to place 1 daemon(s); 1 failed cephadm daemon(s)
[WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)
    Failed while placing node-exporter.ceph-mon3 on ceph-mon3: cephadm
exited with an error code: 1, stderr: Non-zero exit code 125 from
/usr/bin/podman container inspect --format {{.State.Status}}
ceph-1aa23d18-6eb0-11f0-823c-cc28aa448c14-node-exporter-ceph-mon3
/usr/bin/podman: stderr Error: no such container
ceph-1aa23d18-6eb0-11f0-823c-cc28aa448c14-node-exporter-ceph-mon3
Non-zero exit code 125 from /usr/bin/podman container inspect --format
{{.State.Status}}
ceph-1aa23d18-6eb0-11f0-823c-cc28aa448c14-node-exporter.ceph-mon3
/usr/bin/podman: stderr Error: no such container
ceph-1aa23d18-6eb0-11f0-823c-cc28aa448c14-node-exporter.ceph-mon3
Deploy daemon node-exporter.ceph-mon3 ...
Verifying port 0.0.0.0:9100 ...
Cannot bind to IP 0.0.0.0 port 9100: [Errno 98] Address already in use
ERROR: TCP Port(s) '0.0.0.0:9100' required for node-exporter already in use
[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
    daemon node-exporter.ceph-mon2 on ceph-mon2 is in error state
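
Before touching anything, I was going to confirm on ceph-mon3 what is
actually holding port 9100 and whether an old node-exporter container is
still around (assuming ss and podman are available on that host), roughly:

# what is listening on port 9100?
ss -tlnp | grep ':9100'

# any node-exporter container left over from the previous install?
sudo podman ps -a | grep node-exporter

# which daemons does cephadm think are deployed on this host?
sudo cephadm ls | grep -i node-exporter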

I think it's most likely due to leftovers from the previous installation.
How would I go about removing this cleanly and, more importantly, in a way
that Ceph is aware of the change, thereby clearing the warning?
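
For what it's worth, this is roughly what I had in mind (the fsid and
daemon names are taken from the output above; I haven't run any of it yet,
so please correct me if this is the wrong approach):

# on ceph-mon3: remove the leftover daemon container/systemd unit
# via cephadm on the host itself
sudo cephadm rm-daemon --name node-exporter.ceph-mon3 \
    --fsid 1aa23d18-6eb0-11f0-823c-cc28aa448c14 --force

# then ask the orchestrator to redeploy the daemon that is in error state
ceph orch daemon redeploy node-exporter.ceph-mon2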