On 16/07/2017 at 17:02, Udo Lembke wrote:
Hi,
On 16.07.2017 15:04, Phil Schwarz wrote:
...
Same result, the OSD is known by the node, but not by the cluster.
...
Firewall? Or a mismatch in /etc/hosts or DNS?
Udo
OK,
- No FW,
- No DNS issue at this point.
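For what it's worth, a sketch of the checks implied by the two points above (the monitor IP and port below are invented/default values, not taken from this thread; only the name-resolution line is meant to run anywhere):

```shell
# Name resolution must be consistent on every node; localhost is shown
# here as a stand-in, in practice each cluster hostname would be checked:
getent hosts localhost

# Monitor reachability from the new node (commented out: needs the
# actual cluster; 192.168.0.1 is a hypothetical monitor address,
# 6789 is the default mon port):
#     nc -zv 192.168.0.1 6789

# Whether the cluster sees the new OSD at all (commented out: needs the
# actual cluster):
#     ceph osd tree
```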
- Same procedure followed as with the last node, except for a full cluster
update before adding the new node and new OSD.
The only oddity is the strange behaviour of the 'pveceph createosd'
command shown in my previous mail.
...
systemd[1]: ceph-disk@dev-sdc1.service: Main process exited,
code=exited, status=1/FAILURE
systemd[1]: Failed to start Ceph disk activation: /dev/sdc1.
systemd[1]: ceph-disk@dev-sdc1.service: Unit entered failed state.
systemd[1]: ceph-disk@dev-sdc1.service: Failed with result 'exit-code'.
...
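A sketch of how one might dig under that unit failure (assumes /dev/sdc1 as in the log above; the ceph-disk and journalctl lines are commented out since they need the actual OSD node, run as root):

```shell
# Re-run the failed activation by hand to get the real error message:
#     ceph-disk -v activate /dev/sdc1
# Read the unit's own log; systemd escapes the device path in the
# instance name ("/" becomes "-", leading slash dropped):
#     journalctl -u ceph-disk@dev-sdc1.service --no-pager

# The escaped instance name can be derived mechanically:
systemd-escape --path /dev/sdc1   # prints: dev-sdc1
```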
What consequences should I expect when switching /etc/hosts from
public IPs to private IPs? (Apart from a time-travel paradox or a
bursting black hole...)
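For what it's worth, Ceph daemons take their addresses from ceph.conf and from the monmap rather than from /etc/hosts, so switching hosts entries mostly affects tools that resolve names (Proxmox clustering, pveceph, SSH), not where the daemons bind. A hypothetical ceph.conf fragment showing where the addresses actually live (all subnets and IPs below are invented for illustration):

```ini
[global]
    # client + monitor traffic (assumed subnet)
    public network  = 192.168.0.0/24
    # OSD replication/heartbeat traffic (assumed subnet)
    cluster network = 10.0.0.0/24
    # hypothetical monitor addresses; changing /etc/hosts does not move these
    mon host = 192.168.0.1,192.168.0.2,192.168.0.3
```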
Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com