Apparently this is the result of (or is exposed by) a subtle
misconfiguration.

Assume gluster1 is the hostname of the remote peer, and 192.168.98.11 is
its IP address.

I ran:

    gluster peer probe 192.168.98.11

but then:

    gluster volume create gv0 replica 2 [...] gluster1:/export/brick1

So I used the IP address to add the peer, but the hostname to add a brick
to the volume.

Instead, if I consistently use the hostname for both the peer and the
volume brick definitions, glusterd starts normally, even if DNS
resolution fails (in which case the remote peer is correctly treated as a
disconnected node).

(Of course, this assumes that at least the local peer resolves correctly,
or that its IP address is used for it.)
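For clarity, a consistent version of the two commands above might look
like the following sketch. The second brick host, gluster2, is a
hypothetical placeholder for whatever the elided [...] arguments were;
only the use of the same name (gluster1) in both commands is the point.

```shell
# Probe the remote peer by hostname, not IP, so the name recorded in the
# peer list matches the name used in the brick definition below.
gluster peer probe gluster1

# Define the brick on the same peer using the same hostname.
# "gluster2:/export/brick1" stands in for the elided second replica brick
# and is purely illustrative.
gluster volume create gv0 replica 2 \
    gluster2:/export/brick1 gluster1:/export/brick1
```

Mixing the two forms (probing by IP, then creating the volume with the
hostname) is exactly the mismatch described above.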

Thanks,

Guido
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
