Yes, this works without any issues. After re-installing a node, just
issue a "mmsdrrestore -N <hostname>" from one of the other nodes in the
cluster.
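A minimal sketch of that flow (the node name `gpfs-node2` is a placeholder, and this assumes the GPFS packages themselves have already been reinstalled on the node):

```shell
# After reinstalling the OS on the failed node (here: gpfs-node2),
# run mmsdrrestore from any healthy node in the cluster to push the
# GPFS configuration back to the reinstalled node:
mmsdrrestore -N gpfs-node2

# Then bring GPFS up on the restored node and check its state:
mmstartup -N gpfs-node2
mmgetstate -N gpfs-node2
```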
If you are using ssh authentication between the hosts, it helps to have
the ssh host keys and the root user's keys in your backup and to restore
them after the re-install; otherwise it's a hassle to distribute new
keys to all the other cluster nodes (authorized_keys and known_hosts).
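One way to capture and restore those keys, sketched here with plain tar (the paths are the usual OpenSSH defaults and may differ on your distribution; `/backup` is a placeholder):

```shell
# Before reinstall: back up the host keys and root's user keys.
tar czf /backup/ssh-keys.tar.gz /etc/ssh/ssh_host_* /root/.ssh

# After reinstall: restore them and restart sshd so the old host keys
# take effect again; the known_hosts and authorized_keys entries on
# the other cluster nodes then remain valid.
tar xzf /backup/ssh-keys.tar.gz -C /
systemctl restart sshd
```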
I am planning to implement a cluster with a bunch of old x86 machines.
The disks are not connected to the nodes via a SAN; instead, each x86
machine has some locally attached disks.
My question is about node failure: suppose only the operating system
disk fails and the NSD disks are still good. In that case I plan to
replace the failing OS disk with a new one, install the OS on it, and
re-attach the NSD disks to that node. Will this work? How can I add an
NSD back to the cluster without restoring data from other replicas,
given that the data/metadata on the NSD is not actually corrupted?
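Based on the mmsdrrestore answer earlier in the thread, the recovery might look roughly like this (a sketch, not something I have tested; `gpfs-node3` and `<filesystem>` are placeholders):

```shell
# After installing the OS on the replacement disk and reinstalling the
# GPFS packages, restore the cluster configuration to the node:
mmsdrrestore -N gpfs-node3

# The NSD descriptors live on the NSD disks themselves, so the disks
# should be recognized once the node is back. Verify the NSD-to-node
# mapping and the disk status in the file system:
mmlsnsd -M
mmlsdisk <filesystem> -L
```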
Best regards,
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org