On 03/13/2014 09:18 AM, Alejandro Planas wrote:
Hello,

We have 2 AWS instances, 1 brick on each instance, and one replicated volume
spanning both instances. When one of the instances fails completely and
autoscaling replaces it with a new one, we are having trouble rebuilding
the replicated volume.

Can anyone shed some light on the gluster commands required to
include this new replacement instance (with one brick) as a member of
the replicated volume?


You can probably use:

gluster volume replace-brick <volname> <old-brick> <new-brick> commit force

This will remove the old brick from the volume and bring the new brick into the volume. Self-heal can then synchronize data onto the new brick.
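For the autoscaling scenario above, the full sequence might look like this. It is a sketch only: the hostnames, volume name, and brick paths are hypothetical placeholders for your setup.

```shell
# On a surviving node: add the replacement instance to the trusted pool
gluster peer probe new-server.example.com

# Swap the failed brick for the brick on the new instance
gluster volume replace-brick myvol \
    old-server.example.com:/export/brick1 \
    new-server.example.com:/export/brick1 \
    commit force

# Trigger a full self-heal so data is copied onto the new brick,
# then check heal progress
gluster volume heal myvol full
gluster volume heal myvol info
```

Note that `commit force` is needed because the old instance is gone and cannot participate in a graceful migration.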

Regards,
Vijay

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
