Bernhard Glomm
IT Administration

Phone:   +49 (30) 86880 134
Fax:     +49 (30) 86880 100
Skype:   bernhard.glomm.ecologic
               
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

Begin forwarded message:

From: bernhard glomm <[email protected]>
Subject: Re: [Gluster-users] Gluster Volume Replication using 2 AWS instances on Autoscaling
Date: March 13, 2014 6:08:10 PM GMT+01:00
To: Vijay Bellur <[email protected]>

??? I thought replace-brick was not recommended at the moment.
On a replica 2 volume in 3.4.2 I successfully use:

volume remove-brick <vol-name> replica 1 <old-brick-name> force
# replace the old brick with the new one, e.g. mount another disk, then
volume add-brick <vol-name> replica 2 <new-brick-name> force
volume heal <vol-name>
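The same sequence, written out with hypothetical names (a volume gv0 replicated between server1 and a replacement instance server2, bricks under /bricks/gv0 — all names are illustrative, not from the thread; run on a node already in the trusted pool):

```shell
# Assumed names: volume gv0, failed brick on server1, new brick on server2.
# Drop to replica 1, removing the dead brick (it is unreachable, so force):
gluster volume remove-brick gv0 replica 1 server1:/bricks/gv0 force

# Add the replacement instance to the trusted pool, then restore replica 2:
gluster peer probe server2
gluster volume add-brick gv0 replica 2 server2:/bricks/gv0 force

# Trigger self-heal so data is copied onto the new brick:
gluster volume heal gv0
```

On AWS autoscaling the peer probe step matters: the replacement instance is a brand-new host and is not yet a member of the trusted pool.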

hth

Bernhard


On Mar 13, 2014, at 5:48 PM, Vijay Bellur <[email protected]> wrote:

On 03/13/2014 09:18 AM, Alejandro Planas wrote:
> Hello,
> 
> We have 2 AWS instances, 1 brick on each instance, one replicated volume
> among both instances. When one of the instances fails completely and
> autoscaling replaces it with a new one, we are having issues recreating
> the replicated volume again.
> 
> Can anyone provide some light on the gluster commands required to
> include this new replacement instance (with one brick) as a member of
> the replicated volume?
> 

You can probably use:

volume replace-brick <volname> <old-brick> <new-brick> commit force

This will remove old-brick from the volume and bring new-brick into the volume. Self-healing can then synchronize data to the new brick.
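With the same hypothetical names as above (volume gv0, dead brick on server1, replacement brick on server2 — illustrative only), that one-step alternative would look like:

```shell
# Make the new instance a member of the trusted pool first:
gluster peer probe server2

# Swap the failed brick for the new one in a single step:
gluster volume replace-brick gv0 server1:/bricks/gv0 server2:/bricks/gv0 commit force

# Check self-heal progress as data is copied onto the replacement brick:
gluster volume heal gv0 info
```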

Regards,
Vijay

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users


