On 11/15/2017 12:54 PM, Daniel Berteaud wrote:



On 13/11/2017 at 21:07, Daniel Berteaud wrote:

On 13/11/2017 at 10:04, Daniel Berteaud wrote:

Could I just remove the content of the brick (including the .glusterfs directory) and reconnect?


If it is only the brick that is faulty on the bad node, and everything else is fine (glusterd running, the node still part of the trusted storage pool, etc.), you could just kill the brick first and then do step 13 of "10.6.2. Replacing a Host Machine with the Same Hostname" (the mkdir of a non-existent dir, followed by the setfattr of a non-existent key) from https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/pdf/Administration_Guide/Red_Hat_Storage-3.1-Administration_Guide-en-US.pdf, then restart the brick by restarting glusterd on that node. Read sections 10.5 and 10.6 of that guide for a better understanding of how to replace bricks.
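A minimal sketch of that sequence, assuming the volume is named vmstore, a client mount at /mnt/vmstore, and a systemd-based node (names and paths are illustrative, not from the doc):

# find the PID of the faulty brick on the bad node, then kill it
gluster volume status vmstore
kill <brick-pid>

# from a client mount, touch a non-existent dir and xattr so the
# surviving brick gets marked as the self-heal source
mkdir /mnt/vmstore/nonexistent-dir
rmdir /mnt/vmstore/nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/vmstore
setfattr -x trusted.non-existent-key /mnt/vmstore

# restart glusterd on the bad node to respawn the brick process
systemctl restart glusterd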


In fact, what would be the difference between reconnecting the brick with a wiped FS, and using the following?

gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore
gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
gluster volume heal vmstore full

This is the approach explained here: http://lists.gluster.org/pipermail/gluster-users/2014-January/015533.html
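Either way, I suppose heal progress could then be watched with something like (volume name assumed):

gluster volume heal vmstore info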

No one can help?

Cheers,
Daniel

--


Daniel Berteaud

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
www.firewall-services.com



_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users

