On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
On 15/11/2017 at 09:45, Ravishankar N wrote:
If it is only the brick that is faulty on the bad node, but everything else is fine (glusterd running, the node still part of the trusted storage pool, etc.), you could just kill the brick first and do step 13 in "10.6.2. Replacing a Host Machine with the Same Hostname" (the mkdir of a non-existent directory, followed by a setfattr of a non-existent key) of https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/pdf/Administration_Guide/Red_Hat_Storage-3.1-Administration_Guide-en-US.pdf, then restart the brick by restarting glusterd on that node. Read sections 10.5 and 10.6 in the doc to get a better understanding of replacing bricks.
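
For reference, a minimal sketch of that sequence, assuming a replica volume named VOLNAME mounted via FUSE at /mnt/VOLNAME on a client (the volume name, mount path and brick PID here are placeholders, not values from this thread):

    # 1. On the bad node, kill only the faulty brick process
    #    (its PID is shown by `gluster volume status VOLNAME`)
    kill <brick-pid>

    # 2. From a client FUSE mount of the volume, create and remove a
    #    non-existent directory, then set and remove a non-existent
    #    xattr, so the good brick records pending operations that
    #    blame the bad one
    mkdir /mnt/VOLNAME/nonexistent-dir
    rmdir /mnt/VOLNAME/nonexistent-dir
    setfattr -n trusted.non-existent-key -v abc /mnt/VOLNAME
    setfattr -x trusted.non-existent-key /mnt/VOLNAME

    # 3. On the bad node, restart glusterd so the brick comes back up;
    #    self-heal should then sync from the good brick to the bad one
    systemctl restart glusterd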

Thanks, I'll try that.
Is there any way, in this situation, to check which file will be healed from which brick before reconnecting? Using some getfattr tricks?
Yes, there are afr xattrs that determine the heal direction for each file. The good copy will have non-zero trusted.afr* xattrs that blame the bad one, and heal will happen from good to bad. If both bricks have xattrs blaming each other, then the file is in split-brain.
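
As a hedged illustration of how to inspect those xattrs: on each brick's backend path (not the client mount), something like the following can be used; the volume name, client index and file path below are placeholders.

    # Dump all xattrs of the file's copy on this brick, in hex
    getfattr -d -m . -e hex /bricks/brick1/path/to/file

    # Hypothetical output on the good copy:
    #   trusted.afr.VOLNAME-client-1=0x000000020000000100000000
    # Non-zero pending counters here blame the peer brick (client-1),
    # so heal will flow from this copy to that one. If each copy's
    # afr xattrs blame the other, the file is in split-brain.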
-Ravi

Regards, Daniel

--

Daniel Berteaud

FIREWALL-SERVICES SAS.
Free Software Services Company
Tel: 05 56 64 15 32
Matrix: @dani:fws.fr
www.firewall-services.com


_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
