>1 node I wiped it clean and the other I left the 3 gluster brick drives
>untouched.

If the last node from the original cluster is untouched, you can:
1. Go to the old host and use 'gluster volume remove-brick <VOL> replica 1
wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
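For example, if node1 is the untouched host and node2/node3 are the ones being rebuilt (the hostnames, volume name and brick paths here are just placeholders):
gluster volume remove-brick data replica 1 node3:/gluster_bricks/data1/data1 node2:/gluster_bricks/data1/data1 force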
2. Detach the 2 nodes that you kicked out:
gluster peer detach node2
gluster peer detach node3
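You can verify with 'gluster peer status' - only the untouched node should stay in the trusted pool.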

3. Reinstall the wiped node and install gluster there 
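On a plain CentOS-like host that would be something along these lines (the package name is an assumption for plain CentOS; on oVirt Node the Gluster bits are already there):
dnf install glusterfs-server
systemctl enable --now glusterd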
4. Create the filesystem on the brick:
mkfs.xfs -i size=512 /dev/mapper/brick_block_device
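If the brick lives on LVM (the /dev/mapper path suggests it does), a minimal sketch for recreating it, assuming the disk is /dev/sdb and reusing the VG/LV names from the fstab example in step 5:
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -n data1 -l 100%FREE data
mkfs.xfs -i size=512 /dev/data/data1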
5. Prepare the fstab entry for the brick (you can copy the entry from the working node and adapt it). Here is an example:
/dev/data/data1 /gluster_bricks/data1 xfs inode64,noatime,nodiratime,nouuid,context="system_u:object_r:glusterd_brick_t:s0" 0 0
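Don't forget to create the mount point first:
mkdir -p /gluster_bricks/data1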

6. Create the SELinux label via 'semanage fcontext -a -t glusterd_brick_t "/gluster_bricks/data1(/.*)?"' (remove only the single quotes) and run 'restorecon -RFvv /gluster_bricks/data1'
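You can double-check the context with 'ls -ldZ /gluster_bricks/data1'.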
7. Mount the FS and create a dir inside the mount point
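For example (the subdir name is just an example):
mount /gluster_bricks/data1
mkdir /gluster_bricks/data1/data1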
8. Extend the gluster volume:
'gluster volume add-brick <VOL> replica 2
new_host:/gluster_bricks/<dir>/<subdir>'
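If the rebuilt node is not back in the trusted pool yet, probe it first with 'gluster peer probe node3'. With the placeholder names from step 1, the add-brick would look something like:
gluster volume add-brick data replica 2 node3:/gluster_bricks/data1/data1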

9. Run a full heal:
gluster volume heal <VOL> full
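You can track the progress with 'gluster volume heal <VOL> info' until the number of entries drops to 0.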

10. Repeat for the other node and remember to never wipe 2 nodes at a time :)


Good luck, and take a look at the Quick Start Guide - Gluster Docs.



Best Regards,
Strahil Nikolov