Hi,
I'm testing a replicated volume with a 3-VM setup:
gfs1:/export/sda3/brick
gfs2:/export/sda3/brick
gfsc as the client

The volume name is gfs.
The Gluster version in this test is 3.6.3, on CentOS 6.6.
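For reference, the volume was set up more or less like this (the client
mount point /mnt/gfs is just an example, not the actual path):

  # on gfs1, after peering with gfs2
  gluster peer probe gfs2
  gluster volume create gfs replica 2 gfs1:/export/sda3/brick gfs2:/export/sda3/brick
  gluster volume start gfs

  # on the client gfsc
  mount -t glusterfs gfs1:/gfs /mnt/gfs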

A 2-replica volume is created, and I try to simulate a brick failure with
the following steps (the rough commands are spelled out after the list):
1. stop the glusterd and gluster processes on gfs1
2. unmount the brick
3. mkfs.xfs the brick
4. mount it back
5. start the gluster service
6. volume remove-brick gfs replica 1 gfs1:/export/sda3/brick force
7. volume add-brick gfs replica 2 gfs1:/export/sda3/brick
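
In case the details matter, the commands behind those steps were roughly
the following, run on gfs1 (I'm assuming here that the brick filesystem is
/dev/sda3 mounted at /export/sda3; the service commands are the CentOS 6
ones):

  # 1-2: stop gluster and unmount the failed brick
  service glusterd stop
  pkill glusterfsd
  umount /export/sda3

  # 3-4: wipe the filesystem and mount it back (device name assumed)
  mkfs.xfs -f /dev/sda3
  mount /dev/sda3 /export/sda3
  mkdir -p /export/sda3/brick

  # 5: start the gluster service again
  service glusterd start

  # 6-7: drop the dead brick from the replica set, then add it back
  gluster volume remove-brick gfs replica 1 gfs1:/export/sda3/brick force
  gluster volume add-brick gfs replica 2 gfs1:/export/sda3/brick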

At this point, "volume info gfs" shows the volume as a 2-brick replicated
volume, which is fine.
But Gluster somehow thinks the volume doesn't need healing.
Issuing "volume heal gfs full" did not heal the volume, and no data was
copied from the gfs2 brick to gfs1.
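
For completeness, this is roughly how I'm triggering the heal and checking
the result (run from one of the server nodes):

  # trigger a full self-heal, then see what Gluster thinks still needs healing
  gluster volume heal gfs full
  gluster volume heal gfs info

  # sanity-check that both bricks are online
  gluster volume status gfs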
Is the problem in the replacement procedure, or something else?
Please advise ;)

Mike
