Hello all! I had a volume with only a local brick running VMs, and recently added a second (remote) brick to the volume. After adding the brick, the heal command reported the following:
> root@gluster-gu1:~# gluster volume heal gv1 info
> Brick gluster-gu1:/mnt/gv_gu1/brick
> / - Is in split-brain
> Status: Connected
> Number of entries: 1
>
> Brick gluster-gu2:/mnt/gv_gu1/brick
> Status: Connected
> Number of entries: 0

All other files healed correctly. I noticed that on the XFS filesystem of the brick there is a directory named localadmin, but when I ls the gluster volume mountpoint I get an error and a lot of question marks:

> root@gluster-gu1:/var/lib/vmImages_gu1# ll
> ls: cannot access 'localadmin': No data available
> d????????? ? ? ? ? ? localadmin/

This happens on both servers that have the volume gv1 mounted: both see the directory like that. Meanwhile, on the XFS brick, /mnt/gv_gu1/brick/localadmin is an accessible directory:

> root@gluster-gu1:/mnt/gv_gu1/brick/localadmin# ll
> total 4
> drwxr-xr-x 2 localadmin root    6 Mar  7 09:40 ./
> drwxr-xr-x 6 root       root 4096 Mar  7 09:40 ../

When I added the second brick to the volume, this localadmin directory was not replicated there, I imagine because of this strange behavior. Can someone help me with this? Thanks!
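In case it is useful, this is a sketch of the checks I can run next on each brick (assuming getfattr from the attr package is available; the exact trusted.afr.* key names on my volume may differ):

> # Dump all extended attributes of the directory on each brick, in hex,
> # to compare the AFR changelog xattrs (trusted.afr.*) and the GFID:
> root@gluster-gu1:~# getfattr -d -m . -e hex /mnt/gv_gu1/brick/localadmin
> root@gluster-gu2:~# getfattr -d -m . -e hex /mnt/gv_gu1/brick/localadmin
>
> # List entries the volume itself considers in split-brain:
> root@gluster-gu1:~# gluster volume heal gv1 info split-brain

I can post the getfattr output from both bricks if that helps with the diagnosis.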
_______________________________________________ Gluster-users mailing list [email protected] https://lists.gluster.org/mailman/listinfo/gluster-users
