Dear list,

I seem to have gotten myself into a tricky situation. Today I brought up a
shiny new server with new disk arrays and attempted to move one brick of a
replica 2 distribute/replicate volume from an older server to the new one
using the `replace-brick` command:

# gluster volume replace-brick homes wingu0:/mnt/gluster/homes \
    wingu06:/data/glusterfs/sdb/homes commit force
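
If I understand the docs correctly, `commit force` does not migrate data at
all; the new brick is supposed to be filled in by self-heal from its replica
partner, which I assume can be monitored with something like:

# gluster volume heal homes info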

The command was successful and I see the new brick in the output of
`gluster volume info`. The problem is that Gluster doesn't seem to be
migrating the data, and now the original brick that I replaced is no longer
part of the volume (and a few terabytes of data are just sitting on the old
brick):

# gluster volume info homes | grep -E "Brick[0-9]:"
Brick1: wingu4:/mnt/gluster/homes
Brick2: wingu3:/mnt/gluster/homes
Brick3: wingu06:/data/glusterfs/sdb/homes
Brick4: wingu05:/data/glusterfs/sdb/homes
Brick5: wingu05:/data/glusterfs/sdc/homes
Brick6: wingu06:/data/glusterfs/sdc/homes
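
For what it's worth, the old brick on wingu0 is still mounted and readable,
and I assume its Gluster metadata could be verified with something along
these lines (trusted.glusterfs.volume-id being the xattr I'd expect to find
on a brick root):

# getfattr -n trusted.glusterfs.volume-id -e hex /mnt/gluster/homes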

I see the Gluster docs have a more involved procedure for replacing bricks
that uses getfattr/setfattr¹. How can I tell Gluster about the old brick?
If it helps, I still have a backup of the old volfile thanks to rpm's
.rpmsave mechanism.
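
For reference, my reading of that procedure is roughly the following dance
on a client mount of the volume, which marks the surviving replica dirty so
that self-heal copies everything to the new brick (adapted from the docs
with my volume in mind; /mnt/homes is a hypothetical client mount point):

# mkdir /mnt/homes/name-of-nonexistent-dir
# rmdir /mnt/homes/name-of-nonexistent-dir
# setfattr -n trusted.non-existent-key -v abc /mnt/homes
# setfattr -x trusted.non-existent-key /mnt/homes

I'm not sure whether that still applies once the old brick has already been
swapped out, though.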

We are using Gluster 5.6 on CentOS 7. Thank you for any advice you can give.

¹
https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick

-- 
Alan Orth
alan.o...@gmail.com
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ―Friedrich Nietzsche
