Hi

I have a pure distributed Gluster volume with nine nodes. While trying to remove one node, I ran:

    gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 start
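I then checked progress with the matching status command (same volume and brick as above):

    gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 status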

It completed, but with around 17,000 failures:

        Node    Rebalanced-files      size     scanned    failures    skipped       status    run time in h:m:s
    --------    ----------------    ------    --------    --------    -------    ---------    -----------------
    nodename             4185858    27.5TB     6746030       17488          0    completed            405:15:34

I can see that there is still about 1.5 TB of data on the node I was trying
to remove.
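For what it's worth, I am estimating that on the brick itself, roughly like this (skipping Gluster's internal .glusterfs directory):

    du -sh --exclude='.glusterfs' /glusteratlas/brick007/gv0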

I am not sure what to do now. Should I run the remove-brick start command
again so that the files which failed are retried?

Or should I run commit first and then try to remove the node again?
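In case it helps, I was planning to look for the failure reasons in the rebalance log on that node, along these lines (assuming the default log location for this volume; the exact path may differ on my setup):

    grep ' E ' /var/log/glusterfs/atlasglust-rebalance.log | less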

Please advise, as I don't want to lose any files.

Thanks

Kashif
