Hi, I had a similar problem. In my case the rebalance did not finish because there was not enough free space on the remaining bricks to migrate the data to. The cause was the 1% reservation option, which is set by default on distributed volumes; I set it to 0%, but the change was ignored by gluster.
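If it helps, the commands below are roughly what I used to check and change the reserve. I'm assuming the option in question is storage.reserve (the per-brick reserve, 1% by default), and I'm using the volume name volbackups from your log just as an example:

  # show the current reserve (1 = 1% of each brick kept free by default)
  gluster volume get volbackups storage.reserve

  # lower/disable the reserve; in my case the change seemed to be ignored,
  # so verify it afterwards with the "get" above
  gluster volume set volbackups storage.reserve 0

  # confirm the remaining bricks really have headroom before retrying remove-brick
  gluster volume status volbackups detail
  df -h /path/to/brick   # placeholder path, use your actual brick mount points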
Greetings,
Taste

On 25.05.2019 07:00:13, Nithya Balachandran wrote:
> Hi Brandon,
> Please send the following:
>
> 1. the gluster volume info
> 2. Information about which brick was removed
> 3. The rebalance log file for all nodes hosting removed bricks.
>
> Regards,
> Nithya
>
> On Fri, 24 May 2019 at 19:33, Ravishankar N <[email protected]> wrote:
> > Adding a few DHT folks for some possible suggestions.
> >
> > -Ravi
> >
> > On 23/05/19 11:15 PM, [email protected] wrote:
> > > Does anyone know what should be done on a glusterfs v5.6 "gluster volume remove-brick" operation that fails? I'm trying to remove 1 of 8 distributed smaller nodes for replacement with a larger node.
> > >
> > > The "gluster volume remove-brick ... status" command reports status failed and failures = "3"
> > >
> > > cat /var/log/glusterfs/volbackups-rebalance.log
> > > ...
> > > [2019-05-23 16:43:37.442283] I [MSGID: 109028] [dht-rebalance.c:5070:gf_defrag_status_get] 0-volbackups-dht: Rebalance is failed. Time taken is 545.00 secs
> > >
> > > All servers are confirmed to be in good communication, updated and freshly rebooted; the remove-brick was retried a few times and failed each time.
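P.S. Brandon, for gathering what Nithya asked for, something like the following should do it (volbackups is the volume name from your log; server1:/data/brick1 is only a placeholder for the brick actually being removed):

  # 1. volume layout / brick list
  gluster volume info volbackups

  # 2. status of the pending remove-brick (use the real brick path)
  gluster volume remove-brick volbackups server1:/data/brick1 status

  # 3. error lines from the rebalance log on every node hosting a removed brick
  #    ("E" marks error-severity entries in the gluster logs)
  grep ' E ' /var/log/glusterfs/volbackups-rebalance.log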
_______________________________________________ Gluster-users mailing list [email protected] https://lists.gluster.org/mailman/listinfo/gluster-users
