Hi all
After some testing and debugging I was able to reproduce the problem in our
lab. It turns out that this behaviour occurs when root-squashing is turned
on; see the details below. With root-squashing turned off, rebalancing works
just fine.
Volume Name: public
Type:
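Since the failures only show up with root-squashing enabled, one workaround worth trying is to disable root-squash for the duration of the rebalance and re-enable it afterwards. A sketch of the commands, assuming the volume option is `server.root-squash` (check the option name and availability against your GlusterFS version; `gluster volume get` only exists in newer releases, on older ones use `gluster volume info` to see reconfigured options):

```shell
# Check whether root-squash is currently enabled on the volume
# (newer GlusterFS versions only; otherwise inspect `gluster volume info public`)
gluster volume get public server.root-squash

# Temporarily disable root-squash for the rebalance
gluster volume set public server.root-squash off

# Run the rebalance and watch its progress
gluster volume rebalance public start
gluster volume rebalance public status

# Re-enable root-squash once the rebalance has completed
gluster volume set public server.root-squash on
```

This is only a sketch of a workaround, not a fix for the underlying interaction between root-squash and the rebalance process.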
Hi all
After expanding our cluster we are seeing failures while rebalancing. This
doesn't look good to me, so can anybody explain how these failures arise,
how they can be fixed, and what the consequences might be?
$ gluster volume rebalance public status
I am experimenting with gluster management. I added a couple of bricks and
ran a rebalance, first fix-layout and then migrate-data. When I do this I
seem to get a lot of failures:
gluster volume rebalance MAIL status
Node Rebalanced-files size
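To see which files the failure count refers to, it can help to inspect the rebalance log on each node. A sketch, assuming the default log directory `/var/log/glusterfs/` and the usual `<volname>-rebalance.log` naming (both may differ if glusterd was configured with a custom log path):

```shell
# Look for migration errors in the per-volume rebalance log
# (default log location assumed; adjust for your installation)
grep -iE "migrate|failed|error" /var/log/glusterfs/MAIL-rebalance.log | tail -n 20
```

The log lines usually name the file being migrated and the errno, which makes it much easier to tell spurious reporting from real migration problems.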
This is a bug in the way certain operations are wrongly reported as
failures; it is a reporting issue, not an actual problem with the data
migration. We will be fixing this in the next release.
Avati
On Thu, Jun 28, 2012 at 12:57 PM, James Devine <fxmul...@gmail.com> wrote:
> I am messing around with gluster management and I've added a couple bricks