Hi James, thanks for your suggestions. I have used the rebalance
command several times after adding new bricks, and I am pretty sure that
it equalizes the amount of data stored on each brick rather than the
amount of free space. It would be useful to have an option to equalize
free space instead - that would certainly solve the problem of
non-uniform brick sizes.
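(In case it is useful to anyone else: the way I check this is simply to compare df output for the brick filesystems on each server before and after a rebalance. The mount points below are made up for illustration.)

    # on each storage server - compare used and available space on the brick filesystems
    df -h /mnt/brick1 /mnt/brick2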
There used to be a server vol file option "min-free-disk", to leave
space on bricks for files to grow. That would solve my problem as
well. Do you know if this is available in 3.1.x or 3.2, or if there is
a CLI command for setting it?
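For reference, the vol file stanza I remember looked roughly like the following. This is from memory, so the translator and subvolume names are only illustrative and the exact syntax may differ between versions:

    volume my-dht
        type cluster/distribute
        # stop creating new files on a brick once less than 10% of it is free
        option min-free-disk 10%
        subvolumes server1-brick server2-brick
    end-volume

If there is now a CLI equivalent, I would guess it is something like "gluster volume set <volname> cluster.min-free-disk 10%", but I have not been able to confirm that.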
-Dan.
On 02/05/11 13:11, Burnash, James wrote:
Hi Dan.
I believe that you would have to run this command:
gluster volume rebalance <volname> start
at which point, Gluster will try to balance the files amongst the
storage nodes. Whether or not it will accommodate non-uniform bricks I
don't know for sure (since mine are uniform), but I believe that it
will look at actual space available and try to make intelligent
decisions on where to place files.
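Once it is running, you can keep an eye on progress with the status sub-command -- this is the syntax I use on my installation, so please verify it against your version:

    gluster volume rebalance <volname> status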
Please check with the devs before implementing my suggestion, however
-- I don't want to cause any harm since I'm not certain.
James Burnash, Unix Engineering
From: [email protected]
[mailto:[email protected]] On Behalf Of Dan Bretherton
Sent: Sunday, May 01, 2011 9:01 AM
To: gluster-users
Subject: [Gluster-users] Non-uniform backend brick sizes
Importance: Low
Hello All-
After posting to a previous thread about this issue
(http://gluster.org/pipermail/gluster-users/2011-April/007157.html) I
decided to start a new thread, mainly because I think I have found a
problem relating to this setup. Our servers vary in size quite a lot,
so some of the bricks in one particular volume are 100% full. This
has not caused us any problems until now, because new files are always
created on larger bricks where there is still space. However,
yesterday a user complained that he was getting "device full" errors
even though df reported several hundred GB free in the volume. The
problem turned out to be caused by over-writing pre-existing files
that were stored on one or more full bricks. Deleting the old files
before creating them again cured the problem, because the new files
were then created on larger bricks. Is this a known problem when
using distributed or distributed/replicated volumes with non-uniform
backend sizes, and is there any way to avoid it?
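To make the workaround concrete, this is roughly what we did for each affected file (the path is invented for illustration; we copied each file aside first so nothing was lost):

    # copy the file off the volume, delete the copy stuck on the full brick,
    # then copy it back so it is re-created on a brick with free space
    cp /glusterfs/data/run42/output.nc /tmp/output.nc
    rm /glusterfs/data/run42/output.nc
    cp /tmp/output.nc /glusterfs/data/run42/output.nc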
Lifting some comments and questions from the other thread...
From this posting:
http://gluster.org/pipermail/gluster-users/2011-March/007103.html
> I see that
> your backend sizes are different... Its preferred to keep them uniform.
And from this posting:
http://gluster.org/pipermail/gluster-users/2011-March/007104.html
> try to keep the backend uniform to avoid any possible issues which may
> arise later.
Please could someone comment on the "possible issues which may arise"
with a setup involving non-uniform backend brick sizes. All
comments and suggestions would be much appreciated.
-Dan.