Pankil-
I'd be interested to know the sizes of the /mnt and /mnt2 partitions.
Are they the same? Can you run the following and report the output:
% df -h /mnt /mnt2
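A line for each mount showing Size, Used, Avail, and Use% is what I'm after, e.g. (device names and sizes below are made-up placeholders, not a guess at your setup):

  Filesystem            Size  Used Avail Use% Mounted on
  /dev/sdb1             400G  396G  4.0G  99% /mnt
  /dev/sdc1             400G  120G  280G  30% /mnt2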
Thanks.
-Matt
On Jun 22, 2009, at 1:32 PM, Pankil Doshi wrote:
Hey Alex,
Will the Hadoop balancer utility work in this case?
Pankil
On Mon, Jun 22, 2009 at 4:30 PM, Alex Loddengaard <a...@cloudera.com> wrote:
Are you seeing any exceptions because of the disk being at 99% capacity?

Hadoop should do something sane here and write new data to the disk with more capacity. That said, it is ideal to be balanced. As far as I know, there is no way to balance an individual DataNode's hard drives (Hadoop does round-robin scheduling when writing data).
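For what it's worth, the directories a DataNode writes to are listed in dfs.data.dir (in hdfs-site.xml, or hadoop-site.xml on older releases). A minimal sketch for the two mounts mentioned in this thread, with example paths, would be:

  <property>
    <name>dfs.data.dir</name>
    <value>/mnt/hadoop/dfs/data,/mnt2/hadoop/dfs/data</value>
  </property>

New blocks get round-robined across the directories in that list; blocks already written to one disk aren't moved.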
Alex
On Mon, Jun 22, 2009 at 10:12 AM, Kris Jirapinyo <kjirapi...@biz360.com> wrote:
Hi all,
How does one handle a mount running out of space for HDFS? We have two disks mounted on /mnt and /mnt2 respectively on one of the machines used for HDFS, and /mnt is at 99% while /mnt2 is at 30%. Is there a way to tell the machine to balance itself out? I know that for the cluster you can balance it using start-balancer.sh, but I don't think that will tell the individual machine to balance itself out. Our "hack" right now would be just to delete the data on /mnt; since we have 3x replication, we should be OK. But I'd prefer not to do that. Any thoughts?
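(For reference, the cluster-level balancing I mean is something along the lines of:

  % bin/start-balancer.sh -threshold 10

where the threshold value is just an example. As I understand it, that only moves blocks between DataNodes, not between the two mounts on a single machine.)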