Thanks, Andrew, for your answer.
I may have found the HDFS parameter I was looking for:
dfs.datanode.du.reserved (default: 0)
Reserved space in bytes per volume. Always leave this much space free for non-DFS use.
http://hadoop.apache.org/common/docs/current/hdfs-default.html
https://issues.apache.org/jira/browse/HADOOP-1463
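If I read the docs correctly, this would go into hdfs-site.xml on each datanode. A minimal sketch (the 10 GB value below is just an illustration, not a recommendation):

  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- hypothetical value: reserve ~10 GB per volume for non-DFS use -->
    <value>10737418240</value>
  </property>

Since the setting is read per node, I suppose each of the three machines could get its own value in its local hdfs-site.xml.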
The book "Pro Hadoop" page 115, also mentions the "balancer" service and
the
dfs.balance.bandwidthPerSec
parameter also documented at:
http://hadoop.apache.org/common/docs/current/hdfs-default.html
dfs.balance.bandwidthPerSec (default: 1048576)
Specifies the maximum amount of bandwidth that each datanode can utilize for the balancing purpose, in terms of the number of bytes per second.
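Presumably this one would also go in hdfs-site.xml; a sketch raising the limit to 10 MB/s (illustrative value only):

  <property>
    <name>dfs.balance.bandwidthPerSec</name>
    <!-- max bytes/sec each datanode may use for balancing; the default is 1048576 (1 MB/s) -->
    <value>10485760</value>
  </property>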
However, I do not see the "start-balancer.sh" script in HBase.
Would it be possible to use those Hadoop parameters in an HBase setup?
Thanks
TR
Andrew Purtell wrote:
Hi,
Short answer: No.
Longer answer: HBase uses the underlying filesystem (typically HDFS) to replicate and persist data. This is independent of the key space. Any special block placement policy like you want would be handled by the filesystem. To my knowledge, HDFS doesn't support it. HDFS also does not like heterogeneous backing storage at the moment. It causes problems if one node fills before the others, and there is not yet an automatic mechanism for moving blocks from full nodes to less utilized ones, though I see there is an issue for that: http://issues.apache.org/jira/browse/HDFS-339 . I wouldn't recommend a setup like you propose.
- Andy
________________________________
From: Tux Racer <[email protected]>
To: [email protected]
Sent: Thu, November 26, 2009 11:14:15 AM
Subject: newbie question on disk usage on node with different disk size
Hello HBase Users!
I am trying to find some pointers on how to configure the HBase region servers, and in particular how the disks will be filled on each node.
Say for instance that I have a small cluster of 3 nodes:
node 1 has a 100 GB disk,
node 2 has a 200 GB disk,
and node 3 has a 300 GB disk.
Is there a way to tell HBase that it should store the keys proportionally to each node's disk space?
(i.e., to have at some stage each disk filled at 50%: 50/100/150 GB of space used)
Or is that a pure Hadoop configuration question?
I looked at the files in the ~/hbase-0.20.1/conf/ folder with no luck.
Thanks
TR