On 3/30/10 8:12 PM, "steven zhuang" <[email protected]> wrote:
> hi, guys,
> we have some machine with 1T disk, some with 100GB disk,
> I have this question that is there any means we can limit the
> disk usage of datanodes on those machines with smaller disk?
> thanks!

You can use dfs.datanode.du.reserved, but be aware that there are *no* limits on mapreduce's usage, other than what you can create with file system quotas. I've started recommending creating file system partitions in order to work around Hadoop's crazy space reservation ideas.
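For reference, a minimal sketch of what that looks like in hdfs-site.xml on the small-disk nodes (the reserved value here is illustrative, not a recommendation):

    <!-- hdfs-site.xml on the 100GB datanodes -->
    <property>
      <name>dfs.datanode.du.reserved</name>
      <!-- bytes per volume that HDFS will leave free for non-HDFS use;
           20 GB shown here purely as an example -->
      <value>21474836480</value>
    </property>

Keep in mind this only tells HDFS how much to leave alone per volume. MapReduce scratch space (mapred.local.dir) doesn't respect it, which is why putting mapred.local.dir on its own partition is the only hard fence.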
