> From: [email protected]
> To: [email protected]
> Subject: Re: is there any way we can limit Hadoop Datanode's disk usage?
> Date: Wed, 31 Mar 2010 18:09:04 +0000
>
> On 3/30/10 8:12 PM, "steven zhuang" <[email protected]> wrote:
>
> > hi, guys,
> > we have some machines with 1 TB disks and some with 100 GB disks.
> > Is there any way we can limit the disk usage of the datanodes on the
> > machines with the smaller disks?
> > thanks!
>
>
> You can use dfs.datanode.du.reserved, but be aware that there are *no*
> limits on mapreduce's usage, other than what you can create with file
> system quotas.
>
> I've started recommending creating file system partitions in order to work
> around Hadoop's crazy space reservation ideas.
>
Hmmm.
Our sysadmins decided to put each of the JBOD disks into its own volume
group.
That kind of makes sense if you want to limit any impact Hadoop could cause
(assuming someone forgot to set dfs.datanode.du.reserved).
But I do agree that, at a minimum, the space Hadoop uses should live on its
own partition and not on the '/' (root) file system.
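For anyone digging this up from the archives, here's a rough sketch of what
that looks like in hdfs-site.xml. The 10 GB reservation and the /data1 mount
point are just placeholders for illustration, and dfs.data.dir is the
data-directory property name from the 0.20-era releases:

  <!-- Leave ~10 GB per volume free for non-HDFS use (value is in bytes). -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>
  </property>

  <!-- Point the datanode at a dedicated mount, not anywhere under '/'. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data1/hdfs/data</value>
  </property>

Keep in mind the reservation only constrains HDFS; as noted above, MapReduce's
local spill space still needs its own partition or file system quota.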