My problem was caused purely by copying files to HDFS using [hadoop dfs
-put].  No map-reduce activity was going on at the time (and all of the jobs
I had around then were counting jobs with very strong reductions in data
volume due to combiner functions).
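
For reference, the setting we're talking about is the datanode reserved-space
option.  Here's a minimal sketch (assuming the dfs.datanode.du.reserved key
from HADOOP-1463 and a made-up 10 GB value) of setting it programmatically;
the same key would normally go in hadoop-site.xml on each slave node.  As
Hairong notes below, this only bounds what the datanode uses for HDFS blocks,
not the intermediate data that map/reduce tasks spill to the same disks.

// Minimal sketch, illustrative value only.
import org.apache.hadoop.conf.Configuration;

public class ReservedSpaceSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Reserve ~10 GB per volume for non-DFS use (value in bytes, made up here).
        conf.setLong("dfs.datanode.du.reserved", 10L * 1024 * 1024 * 1024);

        System.out.println("dfs.datanode.du.reserved = "
                + conf.getLong("dfs.datanode.du.reserved", 0L));
    }
}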


On 1/8/08 1:32 PM, "Hairong Kuang" <[EMAIL PROTECTED]> wrote:

> Most of the time dfs and map/reduce share disks. Keep in mind that the du
> options cannot control how much space map/reduce tasks take.
> Sometimes we get the out-of-disk-space problem because data-intensive
> map/reduce tasks take a lot of disk space.
> 
> Hairong
> 
> -----Original Message-----
> From: Ted Dunning [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 08, 2008 1:13 PM
> To: hadoop-user@lucene.apache.org
> Subject: Re: Limit the space used by hadoop on a slave node
> 
> 
> I think I have seen related bad behavior on 15.1.
> 
> On 1/8/08 11:49 AM, "Hairong Kuang" <[EMAIL PROTECTED]> wrote:
> 
>> Has anybody tried 15.0? Please check
>> https://issues.apache.org/jira/browse/HADOOP-1463.
>> 
>> Hairong
>> -----Original Message-----
>> From: Joydeep Sen Sarma [mailto:[EMAIL PROTECTED]
>> Sent: Tuesday, January 08, 2008 11:33 AM
>> To: hadoop-user@lucene.apache.org; hadoop-user@lucene.apache.org
>> Subject: RE: Limit the space used by hadoop on a slave node
>> 
>> At least up until 14.4, these options are broken. See
>> https://issues.apache.org/jira/browse/HADOOP-2549
>> 
>> (There's a trivial patch, but I am still testing.)
>> 
>> 
> 
