I think I found one reason (probably the main one): my RAM and swap space
were almost fully occupied:

 free -m
             total       used       free     shared    buffers     cached
Mem:          4051       4027         23          0          3       1349
-/+ buffers/cache:       2674       1377
Swap:         8189       7752        436
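
The "error=12, Cannot allocate memory" comes from the fork of "bash" that
DF.getAvailable() does through ProcessBuilder: with RAM and swap this full,
the kernel refuses to fork the child process. A minimal sketch of that failing
code path (my own illustration, not Hadoop's actual source; the df command is
just an example):

import java.io.IOException;

public class ForkProbe {
    public static void main(String[] args) throws InterruptedException {
        try {
            // Roughly what org.apache.hadoop.util.Shell.runCommand() does:
            // spawn bash to run a shell command (here, df over the current dir).
            Process p = new ProcessBuilder("bash", "-c", "df -k .").start();
            p.waitFor();
            System.out.println("fork/exec succeeded");
        } catch (IOException e) {
            // With RAM + swap exhausted, start() fails right here with
            // "error=12, Cannot allocate memory" (ENOMEM), as in the trace below.
            System.err.println("fork failed: " + e.getMessage());
        }
    }
}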

Is there anything else that could cause this? Or is it possible to handle
memory allocation dynamically while Hadoop is processing large data?
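
If it helps, the only knobs I know of that bound map-side memory are the sort
buffer and the task JVM heap; I was thinking of something like the sketch
below (values are only illustrative, assuming the 0.19 JobConf API, and in
standalone mode the task runs inside the client JVM, so the client's own -Xmx
is what really applies):

import org.apache.hadoop.mapred.JobConf;

public class MemoryTuning {
    // Illustrative values only -- not sure these are the right numbers.
    public static JobConf tune(JobConf conf) {
        // Shrink the in-memory map output (sort) buffer, 100 MB by default in 0.19.
        conf.setInt("io.sort.mb", 50);
        // Heap for child task JVMs; ignored by the LocalJobRunner in standalone
        // mode, where the client JVM's own -Xmx bounds the task instead.
        conf.set("mapred.child.java.opts", "-Xmx512m");
        return conf;
    }
}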

-- 
Deepak Diwakar

2009/7/2 Deepak Diwakar <[email protected]>

> Hi,
>
> I use Hadoop 0.19.0 in standalone mode. I ran an aggregation task over
> around 2.5 TB of data. I run this particular task regularly and had never
> seen an error before, but this time I tried three times and got the same
> error every time:
>
> 09/07/01 10:29:34 INFO mapred.MapTask: Starting flush of map output
> 09/07/01 10:29:34 WARN mapred.LocalJobRunner: job_local_0001
> java.io.IOException: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
>        at org.apache.hadoop.util.Shell.run(Shell.java:134)
>        at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
>        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:321)
>        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
>        at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
>        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:920)
>        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:832)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:138)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>        ... 10 more
> java.io.IOException: Job failed!
>        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1217)
>        at otfa.mapreducetasks.reach.GeoWiseReachH2.run(GeoWiseReachH2.java:223)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>        at otfa.mapreducetasks.reach.GeoWiseReachH2.main(GeoWiseReachH2.java:229)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
>        at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>
> Can someone explain why this is happening? My other Hadoop tasks are
> running fine, and even this particular task ran well in the past (I didn't
> change anything). Is it a Java heap size problem?
>
> Thanks in advance.
> --
> - Deepak Diwakar,
>
>
>


-- 
- Deepak Diwakar,
