Hi,

I have a Hadoop cluster of 5 nodes and about 1 GB of data. I am running a simple MapReduce program that reads text data and writes sequence files. The solutions I found for this problem suggest setting memory overcommit to 0 and raising the ulimit; I already have memory overcommit set to 0 and ulimit set to unlimited. Even with this, I keep getting the following error. Is anyone aware of any workarounds for this?

java.io.IOException: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
        at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:734)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:694)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:220)
        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)

Thanks,
-Rohini
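P.S. For anyone who wants to poke at this outside of Hadoop: below is a minimal standalone Java sketch of the call path I read from the trace (the class name and command are my own placeholders, not Hadoop code). DF.getAvailable() shells out through org.apache.hadoop.util.Shell, which calls ProcessBuilder.start() to fork "bash"; presumably the fork of the large task JVM is what cannot allocate memory, since the child itself only runs a small df command.

import java.io.IOException;

// Standalone sketch of the call path from the trace above (my reading of it,
// not actual Hadoop code): DF.getAvailable() shells out via
// org.apache.hadoop.util.Shell, which ends up in ProcessBuilder.start()
// forking "bash". Running this with a heap similar to the map task JVM
// (e.g. java -Xmx1g ForkBashRepro) on an affected node should show whether a
// plain fork+exec of bash already fails with error=12 there.
public class ForkBashRepro {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Roughly the kind of command Hadoop's DF runs to check free space on a local dir.
        Process p = new ProcessBuilder("bash", "-c", "df -k .").start();
        int rc = p.waitFor();
        System.out.println("bash exited with " + rc);
    }
}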