You are over-allocating memory for each child Java process in Hadoop. Worst-case memory 
allocation = (map slots + reduce slots) * the heap size in mapred.child.java.opts. 

This would only bite when your node is fully utilized, which is why you see it intermittently. 
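To make the arithmetic concrete, here is a quick sketch. The slot counts and heap size below are made-up examples, not your actual config; substitute your own values:

```java
// Back-of-the-envelope check of the formula above. All numbers here are
// hypothetical examples -- read the real values from your own mapred-site.xml.
public class SlotMemoryEstimate {
    public static void main(String[] args) {
        int mapSlots = 8;        // e.g. mapred.tasktracker.map.tasks.maximum
        int reduceSlots = 4;     // e.g. mapred.tasktracker.reduce.tasks.maximum
        int childHeapMb = 1024;  // e.g. -Xmx1024m in mapred.child.java.opts

        // Worst case: every slot occupied, each child JVM at full heap.
        int worstCaseMb = (mapSlots + reduceSlots) * childHeapMb;
        System.out.println("Worst-case child heap demand: " + worstCaseMb + " MB");
    }
}
```

With numbers like these, the children alone can demand ~12 GB of heap (plus per-JVM overhead) at full load. If that exceeds the physical RAM plus swap on the box, fork() from the TaskTracker's JVM can fail with error=12, which matches the stack trace below.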

Alex Rovner


Sent from my iPhone

On Mar 21, 2012, at 10:41 PM, rakesh sharma <[email protected]> wrote:

> 
> Hi All,
> I am using pig-0.8.1-cdh3u2 and hadoop-0.20.2-cdh3u1 on a Linux box. 
> Once in a while, I get the following exception:
> WARN : org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
> ERROR: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message during job submission
> java.io.IOException: Cannot run program "chmod": java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
>         at org.apache.hadoop.util.Shell.run(Shell.java:182)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
>         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:508)
>         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
>         at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:319)
>         at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
>         at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
>         at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
>         at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
>         at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>         ... 21 more
> Any ideas what might be causing it?
> Thanks,
> Rakesh