I am using PigServer.registerScript() to launch my script. However, the post at 
http://search-hadoop.com/m/N8Owj1uu0131&subj=Re+PigServer+vs+PigRunner says 
that it should not be used, and I am wondering whether that could be 
contributing to this problem.
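For reference, the `Cannot run program "chmod"` line in the trace below comes from Hadoop's `RawLocalFileSystem.setPermission()`, which sets local file permissions by shelling out via `ProcessBuilder`. The `error=12` is `fork()` failing with ENOMEM: the forked child transiently needs the parent JVM's address-space reservation, so a large Tomcat heap can make even a tiny `chmod` fail. This stdlib-only sketch (class and temp-file names are mine, not Hadoop's) shows the call path that is failing:

```java
import java.io.File;
import java.io.IOException;

public class ChmodForkDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        File f = File.createTempFile("chmod-fork-demo", ".tmp");
        f.deleteOnExit();

        // Hadoop's RawLocalFileSystem.setPermission() shells out much like this.
        // ProcessBuilder.start() fork()s the JVM; under strict memory overcommit,
        // that fork can fail with error=12 (ENOMEM) even though the child
        // process ("chmod") itself needs almost no memory.
        Process p = new ProcessBuilder("chmod", "644", f.getAbsolutePath()).start();
        int rc = p.waitFor();
        System.out.println("chmod exit code: " + rc);
    }
}
```

That the script works when run manually fits this picture: a standalone Pig JVM typically has a much smaller heap than a long-running Tomcat, so its `fork()` has more headroom.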

> From: [email protected]
> To: [email protected]
> Subject: Issue with PigServer in web application (tomcat container)
> Date: Sun, 1 Apr 2012 20:32:17 +0000
> 
> 
> Hi All,
> I am firing a Pig script using PigServer from a web application (Tomcat 7.0). 
> The script runs every five minutes via PigServer. Around ten jar files are 
> repeatedly written to the {tomcat.home}/temp directory, which is taking up a 
> lot of space. After a while, I start seeing the following exceptions in the 
> Tomcat log:
> WARN : org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
> ERROR: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message during job submission
> java.io.IOException: Cannot run program "chmod": java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
>         at org.apache.hadoop.util.Shell.run(Shell.java:182)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
>         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:508)
>         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
>         at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:319)
>         at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
>         at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
>         at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
>         at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
>         at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>         ... 21 more
> The Hadoop logs look fine, and the same script on the same data set works 
> fine when I run it manually. Multiple parallel instances (I tried three at 
> the same time) also work fine when run manually.
> I am using the Cloudera builds of Pig (pig-0.8.1-cdh3u2) and Hadoop 
> (0.20.2-cdh3u2), running against a remote cluster. Any suggestions?
> Thanks,
> Rakesh
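
One avenue worth checking on the Tomcat host (my own suggestion, not something established in the thread): `error=12` at `ProcessBuilder.start()` is `fork()` returning ENOMEM, which usually means the kernel refused to duplicate the JVM's address space. Inspecting the overcommit policy and swap headroom would confirm or rule that out:

```shell
# fork() of a large JVM needs address-space headroom; check the kernel's
# overcommit policy (0 = heuristic, 1 = always allow, 2 = strict accounting)
cat /proc/sys/vm/overcommit_memory

# Check how much physical memory and swap are available as fork() headroom
free -m

# A commonly suggested workaround (requires root; not persistent across
# reboots) is to allow overcommit so the transient fork reservation succeeds:
#   sysctl -w vm.overcommit_memory=1
```

Reducing the Tomcat heap (-Xmx) or adding swap are alternative ways to give `fork()` the headroom it needs.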