[ 
https://issues.apache.org/jira/browse/PIG-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13011328#comment-13011328
 ] 

Michael Brauwerman commented on PIG-1838:
-----------------------------------------

I see this problem as well.
In my case, I run commands basically like this to run a bunch of pig jobs in 
parallel:
 for date in `list-of-dates`; do
   nohup pig -param DATE=$date my-script.pig &
 done

Each pig job that runs will create a /tmp/pigNNNN dir with jar files, until 
/tmp is exhausted.
Meanwhile, /mnt/tmp is empty and would be a better place for these files to go.

What is the workaround?

I tried editing pig.sh to add
  HADOOP_OPTS="-Djava.io.tmpdir=/mnt/tmp"
before calling hadoop, but that did not seem to work.

Did my workaround fail because of a mistake on my part, or is there a different 
way I should set java.io.tmpdir when launching pig?
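For what it's worth, the variant I would try next is exporting the JVM option in the environment before invoking pig, rather than editing pig.sh itself. This is only a sketch of my setup: /mnt/tmp, list-of-dates, and my-script.pig are placeholders, and whether bin/pig actually picks up PIG_OPTS for this property is an assumption on my part.

```shell
# Hedged sketch: export the temp-dir JVM option so it reaches the pig JVM
# via the environment instead of an edit inside pig.sh.
# Assumption: bin/pig appends PIG_OPTS to the java command line.
export PIG_OPTS="-Djava.io.tmpdir=/mnt/tmp"

# Placeholder loop from my use case: launch one pig job per date.
for date in `list-of-dates`; do
  nohup pig -param DATE=$date my-script.pig &
done
```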




> On a large farm, some pigs die of /tmp starvation
> -------------------------------------------------
>
>                 Key: PIG-1838
>                 URL: https://issues.apache.org/jira/browse/PIG-1838
>             Project: Pig
>          Issue Type: Wish
>          Components: impl
>    Affects Versions: 0.8.0
>            Reporter: Allen Wittenauer
>
> We're starting to see issues where interactive/command-line pig users blow up 
> due to so many large jar creations in /tmp. (In other words, this happens 
> during pig execution, before the java.io.tmpdir fix that Hadoop makes can 
> kick in.)  Pig should probably not depend upon users being savvy enough to 
> override java.io.tmpdir on their own in these situations, and/or should be a 
> better steward of the space it does use.  

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
