Seems to be the case: LocalJobRunner falls back to hard-coded /tmp defaults read from the job conf and never looks at java.io.tmpdir.
./mapred/src/java/org/apache/hadoop/mapred/LocalJobRunner.java
/**
 * @see org.apache.hadoop.mapreduce.protocol.ClientProtocol#getSystemDir()
 */
public String getSystemDir() {
  Path sysDir = new Path(
      conf.get(JTConfig.JT_SYSTEM_DIR, "/tmp/hadoop/mapred/system"));
  return fs.makeQualified(sysDir).toString();
}

/**
 * @see org.apache.hadoop.mapreduce.protocol.ClientProtocol#getStagingAreaDir()
 */
public String getStagingAreaDir() throws IOException {
  Path stagingRootDir = new Path(conf.get(JTConfig.JT_STAGING_AREA_ROOT,
      "/tmp/hadoop/mapred/staging"));
  UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
  String user;
  if (ugi != null) {
    user = ugi.getShortUserName() + rand.nextInt();
  } else {
    user = "dummy" + rand.nextInt();
  }
  return fs.makeQualified(new Path(stagingRootDir,
      user + "/.staging")).toString();
}
On Mon, Apr 16, 2012 at 6:06 PM, Yang <[email protected]> wrote:
> I specified -Djava.io.tmpdir=/my/big/partition/path to PIG_OPTS
> and I can see that this is indeed set on the JVM args,
>
> but when I ran
>
> pig -x local my_pig_script
>
> it still dumped temp files into /tmp, instead of the custom dir I
> specified.
>
> I tried adding
>
> SET mapred.child.java.opts '-Djava.io.tmpdir=/my/big/partition/path'
>
> to the pig script too (this is 0.9.2 so the SET works), but it still did
> not use the custom dir.
>
>
> I suspect that for the LocalRunner, hadoop has a bug so it uses the
> hard-coded /tmp dir instead of querying the JVM arg