[
https://issues.apache.org/jira/browse/MAPREDUCE-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Eli Collins updated MAPREDUCE-3844:
-----------------------------------
Priority: Blocker (was: Major)
Seems like a blocker to me: this causes any Hive join query to fail when the table
size is larger than the DFS block size (for both RCFile and text formats).
> Problem in setting the childTmpDir in MapReduceChildJVM
> -------------------------------------------------------
>
> Key: MAPREDUCE-3844
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3844
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: mrv2
> Affects Versions: 0.23.0, 0.23.1
> Reporter: Ahmed Radwan
> Assignee: Ahmed Radwan
> Priority: Blocker
> Attachments: MAPREDUCE-3844.patch
>
>
> We have seen this issue during a Hive test, where Hive tries to create a temp
> file using File.createTempFile(..) and it throws:
> {code}
> Exception in thread "main" java.io.IOException: No such file or directory
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.checkAndCreate(File.java:1704)
> at java.io.File.createTempFile(File.java:1792)
> at java.io.File.createTempFile(File.java:1828)
> at Test.main(Test.java:13)
> {code}
> This happens because the child JVM literally sees "$PWD/tmp" as the temp directory path.
> $PWD needs to be evaluated before being used to set the "java.io.tmpdir"
> property in MapReduceChildJVM.java.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira