The Hadoop version is 0.18.3. Recently we hit an "out of space" error, thrown from
"java.util.zip.ZipOutputStream".
We found that /tmp was full, and after cleaning /tmp the problem went away.
But why does Hadoop still use /tmp? We had already pointed Hadoop's tmp directory
at a local disk in hadoop-site.xml.
[
https://issues.apache.org/jira/browse/HADOOP-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sharad Agarwal resolved HADOOP-6215.
Resolution: Fixed
Hadoop Flags: [Reviewed]
+1 I committed to trunk and branch 20. Th
RPC.WaitForProxy swallows InterruptedException
----------------------------------------------
Key: HADOOP-6221
URL: https://issues.apache.org/jira/browse/HADOOP-6221
Project: Hadoop Common
Issue Type: Bug
Components: ipc
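The title of HADOOP-6221 describes a classic Java pitfall: catching InterruptedException and discarding it, which clears the thread's interrupt status and makes the caller un-interruptible. A minimal sketch of the broken and fixed patterns; the method names are illustrative, not Hadoop's actual RPC code:

```java
public class InterruptDemo {
    // Broken pattern: "swallowing" InterruptedException. The interrupt
    // status is cleared by the throw and never restored, so callers
    // blocked behind this method cannot observe the interruption.
    static boolean sleepSwallowing(long millis) {
        try {
            Thread.sleep(millis);
            return false;
        } catch (InterruptedException e) {
            return true; // interrupt silently lost
        }
    }

    // Fixed pattern: restore the interrupt status (or rethrow the
    // exception) so the caller can still see that it was interrupted.
    static boolean sleepRestoring(long millis) {
        try {
            Thread.sleep(millis);
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the flag
            return true;
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // simulate a pending interrupt
        sleepSwallowing(10);
        System.out.println("after swallow: " + Thread.interrupted());

        Thread.currentThread().interrupt();
        sleepRestoring(10);
        System.out.println("after restore: " + Thread.interrupted());
    }
}
```

With the pending interrupt set, Thread.sleep throws immediately; only the restoring variant leaves the interrupt flag visible afterwards.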
Jetty wraps InterruptedExceptions by IOExceptions if interrupted in startup
---------------------------------------------------------------------------
Key: HADOOP-6220
URL: https://issues.apache.org/jira/browse/HADOOP-6220
Project: Hadoop Common
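HADOOP-6220 describes the related symptom from the caller's side: an InterruptedException raised during startup comes back wrapped in an IOException, hiding the interrupt. A hedged caller-side sketch of detecting the wrapped interrupt and restoring the flag; startServer is a hypothetical stand-in, not Jetty's actual API:

```java
import java.io.IOException;

public class WrappedInterrupt {
    // Hypothetical stand-in for startup code that wraps an
    // InterruptedException in an IOException, as the bug title describes.
    static void startServer() throws IOException {
        try {
            Thread.sleep(20); // stand-in for a blocking startup wait
        } catch (InterruptedException ie) {
            throw new IOException("startup interrupted", ie);
        }
    }

    // Caller-side workaround: walk the cause chain to recognize a
    // wrapped interrupt, so the caller can restore the interrupt status.
    static boolean isWrappedInterrupt(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof InterruptedException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // simulate a pending interrupt
        try {
            startServer();
        } catch (IOException e) {
            if (isWrappedInterrupt(e)) {
                Thread.currentThread().interrupt(); // restore the flag
            }
        }
        System.out.println("interrupted=" + Thread.interrupted());
    }
}
```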
Split TFile by Record Sequence Number
-------------------------------------
Key: HADOOP-6218
URL: https://issues.apache.org/jira/browse/HADOOP-6218
Project: Hadoop Common
Issue Type: New Feature
Reporter: Hong Tang
It would be