[ https://issues.apache.org/jira/browse/HADOOP-898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465824 ]
Devaraj Das commented on HADOOP-898:
------------------------------------
During a run of the sort benchmark, many reduces failed with these exceptions:
EXCEPTION TRACE #1
----------------------------------
java.lang.NullPointerException
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.closeBackupStream(DFSClient.java:972)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1219)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1181)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1163)
at org.apache.hadoop.fs.FSDataOutputStream$Summer.write(FSDataOutputStream.java:85)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:114)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at java.io.DataOutputStream.flush(DataOutputStream.java:106)
at java.io.FilterOutputStream.close(FilterOutputStream.java:140)
at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:558)
at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.close(SequenceFileOutputFormat.java:72)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:331)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1367)
EXCEPTION TRACE #2
---------------------------------
java.io.IOException: Trying to write to backupStream but it already closed or not open
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flushData(DFSClient.java:1195)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1183)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1315)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at org.apache.hadoop.fs.FSDataOutputStream$Summer.close(FSDataOutputStream.java:99)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:558)
at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.close(SequenceFileOutputFormat.java:72)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:331)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1367)
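Traces #1 and #2 look like two symptoms of the same problem: flush()/close() being called again on a DFSOutputStream whose local backup stream has already been torn down, since the nested FilterOutputStream.close() calls in the trace flush the wrapped stream a second time. As a rough illustration (the class and field names below are hypothetical, not the actual DFSClient internals), a guard like this would turn the second close/flush into a no-op instead of an NPE or IOException:

import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch only: a wrapper whose close()/flush() are safe to call
// more than once, so an outer FilterOutputStream.close() does not hit a null
// or already-closed backup stream. Names are hypothetical, not DFSClient code.
class IdempotentOutputStream extends OutputStream {
    private OutputStream backupStream;   // may be torn down by an earlier close
    private boolean closed = false;

    IdempotentOutputStream(OutputStream backupStream) {
        this.backupStream = backupStream;
    }

    @Override
    public void write(int b) throws IOException {
        if (closed || backupStream == null) {
            throw new IOException("stream already closed or not open");
        }
        backupStream.write(b);
    }

    @Override
    public void flush() throws IOException {
        if (closed || backupStream == null) {
            return;                        // a flush after close is a no-op
        }
        backupStream.flush();
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return;                        // tolerate nested/double close
        }
        closed = true;
        if (backupStream != null) {
            backupStream.close();
            backupStream = null;
        }
    }
}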
EXCEPTION TRACE #3 (this is HADOOP-758, I think)
----------------------------------------------------------------------
java.io.FileNotFoundException: /export/crawlspace/kryptonite/ddas/dfs/data/tmp/client-1257934407504380214 (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1229)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1318)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at org.apache.hadoop.fs.FSDataOutputStream$Summer.close(FSDataOutputStream.java:98)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:558)
at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.close(SequenceFileOutputFormat.java:72)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:331)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1367)
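Trace #3 is the close() path trying to reopen the per-block backup file on the client's local disk after it is already gone, presumably removed by the competing close/endBlock seen in traces #1 and #2. A small defensive check before the reopen, sketched here with hypothetical names rather than the real DFSClient code, would at least turn the raw FileNotFoundException into something diagnosable:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only: reopen the per-block backup file defensively so a
// file deleted by a competing close() surfaces as a clear error instead of a
// bare FileNotFoundException deep inside endBlock(). Names are hypothetical.
class BackupFileReader {
    static InputStream reopenBackup(File backupFile) throws IOException {
        if (backupFile == null || !backupFile.exists()) {
            throw new IOException("backup file "
                + (backupFile == null ? "(null)" : backupFile.getPath())
                + " is missing; the block was probably already flushed or the stream closed");
        }
        return new FileInputStream(backupFile);
    }
}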
EXCEPTION TRACE #4 (this one, I think, has to do with RPC timing out at the namenode; I hadn't encountered this exception in a long time)
--------------------------------------------------------
java.net.SocketTimeoutException: timed out waiting for rpc response
at org.apache.hadoop.ipc.Client.call(Client.java:467)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:164)
at org.apache.hadoop.dfs.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:248)
at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:105)
at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:65)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:160)
at org.apache.hadoop.fs.FileSystem.getNamed(FileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:91)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1366)
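Trace #4 is the task child timing out while setting up its RPC proxy to the namenode (getProtocolVersion never got a response). If the namenode was just briefly overloaded, retrying the connection setup with a short backoff would let the task survive; the sketch below is a generic retry wrapper, not what DFSClient actually does:

import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

// Illustrative sketch only: retry a namenode RPC setup call a few times with a
// simple backoff instead of failing the whole task on one timed-out response.
class RpcRetry {
    static <T> T callWithRetry(Callable<T> call, int maxAttempts) throws Exception {
        SocketTimeoutException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (SocketTimeoutException e) {
                last = e;
                Thread.sleep(1000L * attempt);   // linear backoff between attempts
            }
        }
        if (last != null) {
            throw last;
        }
        throw new SocketTimeoutException("gave up without making any attempt");
    }
}

Of course, a client-side retry only papers over the real issue if the namenode is stuck spinning on the NullPointerExceptions described below.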
> namenode generates infinite stream of null pointers
> ---------------------------------------------------
>
> Key: HADOOP-898
> URL: https://issues.apache.org/jira/browse/HADOOP-898
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Reporter: Owen O'Malley
> Assigned To: Raghu Angadi
> Attachments: HADOOP-898-2.patch, HADOOP-898.patch
>
>
> My namenode is generating a constant stream of NullPointerExceptions. The log
> looks like:
> 2007-01-17 19:51:27,461 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 50000 call error: java.io.IOException: java.lang.NullPointerException
> java.io.IOException: java.lang.NullPointerException
> at org.apache.hadoop.dfs.FSNamesystem.addStoredBlock(FSNamesystem.java:1621)
> at org.apache.hadoop.dfs.FSNamesystem.processReport(FSNamesystem.java:1563)
> at org.apache.hadoop.dfs.NameNode.blockReport(NameNode.java:573)
> at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:589)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:337)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:538)
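For the namenode NullPointerException quoted above: addStoredBlock() blowing up on every block report is consistent with a report carrying a block whose file has since been deleted, so the lookup for the owning file returns null and is then dereferenced. A defensive version, with purely illustrative names standing in for the FSNamesystem data structures, would skip (or schedule for deletion) such orphaned blocks instead of throwing on every report:

import java.util.Map;

// Illustrative sketch only: guard the block-to-file lookup during a block
// report. The map and method names are hypothetical, not FSNamesystem code.
class BlockReportGuard {
    static void addStoredBlock(Map<Long, String> blockToFile, long blockId, String datanode) {
        String file = blockToFile.get(blockId);
        if (file == null) {
            // The block no longer belongs to any file; ignore it (or mark it
            // for deletion on the reporting datanode) rather than NPE-ing.
            return;
        }
        // Normal bookkeeping for a live block on this datanode would go here.
    }
}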