[
https://issues.apache.org/jira/browse/HDFS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575469#comment-13575469
]
Steve Loughran commented on HDFS-353:
-------------------------------------
Looking into the code, this situation appears to arise when inode->blocks == null,
i.e. when there are no blocks associated with the file.
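Roughly what I have in mind, as a sketch only (the class and parameter names below
are placeholders, not the actual DFSClient/DFSInputStream internals):
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch only: placeholder class; illustrates swapping the bare IOException
// for a FileNotFoundException that carries the full DFS URL.
class OpenInfoSketch {
  static void checkBlockLocations(Object located, String src, String nameNodeAddr)
      throws IOException {
    if (located == null) {
      // today: throw new IOException("Cannot open filename " + src);
      // proposed: be explicit that the file could not be found, and say where
      throw new FileNotFoundException(
          "File does not exist: hdfs://" + nameNodeAddr + src);
    }
  }
}
{code}
Since FileNotFoundException is a subclass of IOException, existing callers that
catch IOException would keep working unchanged.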
> DFSClient could throw a FileNotFound exception when a file could not be opened
> ------------------------------------------------------------------------------
>
> Key: HDFS-353
> URL: https://issues.apache.org/jira/browse/HDFS-353
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Steve Loughran
> Priority: Minor
>
> DFSClient.DFSInputStream.openInfo() throws a plain IOException when a file
> can't be found, that is, when it has no blocks:
> [sf-startdaemon-debug] 09/02/16 12:38:47 [IPC Server handler 0 on 8012] INFO
> mapred.TaskInProgress : Error from attempt_200902161238_0001_m_000000_2:
> java.io.IOException: Cannot open filename /tests/mrtestsequence/in/in.txt
> [sf-startdaemon-debug] at
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1352)
> [sf-startdaemon-debug] at
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1343)
> [sf-startdaemon-debug] at
> org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:312)
> [sf-startdaemon-debug] at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:177)
> [sf-startdaemon-debug] at
> org.apache.hadoop.fs.FileSystem.open(FileSystem.java:347)
> I propose turning this into a FileNotFoundException, which is more specific
> about the underlying problem. Including the full DFS URL in the message would
> be useful too.
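On the caller side, the more specific type would let code tell a missing input
apart from other I/O failures. A sketch assuming the change above (the path comes
from the log; everything else is illustrative):
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenExample {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path in = new Path("/tests/mrtestsequence/in/in.txt");
    try {
      fs.open(in).close();
    } catch (FileNotFoundException e) {
      // with the proposed change, a missing file is distinguishable here
      System.err.println("Input file missing: " + e.getMessage());
    }
    // any other IOException still propagates as before
  }
}
{code}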