[ 
https://issues.apache.org/jira/browse/HDFS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575460#comment-13575460
 ] 

Steve Loughran commented on HDFS-353:
-------------------------------------

@Chu - your test is looking at the local filesystem; this issue is specifically about HDFS.

# The date of the issue, 2009, implies it was filed against some 0.19-0.20 version.
# However, the stack trace I supplied implies the problem is still there.

Specifically, in {{DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength()}} at line 162, the code that triggered the exception I saw still appears to be there:
{code}
    LocatedBlocks newInfo = dfsClient.getLocatedBlocks(src, 0, prefetchSize);
    if (DFSClient.LOG.isDebugEnabled()) {
      DFSClient.LOG.debug("newInfo = " + newInfo);
    }
    if (newInfo == null) {
      throw new IOException("Cannot open filename " + src);
    }
{code}

Some more delving into the NameNode is needed to determine what can actually cause {{getLocatedBlocks()}} to return null and so trigger this error, but the client-side code is still there.
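The proposed change itself is small: throw {{FileNotFoundException}} (a subclass of {{IOException}}) with the full path, so callers that already catch {{IOException}} keep working while those that want the more specific failure can catch it. A minimal sketch of the check - the {{checkLocatedBlocks}} helper name is hypothetical, not from the Hadoop source:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class OpenCheck {
    // Sketch of the proposed change: when the NameNode returns no
    // located blocks for the path, throw FileNotFoundException
    // (which extends IOException) and include the full path in the
    // message, instead of a generic "Cannot open filename" IOException.
    static void checkLocatedBlocks(Object newInfo, String src)
            throws IOException {
        if (newInfo == null) {
            throw new FileNotFoundException("File does not exist: " + src);
        }
        // otherwise: proceed to read block locations as before
    }
}
```

Because {{FileNotFoundException}} is-a {{IOException}}, existing {{catch (IOException e)}} blocks in client code are unaffected; only the exception's type and message become more precise.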
                
> DFSClient could throw a FileNotFound exception when a file could not be opened
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-353
>                 URL: https://issues.apache.org/jira/browse/HDFS-353
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Steve Loughran
>            Priority: Minor
>
> DFSClient.openInfo() throws an IOException when a file can't be found, that is, 
> when it has no blocks:
> [sf-startdaemon-debug] 09/02/16 12:38:47 [IPC Server handler 0 on 8012] INFO 
> mapred.TaskInProgress : Error from attempt_200902161238_0001_m_000000_2: 
> java.io.IOException: Cannot open filename /tests/mrtestsequence/in/in.txt
> [sf-startdaemon-debug]        at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1352)
> [sf-startdaemon-debug]        at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1343)
> [sf-startdaemon-debug]        at 
> org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:312)
> [sf-startdaemon-debug]        at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:177)
> [sf-startdaemon-debug]        at 
> org.apache.hadoop.fs.FileSystem.open(FileSystem.java:347)
> I propose turning this into a FileNotFoundException, which is more specific 
> about the underlying problem. Including the full dfs URL would be useful too.
