[ https://issues.apache.org/jira/browse/HADOOP-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12673891#action_12673891 ]
Steve Loughran commented on HADOOP-5267:
----------------------------------------

Full stack trace:

{code}
[sf-system-test-junit] Map: Task Id : attempt_200902161423_0001_m_000000_0, Status : TIPFAILED
[sf-system-test-junit] java.io.IOException: Cannot open filename /tests/mrtestsequence/in/in.txt
[sf-system-test-junit]     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1352)
[sf-system-test-junit]     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1343)
[sf-system-test-junit]     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:312)
[sf-system-test-junit]     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:177)
[sf-system-test-junit]     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:347)
[sf-system-test-junit]     at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:67)
[sf-system-test-junit]     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:412)
[sf-system-test-junit]     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:510)
[sf-system-test-junit]     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:303)
[sf-system-test-junit]     at org.apache.hadoop.mapred.Child.main(Child.java:155)
{code}

I'd rather see an error message like

{code}
java.io.FileNotFoundException: File not found "tests/mrtestsequence/in/in.txt"
{code}

or, better still,

{code}
java.io.FileNotFoundException: File not found "hdfs://server4:9000/tests/mrtestsequence/in/in.txt"
{code}

The more specific subtype lets calling code handle the failure properly, and including the full URL makes it easier to detect problems where the URL itself is at fault.

> DFSClient could throw a FileNotFound exception when a file could not be opened
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-5267
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5267
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.21.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> DFSClient.openInfo() throws an IOException when a file can't be found, that is, when it has no blocks:
>
> [sf-startdaemon-debug] 09/02/16 12:38:47 [IPC Server handler 0 on 8012] INFO mapred.TaskInProgress : Error from attempt_200902161238_0001_m_000000_2: java.io.IOException: Cannot open filename /tests/mrtestsequence/in/in.txt
> [sf-startdaemon-debug]     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1352)
> [sf-startdaemon-debug]     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1343)
> [sf-startdaemon-debug]     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:312)
> [sf-startdaemon-debug]     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:177)
> [sf-startdaemon-debug]     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:347)
>
> I propose turning this into a FileNotFoundException, which is more specific about the underlying problem. Including the full DFS URL would be useful too.
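For illustration, a minimal sketch of what the proposed change could look like; this is not the real DFSClient code, and the helper blocksFor() and the nameNodeUri field are invented for the example. Since FileNotFoundException extends IOException, the existing throws contracts and callers are unaffected, while callers that care can catch the specific case:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URI;

// Illustrative sketch only -- not the actual DFSClient source.
public class FnfSketch {

  // Hypothetical field: the filesystem the client is talking to.
  private final URI nameNodeUri = URI.create("hdfs://server4:9000");

  public void open(String src) throws IOException {
    Object blocks = blocksFor(src);  // stand-in for the namenode block lookup
    if (blocks == null) {
      // FileNotFoundException is a subclass of IOException, so existing
      // 'throws IOException' signatures keep compiling; the message carries
      // the full URL so a bad host/port is visible at a glance.
      throw new FileNotFoundException(
          "File not found \"" + nameNodeUri.resolve(src) + "\"");
    }
    // ... proceed to open the stream ...
  }

  private Object blocksFor(String src) {
    return null;  // always "missing" in this sketch
  }

  public static void main(String[] args) {
    try {
      new FnfSketch().open("/tests/mrtestsequence/in/in.txt");
    } catch (FileNotFoundException e) {
      System.err.println("Missing file: " + e.getMessage());  // specific handling
    } catch (IOException e) {
      System.err.println("Other I/O problem: " + e);
    }
  }
}
{code}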