VicoWu commented on issue #509: HBASE-22877 WebHDFS based export snapshot will fail if hfile is in archive directory
URL: https://github.com/apache/hbase/pull/509#issuecomment-524596944

@shahrs87 Thanks very much for your insight into my problem. Yes, I previously confused WebHDFSFileSystem and HttpFSFileSystem because the scheme for both of them is `webhdfs`. I found that when I use `WebHDFSFileSystem`, the target can be either the HttpFS server (default port 14000) or the NameNode web server (default port 57000); both connections work, and neither of them throws an exception when I call `fs.open()`.

To put it simply, in my code base (`hadoop2.6.0-cdh5.16.1`) the client indeed does not connect to the remote server at `fs.open()` time. Your code base seems to be Hadoop 3.x, and I found the commit that makes `fs.open()` connect to the remote server and thus throw a FileNotFoundException promptly: https://github.com/apache/hadoop/commit/fde95d463c3123b315b3d07cb5b7b7dc19f7cb73 which adds the `getRedirectedUrl()` call in the `ReadRunner` constructor:

```
ReadRunner(Path p, int bs) throws IOException {
  super(GetOpParam.Op.OPEN, p, new BufferSizeParam(bs));
  this.path = p;
  this.bufferSize = bs;
  getRedirectedUrl();
}
```

This patch has been applied only to Hadoop 3.x. That is why your experiment could not reproduce this problem and why you concluded that `fs.open()` does make a connection to the remote server; in Hadoop 2.x, this problem does exist.
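For anyone who wants to see the behavior difference directly, here is a minimal sketch (the host, port, and path are hypothetical placeholders, not taken from the issue). Against Hadoop 2.x, the missing file is detected only at the first `read()`, because `open()` never contacts the server; against Hadoop 3.x with the commit above, the FileNotFoundException is thrown by `open()` itself:

```
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LazyWebHdfsOpenRepro {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Hypothetical webhdfs URI; the same scheme is used whether the
    // target is the NameNode web server or an HttpFS server.
    Path missing = new Path("webhdfs://namenode.example.com:50070/no/such/hfile");
    FileSystem fs = missing.getFileSystem(conf);

    FSDataInputStream in = null;
    try {
      // On Hadoop 2.x this returns without contacting the server,
      // so a nonexistent file is NOT detected here. On Hadoop 3.x
      // (with getRedirectedUrl() in the ReadRunner constructor),
      // this call already throws FileNotFoundException.
      in = fs.open(missing);
      System.out.println("open() returned without touching the server (2.x behavior)");
      // The connection is made lazily; only now does the server
      // get a chance to report that the file does not exist.
      in.read();
    } catch (FileNotFoundException fnfe) {
      System.out.println("FileNotFoundException: " + fnfe.getMessage());
    } finally {
      if (in != null) {
        in.close();
      }
    }
  }
}
```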
