VicoWu edited a comment on issue #509: HBASE-22877 WebHDFS based export 
snapshot will fail if hfile is in archive directory
URL: https://github.com/apache/hbase/pull/509#issuecomment-524596944
 
 
   @shahrs87 
   Many thanks for your code insight into my problem.
   Regarding the mistake I made about the difference between `WebHDFSFileSystem` and `HttpFSFileSystem`:
   Yes, I previously confused `WebHDFSFileSystem` with `HttpFSFileSystem` because both use the `webhdfs` scheme. I found that when I use `WebHDFSFileSystem`, the target can be either an `HttpFS server` (default port 14000) or the `NameNode web server` (default port 50070); both connections work fine, except that neither throws a `FileNotFoundException` promptly when I call `fs.open()`.
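   To make the symptom concrete, here is a minimal sketch of what I observed, assuming a reachable webhdfs endpoint (the host, port, and path below are hypothetical):
   ```
   import java.io.FileNotFoundException;
   import java.net.URI;

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class WebHdfsOpenDemo {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Hypothetical endpoint: the same "webhdfs" scheme reaches either an
       // HttpFS server (default port 14000) or the NameNode web server
       // (default port 50070 on Hadoop 2.x).
       FileSystem fs = FileSystem.get(URI.create("webhdfs://namenode-host:50070"), conf);

       Path missing = new Path("/tmp/no-such-hfile");
       try (FSDataInputStream in = fs.open(missing)) {
         // On Hadoop 2.x, fs.open() above returns without contacting the
         // server, so the FileNotFoundException is only raised by this read;
         // on Hadoop 3.x (after the commit discussed below), fs.open() itself throws.
         in.read();
       } catch (FileNotFoundException e) {
         System.out.println("FNFE surfaced at: " + e.getMessage());
       }
     }
   }
   ```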
   
   Why did the error happen on my side, with no connection established in `fs.open()`?
   
   To put it simply: in my code base, `hadoop2.6.0-cdh5.16.1`, the client indeed does not connect to the remote server. Your code base seems to be Hadoop 3.x, and I found the commit that makes `fs.open()` connect to the remote server, and therefore throw a `FileNotFoundException` promptly: [this commit](https://github.com/apache/hadoop/commit/fde95d463c3123b315b3d07cb5b7b7dc19f7cb73), which adds the `getRedirectedUrl()` call in the `ReadRunner` constructor:
   ```
       ReadRunner(Path p, int bs) throws IOException {
         super(GetOpParam.Op.OPEN, p, new BufferSizeParam(bs));
         this.path = p;
         this.bufferSize = bs;
         getRedirectedUrl(); // new: contacts the server here, so a missing file fails at open()
       }
   ```
   And this patch has been applied only to Hadoop `3.x` releases:
   ```
   release-3.2.0-RC1 release-3.2.0-RC0 release-3.1.2-RC1 release-3.1.2-RC0 
release-3.1.1-RC0 release-3.1.0-RC1 release-3.1.0-RC0 rel/submarine-0.2.0 
rel/release-3.2.0 rel/release-3.1.2 rel/release-3.1.1 rel/release-3.1.0 
ozone-0.4.0-alpha-RC2 ozone-0.4.0-alpha-RC1 ozone-0.4.0-alpha-RC0 
ozone-0.3.0-alpha ozone-0.3.0-alpha-RC1 ozone-0.3.0-alpha-RC0 
ozone-0.2.1-alpha-RC0
   ```
   So that's why your experiment could not reproduce this problem and why you saw `fs.open()` make a connection to the remote server; but in Hadoop 2.x, this problem does exist.
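   
   As a side note, a version-portable way to fail fast on both release lines is to stat the file before opening it, since `getFileStatus()` always performs a server round-trip. The sketch below only illustrates that idea and is not the change in this PR; the helper name and the archive fallback path are hypothetical:
   ```
   import java.io.FileNotFoundException;
   import java.io.IOException;

   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public final class OpenWithArchiveFallback {

     /**
      * Open 'primary', falling back to 'archived' if the hfile has already
      * been moved. getFileStatus() always round-trips to the server, so the
      * missing file is detected promptly on both Hadoop 2.x and 3.x clients.
      */
     static FSDataInputStream openOrFallback(FileSystem fs, Path primary, Path archived)
         throws IOException {
       try {
         fs.getFileStatus(primary); // forces a round-trip even on Hadoop 2.x
         return fs.open(primary);
       } catch (FileNotFoundException e) {
         // Hypothetical fallback, e.g. the hfile was moved to the archive dir.
         return fs.open(archived);
       }
     }
   }
   ```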
   
   So that's my investigation of this problem; I think everything is much clearer now.
   Do you think it is necessary to add this patch?
