Apache9 commented on code in PR #6856:
URL: https://github.com/apache/hbase/pull/6856#discussion_r2027243919


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractFSWALProvider.java:
##########
@@ -448,10 +497,6 @@ public static boolean isArchivedLogFile(Path p) {
    * @throws IOException exception
    */
   public static Path findArchivedLog(Path path, Configuration conf) throws IOException {
-    // If the path contains oldWALs keyword then exit early.

Review Comment:
   {code}
     protected final Pair<FSDataInputStream, FileStatus> open() throws IOException {
       try {
         return Pair.newPair(fs.open(path), fs.getFileStatus(path));
       } catch (FileNotFoundException e) {
         Pair<FSDataInputStream, FileStatus> pair = openArchivedWAL();
         if (pair != null) {
           return pair;
         } else {
           throw e;
         }
       } catch (RemoteException re) {
         IOException ioe = re.unwrapRemoteException(FileNotFoundException.class);
         if (!(ioe instanceof FileNotFoundException)) {
           throw ioe;
         }
         Pair<FSDataInputStream, FileStatus> pair = openArchivedWAL();
         if (pair != null) {
           return pair;
         } else {
           throw ioe;
         }
       }
     }
   {code}
   
   This is the only method where we call openArchivedWAL, and the design here is that `path` is under the normal WAL directory; if we cannot find it there, we fall back to the archived WAL directory, i.e. the oldWALs directory, to look for it. So we should not pass a Path that is already under oldWALs to the findArchivedLog method. This is my point.
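   
   For reference, the fallback is conceptually just "same file name, but under oldWALs" (a minimal sketch, assuming the usual CommonFSUtils/HConstants helpers; the helper name findArchivedCounterpart is made up here, and the real findArchivedLog does more than this, so treat it purely as an illustration):
   
   {code}
     // Sketch only (not the exact findArchivedLog body): resolve the archived
     // counterpart of a live WAL path by looking for the same file name
     // directly under the oldWALs directory.
     static Path findArchivedCounterpart(Path path, Configuration conf) throws IOException {
       Path walRootDir = CommonFSUtils.getWALRootDir(conf);
       Path oldLogDir = new Path(walRootDir, HConstants.HREGION_OLDLOGDIR_NAME);
       Path archivedLog = new Path(oldLogDir, path.getName());
       FileSystem fs = path.getFileSystem(conf);
       return fs.exists(archivedLog) ? archivedLog : null;
     }
   {code}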
   
   So under which condition could the `path` here already be a Path under the oldWALs directory? Maybe we need to fix the problem there.


