chliang71 commented on a change in pull request #2019:
URL: https://github.com/apache/hadoop/pull/2019#discussion_r427493135



##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
##########
@@ -965,12 +978,25 @@ public int getUriDefaultPort() {
           INodeLink<AbstractFileSystem> link = 
             (INodeLink<AbstractFileSystem>) inode;
 
-          result[i++] = new FileStatus(0, false, 0, 0,
-            creationTime, creationTime,
-            PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-            link.getTargetLink(),
-            new Path(inode.fullPath).makeQualified(
-                myUri, null));
+          Path linkedPath = new Path(link.targetDirLinkList[0].toString());
+          ChRootedFs linkedFs = (ChRootedFs) link.getTargetFileSystem();
+          try {
+            FileStatus status = linkedFs.getMyFs().getFileStatus(linkedPath);
+            result[i++] = new FileStatus(status.getLen(), false,
+              status.getReplication(), status.getBlockSize(),
+              status.getModificationTime(), status.getAccessTime(),
+              status.getPermission(), status.getOwner(), status.getGroup(),
+              link.getTargetLink(),
+              new Path(inode.fullPath).makeQualified(
+                  myUri, null));
+          } catch (FileNotFoundException ex) {

Review comment:
       Same comment about FileNotFoundException applies here.

##########
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewfsFileStatus.java
##########
@@ -56,38 +69,71 @@ public void testFileStatusSerialziation()
     File infile = new File(TEST_DIR, testfilename);
     final byte[] content = "dingos".getBytes();
 
-    FileOutputStream fos = null;
-    try {
-      fos = new FileOutputStream(infile);
+    try (FileOutputStream fos =  new FileOutputStream(infile)) {
       fos.write(content);
-    } finally {
-      if (fos != null) {
-        fos.close();
-      }
     }
     assertEquals((long)content.length, infile.length());
 
     Configuration conf = new Configuration();
     ConfigUtil.addLink(conf, "/foo/bar/baz", TEST_DIR.toURI());
-    FileSystem vfs = FileSystem.get(FsConstants.VIEWFS_URI, conf);
-    assertEquals(ViewFileSystem.class, vfs.getClass());
-    Path path = new Path("/foo/bar/baz", testfilename);
-    FileStatus stat = vfs.getFileStatus(path);
-    assertEquals(content.length, stat.getLen());
-    ContractTestUtils.assertNotErasureCoded(vfs, path);
-    assertTrue(path + " should have erasure coding unset in " +
-            "FileStatus#toString(): " + stat,
-        stat.toString().contains("isErasureCoded=false"));
-
-    // check serialization/deserialization
-    DataOutputBuffer dob = new DataOutputBuffer();
-    stat.write(dob);
-    DataInputBuffer dib = new DataInputBuffer();
-    dib.reset(dob.getData(), 0, dob.getLength());
-    FileStatus deSer = new FileStatus();
-    deSer.readFields(dib);
-    assertEquals(content.length, deSer.getLen());
-    assertFalse(deSer.isErasureCoded());
+    try (FileSystem vfs = FileSystem.get(FsConstants.VIEWFS_URI, conf)) {
+      assertEquals(ViewFileSystem.class, vfs.getClass());
+      Path path = new Path("/foo/bar/baz", testfilename);
+      FileStatus stat = vfs.getFileStatus(path);
+      assertEquals(content.length, stat.getLen());
+      ContractTestUtils.assertNotErasureCoded(vfs, path);
+      assertTrue(path + " should have erasure coding unset in " +
+          "FileStatus#toString(): " + stat,
+          stat.toString().contains("isErasureCoded=false"));
+
+      // check serialization/deserialization
+      DataOutputBuffer dob = new DataOutputBuffer();
+      stat.write(dob);
+      DataInputBuffer dib = new DataInputBuffer();
+      dib.reset(dob.getData(), 0, dob.getLength());
+      FileStatus deSer = new FileStatus();
+      deSer.readFields(dib);
+      assertEquals(content.length, deSer.getLen());
+      assertFalse(deSer.isErasureCoded());
+    }
+  }
+
+  @Test
+  public void testListStatusACL()

Review comment:
       Some javadoc and inline comments in the code would be helpful.
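
A hedged sketch of the kind of javadoc the reviewer is asking for (the wording of the test's intent is assumed from the surrounding diff, not taken from the PR):

```java
/**
 * Verifies that listStatus on an internal (mount-table) directory reports
 * the permission, owner and group of the link target, rather than the
 * hard-coded PERMISSION_555 / client-UGI defaults used previously.
 */
@Test
public void testListStatusACL() throws IOException {
  // ...
}
```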

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
##########
@@ -915,11 +915,24 @@ public FileStatus getFileLinkStatus(final Path f)
       if (inode.isLink()) {
         INodeLink<AbstractFileSystem> inodelink = 
           (INodeLink<AbstractFileSystem>) inode;
-        result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
+        Path linkedPath = new Path(inodelink.targetDirLinkList[0].toString());
+        ChRootedFs linkedFs = (ChRootedFs) inodelink.getTargetFileSystem();
+        try {
+          FileStatus status = linkedFs.getMyFs().getFileStatus(linkedPath);
+          result = new FileStatus(status.getLen(), false,
+            status.getReplication(), status.getBlockSize(),
+            status.getModificationTime(), status.getAccessTime(),
+            status.getPermission(), status.getOwner(), status.getGroup(),
+            inodelink.getTargetLink(),
+            new Path(inode.fullPath).makeQualified(
+                myUri, null));
+        } catch (FileNotFoundException ex) {

Review comment:
       Similar comment regarding FileNotFoundException. In general it is 
better to match the behavior of a non-federated client: if a path does not 
exist, just throw FileNotFoundException back to the caller.
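
A minimal sketch of the behavior the reviewer suggests (stand-in names, not the PR's code): let the target filesystem's FileNotFoundException propagate instead of substituting a synthetic FileStatus, so the federated client fails the same way a non-federated one would.

```java
import java.io.FileNotFoundException;

public class LinkStatusSketch {

  // Stand-in for linkedFs.getMyFs().getFileStatus(linkedPath).
  static String targetStatus(String path, boolean targetExists)
      throws FileNotFoundException {
    if (!targetExists) {
      // Throw back, rather than falling back to PERMISSION_555 defaults.
      throw new FileNotFoundException("File does not exist: " + path);
    }
    return "FileStatus(" + path + ")";
  }

  public static void main(String[] args) throws FileNotFoundException {
    System.out.println(targetStatus("/foo/bar", true));
  }
}
```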

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##########
@@ -1211,13 +1211,29 @@ public FileStatus getFileStatus(Path f) throws 
IOException {
         INode<FileSystem> inode = iEntry.getValue();
         if (inode.isLink()) {
           INodeLink<FileSystem> link = (INodeLink<FileSystem>) inode;
-
-          result[i++] = new FileStatus(0, false, 0, 0,
-            creationTime, creationTime, PERMISSION_555,
-            ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-            link.getTargetLink(),
-            new Path(inode.fullPath).makeQualified(
-                myUri, null));
+          // For MERGE or NFLY links, the first target link is considered
+          // for fetching the FileStatus with an assumption that the permission
+          // and the owner will be the same for all the target directories.
+          Path linkedPath = new Path(link.targetDirLinkList[0].toString());
+          ChRootedFileSystem linkedFs = (ChRootedFileSystem)
+              link.getTargetFileSystem();
+          try {
+            FileStatus status = linkedFs.getMyFs().getFileStatus(linkedPath);
+            result[i++] = new FileStatus(status.getLen(), false,
+              status.getReplication(), status.getBlockSize(),
+              status.getModificationTime(), status.getAccessTime(),
+              status.getPermission(), status.getOwner(), status.getGroup(),
+              link.getTargetLink(),
+              new Path(inode.fullPath).makeQualified(
+                  myUri, null));
+          } catch (FileNotFoundException ex) {
+            result[i++] = new FileStatus(0, false, 0, 0,
+              creationTime, creationTime, PERMISSION_555,
+              ugi.getShortUserName(), ugi.getPrimaryGroupName(),
+              link.getTargetLink(),
+              new Path(inode.fullPath).makeQualified(
+                  myUri, null));

Review comment:
       Two comments:
   1. There seems to be some duplicate code; the "else" branch is pretty much 
the same. Can we refactor here?
   2. I am not sure this is the best way to deal with FileNotFoundException. 
If I understand correctly, some mounts may not have this path, so they can 
hit FileNotFoundException?
   
   If that is the case, it may make more sense to skip such a mount entirely, 
by not adding a FileStatus for it at all, so that clients are not confused by 
a FileStatus for a path that does not actually exist alongside the real ones. 
One issue is that the result array is sized strictly to the number of mounts; 
creating the result as a list, appending to it, and returning it as an array 
would resolve this.
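
Both suggestions can be sketched together (helper names here are assumed, not from the PR): pull the shared FileStatus construction into one helper, and build the results as a List so a mount whose target path is missing is simply skipped rather than reported with a fabricated FileStatus.

```java
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;

public class ListStatusSketch {

  // Stand-in for resolving one mount link to a FileStatus; throws when the
  // target path does not exist on that mount.
  static String statusForLink(String mount, boolean targetExists)
      throws FileNotFoundException {
    if (!targetExists) {
      throw new FileNotFoundException(mount);
    }
    return "FileStatus(" + mount + ")";
  }

  static String[] listStatus(String[] mounts, boolean[] targetExists) {
    List<String> results = new ArrayList<>();
    for (int i = 0; i < mounts.length; i++) {
      try {
        results.add(statusForLink(mounts[i], targetExists[i]));
      } catch (FileNotFoundException e) {
        // Skip this mount: no entry at all, instead of a FileStatus for a
        // path that does not really exist.
      }
    }
    // The list sidesteps the fixed-size result array the reviewer mentions.
    return results.toArray(new String[0]);
  }
}
```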




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


