[ 
https://issues.apache.org/jira/browse/HDFS-16141?focusedWorklogId=636199&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-636199
 ]

ASF GitHub Bot logged work on HDFS-16141:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/Aug/21 00:49
            Start Date: 10/Aug/21 00:49
    Worklog Time Spent: 10m 
      Work Description: shvachko commented on a change in pull request #3232:
URL: https://github.com/apache/hadoop/pull/3232#discussion_r685606157



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
##########
@@ -284,8 +294,8 @@ private static INode createDirectoryINode(FSDirectory fsd,
       byte[] component = iip.getPathComponent(i);
       missing[i - existing.length()] =
           createDirectoryINode(fsd, existing, component, perm);
-      missing[i - existing.length()].setParent(parent.asDirectory());
-      parent = missing[i - existing.length()];
+//      missing[i - existing.length()].setParent(parent.asDirectory());

Review comment:
       Yes, I think it is right to remove those lines, since
addSingleDirectory() sets up the parent/child relationship.
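
   For context, a minimal sketch of the loop after the removal (the loop
header and surrounding method body are assumptions inferred from the quoted
diff, not shown in the hunk):

```java
// Sketch only: create the missing INodes; the parent/child wiring is
// left to addSingleDirectory() when the INodes are attached to the tree.
for (int i = existing.length(); i < iip.length(); i++) { // loop header assumed
  byte[] component = iip.getPathComponent(i);
  missing[i - existing.length()] =
      createDirectoryINode(fsd, existing, component, perm);
  // Removed: the setParent()/parent bookkeeping that addSingleDirectory()
  // makes redundant.
}
```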

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSMkdirs.java
##########
@@ -152,4 +154,80 @@ public void testMkdirRpcNonCanonicalPath() throws IOException {
       cluster.shutdown();
     }
   }
+
+  @Test
+  public void testMkDirsWithRestart() throws IOException {
+    MiniDFSCluster cluster =
+        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    DistributedFileSystem dfs = cluster.getFileSystem();
+    try {
+      // Create a dir in root dir, should succeed
+      assertTrue(dfs.mkdir(new Path("/mkdir-1"), FsPermission.getDefault()));
+      dfs.mkdir(new Path("/mkdir-2"), FsPermission.getDefault());
+      dfs.mkdir(new Path("/mkdir-3"), FsPermission.getDefault());
+      DFSTestUtil.writeFile(dfs, new Path("/mkdir-1/file1"), "hello world");
+      cluster.restartNameNodes();
+      dfs = cluster.getFileSystem();
+      assertTrue(dfs.exists(new Path("/mkdir-1")));
+      assertTrue(dfs.exists(new Path("/mkdir-2")));
+      assertTrue(dfs.exists(new Path("/mkdir-3")));
+      assertTrue(dfs.exists(new Path("/mkdir-1/file1")));
+    } finally {
+      dfs.close();
+      cluster.shutdown();
+    }
+  }
+
+  @Test
+  public void testMkdirWithDelete() throws IOException {
+    MiniDFSCluster cluster =
+        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    DistributedFileSystem dfs = cluster.getFileSystem();
+    // Create a dir in root dir, should succeed
+    String dirA = "/A";
+    String dirB = "/B";
+
+    String fileA = "/a";
+    String fileB = "/b";
+
+    try {
+      FsPermission fsP = FsPermission.getDefault();
+      dfs.mkdir(new Path(dirA), fsP);
+      dfs.mkdir(new Path(dirB), fsP);
+      dfs.mkdirs(new Path(dirB + "/B1/B2/B3"), fsP);
+
+      DFSTestUtil.writeFile(dfs, new Path(dirA + fileA), "hello world");

Review comment:
       This should be `new Path(dirA, fileA)`.
   You don't want to deal with multi-OS file delimiters here.
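
   For illustration, a small sketch of the difference (note the child would
need to be a relative name, e.g. `"a"` rather than `"/a"`, for the
two-argument constructor to resolve under the parent):

```java
import org.apache.hadoop.fs.Path;

// Suggested form: let Path join parent and child, so no separator is
// hand-built into the test strings.
Path joined = new Path("/A", "a");         // -> /A/a
// String concatenation only works when the child literal happens to
// start with '/':
Path concatenated = new Path("/A" + "/a"); // -> /A/a, but fragile
```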

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSMkdirs.java
##########
@@ -152,4 +154,80 @@ public void testMkdirRpcNonCanonicalPath() throws IOException {
       cluster.shutdown();
     }
   }
+
+  @Test
+  public void testMkDirsWithRestart() throws IOException {
+    MiniDFSCluster cluster =
+        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    DistributedFileSystem dfs = cluster.getFileSystem();
+    try {
+      // Create a dir in root dir, should succeed
+      assertTrue(dfs.mkdir(new Path("/mkdir-1"), FsPermission.getDefault()));
+      dfs.mkdir(new Path("/mkdir-2"), FsPermission.getDefault());
+      dfs.mkdir(new Path("/mkdir-3"), FsPermission.getDefault());
+      DFSTestUtil.writeFile(dfs, new Path("/mkdir-1/file1"), "hello world");
+      cluster.restartNameNodes();
+      dfs = cluster.getFileSystem();
+      assertTrue(dfs.exists(new Path("/mkdir-1")));
+      assertTrue(dfs.exists(new Path("/mkdir-2")));
+      assertTrue(dfs.exists(new Path("/mkdir-3")));
+      assertTrue(dfs.exists(new Path("/mkdir-1/file1")));
+    } finally {
+      dfs.close();
+      cluster.shutdown();
+    }
+  }
+
+  @Test
+  public void testMkdirWithDelete() throws IOException {

Review comment:
       Can you combine these two tests into one? First assert the delete
logic, then restart the NN and check the existence of `mkdir-?`.
   The problem with TestDFSMkdirs is that it creates a MiniDFSCluster for
each test case. We don't want to refactor it on the branch, but we would
prefer to minimize replicating the wrong pattern.
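
   As a rough, hypothetical sketch of the combined test (one cluster: the
delete assertions first, then a NameNode restart and existence checks;
names and exact assertions are placeholders, not the final patch):

```java
@Test
public void testMkdirsWithDeleteAndRestart() throws IOException {
  MiniDFSCluster cluster =
      new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
  DistributedFileSystem dfs = cluster.getFileSystem();
  try {
    FsPermission fsP = FsPermission.getDefault();
    assertTrue(dfs.mkdir(new Path("/mkdir-1"), fsP));
    assertTrue(dfs.mkdirs(new Path("/B/B1/B2/B3"), fsP));
    DFSTestUtil.writeFile(dfs, new Path("/mkdir-1", "file1"), "hello world");

    // First exercise the delete logic on the nested tree.
    assertTrue(dfs.delete(new Path("/B/B1"), true));
    assertFalse(dfs.exists(new Path("/B/B1/B2/B3")));

    // Then restart the NameNode and verify the namespace replays correctly.
    cluster.restartNameNodes();
    dfs = cluster.getFileSystem();
    assertTrue(dfs.exists(new Path("/mkdir-1")));
    assertTrue(dfs.exists(new Path("/mkdir-1/file1")));
    assertFalse(dfs.exists(new Path("/B/B1")));
  } finally {
    dfs.close();
    cluster.shutdown();
  }
}
```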




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 636199)
    Time Spent: 1h 20m  (was: 1h 10m)

> [FGL] Address permission related issues with File / Directory
> -------------------------------------------------------------
>
>                 Key: HDFS-16141
>                 URL: https://issues.apache.org/jira/browse/HDFS-16141
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Renukaprasad C
>            Assignee: Renukaprasad C
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Post FGL implementation (MKDIR & Create File), some existing UTs were
> impacted and need to be addressed.
> Failed Tests:
> TestDFSPermission
> TestPermission
> TestFileCreation
> TestDFSMkdirs (Added tests)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
