carp84 commented on code in PR #4493:
URL: https://github.com/apache/hbase/pull/4493#discussion_r895381506


##########
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java:
##########
@@ -285,6 +285,31 @@ public void testGetAll() throws IOException, InterruptedException {
       + htds.getAll().size(), count + 1, htds.getAll().size());
   }
 
+  @Test
+  public void testParallelGetAll() throws IOException, InterruptedException {
+    final String name = "testParallelGetAll";
+    FileSystem fs = FileSystem.get(UTIL.getConfiguration());
+    // Enable parallel load table descriptor.
+    FSTableDescriptors htds = new FSTableDescriptorsTest(fs, testDir, true, 20);
+    final int count = 100;
+    // Write out table infos.
+    for (int i = 0; i < count; i++) {
+      htds.createTableDescriptor(
+        TableDescriptorBuilder.newBuilder(TableName.valueOf(name + i)).build());
+    }
+    // add hbase:meta
+    htds
+      .createTableDescriptor(TableDescriptorBuilder.newBuilder(TableName.META_TABLE_NAME).build());
+    assertEquals("getAll() didn't return all TableDescriptors, expected: " + (count + 1) + " got: "
+      + htds.getAll().size(), count + 1, htds.getAll().size());

Review Comment:
   There are two ways `getAll` could fail to work as expected: the cold run, before the cache is populated, could return a wrong result, or the cache itself could misbehave on subsequent calls. The current test does not fully cover both cases, so please double-check the suggested change.
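   The reviewer's two failure modes can be made concrete with a small, self-contained sketch (plain Java, no HBase dependencies; `CachedStore`, its fields, and the backing map are hypothetical stand-ins, not HBase APIs): assert on the first, cold `getAll()` call and then assert again on the warm, cached path, while counting how many times the backing store was actually read.

   ```java
   import java.util.Map;
   import java.util.TreeMap;
   import java.util.concurrent.atomic.AtomicInteger;

   // Hypothetical stand-in for a descriptor store with a lazy cache. It lets a
   // test exercise both paths the reviewer names: the cold (uncached) read and
   // the warm (cached) read.
   class CachedStore {
     private final Map<String, String> backing;
     private Map<String, String> cache; // null until the first getAll()
     final AtomicInteger coldLoads = new AtomicInteger();

     CachedStore(Map<String, String> backing) {
       this.backing = backing;
     }

     Map<String, String> getAll() {
       if (cache == null) {
         coldLoads.incrementAndGet();
         cache = new TreeMap<>(backing); // cold path: read the backing store
       }
       return cache; // warm path: served from the cache
     }
   }

   public class ColdVsWarmTest {
     public static void main(String[] args) {
       Map<String, String> backing = new TreeMap<>();
       for (int i = 0; i < 100; i++) {
         backing.put("table" + i, "descriptor" + i);
       }
       CachedStore store = new CachedStore(backing);

       // First call must be correct on the cold path (no cache yet).
       int cold = store.getAll().size();
       // Second call must return the same view from the cache.
       int warm = store.getAll().size();

       System.out.println(cold + " " + warm + " " + store.coldLoads.get());
     }
   }
   ```

   Printing the cold count, the warm count, and the number of backing-store reads makes a regression on either path visible: a wrong cold result changes the first number, a broken cache changes the second or the load counter.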



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
