carp84 commented on code in PR #4493:
URL: https://github.com/apache/hbase/pull/4493#discussion_r893331890
##########
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java:
##########
@@ -285,6 +285,25 @@ public void testGetAll() throws IOException, InterruptedException {
+ htds.getAll().size(), count + 1, htds.getAll().size());
}
+ @Test
+ public void testGetAllByParallel() throws IOException, InterruptedException {
Review Comment:
```suggestion
public void testParallelGetAll() throws IOException, InterruptedException {
```
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java:
##########
@@ -108,10 +118,23 @@ public FSTableDescriptors(final FileSystem fs, final Path rootdir) {
   public FSTableDescriptors(final FileSystem fs, final Path rootdir, final boolean fsreadonly,
     final boolean usecache) {
+ this(fs, rootdir, fsreadonly, usecache, 0);
+ }
+
+  public FSTableDescriptors(final FileSystem fs, final Path rootdir, final boolean fsreadonly,
+ final boolean usecache, final int tableDescriptorParallelLoadThreads) {
this.fs = fs;
this.rootdir = rootdir;
this.fsreadonly = fsreadonly;
this.usecache = usecache;
+ if (tableDescriptorParallelLoadThreads > 0) {
+ tableDescriptorParallelLoadEnable = true;
+ executor = new ThreadPoolExecutor(tableDescriptorParallelLoadThreads,
Review Comment:
The executor is never shut down after initialization. It's better to shut it down gracefully when the server is stopped. We apply similar logic to the `consumeExecutor` in `AsyncFSWAL`.
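A minimal sketch of the graceful-shutdown pattern being suggested (class and method names here are illustrative, not the PR's actual code): stop accepting new tasks, wait briefly for in-flight loads, then force termination.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: an owner of a descriptor-loading pool that shuts it
// down gracefully when the owner itself is closed.
public class ExecutorShutdownSketch {
  private final ThreadPoolExecutor executor;

  public ExecutorShutdownSketch(int threads) {
    // Same general shape as the pool created in the constructor above.
    this.executor = new ThreadPoolExecutor(threads, threads, 1, TimeUnit.SECONDS,
      new SynchronousQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
  }

  /** Gracefully stop the pool: reject new tasks, wait briefly, then force. */
  public void close() {
    executor.shutdown();
    try {
      if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
        executor.shutdownNow(); // force termination if tasks are stuck
      }
    } catch (InterruptedException e) {
      executor.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }

  public boolean isShutdown() {
    return executor.isShutdown();
  }

  public static void main(String[] args) {
    ExecutorShutdownSketch s = new ExecutorShutdownSketch(2);
    s.close();
    System.out.println(s.isShutdown()); // true once close() has run
  }
}
```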
##########
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java:
##########
@@ -285,6 +285,25 @@ public void testGetAll() throws IOException, InterruptedException {
+      htds.getAll().size(), count + 1, htds.getAll().size());
   }
+  @Test
+  public void testGetAllByParallel() throws IOException, InterruptedException {
+    final String name = "testGetAllByParallel";
+    FileSystem fs = FileSystem.get(UTIL.getConfiguration());
+    // Enable parallel load table descriptor.
+    FSTableDescriptors htds = new FSTableDescriptorsTest(fs, testDir, true, 20);
+    final int count = 100;
+    // Write out table infos.
+    for (int i = 0; i < count; i++) {
+      htds.createTableDescriptor(TableDescriptorBuilder.newBuilder(TableName.valueOf(name + i)).build());
+    }
+    // add hbase:meta
+    htds.createTableDescriptor(TableDescriptorBuilder.newBuilder(TableName.META_TABLE_NAME).build());
+    assertEquals("getAll() didn't return all TableDescriptors, expected: " + (count + 1) + " got: "
+      + htds.getAll().size(), count + 1, htds.getAll().size());
Review Comment:
```suggestion
    int getTableDescriptorSize = htds.getAll().size();
    assertEquals("getAll() didn't return all TableDescriptors, expected: " + (count + 1) + " got: "
      + getTableDescriptorSize, count + 1, getTableDescriptorSize);
    // get again to check whether the cache works well
    getTableDescriptorSize = htds.getAll().size();
    assertEquals("getAll() didn't return all TableDescriptors with cache, expected: " + (count + 1)
      + " got: " + getTableDescriptorSize, count + 1, getTableDescriptorSize);
```
We also need to check whether the cache works well.
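For context, the fan-out/collect pattern that parallel descriptor loading typically follows can be sketched as below. This is a hypothetical illustration, not the PR's actual implementation; the directory names and the `loadAll` helper are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: submit one load task per table directory to a fixed
// pool and collect all results before returning.
public class ParallelLoadSketch {
  public static List<String> loadAll(List<String> tableDirs, int threads)
      throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<String>> futures = new ArrayList<>();
      for (String dir : tableDirs) {
        // In the real class this would read and parse the .tableinfo file.
        futures.add(pool.submit(() -> "descriptor-for-" + dir));
      }
      List<String> descriptors = new ArrayList<>();
      for (Future<String> f : futures) {
        descriptors.add(f.get()); // propagates any load failure
      }
      return descriptors;
    } finally {
      pool.shutdown(); // release the pool once loading is done
    }
  }

  public static void main(String[] args) throws Exception {
    List<String> dirs = List.of("t1", "t2", "t3");
    System.out.println(loadAll(dirs, 2).size()); // 3
  }
}
```

The second `getAll()` call in the suggested test exercises the cached path, which is why asserting the size twice is worthwhile.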
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.