nsivabalan commented on code in PR #5091:
URL: https://github.com/apache/hudi/pull/5091#discussion_r962265102
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##########
@@ -92,6 +100,52 @@ public void testTableOperations() throws Exception {
verifyBaseMetadataTable();
}
+ @Test
+ public void testMultiReaderForHoodieBackedTableMetadata() throws Exception {
+ final int taskNumber = 100;
+ HoodieTableType tableType = HoodieTableType.COPY_ON_WRITE;
+ init(tableType);
+ testTable.doWriteOperation("000001", INSERT, emptyList(), asList("p1"), 1);
+ HoodieBackedTableMetadata tableMetadata = new HoodieBackedTableMetadata(context, writeConfig.getMetadataConfig(), writeConfig.getBasePath(), writeConfig.getSpillableMapBasePath(), false);
+ assertTrue(tableMetadata.enabled());
+ List<String> metadataPartitions = tableMetadata.getAllPartitionPaths();
+ String partition = metadataPartitions.get(0);
+ String finalPartition = basePath + "/" + partition;
+ ArrayList<String> duplicatedPartitions = new ArrayList<>(taskNumber);
+ for (int i = 0; i < taskNumber; i++) {
+ duplicatedPartitions.add(finalPartition);
+ }
+ ExecutorService executors = Executors.newFixedThreadPool(taskNumber);
+ AtomicBoolean flag = new AtomicBoolean(false);
+ AtomicInteger count = new AtomicInteger(0);
+ AtomicInteger filesNumber = new AtomicInteger(0);
+
+ for (String part : duplicatedPartitions) {
+ executors.submit(new Runnable() {
+ @Override
+ public void run() {
+ try {
+ count.incrementAndGet();
+ while (true) {
+ if (count.get() == taskNumber) {
+ break;
+ }
+ }
Review Comment:
Should we add a CountDownLatch here so that all threads call
tableMetadata.getAllFilesInPartition() at around the same time? That way the
test would be deterministic.
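
A minimal sketch of that latch wiring, reusing the names from the diff above (taskNumber, duplicatedPartitions, executors, tableMetadata, flag, filesNumber); the exact structure here is illustrative, not taken from the PR:

```java
// Assumed imports: java.util.concurrent.CountDownLatch,
// org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.Path
CountDownLatch readyLatch = new CountDownLatch(taskNumber); // counted down once per worker
CountDownLatch startLatch = new CountDownLatch(1);          // released once by the test thread

for (String part : duplicatedPartitions) {
  executors.submit(() -> {
    try {
      readyLatch.countDown(); // announce this worker is scheduled and ready
      startLatch.await();     // block until every worker is released together
      FileStatus[] files = tableMetadata.getAllFilesInPartition(new Path(part));
      filesNumber.addAndGet(files.length);
    } catch (Exception e) {
      flag.compareAndSet(false, true); // record that some reader failed
    }
  });
}

readyLatch.await();     // wait until all workers have reached the barrier
startLatch.countDown(); // release them: every thread hits getAllFilesInPartition() at once
```

This would replace the busy-wait on the AtomicInteger, so the concurrent window is enforced by the latch rather than by spin timing.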