codope commented on code in PR #12376:
URL: https://github.com/apache/hudi/pull/12376#discussion_r1864857887
##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadata.java:
##########
@@ -265,18 +265,24 @@ protected Map<String, HoodieRecord<HoodieMetadataPayload>> getRecordsByKeys(List
     final int numFileSlices = partitionFileSlices.size();
     checkState(numFileSlices > 0, "Number of file slices for partition " + partitionName + " should be > 0");
-    // Parallel lookup for large sized partitions with many file slices
-    // Partition the keys by the file slice which contains it
-    ArrayList<ArrayList<String>> partitionedKeys = partitionKeysByFileSlices(keys, numFileSlices);
-    result = new HashMap<>(keys.size());
-    getEngineContext().setJobStatus(this.getClass().getSimpleName(), "Reading keys from metadata table partition " + partitionName);
-    getEngineContext().map(partitionedKeys, keysList -> {
-      if (keysList.isEmpty()) {
-        return Collections.<String, HoodieRecord<HoodieMetadataPayload>>emptyMap();
-      }
-      int shardIndex = HoodieTableMetadataUtil.mapRecordKeyToFileGroupIndex(keysList.get(0), numFileSlices);
-      return lookupKeysFromFileSlice(partitionName, keysList, partitionFileSlices.get(shardIndex));
-    }, partitionedKeys.size()).forEach(result::putAll);
+    // Lookup keys from each file slice
+    if (numFileSlices == 1) {
Review Comment:
Had to go back to doing things the old way because CI was getting stuck. More
details are in HUDI-8621.
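
For context, the removed parallel path first buckets the lookup keys so that each sub-list maps to exactly one file slice, then dispatches one lookup per bucket. A minimal sketch of that bucketing, assuming a hash-based mapper in the spirit of `HoodieTableMetadataUtil.mapRecordKeyToFileGroupIndex` (the class and method bodies here are illustrative stand-ins, not Hudi's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class KeyPartitioner {

  // Hypothetical stand-in for HoodieTableMetadataUtil.mapRecordKeyToFileGroupIndex:
  // deterministically maps a record key to a file-group index by hashing,
  // so the same key always resolves to the same file slice.
  static int mapRecordKeyToFileGroupIndex(String recordKey, int numFileGroups) {
    return Math.floorMod(recordKey.hashCode(), numFileGroups);
  }

  // Mirrors the role of the removed partitionKeysByFileSlices helper:
  // bucket keys so every sub-list targets a single file slice, allowing
  // the engine context to look up each bucket in parallel.
  static List<List<String>> partitionKeysByFileSlices(List<String> keys, int numFileSlices) {
    List<List<String>> partitioned = new ArrayList<>(numFileSlices);
    for (int i = 0; i < numFileSlices; i++) {
      partitioned.add(new ArrayList<>());
    }
    for (String key : keys) {
      partitioned.get(mapRecordKeyToFileGroupIndex(key, numFileSlices)).add(key);
    }
    return partitioned;
  }
}
```

Because the mapping is deterministic, every key in `partitioned.get(i)` resolves back to index `i`, which is why the removed code could derive the shard index from `keysList.get(0)` alone.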
--
This is an automated message from the Apache Git Service.