the-other-tim-brown commented on code in PR #6661:
URL: https://github.com/apache/hudi/pull/6661#discussion_r1008922752
##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/S3EventsHoodieIncrSource.java:
##########
@@ -213,15 +216,27 @@ public Pair<Option<Dataset<Row>>, String> fetchNextBatch(Option<String> lastCkpt
         }
       });
       return cloudFilesPerPartition.iterator();
-    }, Encoders.STRING()).collectAsList();
+    }, Encoders.STRING());
     Option<Dataset<Row>> dataset = Option.empty();
+    cloudFiles.cache();
     if (!cloudFiles.isEmpty()) {
-      DataFrameReader dataFrameReader = getDataFrameReader(fileFormat);
-      dataset = Option.of(dataFrameReader.load(cloudFiles.toArray(new String[0])));
+      JavaRDD<Dataset<Row>> datasetIterator = cloudFiles.javaRDD().flatMap(new FlatMapFunction<String, Dataset<Row>>() {
+        @Override
+        public Iterator<Dataset<Row>> call(String cloudFile) throws Exception {
+          return Collections.singletonList(getDataFromFile(cloudFile, fileFormat)).iterator();
+        }
+      });
+      Dataset<Row> finalDataset = datasetIterator.fold(sparkSession.emptyDataFrame(), (Function2<Dataset<Row>, Dataset<Row>, Dataset<Row>>) (v1, v2) -> v1.union(v2));
+      dataset = Option.of(finalDataset);
     }
-    LOG.debug("Extracted distinct files " + cloudFiles.size()
-        + " and some samples " + cloudFiles.stream().limit(10).collect(Collectors.toList()));
+    LOG.debug("Extracted distinct files " + cloudFiles.count()
Review Comment:
I think this will always evaluate the count and collect the first ten
elements to build the string before the value is passed to `LOG.debug`. In
this case we should check whether debug logging is enabled before doing that
work, to avoid the extra overhead.
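To illustrate the suggestion: a minimal, self-contained sketch of the guard pattern using `java.util.logging` (the Hudi source uses its own `LOG` with the analogous `isDebugEnabled()` check; the class, file list, and `expensiveSummary` helper below are hypothetical stand-ins for the `count()`/sample collection in the diff):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DebugGuardDemo {
  static final Logger LOG = Logger.getLogger(DebugGuardDemo.class.getName());
  // Counts how often the expensive summary is actually computed.
  static final AtomicInteger expensiveCalls = new AtomicInteger();

  // Stand-in for the costly work in the diff (a Spark count() plus
  // collecting sample elements) that should only run when debug is on.
  static String expensiveSummary(List<String> files) {
    expensiveCalls.incrementAndGet();
    return "Extracted distinct files " + files.size()
        + " and some samples " + files.subList(0, Math.min(10, files.size()));
  }

  public static void main(String[] args) {
    List<String> files = List.of("s3://bucket/a.parquet", "s3://bucket/b.parquet");

    // Unguarded call (commented out): the argument expression, and thus the
    // expensive computation, would be evaluated even with debug disabled.
    // LOG.fine(expensiveSummary(files));

    // Guarded call: the summary is computed only when the level is enabled.
    // With log4j/slf4j-style loggers the equivalent guard is
    // `if (LOG.isDebugEnabled()) { ... }`.
    if (LOG.isLoggable(Level.FINE)) {
      LOG.fine(expensiveSummary(files));
    }
    System.out.println("expensive calls: " + expensiveCalls.get());
  }
}
```

With the default JUL level (INFO), the guard skips the computation entirely, whereas the unguarded form would pay for it on every invocation.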
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]