the-other-tim-brown commented on code in PR #6661:
URL: https://github.com/apache/hudi/pull/6661#discussion_r973866219


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/S3EventsHoodieIncrSource.java:
##########
@@ -217,11 +220,18 @@ public Pair<Option<Dataset<Row>>, String> fetchNextBatch(Option<String> lastCkpt
 
     Option<Dataset<Row>> dataset = Option.empty();
     if (!cloudFiles.isEmpty()) {
-      DataFrameReader dataFrameReader = getDataFrameReader(fileFormat);
-      dataset = Option.of(dataFrameReader.load(cloudFiles.toArray(new String[0])));
+      JavaRDD<Dataset<Row>> datasetIterator = sparkContext.parallelize(cloudFiles, cloudFiles.size()).flatMap(

Review Comment:
   Why do we need to collect the list of paths and then turn it back into an RDD, instead of keeping the paths in their distributed form? Instead of calling `collectAsList()` on line 219, could we just leave that as a `Dataset<String>`?
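For context, the contrast the reviewer is drawing could be sketched roughly as follows. This is a hypothetical illustration, not the actual Hudi code: the names `spark`, `fileFormat`, and `cloudFiles` are assumptions, and the sketch needs a Spark runtime to execute.

```java
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CloudFilesSketch {

  // Pattern in the PR: the paths were already collected to the driver
  // (via collectAsList()), then re-parallelized into an RDD before reading.
  static Dataset<Row> loadCollected(SparkSession spark, String fileFormat,
                                    List<String> cloudFiles) {
    return spark.read().format(fileFormat)
        .load(cloudFiles.toArray(new String[0]));
  }

  // Pattern the reviewer asks about: keep the paths as a Dataset<String>
  // as long as possible. Note that DataFrameReader.load(String...) still
  // takes driver-side strings, so a collect happens at the read boundary
  // either way -- the question is whether the extra collect-then-parallelize
  // round trip in the PR buys anything.
  static Dataset<Row> loadDistributed(SparkSession spark, String fileFormat,
                                      Dataset<String> cloudFiles) {
    List<String> paths = cloudFiles.collectAsList();
    return spark.read().format(fileFormat)
        .load(paths.toArray(new String[0]));
  }
}
```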



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
