xushiyan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1154116735
##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +129,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
throw new HoodieIOException(errMsg, ioe);
}
}
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    // check for null/empty before dereferencing cloudObjects in the log statement
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    LOG.debug("Extracted distinct files " + cloudObjects.size() + " and some samples "
+        + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o : cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;
Review Comment:
as discussed, this is just an estimation.
- input data files are usually vanilla parquet or other formats without hudi meta fields; in this case, 10% is a rough estimate for large record sizes. for small record sizes, where meta fields could take up to 80% of a record, this 10% buffer won't make things worse
- in the rare case that input data files are hudi parquet, this 10% buffer won't be much worse than the accurate estimate (0%)
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]