yihua commented on code in PR #8993:
URL: https://github.com/apache/hudi/pull/8993#discussion_r1233484864
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/utils/SparkValidatorUtils.java:
##########
@@ -116,16 +119,34 @@ private static CompletableFuture<Boolean> runValidatorAsync(SparkPreCommitValida
/**
   * Get the records from the modified partitions as a dataset.
* Note that this only works for COW tables.
+ *
+ * @param sqlContext Spark {@link SQLContext} instance.
+ * @param partitionsAffected A set of affected partitions.
+ * @param table {@link HoodieTable} instance.
+   * @param newStructTypeSchema The {@link StructType} schema of the after state.
+   * @return The records from the committed files as a {@link Dataset} of {@link Row}s.
*/
   public static Dataset<Row> getRecordsFromCommittedFiles(SQLContext sqlContext,
-                                                          Set<String> partitionsAffected, HoodieTable table) {
-
+                                                          Set<String> partitionsAffected,
+                                                          HoodieTable table,
+                                                          StructType newStructTypeSchema) {
List<String> committedFiles = partitionsAffected.stream()
         .flatMap(partition -> table.getBaseFileOnlyView().getLatestBaseFiles(partition).map(BaseFile::getPath))
.collect(Collectors.toList());
     if (committedFiles.isEmpty()) {
-      return sqlContext.emptyDataFrame();
+      try {
+        return sqlContext.createDataFrame(
+            sqlContext.emptyDataFrame().rdd(),
Review Comment:
We still want to return a `Dataset<Row>` with the schema here to run the SQL
query; otherwise, we cannot compare the SQL results between the before and
after states.
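
A minimal sketch of the idea behind this suggestion (illustrative only, not the PR's actual code): `emptyDataFrame()` returns a dataset with zero columns, so re-creating it against the "after" schema yields zero rows but the correct columns, letting the same SQL query run on both states. The `SparkSession`, class name, and field names below are hypothetical assumptions for the example.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class EmptyDataFrameWithSchema {
  public static void main(String[] args) {
    // Local session for the sketch; the PR itself works with an SQLContext.
    SparkSession spark = SparkSession.builder()
        .master("local[1]").appName("empty-df-with-schema").getOrCreate();

    // Hypothetical "after" schema that the validator's SQL query expects.
    StructType newStructTypeSchema = new StructType()
        .add("_hoodie_record_key", DataTypes.StringType)
        .add("value", DataTypes.LongType);

    // emptyDataFrame() alone has no columns; recreating it with the schema
    // produces an empty Dataset<Row> that still carries the expected columns.
    Dataset<Row> empty = spark.createDataFrame(
        spark.emptyDataFrame().javaRDD(), newStructTypeSchema);

    // The same query can now run against before and after states alike.
    empty.createOrReplaceTempView("after_state");
    long count = spark.sql("SELECT count(*) FROM after_state").first().getLong(0);
    System.out.println(count); // 0 rows, but with the expected columns
    spark.stop();
  }
}
```

Without the schema, a query such as `SELECT count(*) FROM after_state WHERE value > 0` would fail on the empty side with an unresolved-column error, which is why the reviewer asks for the schema-carrying empty dataset.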
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]