xushiyan commented on code in PR #9261:
URL: https://github.com/apache/hudi/pull/9261#discussion_r1311534951
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -222,6 +226,11 @@ object HoodieSparkSqlWriter {
     val shouldReconcileSchema = parameters(DataSourceWriteOptions.RECONCILE_SCHEMA.key()).toBoolean
     val latestTableSchemaOpt = getLatestTableSchema(spark, tableIdentifier, tableMetaClient)
+    val df = if (preppedWriteOperation || preppedSparkSqlWrites || preppedSparkSqlMergeInto) {
+      sourceDf
+    } else {
+      sourceDf.drop(HoodieRecord.HOODIE_META_COLUMNS: _*)
Review Comment:
Dropping the meta columns here caused a problem with HoodieStreamingSink: given
val sourceDF = spark.readStream.format("hudi").load(...) followed by
sourceDF.writeStream.format("hudi").start(...), the source DF is backed by a
streaming source. Dropping the meta columns from it and then writing the result
internally as a batch made Spark fail the assertion "Queries with streaming
sources must be executed with writeStream.start()".
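
A minimal repro sketch of the failing pattern, assuming an active SparkSession
named spark; the table paths, checkpoint location, and table name are
illustrative, not from the PR:

    // Read a Hudi table as a streaming source (path is illustrative).
    val sourceDF = spark.readStream
      .format("hudi")
      .load("/tmp/hudi/source_table")

    // HoodieStreamingSink hands each micro-batch to the batch write path.
    // With the meta-column drop applied to the still-streaming plan,
    // Spark's analyzer raises: "Queries with streaming sources must be
    // executed with writeStream.start()".
    sourceDF.writeStream
      .format("hudi")
      .option("checkpointLocation", "/tmp/hudi/checkpoints")
      .option("hoodie.table.name", "target_table")
      .start("/tmp/hudi/target_table")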
We should either keep df identical to sourceDF on this path when the write
comes through sourceDF.writeStream.format("hudi").start(), or require users to
route the write through sourceDF.writeStream.foreachBatch { ... }, as sketched
below.
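
A sketch of the foreachBatch workaround, under the same illustrative names;
required Hudi write options such as the record key field are elided. Inside
foreachBatch each micro-batch arrives as a plain batch DataFrame, so the batch
write path, meta-column drop included, runs without tripping the streaming
check:

    import org.apache.spark.sql.DataFrame

    // An explicitly typed function avoids the overloaded-foreachBatch
    // ambiguity on Scala 2.12.
    def writeBatch(batchDF: DataFrame, batchId: Long): Unit = {
      // batchDF is a regular batch DataFrame here, so a plain batch
      // write of it is legal.
      batchDF.write
        .format("hudi")
        .option("hoodie.table.name", "target_table")
        .mode("append")
        .save("/tmp/hudi/target_table")
    }

    sourceDF.writeStream
      .foreachBatch(writeBatch _)
      .option("checkpointLocation", "/tmp/hudi/checkpoints")
      .start()

For the first option, one possibility (a suggestion, not what this PR does)
is guarding the drop in HoodieSparkSqlWriter on sourceDf.isStreaming, a
standard Dataset API, so that df stays identical to sourceDF for streaming
writes.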
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]