yihua commented on code in PR #11947:
URL: https://github.com/apache/hudi/pull/11947#discussion_r1802141309
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/streaming/HoodieStreamSource.scala:
##########
@@ -71,19 +69,6 @@ class HoodieStreamSource(
parameters.get(DataSourceReadOptions.QUERY_TYPE.key).contains(DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL)
&&
parameters.get(DataSourceReadOptions.INCREMENTAL_FORMAT.key).contains(DataSourceReadOptions.INCREMENTAL_FORMAT_CDC_VAL)
- /**
-  * When hollow commits are found while doing streaming read, unlike batch incremental query,
-  * we do not use [[HollowCommitHandling.FAIL]] by default, instead we use [[HollowCommitHandling.BLOCK]]
-  * to block processing data from going beyond the hollow commits to avoid unintentional skip.
-  *
-  * Users can set [[DataSourceReadOptions.INCREMENTAL_READ_HANDLE_HOLLOW_COMMIT]] to
-  * [[HollowCommitHandling.USE_TRANSITION_TIME]] to avoid the blocking behavior.
-  */
- private val hollowCommitHandling: HollowCommitHandling =
Review Comment:
Review Comment:
We still need to handle hollow commits if the source table is version 6 or below, so let's keep this for now and let HUDI-8359 fix it properly.
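For readers unfamiliar with the behavior the removed doc comment describes, here is a minimal standalone sketch of the default selection it documents: streaming reads default to `BLOCK`, batch incremental queries default to `FAIL`, and users may opt into `USE_TRANSITION_TIME`. The names and structure below are a simplified model for illustration, not Hudi's actual implementation:

```scala
// Simplified stand-in for Hudi's HollowCommitHandling enum (illustrative only).
object HollowCommitHandling extends Enumeration {
  val FAIL, BLOCK, USE_TRANSITION_TIME = Value
}

// Hypothetical helper modeling the documented defaulting behavior:
// an explicit user setting wins; otherwise streaming reads block at
// hollow commits while batch incremental queries fail fast.
def resolveHollowCommitHandling(userSetting: Option[String],
                                isStreaming: Boolean): HollowCommitHandling.Value =
  userSetting
    .map(HollowCommitHandling.withName)
    .getOrElse(if (isStreaming) HollowCommitHandling.BLOCK
               else HollowCommitHandling.FAIL)
```

In the real source, the user setting would come from `parameters.get(DataSourceReadOptions.INCREMENTAL_READ_HANDLE_HOLLOW_COMMIT.key)`.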