[https://issues.apache.org/jira/browse/SPARK-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15691098#comment-15691098]
Shixiong Zhu commented on SPARK-18552:
--------------------------------------
As per the current doc, "the actual watermark used is only guaranteed to be at
least `delayThreshold` behind the actual event time. In some cases we may
still process records that arrive more than `delayThreshold` late" -- so a
smaller (i.e., less advanced) watermark is safe. Right?
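For context, the `delayThreshold` being quoted is the argument to `Dataset.withWatermark` in Structured Streaming. A minimal sketch of the API in question (the column name, delay, and window size here are illustrative, not from this issue):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

val spark = SparkSession.builder.appName("watermark-sketch").getOrCreate()
import spark.implicits._

// The built-in "rate" source emits (timestamp, value) rows, convenient for testing.
val events = spark.readStream
  .format("rate")
  .load()

// The engine tracks max event time seen and keeps the watermark at least
// `delayThreshold` (here 10 minutes) behind it; records later than that
// *may* still be processed, per the doc text quoted above.
val counts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "5 minutes"))
  .count()
```

The guarantee is one-sided: the watermark never advances too far (dropping data it promised to keep), but a watermark that lags behind, e.g. because a sink skipped a batch as described in this issue, only means more late data is accepted and more state is retained than necessary.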
> Watermark should not rely on sinks to proceed
> ---------------------------------------------
>
> Key: SPARK-18552
> URL: https://issues.apache.org/jira/browse/SPARK-18552
> Project: Spark
> Issue Type: Bug
> Components: Structured Streaming
> Reporter: Liwei Lin
> Priority: Critical
>
> Today, for the watermark to be collected and to advance correctly, a sink
> must trigger the real execution of the dataset it receives in every batch.
> However, during the recovery process a sink might skip a batch (as in
> https://github.com/apache/spark/blob/v2.0.2/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/memory.scala#L202-L204),
> and then the watermark goes wrong.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)