HeartSaVioR commented on code in PR #39931:
URL: https://github.com/apache/spark/pull/39931#discussion_r1122424081
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##########
@@ -96,6 +98,25 @@ trait StateStoreReader extends StatefulOperator {
/** An operator that writes to a StateStore. */
trait StateStoreWriter extends StatefulOperator with PythonSQLMetrics { self: SparkPlan =>
+  /**
+   * Produce the output watermark for given input watermark (ms).
+   *
+   * In most cases, this is same as the criteria of state eviction, as most stateful operators
+   * produce the output from two different kinds:
+   *
+   * 1. without buffering
+   * 2. with buffering (state)
+   *
+   * The state eviction happens when event time exceeds a "certain threshold of timestamp", which
+   * denotes a lower bound of event time values for output (output watermark).
+   *
+   * The default implementation provides the input watermark as it is. Most built-in operators
+   * will evict based on min input watermark and ensure it will be minimum of the event time value
+   * for the output so far (including output from eviction). Operators which behave differently
+   * (e.g. different criteria on eviction) must override this method.
+   */
+  def produceOutputWatermark(inputWatermarkMs: Long): Option[Long] = Some(inputWatermarkMs)
Review Comment:
OK. I'll leave the context as a code comment. I agree this is missing and leads to some confusion.
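
For reference, here is a minimal standalone sketch of the contract being discussed; the object and method names below are made up for illustration, and only the `produceOutputWatermark` signature comes from this PR:

```scala
object ProduceOutputWatermarkSketch {
  // Default behavior added in this PR: the output watermark is the input watermark as-is.
  def defaultOutputWatermark(inputWatermarkMs: Long): Option[Long] =
    Some(inputWatermarkMs)

  // Hypothetical operator (not in this PR) whose state eviction lags the input watermark
  // by an extra delay; such an operator would override the default and report a smaller
  // output watermark, since its output may carry older event time values.
  def delayedOutputWatermark(inputWatermarkMs: Long, extraDelayMs: Long): Option[Long] =
    Some(math.max(0L, inputWatermarkMs - extraDelayMs))

  def main(args: Array[String]): Unit = {
    println(defaultOutputWatermark(10000L))        // Some(10000)
    println(delayedOutputWatermark(10000L, 3000L)) // Some(7000)
  }
}
```

The point is that the default is only valid when eviction is driven directly by the min input watermark; any operator with different eviction criteria has to push the reported output watermark back accordingly.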
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]