Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15702#discussion_r86203400
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -536,6 +535,41 @@ class Dataset[T] private[sql](
}
/**
+ * :: Experimental ::
+ * Defines an event time watermark for this [[Dataset]]. A watermark tracks a point in time
+ * before which we assume no more late data is going to arrive.
+ *
+ * Spark will use this watermark for several purposes:
+ *  - To know when a given time window aggregation can be finalized and thus can be emitted when
+ *    using output modes that do not allow updates.
+ *  - To minimize the amount of state that we need to keep for on-going aggregations.
+ *
+ * The current event time is computed by looking at the `MAX(eventTime)` seen in an epoch across
--- End diff ---
Changed to watermark. For epoch, I really just mean "during some period of
time where we decide to coordinate across the partitions". This happens at
batch boundaries now, but that is not part of the contract we are promising. I
just removed that word to avoid confusion.
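
For context, here is a minimal usage sketch (not part of the diff above) of the `withWatermark` API this doc comment describes, taking a column name and a delay threshold. It assumes a Spark build that includes the built-in "rate" streaming source; the app name, rows-per-second setting, and window size are illustrative, not from this PR.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

val spark = SparkSession.builder().appName("WatermarkSketch").getOrCreate()
import spark.implicits._

// The built-in "rate" source emits (timestamp, value) rows; we use its
// timestamp column as the event time purely for illustration.
val events = spark.readStream
  .format("rate")
  .option("rowsPerSecond", "10")
  .load()

// Declare that data arriving more than 10 minutes behind the max event time
// seen so far is considered late; state for windows older than the watermark
// can then be dropped.
val counts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "5 minutes"))
  .count()

// In append mode a window's count is emitted only once the watermark has
// passed the end of that window, i.e. the aggregation is finalized.
counts.writeStream
  .outputMode("append")
  .format("console")
  .start()
  .awaitTermination()
```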