Github user ramkumarvenkat commented on a diff in the pull request:
https://github.com/apache/spark/pull/17037#discussion_r102671200
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -392,7 +392,7 @@ data, thus relieving the users from reasoning about it.
As an example, let's
see how this model handles event-time based processing and late arriving
data.
## Handling Event-time and Late Data
-Event-time is the time embedded in the data itself. For many applications,
you may want to operate on this event-time. For example, if you want to get the
number of events generated by IoT devices every minute, then you probably want
to use the time when the data was generated (that is, event-time in the data),
rather than the time Spark receives them. This event-time is very naturally
expressed in this model -- each event from the devices is a row in the table,
and event-time is a column value in the row. This allows window-based
aggregations (e.g. number of events every minute) to be just a special type of
grouping and aggregation on the even-time column -- each time window is a group
and each row can belong to multiple windows/groups. Therefore, such
event-time-window-based aggregation queries can be defined consistently on both
a static dataset (e.g. from collected device events logs) as well as on a data
stream, making the life of the user much easier.
--- End diff ---
Typo fix: "and aggregation on the `even-time column`" is changed to "and aggregation on the `event-time column`".
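For reference, the windowed aggregation that the quoted paragraph describes looks roughly like the following in the Scala DataFrame API. This is a minimal sketch: the source path, the schema, and the `device`/`timestamp` column names are illustrative assumptions, not taken from the guide.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object EventTimeWindowExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("EventTimeWindowExample")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical device-event schema: a `device` id plus an
    // event-time column `timestamp` embedded in the data itself.
    val schema = new StructType()
      .add("device", StringType)
      .add("timestamp", TimestampType)

    // The JSON source path is an assumption for this sketch.
    val events = spark.readStream
      .schema(schema)
      .json("/path/to/device-events")

    // Window-based aggregation as a special kind of grouping: each
    // 1-minute event-time window is a group, keyed on the time
    // embedded in the row rather than the time Spark received it.
    val counts = events
      .groupBy(window($"timestamp", "1 minute"), $"device")
      .count()

    // Stream the running counts to the console; "complete" mode is
    // used because the query aggregates.
    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```

The same `groupBy`/`count` code works unchanged on a static DataFrame read with `spark.read.json`, which is the consistency between static datasets and streams that the paragraph highlights.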