Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/7229#discussion_r34311348
--- Diff: docs/streaming-programming-guide.md ---
@@ -854,6 +854,8 @@ it with new information. To use this, you will have to do two steps.
1. Define the state update function - Specify with a function how to update the state using the previous state and the new values from an input stream.
+Spark will run the `updateStateByKey` update function for all existing keys, regardless of whether they have new data in a batch or not. If the update function returns `None` then the key-value pair will be eliminated.
--- End diff --
This whole section is about `updateStateByKey`, so saying it again here is
superfluous. Just "run the update function". Also, I would clarify further: "In
every batch, Spark will apply the update function for all... ". It wasn't clear
whether it applied to every batch or overall.
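
For reference, here is a minimal Scala sketch (not part of this PR) of the behavior being documented: the update function is invoked in every batch for every key already in the state, even when `newValues` is empty, and returning `None` removes that key-value pair. The socket source, batch interval, and idle-batch threshold are assumptions chosen only for illustration.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UpdateStateByKeySketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("UpdateStateByKeySketch"), Seconds(10))
    ssc.checkpoint("checkpoint") // stateful operations require a checkpoint directory

    // Hypothetical source: words arriving on a local socket.
    val pairs = ssc.socketTextStream("localhost", 9999)
      .flatMap(_.split(" "))
      .map(word => (word, 1))

    // State per key: (running count, consecutive batches with no new data).
    // In every batch this is called for all keys currently in the state,
    // even those with no new values in that batch (newValues is then empty).
    val updateFunction = (newValues: Seq[Int], state: Option[(Int, Int)]) => {
      val (count, idle) = state.getOrElse((0, 0))
      if (newValues.nonEmpty) Some((count + newValues.sum, 0))
      else if (idle < 10) Some((count, idle + 1))
      else None // returning None eliminates the key-value pair from the state
    }

    val counts = pairs.updateStateByKey[(Int, Int)](updateFunction)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```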