Github user harishreedharan commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-71119114
I like this! I didn't try building it, but the logic is great!
So, to sum up the idea: the key detail here is that the checkpoint
contains the metadata needed to regenerate the RDDs, so the original ordering
and batch boundaries are recovered on restart. That looks good - it is the
same thing I was trying to see if we could do in the Kafka receiver, but it
would have been difficult there without some API changes.
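
To make that summary concrete, here is a minimal sketch of the recovery idea - not the PR's actual API. It assumes each batch RDD is fully determined by its Kafka offset ranges, so checkpointing just those ranges is enough to rebuild the identical batch; the names `OffsetRange`, `fetch`, and `regenerateBatch` are illustrative only:

```scala
// Sketch only: in the real connector, "fetch" would read exactly these
// offsets from the Kafka brokers rather than synthesize strings.
case class OffsetRange(topic: String, partition: Int,
                       fromOffset: Long, untilOffset: Long)

object CheckpointSketch {
  // Hypothetical fetch of the records covered by one offset range.
  def fetch(range: OffsetRange): Seq[String] =
    (range.fromOffset until range.untilOffset)
      .map(o => s"${range.topic}-${range.partition}@$o")

  // Regenerating a batch is a pure function of its checkpointed metadata,
  // so ordering and batch boundaries are preserved across a restart.
  def regenerateBatch(ranges: Seq[OffsetRange]): Seq[String] =
    ranges.flatMap(fetch)

  def main(args: Array[String]): Unit = {
    // Pretend these ranges were restored from a checkpoint after a failure.
    val checkpointed = Seq(
      OffsetRange("events", 0, 100L, 105L),
      OffsetRange("events", 1, 40L, 42L))
    regenerateBatch(checkpointed).foreach(println)
  }
}
```

The point of the sketch is that no received data needs to be persisted: the offset ranges alone deterministically identify the batch contents.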
That brings me to a question: in this PR, is the data pulled down from
Kafka only once per batch interval - say, every 2 seconds - or is there a way
to pull it continuously rather than in spikes?