Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/4805#issuecomment-77985091
Yes, that's the original purpose of my PR: users can monitor and reuse these
offsets, and we can also offer users functionality similar to the old Kafka
stream, resuming from the previous offset (if it is not out-of-range and
checkpointing is disabled). My concern is that this might break the
exactly-once semantics.
So let me rephrase how I think about it:
1. If checkpointing is not enabled, do we need to offer users the ability to
resume from the previous offset, or should we always start from the earliest
or latest offsets? If we want to offer functionality similar to the previous
Kafka stream, then I think committing offsets to Kafka is required, and keeping
a basic no-data-loss guarantee is necessary.
2. If checkpointing is enabled, these offsets should only be used for
monitoring; otherwise we will break the exactly-once semantics when reusing
them. So when checkpointing is enabled, we can update the offsets at any time,
because we no longer rely on them to guarantee no data loss.
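To make the two cases concrete, here is a minimal sketch of the start-offset
decision being described. This is illustrative only, not Spark or Kafka API:
the method name `resolveStartOffset`, its parameters, and the `Reset` enum are
all hypothetical, standing in for the checkpoint, committed-offset, and
auto-reset inputs discussed in points 1 and 2.

```java
import java.util.OptionalLong;

public class OffsetResolution {

    // Fallback policy when no usable offset exists (analogous to Kafka's
    // auto.offset.reset setting).
    enum Reset { EARLIEST, LATEST }

    /**
     * Decide where a (topic, partition) stream should start consuming.
     * Hypothetical helper; names do not correspond to real Spark/Kafka APIs.
     *
     * @param checkpointed offset recovered via Spark checkpointing, if enabled
     * @param committed    offset previously committed to Kafka/ZK by this consumer
     * @param earliest     smallest offset the broker still retains
     * @param latest       largest offset available on the broker
     * @param reset        fallback policy when no usable offset exists
     */
    static long resolveStartOffset(OptionalLong checkpointed,
                                   OptionalLong committed,
                                   long earliest, long latest,
                                   Reset reset) {
        // Case 2: checkpointing enabled. The checkpoint is authoritative so
        // that exactly-once semantics are preserved; the committed offsets
        // serve monitoring only.
        if (checkpointed.isPresent()) {
            return checkpointed.getAsLong();
        }
        // Case 1: no checkpoint. Resume from the committed offset if it is
        // still within the broker's retained range (i.e. not out-of-range),
        // giving the old-Kafka-stream behavior without data loss.
        if (committed.isPresent()) {
            long off = committed.getAsLong();
            if (off >= earliest && off <= latest) {
                return off;
            }
        }
        // Otherwise fall back to the configured reset policy.
        return reset == Reset.EARLIEST ? earliest : latest;
    }
}
```

For example, with checkpointing enabled the committed offset is ignored, while
without checkpointing a committed offset that has aged out of the broker's
retention falls back to the reset policy.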