Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15102
> Either way, who are you to presume that a user doesn't know what she is
> doing when she configured a consumer to start at a particular position for
> an added partition?
I feel like we are talking past each other. We are not giving the
developer the option to manually configure a consumer in this way in this PR,
precisely because I don't think we can while still maintaining the semantics
that Structured Streaming has promised. This is consciously a higher-level API
than the one provided by DStreams. Neither of these is right or wrong. I do want
to support as many real use cases as possible in the Structured Streaming API,
but not at the expense of correctness for the queries that we do support. If
there are low-level use cases that we can't support and that require users to
use DStreams instead, that is okay, just as it is okay that some users still
program against RDDs.
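
For context, the distinction being drawn looks roughly like the sketch below. This is only an illustration, not code from this PR: the first half uses the existing `spark-streaming-kafka-0-10` `Assign` consumer strategy, where the caller pins per-partition starting offsets, while the second half uses the Structured Streaming Kafka source with the coarser options from the released API. The topic name, offsets, and app setup (`ssc`, `spark`) are made-up placeholders.

```scala
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Assign

// Placeholder contexts for the sketch.
val spark = SparkSession.builder.appName("kafka-api-sketch").getOrCreate()
val ssc   = new StreamingContext(spark.sparkContext, Seconds(10))

// --- DStreams (low-level): the caller decides exactly which partitions to
// read and the offset each one starts from.
val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "localhost:9092",
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "example-group")

val partitions  = Seq(new TopicPartition("events", 0), new TopicPartition("events", 1))
val fromOffsets = Map(
  new TopicPartition("events", 0) -> 42L,  // start partition 0 at offset 42
  new TopicPartition("events", 1) -> 0L)   // start partition 1 from the beginning

val dstream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Assign[String, String](partitions, kafkaParams, fromOffsets))

// --- Structured Streaming (higher-level): only a coarse starting position is
// exposed; the engine tracks per-partition offsets itself so the query keeps
// the semantics it promised.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .option("startingOffsets", "earliest")
  .load()
```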