Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/15102
> PR with failing test indicating at least one reason why it's wrong from
an end-user perspective:
@koeninger Thanks for writing the test. Yes, we are aware of this issue.
However, it's unlikely that we can support deleting topics using the current
Source API. You can take a look at how StreamExecution checks the new data
here:
https://github.com/apache/spark/blob/976f3b1227c1a9e0b878e010531285fdba57b6a7/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala#L320
Comparing offsets by their hash codes has a potential issue: the latest
offset may compare smaller than an old offset (e.g., after a topic is
deleted), in which case StreamExecution will never process the new data.
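A minimal sketch of the failure mode (hypothetical `SimpleOffset` class,
not Spark's actual code), where offsets are ordered by a derived numeric
value standing in for a hash code:

```scala
// Hypothetical offset: a map of topic-partition -> position, ordered by
// the sum of positions (a stand-in for any derived ordering such as a
// hash code). This is NOT Spark's actual Offset implementation.
case class SimpleOffset(positions: Map[String, Long]) {
  def total: Long = positions.values.sum
}

object OffsetDemo {
  def main(args: Array[String]): Unit = {
    // Two topics, each partially consumed.
    val before = SimpleOffset(Map("t0-0" -> 100L, "t1-0" -> 50L))
    // Topic t1 is deleted; meanwhile t0 received 20 more records.
    val after = SimpleOffset(Map("t0-0" -> 120L))
    // The "latest" offset now compares smaller than the old one, so a
    // check like `after.total > before.total` concludes there is no new
    // data, and the 20 new records on t0 are never processed.
    println(after.total > before.total)
  }
}
```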
I think one possible solution is for StreamExecution not to compare
offsets at all: instead, it would assume `getOffset` always returns the
latest offset and would never roll back to an old one. This needs more
discussion anyway, so I suggest we don't block this PR on it. Deleting
topics can be supported in a later PR once we reach agreement on how to
resolve the issue.
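A hypothetical sketch of that alternative (names are illustrative, not
Spark's API): the engine triggers a batch whenever the offset returned by
the source differs from the last committed one, trusting the source never
to hand back a stale offset, rather than requiring the new offset to
compare greater.

```scala
// Sketch only: "any change means new data" instead of "greater means
// new data". The Offset type stands in for a serialized source offset.
object NoCompareDemo {
  type Offset = String

  // New data exists iff the source reports an offset that differs from
  // the last committed offset (or reports one when none was committed).
  def hasNewData(latest: Option[Offset], committed: Option[Offset]): Boolean =
    latest.exists(l => !committed.contains(l))

  def main(args: Array[String]): Unit = {
    // A topic deletion shrinks the offset, but it still differs, so the
    // batch runs and the remaining topics' new records are processed.
    println(hasNewData(Some("t0:120"), Some("t0:100,t1:50")))
    // Unchanged offset: no batch.
    println(hasNewData(Some("t0:120"), Some("t0:120")))
  }
}
```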