HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-736473108
Thanks, all, for the great review efforts, and thanks @gaborgsomogyi for the
great contribution! I merged this to master.
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-736470062
GitHub Actions passed. I haven't found any new review comments other than mine.
I'll merge this now to ensure it gets included in 3.1.0.
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-735677230
@zsxwing @viirya @xuanyuanking
I'd appreciate the next round of review. Thanks. As this was proposed a
couple of months ago and we all agree on the direction, I'll
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-735020039
I see there's no major change in either `KafkaOffsetReaderConsumer` or
`KafkaOffsetReaderAdmin`. In this case I'd be OK with refactoring here as well,
though I'd just change
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-731923628
@gaborgsomogyi
I'd really like to see this included in Spark 3.1. Could you please work on
applying the suggestion? If you don't have time (or if you'd like me to give it a try
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-723316632
Once we pick option 1, I'd say we should just allow the redundancy between the
offset reader and the consumer strategy (having different classes with all the
content copied), so
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-717622023
@zsxwing @viirya @xuanyuanking
Could you please go through another round of review? If there are no further
comments within a couple of days, I'll merge this in.
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-713554471
My suggestion assumes the case where AdminClient requires Kafka 1.x. If
AdminClient (at least the methods we call) works with Kafka 0.11,
then I'm happy to
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-709675486
@zsxwing Thanks for raising that good point. It is really something we
should consider. Personally, I wouldn't mind if it's not compatible with
0.10/0.11, as 1.0 was released
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-706655853
@zsxwing @viirya
Would you mind doing another round of review? It looks like the only remaining
concern is the security aspect, which is most likely a documentation issue. (As the
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-702711824
If I understand correctly, overall (across the driver and executors), it
shouldn't change the permission requirements; there are three distinct
operations described in
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-699721791
@gaborgsomogyi
I'm sorry, but if you don't mind, it would be really appreciated if we could
verify that there's no change in the security requirements for using AdminClient. We may
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-699721072
@zsxwing @gaborgsomogyi
Let's talk about the remaining tasks to move this forward. I see the change
in security requirements is probably a breaking change, but
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-699269552
Worth noting that the issue doesn't occur just in theory; I've seen
the case multiple times in community reports, from customers, etc. We'd
probably feel better to
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-696425757
The Kafka consumer in executors also uses assign, and Kafka checks group ID
authorization even with assign, although group.id is not needed at all.
There's an interesting
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-695899573
As I commented earlier, I'll wait for a day to see whether there are any valid
concerns, then merge this. Please comment if anyone needs more days to review.
Thanks!
cc.
HeartSaVioR commented on pull request #29729:
URL: https://github.com/apache/spark/pull/29729#issuecomment-693198878
Looks OK; the only remaining item is the cleanup of the unnecessary workarounds
for `consumer.poll` as a new commit.
cc. @zsxwing