neilramaswamy commented on PR #46863: URL: https://github.com/apache/spark/pull/46863#issuecomment-2420148908
@unbalanced @rschwagercharter: I chatted with some other Spark folks and got some good clarification:

1. Spark tries to be cloud-agnostic. You would have a hard time getting an Amazon-specific or Azure-specific Kafka partition assigner into the Spark codebase.
2. Adding a configuration that requires an imperative plugin is considered a large new API surface. It would probably need a SPIP.

In the design doc's current form, where there are _no_ alternatives laid out, I don't see it having an easy time getting approval. It also doesn't address important Spark concerns, such as how this will work for Python and Spark Connect, which will certainly require an answer.
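For concreteness, here is a rough sketch of the kind of imperative plugin surface point 2 is referring to. The trait name, method signature, and example implementation below are hypothetical, not taken from this PR or the design doc; only `TopicPartition` is a real Kafka class.

```scala
import org.apache.kafka.common.TopicPartition

// Hypothetical plugin interface: lets users map each Kafka partition to
// preferred executor hosts. An extension point like this is imperative user
// code that Spark would have to support and evolve indefinitely, which is
// why it counts as a large new API surface rather than a config knob.
trait KafkaPartitionLocationAssigner {
  def preferredLocations(partition: TopicPartition, executors: Seq[String]): Seq[String]
}

// Hypothetical rack/zone-aware implementation -- exactly the kind of
// provider-specific logic point 1 says would be hard to land in Spark itself.
class RackAwareAssigner extends KafkaPartitionLocationAssigner {
  override def preferredLocations(
      partition: TopicPartition,
      executors: Seq[String]): Seq[String] = {
    // Toy policy: pin partitions to executors round-robin. A real assigner
    // would consult broker rack / availability-zone metadata here.
    if (executors.isEmpty) Seq.empty
    else Seq(executors(partition.partition() % executors.size))
  }
}
```

The configuration half would then be something like an option on the Kafka source naming the implementation class (e.g. `.option("partitionLocationAssignerClass", "com.example.RackAwareAssigner")`, a made-up option name). That is also why the Python and Spark Connect questions matter: those users have no straightforward way to supply a JVM class like this.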
