rdblue commented on issue #24738: [WIP][SPARK-23098][SQL] Migrate Kafka Batch source to v2. URL: https://github.com/apache/spark/pull/24738#issuecomment-503360951

> When the metadata columns are maintained and when they are dropped out?

I think that metadata columns are not reported as part of the schema, but can be projected. `SELECT *` would not produce them, but `SELECT *, offset()` would.

> Are these columns being considered as well when we apply deduplication (distinct) or any operations considering all columns?

This depends on how you apply the distinct operator. If you've run something like this, I think they would be included:

```
spark.table("kafka.topic").selectExpr("key", "value", "topic", "partition()", "offset()").distinct
```

> Do you think about providing "topic" as metadata and let sink accepts metadata and deal with?

I would rather have the topic as metadata, but then I think it would be hard to support writing to multiple topics.

Another aspect to consider is what to do when Kafka topics have associated schemas. I haven't looked into this integration for Spark much, but I think that we would want to expose the current schema of the topic instead of key/value bytes. But for that we could build our own Kafka source, I guess.
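The projection semantics described above can be sketched in plain Python (this is an illustration of the proposed behavior, not Spark code; the column names and the `project` helper are hypothetical):

```python
# Sketch of the proposed semantics: metadata columns are not part of the
# reported schema, so "*" excludes them, but they can be projected by name.
# SCHEMA, METADATA, and project() are illustrative, not Spark APIs.

SCHEMA = ["key", "value"]                    # columns reported by the source
METADATA = {"topic", "partition", "offset"}  # hidden but projectable metadata

def project(row, columns):
    """Expand '*' to the schema only; metadata must be requested explicitly."""
    out = {}
    for col in columns:
        if col == "*":
            for c in SCHEMA:
                out[c] = row[c]
        elif col in SCHEMA or col in METADATA:
            out[col] = row[col]
        else:
            raise KeyError(col)
    return out

row = {"key": b"k", "value": b"v", "topic": "t", "partition": 0, "offset": 42}

print(project(row, ["*"]))            # metadata columns excluded
print(project(row, ["*", "offset"]))  # metadata column projected explicitly
```

Under this model, a `distinct` applied after projecting the metadata columns would naturally include them in the comparison, since they are ordinary columns of the projected result.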
