gaborgsomogyi commented on a change in pull request #24738: [SPARK-23098][SQL] Migrate Kafka Batch source to v2.
URL: https://github.com/apache/spark/pull/24738#discussion_r298501951
 
 

 ##########
 File path: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSourceProvider.scala
 ##########
 @@ -353,15 +354,18 @@ private[kafka010] class KafkaSourceProvider extends DataSourceRegister
     }
   }
 
-  class KafkaTable(strategy: => ConsumerStrategy) extends Table
-    with SupportsRead with SupportsWrite {
+  class KafkaTable extends Table with SupportsRead with SupportsWrite {
 
-    override def name(): String = s"Kafka $strategy"
+    override def name(): String = "KafkaTable"
 
 Review comment:
   The strategy is not always available, and the same applies to the topic.
   The topic can also be provided as a column (in the write case); in that case the table covers multiple topics, which are only known while the rows are being processed.
   
   Strategy and topic could be added to the name conditionally, which would cover the situation most of the time.
   On the other hand, I don't think it's worth parsing the rows and updating the `KafkaTable` name accordingly: that would provide only partial information, and the implementation turned out quite complex compared to its value, so I'm not sure it has to be added (and that's why it wasn't).
   
   WDYT, should this be added?
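   For illustration, a conditional name along these lines could be sketched roughly like this (hypothetical Scala, not the PR code; it assumes the strategy and topic are carried as `Option`s, which is an assumption for the sketch):

   ```scala
   // Hypothetical sketch, not the actual PR implementation: compose the
   // table name from whatever parts are known at table-creation time,
   // falling back to the plain "KafkaTable" name when nothing is known.
   class KafkaTable(
       strategy: Option[String] = None,
       topic: Option[String] = None) {

     def name(): String = {
       val parts = Seq(
         strategy.map(s => s"strategy: $s"),
         topic.map(t => s"topic: $t")).flatten
       if (parts.isEmpty) "KafkaTable"
       else s"KafkaTable [${parts.mkString(", ")}]"
     }
   }
   ```

   This keeps the static fallback name when nothing is available, without attempting to derive topics from the rows themselves.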
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
