HeartSaVioR commented on a change in pull request #27146: 
[SPARK-21869][SS][DOCS][FOLLOWUP] Document Kafka producer pool configuration
URL: https://github.com/apache/spark/pull/27146#discussion_r365122209
 
 

 ##########
 File path: docs/structured-streaming-kafka-integration.md
 ##########
 @@ -802,6 +802,31 @@ df.selectExpr("topic", "CAST(key AS STRING)", "CAST(value AS STRING)") \
 </div>
 </div>
 
+### Producer Caching
+
+Since the Kafka producer instance is designed to be thread-safe, Spark initializes a Kafka producer instance and shares it across tasks that have the same caching key.
+
+The caching key is built up from the following information:
 
 Review comment:
   Technically you're right, the cache key will contain the configuration, including the auth config Spark will inject. I didn't mention it because that starts explaining internals; end users may be fine knowing only the abstracted info.
   
   And the same configuration should map to the same cache entry, except in the case where a delegation token is renewed. Not 100% sure about the details, @gaborgsomogyi could you help me confirm this?
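
  For illustration only, a minimal Python sketch of the caching idea being documented: a cache keyed by the (frozen, sorted) producer configuration, so callers with the same configuration share one instance. The names `ProducerCache` and `make_cache_key` are hypothetical and not Spark's internals; the stand-in factory takes the place of a real `KafkaProducer` constructor.

  ```python
  # Hypothetical sketch of a configuration-keyed producer cache.
  # Not Spark's actual implementation; names here are illustrative.

  def make_cache_key(kafka_params: dict) -> tuple:
      """Build a hashable cache key from a Kafka configuration dict.

      Sorting the items makes the key independent of insertion order,
      so logically identical configurations map to the same key.
      """
      return tuple(sorted(kafka_params.items()))

  class ProducerCache:
      def __init__(self):
          self._cache = {}

      def get_or_create(self, kafka_params: dict, factory):
          """Return the cached producer for this configuration,
          creating one via `factory` on first use."""
          key = make_cache_key(kafka_params)
          if key not in self._cache:
              self._cache[key] = factory(kafka_params)
          return self._cache[key]

  cache = ProducerCache()
  params = {"bootstrap.servers": "broker:9092", "acks": "all"}
  # `object()` stands in for constructing a real KafkaProducer.
  p1 = cache.get_or_create(params, lambda cfg: object())
  p2 = cache.get_or_create(dict(params), lambda cfg: object())
  assert p1 is p2  # same configuration -> same cached instance
  ```

  Note that in this scheme, any change to the configuration (such as a renewed delegation token appearing in the injected auth config) produces a different key and therefore a new cached instance, which is exactly the edge case raised above.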

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
