vanzin commented on a change in pull request #27146: 
[SPARK-21869][SS][DOCS][FOLLOWUP] Document Kafka producer pool configuration
URL: https://github.com/apache/spark/pull/27146#discussion_r365359509
 
 

 ##########
 File path: docs/structured-streaming-kafka-integration.md
 ##########
 @@ -802,6 +802,31 @@ df.selectExpr("topic", "CAST(key AS STRING)", "CAST(value AS STRING)") \
 </div>
 </div>
 
+### Producer Caching
+
+Since a Kafka producer instance is designed to be thread-safe, Spark initializes one producer instance per caching key and shares it across the tasks that use the same key.
+
+The caching key is built up from the following information:
 
 Review comment:
   I think this would be more useful if it explained how a user can force separate producers to be used. Otherwise it's an internal detail that doesn't really affect users.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
