HeartSaVioR commented on a change in pull request #27146: 
[SPARK-21869][SS][DOCS][FOLLOWUP] Document Kafka producer pool configuration
URL: https://github.com/apache/spark/pull/27146#discussion_r365205645
 
 

 ##########
 File path: docs/structured-streaming-kafka-integration.md
 ##########
 @@ -802,6 +802,31 @@ df.selectExpr("topic", "CAST(key AS STRING)", "CAST(value AS STRING)") \
 </div>
 </div>
 
+### Producer Caching
+
+Given the Kafka producer instance is designed to be thread-safe, Spark initializes a Kafka producer instance and co-uses it across tasks for the same caching key.
+
+The caching key is built up from the following information:
 
 Review comment:
   I think we took the first approach - I just followed the way we did it before.
   
   Back to the topic, would we like to add the details to the guide doc? I'll address it if we would like to let end users know about them. Otherwise we could leave it as it is.
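   For reference, here is a minimal sketch of how an end user would interact with the pooled producer from the write side, assuming the `spark.kafka.producer.cache.timeout` pool configuration that this followup documents; the bootstrap servers, topic names, and checkpoint path are placeholders, not values from the PR:
   
   ```scala
   // Sketch only: a streaming write to Kafka while tuning the producer cache
   // timeout. Tasks writing with the same Kafka parameters (the caching key
   // described above) share one cached producer instance.
   import org.apache.spark.sql.SparkSession
   
   val spark = SparkSession.builder()
     .appName("kafka-producer-pool-example")
     // Cached producers idle longer than this become eligible for eviction
     // (config name assumed from the producer pool configuration being documented).
     .config("spark.kafka.producer.cache.timeout", "10m")
     .getOrCreate()
   
   val df = spark.readStream
     .format("kafka")
     .option("kafka.bootstrap.servers", "host1:9092")
     .option("subscribe", "input-topic")
     .load()
   
   val query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
     .writeStream
     .format("kafka")
     .option("kafka.bootstrap.servers", "host1:9092")
     .option("topic", "output-topic")
     .option("checkpointLocation", "/tmp/checkpoint")
     .start()
   ```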

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]