HeartSaVioR commented on a change in pull request #26470: [SPARK-27042][SS] Invalidate cached Kafka producer in case of task retry
URL: https://github.com/apache/spark/pull/26470#discussion_r346199388
##########
File path: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
##########
@@ -93,6 +93,10 @@ private[kafka010] object CachedKafkaProducer extends Logging {
       .setAuthenticationConfigIfNeeded()
       .build()
     val key = toCacheKey(updatedKafkaParams)
+    if (TaskContext.get != null && TaskContext.get.attemptNumber >= 1) {
Review comment:
Strictly speaking, we should discard only the offending instance (not all instances) when it throws an error, but then we would need to wrap the code everywhere we use the instance, catching the exception and discarding the instance on failure. A bit verbose, but we already have to return the instance to the pool when it succeeds, so maybe not impossible (see the sketch below).
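
For illustration, a minimal sketch of that wrap-and-discard pattern in Scala. It assumes an Apache Commons Pool 2 style pool (`borrowObject`, `returnObject`, `invalidateObject` are the commons-pool2 API); whether the producer cache is backed by such a pool, and the `withProducer` helper itself, are assumptions here, not the actual Spark code:

```scala
import org.apache.commons.pool2.impl.GenericObjectPool

// Hypothetical helper: borrow an instance, return it to the pool on
// success, and invalidate (discard) only that one instance on failure.
def withProducer[T](pool: GenericObjectPool[CachedKafkaProducer])
    (use: CachedKafkaProducer => T): T = {
  val producer = pool.borrowObject()
  try {
    val result = use(producer)
    pool.returnObject(producer)  // success: instance goes back to the pool
    result
  } catch {
    case e: Throwable =>
      // failure: discard this instance only; other cached producers
      // created from the same params stay in the pool
      pool.invalidateObject(producer)
      throw e
  }
}
```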
I'm not aware whether the Kafka consumer or Kafka producer is self-healing - if they provide such functionality, we wouldn't even need to discard them in any case.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]