zsxwing commented on a change in pull request #26470: [SPARK-27042][SS] Invalidate cached Kafka producer in case of task retry
URL: https://github.com/apache/spark/pull/26470#discussion_r346969043
##########
File path: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
##########
@@ -93,6 +93,10 @@ private[kafka010] object CachedKafkaProducer extends Logging {
       .setAuthenticationConfigIfNeeded()
       .build()
     val key = toCacheKey(updatedKafkaParams)
+    if (TaskContext.get != null && TaskContext.get.attemptNumber >= 1) {
Review comment:
> I would assume it can't heal if programming failure would be in the producer code itself.

That's true. Software has bugs. But I have never heard of any Kafka producer issue being reported when using Spark. A task failure is most likely caused by a user error, a transient network error, or something unrelated to the producer. A failure like that should be isolated and should not impact other tasks in the same executor. Invalidating the shared producer on every retry seems like overkill.
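
For reference, a minimal sketch of the guard under discussion, assuming a Guava-backed cache; the object and helper names here are illustrative, not the PR's actual code, and only `TaskContext.get` and `attemptNumber` are real Spark APIs:

```scala
import com.google.common.cache.Cache
import org.apache.spark.TaskContext

// Hypothetical helper sketching the PR's guard: evict a cached entry
// only when the current task is a retry.
object ProducerCacheGuard {
  def invalidateOnRetry[K](cache: Cache[K, _], key: K): Unit = {
    // TaskContext.get returns null outside of a running task, so check first.
    val ctx = TaskContext.get()
    // attemptNumber is 0 for the first attempt and >= 1 for retries.
    if (ctx != null && ctx.attemptNumber() >= 1) {
      cache.invalidate(key)
    }
  }
}
```

Note that with a cache shared across the executor, this eviction affects every task holding the same key, which is the cost being questioned here.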