gaborgsomogyi commented on a change in pull request #26470: [SPARK-27042][SS] Invalidate cached Kafka producer in case of task retry
URL: https://github.com/apache/spark/pull/26470#discussion_r346743239
##########
File path: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
##########
@@ -93,6 +93,10 @@ private[kafka010] object CachedKafkaProducer extends Logging {
.setAuthenticationConfigIfNeeded()
.build()
val key = toCacheKey(updatedKafkaParams)
+ if (TaskContext.get != null && TaskContext.get.attemptNumber >= 1) {
Review comment:
> Why at the same time?
I thought job 1 and job 2 were different queries. A single query running on multiple cores could end up in the same situation.
> Hence I would assume it can self-heal.
I would assume it can't heal if the programming failure were in the producer code itself. On the other hand, I see your general concern about this. What is your take considering https://github.com/apache/spark/pull/25853?
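
To make the retry-detection idea in the diff concrete, here is a minimal sketch. `RetryAwareProducerCache`, `producerCache`, and `createProducer` are hypothetical stand-ins, not Spark's actual cache implementation; only the `TaskContext.get`/`attemptNumber` check mirrors the change under review:

```scala
import java.util.concurrent.ConcurrentHashMap

import org.apache.spark.TaskContext

object RetryAwareProducerCache {

  // Stand-in for the real producer cache; a String value replaces the
  // actual KafkaProducer to keep the sketch self-contained.
  private val producerCache = new ConcurrentHashMap[String, String]()

  private def createProducer(key: String): String = s"producer-for-$key"

  def getOrCreate(key: String): String = {
    // attemptNumber >= 1 means this task is a retry: the previous attempt
    // may have failed because of a broken cached producer, so evict the
    // cached instance before handing one out.
    val ctx = TaskContext.get
    if (ctx != null && ctx.attemptNumber >= 1) {
      producerCache.remove(key)
    }
    producerCache.computeIfAbsent(key, k => createProducer(k))
  }
}
```

Evicting eagerly on retry trades a one-time producer rebuild for protection against reusing a producer left in a bad state by the failed attempt.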