gaborgsomogyi commented on a change in pull request #19096: [SPARK-21869][SS] A
cached Kafka producer should not be closed if any task is using it - adds inuse
tracking.
URL: https://github.com/apache/spark/pull/19096#discussion_r272915490
##########
File path:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
##########
@@ -18,20 +18,70 @@
package org.apache.spark.sql.kafka010
import java.{util => ju}
-import java.util.concurrent.{ConcurrentMap, ExecutionException, TimeUnit}
+import java.util.concurrent.{ConcurrentLinkedQueue, ConcurrentMap, ExecutionException, TimeUnit}
+import java.util.concurrent.atomic.AtomicInteger
+
+import scala.collection.JavaConverters._
+import scala.util.control.NonFatal
import com.google.common.cache._
import com.google.common.util.concurrent.{ExecutionError, UncheckedExecutionException}
import org.apache.kafka.clients.producer.KafkaProducer
-import scala.collection.JavaConverters._
-import scala.util.control.NonFatal
import org.apache.spark.SparkEnv
import org.apache.spark.internal.Logging
-private[kafka010] object CachedKafkaProducer extends Logging {
+private[kafka010] case class CachedKafkaProducer(
+    private val id: String = ju.UUID.randomUUID().toString,
+    private val inUseCount: AtomicInteger = new AtomicInteger(0),
+    private val kafkaParams: Seq[(String, Object)]) extends Logging {
+
+  private val configMap = kafkaParams.map(x => x._1 -> x._2).toMap.asJava
+
+  private def updatedAuthConfigIfNeeded(kafkaParamsMap: ju.Map[String, Object]) =
+    KafkaConfigUpdater("executor", kafkaParamsMap.asScala.toMap)
+      .setAuthenticationConfigIfNeeded()
+      .build()
+
+  lazy val kafkaProducer: KafkaProducer[Array[Byte], Array[Byte]] = {
+    val producer = new KafkaProducer[Array[Byte], Array[Byte]](updatedAuthConfigIfNeeded(configMap))
Review comment:
Previously we discussed where to put the config update, and you convinced me to put the delegation token inside the producer key (time-based eviction takes care of old, unused producers, and this way new tokens can be used without a task retry). Having just taken another look at the code, despite that agreement it somehow hasn't been done. Since I've just filed the multi-cluster delegation token (DT) support PR, you can see what I mean
[here](https://github.com/apache/spark/pull/24305/files#diff-ac8844e8d791a75aaee3d0d10bfc1f2aR78).
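For readers following along, here is a minimal, self-contained sketch of the keying scheme being requested (hypothetical code, not the Spark implementation: `ProducerCacheSketch` and `withCurrentToken` are made-up names, and a plain `String` stands in for the `KafkaProducer`). Because the cache key is derived from the params *after* the delegation-token update, a token refresh yields a new key and hence a fresh producer, while the stale entry is left to time-based eviction; no task retry is needed to pick up the new token.

```scala
import java.{util => ju}
import java.util.concurrent.ConcurrentHashMap

// Hypothetical sketch (Scala 2.12+): the producer cache is keyed by the
// kafka params *after* the auth/token update, so a rotated token changes
// the key instead of requiring an existing producer to be replaced in place.
object ProducerCacheSketch {

  // Stand-in for KafkaConfigUpdater(...).setAuthenticationConfigIfNeeded().build():
  // merges a fake, time-bucketed "token" into the params.
  private def withCurrentToken(params: Map[String, Object]): Map[String, Object] =
    params + ("sasl.jaas.config" -> s"token-${System.currentTimeMillis() / 600000L}")

  // Canonical (sorted) view of the token-updated params, used as the cache key.
  private def cacheKey(params: Map[String, Object]): Seq[(String, Object)] =
    withCurrentToken(params).toSeq.sortBy(_._1)

  // A String stands in for KafkaProducer[Array[Byte], Array[Byte]]; a real
  // cache would also evict entries unused for some period (Guava's
  // expireAfterAccess in the actual code), which is omitted here.
  private val cache = new ConcurrentHashMap[Seq[(String, Object)], String]()

  def getOrCreate(params: Map[String, Object]): String =
    cache.computeIfAbsent(cacheKey(params), _ => s"producer-${ju.UUID.randomUUID()}")

  def main(args: Array[String]): Unit = {
    val params = Map[String, Object]("bootstrap.servers" -> "broker:9092")
    // Same params + same token => the same cached entry is returned twice.
    // Once the token rotates, cacheKey changes and a new producer is created,
    // while the old entry simply ages out under time-based eviction.
    println(getOrCreate(params))
    println(getOrCreate(params))
  }
}
```

The trade-off in this design: keying by the updated config means every token rotation creates a new producer, but running tasks keep a valid reference to the old one until it is evicted, which is exactly the "no task retry" property mentioned above.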