HeartSaVioR commented on a change in pull request #22138: [SPARK-25151][SS] Apply Apache Commons Pool to KafkaDataConsumer
URL: https://github.com/apache/spark/pull/22138#discussion_r318351504
 
 

 ##########
 File path: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/package.scala
 ##########
 @@ -38,4 +38,25 @@ package object kafka010 {   // scalastyle:ignore
        " (check Structured Streaming Kafka integration guide for further details).")
       .intConf
       .createWithDefault(64)
+
+  private[kafka010] val CONSUMER_CACHE_JMX_ENABLED =
+    ConfigBuilder("spark.kafka.consumer.cache.jmx.enable")
+      .doc("Enable or disable JMX for pools created with this configuration instance.")
+      .booleanConf
+      .createWithDefault(false)
+
+  private[kafka010] val CONSUMER_CACHE_MIN_EVICTABLE_IDLE_TIME_MILLIS =
 
 Review comment:
   These config names follow Apache Commons Pool's setter names, since the PR originally started with Commons Pool only:
   
   ```
   // Set minimum evictable idle time which will be referred from evictor thread
   setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis)
   setSoftMinEvictableIdleTimeMillis(-1)
   ```
   
   and I think they're self-explanatory, though it looks like spark-sql-kafka already uses shorter configuration names. I'll rename them to `CONSUMER_CACHE_TIMEOUT` and `FETCHED_DATA_CACHE_TIMEOUT`, to be consistent with the producer's config name and to keep the two pools decoupled so each can be tuned independently. A rough sketch of the renamed definitions is below.
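   
   For illustration, here's roughly what the renamed entries could look like with Spark's `ConfigBuilder` and `timeConf`. The key strings (`spark.kafka.consumer.cache.timeout`, `spark.kafka.consumer.fetchedData.cache.timeout`), doc strings and defaults are just my assumptions at this point, not final:
   
   ```
   import java.util.concurrent.TimeUnit
   
   import org.apache.spark.internal.config.ConfigBuilder
   
   // Hypothetical renamed entries for package.scala; key names, docs and
   // defaults are illustrative placeholders only.
   private[kafka010] val CONSUMER_CACHE_TIMEOUT =
     ConfigBuilder("spark.kafka.consumer.cache.timeout")
       .doc("Minimum amount of time a consumer may sit idle in the pool before " +
         "it is eligible for eviction by the evictor.")
       .timeConf(TimeUnit.MILLISECONDS)
       .createWithDefaultString("5m")
   
   private[kafka010] val FETCHED_DATA_CACHE_TIMEOUT =
     ConfigBuilder("spark.kafka.consumer.fetchedData.cache.timeout")
       .doc("Minimum amount of time fetched data may sit idle in the pool before " +
         "it is eligible for eviction by the evictor.")
       .timeConf(TimeUnit.MILLISECONDS)
       .createWithDefaultString("5m")
   ```
   
   On the pool side the resolved timeout would simply be passed to `setMinEvictableIdleTimeMillis`, same as in the snippet above.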
