HeartSaVioR commented on a change in pull request #22138: [SPARK-25151][SS] 
Apply Apache Commons Pool to KafkaDataConsumer
URL: https://github.com/apache/spark/pull/22138#discussion_r320569927
 
 

 ##########
 File path: docs/structured-streaming-kafka-integration.md
 ##########
 @@ -430,20 +430,70 @@ The following configurations are optional:
 ### Consumer Caching
 
 It's time-consuming to initialize Kafka consumers, especially in streaming 
scenarios where processing time is a key factor.
-Because of this, Spark caches Kafka consumers on executors. The caching key is 
built up from the following information:
+Because of this, Spark pools Kafka consumers on executors by leveraging Apache Commons Pool.
+
+The caching key is built up from the following information:
+
 * Topic name
 * Topic partition
 * Group ID
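+
+The pooling described above can be pictured with a minimal sketch built on Apache Commons Pool's keyed pool. This is a hypothetical illustration, not Spark's internal code: `ConsumerCacheKey` and `ConsumerFactory` are made-up names standing in for the real implementation.

```scala
import java.{util => ju}
import org.apache.commons.pool2.{BaseKeyedPooledObjectFactory, PooledObject}
import org.apache.commons.pool2.impl.{DefaultPooledObject, GenericKeyedObjectPool}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.ByteArrayDeserializer

type Consumer = KafkaConsumer[Array[Byte], Array[Byte]]

// Hypothetical caching key carrying the three pieces of information above.
case class ConsumerCacheKey(topic: String, partition: Int, groupId: String)

// Factory invoked by Commons Pool whenever no pooled consumer exists for a key.
class ConsumerFactory(kafkaParams: ju.Map[String, Object])
    extends BaseKeyedPooledObjectFactory[ConsumerCacheKey, Consumer] {
  override def create(key: ConsumerCacheKey): Consumer =
    new KafkaConsumer[Array[Byte], Array[Byte]](kafkaParams)
  override def wrap(consumer: Consumer): PooledObject[Consumer] =
    new DefaultPooledObject(consumer)
  override def destroyObject(key: ConsumerCacheKey, p: PooledObject[Consumer]): Unit =
    p.getObject.close()
}

val kafkaParams = new ju.HashMap[String, Object]()
kafkaParams.put("bootstrap.servers", "localhost:9092")
kafkaParams.put("key.deserializer", classOf[ByteArrayDeserializer].getName)
kafkaParams.put("value.deserializer", classOf[ByteArrayDeserializer].getName)
kafkaParams.put("group.id", "group-1")

val pool = new GenericKeyedObjectPool(new ConsumerFactory(kafkaParams))

// A task borrows a consumer for its caching key and returns it when done,
// so the next task with the same key reuses the initialized consumer.
val key = ConsumerCacheKey("events", 0, "group-1")
val consumer = pool.borrowObject(key)
try {
  // ... fetch records for the assigned topic partition ...
} finally {
  pool.returnObject(key, consumer)
}
```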
 
-The size of the cache is limited by 
<code>spark.kafka.consumer.cache.capacity</code> (default: 64).
-If this threshold is reached, it tries to remove the least-used entry that is 
currently not in use.
-If it cannot be removed, then the cache will keep growing. In the worst case, 
the cache will grow to
-the max number of concurrent tasks that can run in the executor (that is, 
number of tasks slots),
-after which it will never reduce.
+The following properties are available to configure the consumer pool:
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td>spark.kafka.consumer.cache.capacity</td>
+  <td>64</td>
+  <td>The maximum number of consumers cached. Please note that it's a soft limit.</td>
+</tr>
+<tr>
+  <td>spark.kafka.consumer.cache.timeout</td>
+  <td>5m (5 minutes)</td>
+  <td>The minimum amount of time a consumer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
+</tr>
+<tr>
+  <td>spark.kafka.consumer.cache.jmx.enable</td>
+  <td>false</td>
+  <td>Enable or disable JMX for pools created with this configuration instance. Statistics of the pool are available via the JMX instance.
+  The prefix of the JMX name is set to "kafka010-cached-simple-kafka-consumer-pool".
+  </td>
+</tr>
+</table>
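+
+These properties are regular Spark configurations, so they can be set like any other, for example when building the session (the values below are illustrative only; they can also be passed via `--conf` on `spark-submit`):

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values; tune capacity and timeout to your workload.
val spark = SparkSession.builder()
  .appName("kafka-consumer-pooling")
  .config("spark.kafka.consumer.cache.capacity", "128")
  .config("spark.kafka.consumer.cache.timeout", "10m")
  .config("spark.kafka.consumer.cache.jmx.enable", "true")
  .getOrCreate()
```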
+
+The size of the pool is limited by <code>spark.kafka.consumer.cache.capacity</code>,
+but it works as a soft limit so that Spark tasks are never blocked.
+
+An idle eviction thread periodically removes consumers which have been unused for longer than the configured timeout.
+If the capacity limit is reached when borrowing, the pool tries to remove the least-recently-used entry that is currently not in use.
+
+If it cannot be removed, the pool will keep growing. In the worst case, the pool will grow to
+the max number of concurrent tasks that can run in the executor (that is, the number of task slots).
+
+If a task fails for any reason, the new task is executed with a newly created Kafka consumer, for safety reasons.
+At the same time, we invalidate all consumers in the pool which have the same caching key, to remove the consumer
+which was used in the failed execution. Consumers which other tasks are using will not be closed, but will be
+invalidated as well when they are returned to the pool.
 
-If a task fails for any reason the new task is executed with a newly created 
Kafka consumer for safety reasons.
-At the same time the cached Kafka consumer which was used in the failed 
execution will be invalidated. Here it has to
-be emphasized it will not be closed if any other task is using it.
+Along with consumers, Spark pools the records fetched from Kafka separately, to keep Kafka consumers stateless from
+Spark's point of view and to maximize the efficiency of pooling. The fetched data pool leverages the same caching key
+as the Kafka consumer pool. Note that it doesn't leverage Apache Commons Pool due to the difference in characteristics.
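+
+A minimal way to picture the fetched data pool is a plain concurrent map keyed the same way, holding fetch state outside of the consumers (again an illustrative sketch, not Spark's implementation):

```scala
import java.{util => ju}
import java.util.concurrent.ConcurrentHashMap
import org.apache.kafka.clients.consumer.ConsumerRecord

// Illustrative holder for records already fetched for a topic partition.
// Keeping fetch state here, rather than inside the consumer, is what lets
// any pooled consumer be handed to any task for the same key.
class FetchedData(
    var records: ju.Iterator[ConsumerRecord[Array[Byte], Array[Byte]]],
    var nextOffsetToFetch: Long)

// A plain concurrent map rather than Commons Pool, keyed like the consumer pool.
val fetchedDataPool = new ConcurrentHashMap[ConsumerCacheKey, FetchedData]()
```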
+
+The following properties are available to configure the fetched data pool:
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td>spark.kafka.consumer.fetchedData.cache.timeout</td>
 
 Review comment:
   My bad - just a copy-and-paste error. Will fix all missing things.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
