This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.0 by this push:
new a36140c [SPARK-32075][DOCS] Fix a few issues in parameters table
a36140c is described below
commit a36140c3c300beaf50d19381ac72e2524f888e53
Author: sidedoorleftroad <[email protected]>
AuthorDate: Wed Jun 24 13:39:55 2020 +0900
[SPARK-32075][DOCS] Fix a few issues in parameters table
### What changes were proposed in this pull request?
Fix a few issues in the parameters tables in the structured-streaming-kafka-integration doc.
### Why are the changes needed?
Make each row's cell order consistent with the table header (the Default and Meaning cells were swapped).
### Does this PR introduce _any_ user-facing change?
Yes.
Before/After screenshots of the three affected tables are omitted from this plain-text email; see the pull request for the images.
### How was this patch tested?
Manually built the docs and checked the rendered tables.
Closes #28910 from sidedoorleftroad/SPARK-32075.
Authored-by: sidedoorleftroad <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
(cherry picked from commit 986fa01747db4b52bb8ca1165e759ca2d46d26ff)
Signed-off-by: HyukjinKwon <[email protected]>
---
docs/structured-streaming-kafka-integration.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/docs/structured-streaming-kafka-integration.md b/docs/structured-streaming-kafka-integration.md
index 016faa7..8dc2a73 100644
--- a/docs/structured-streaming-kafka-integration.md
+++ b/docs/structured-streaming-kafka-integration.md
@@ -528,28 +528,28 @@ The following properties are available to configure the consumer pool:
<tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
<tr>
<td>spark.kafka.consumer.cache.capacity</td>
- <td>The maximum number of consumers cached. Please note that it's a soft limit.</td>
<td>64</td>
+ <td>The maximum number of consumers cached. Please note that it's a soft limit.</td>
<td>3.0.0</td>
</tr>
<tr>
<td>spark.kafka.consumer.cache.timeout</td>
- <td>The minimum amount of time a consumer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
<td>5m (5 minutes)</td>
+ <td>The minimum amount of time a consumer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
<td>3.0.0</td>
</tr>
<tr>
<td>spark.kafka.consumer.cache.evictorThreadRunInterval</td>
- <td>The interval of time between runs of the idle evictor thread for consumer pool. When non-positive, no idle evictor thread will be run.</td>
<td>1m (1 minute)</td>
+ <td>The interval of time between runs of the idle evictor thread for consumer pool. When non-positive, no idle evictor thread will be run.</td>
<td>3.0.0</td>
</tr>
<tr>
<td>spark.kafka.consumer.cache.jmx.enable</td>
+ <td>false</td>
<td>Enable or disable JMX for pools created with this configuration instance. Statistics of the pool are available via JMX instance.
The prefix of JMX name is set to "kafka010-cached-simple-kafka-consumer-pool".
</td>
- <td>false</td>
<td>3.0.0</td>
</tr>
</table>
@@ -578,14 +578,14 @@ The following properties are available to configure the fetched data pool:
<tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
<tr>
<td>spark.kafka.consumer.fetchedData.cache.timeout</td>
- <td>The minimum amount of time a fetched data may sit idle in the pool before it is eligible for eviction by the evictor.</td>
<td>5m (5 minutes)</td>
+ <td>The minimum amount of time a fetched data may sit idle in the pool before it is eligible for eviction by the evictor.</td>
<td>3.0.0</td>
</tr>
<tr>
<td>spark.kafka.consumer.fetchedData.cache.evictorThreadRunInterval</td>
- <td>The interval of time between runs of the idle evictor thread for fetched data pool. When non-positive, no idle evictor thread will be run.</td>
<td>1m (1 minute)</td>
+ <td>The interval of time between runs of the idle evictor thread for fetched data pool. When non-positive, no idle evictor thread will be run.</td>
<td>3.0.0</td>
</tr>
</table>
@@ -825,14 +825,14 @@ The following properties are available to configure the producer pool:
<tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
<tr>
<td>spark.kafka.producer.cache.timeout</td>
- <td>The minimum amount of time a producer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
<td>10m (10 minutes)</td>
+ <td>The minimum amount of time a producer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
<td>2.2.1</td>
</tr>
<tr>
<td>spark.kafka.producer.cache.evictorThreadRunInterval</td>
- <td>The interval of time between runs of the idle evictor thread for producer pool. When non-positive, no idle evictor thread will be run.</td>
<td>1m (1 minute)</td>
+ <td>The interval of time between runs of the idle evictor thread for producer pool. When non-positive, no idle evictor thread will be run.</td>
<td>3.0.0</td>
</tr>
</table>
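For readers cross-checking the corrected tables: the properties above are ordinary Spark configuration entries, so they can be set in `spark-defaults.conf` or via `--conf` on `spark-submit`. A minimal sketch follows; the values shown are simply the documented defaults, not tuning recommendations.

```properties
# Consumer pool (soft capacity limit plus idle eviction)
spark.kafka.consumer.cache.capacity                              64
spark.kafka.consumer.cache.timeout                               5m
spark.kafka.consumer.cache.evictorThreadRunInterval              1m
spark.kafka.consumer.cache.jmx.enable                            false

# Fetched data pool
spark.kafka.consumer.fetchedData.cache.timeout                   5m
spark.kafka.consumer.fetchedData.cache.evictorThreadRunInterval  1m

# Producer pool
spark.kafka.producer.cache.timeout                               10m
spark.kafka.producer.cache.evictorThreadRunInterval              1m
```

The same settings can be passed on the command line, e.g. `spark-submit --conf spark.kafka.consumer.cache.capacity=64 ...`.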
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]