[ https://issues.apache.org/jira/browse/SPARK-32566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173107#comment-17173107 ]

Takeshi Yamamuro commented on SPARK-32566:
------------------------------------------

cc: [~kabhwan]

> kafka consumer cache capacity is unclear
> ----------------------------------------
>
>                 Key: SPARK-32566
>                 URL: https://issues.apache.org/jira/browse/SPARK-32566
>             Project: Spark
>          Issue Type: Documentation
>          Components: Structured Streaming
>    Affects Versions: 3.0.0
>            Reporter: srpn
>            Priority: Major
>
> The [docs|https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html] mention:
> {noformat}
> The cache for consumers has a default maximum size of 64.  If you expect
>  to be handling more than (64 * number of executors) Kafka partitions, 
> you can change this setting via 
> spark.streaming.kafka.consumer.cache.maxCapacity{noformat}
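> A minimal sketch of how that DStream-side setting could be applied (the app name and batch interval below are placeholders, not from the docs):
> {code:scala}
> import org.apache.spark.SparkConf
> import org.apache.spark.streaming.{Seconds, StreamingContext}
>
> // Raise the per-executor Kafka consumer cache above the documented default of 64.
> val conf = new SparkConf()
>   .setAppName("dstream-consumer-cache-sketch") // placeholder app name
>   .set("spark.streaming.kafka.consumer.cache.maxCapacity", "128")
> val ssc = new StreamingContext(conf, Seconds(10))
> {code}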
> However, for structured streaming, the code seems to expect
> {code:java}
> spark.kafka.consumer.cache.capacity/spark.sql.kafkaConsumerCache.capacity{code}
> It would be nice to clear up this ambiguity in the documentation, or even to merge these 
> configurations in the code.
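> For comparison, a minimal sketch of the Structured Streaming side, assuming the spark.kafka.consumer.cache.capacity key read by the code path above (the broker and topic names are placeholders):
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> // Raise the executor-side Kafka consumer cache above the default of 64.
> val spark = SparkSession.builder()
>   .appName("structured-streaming-consumer-cache-sketch") // placeholder app name
>   .config("spark.kafka.consumer.cache.capacity", "128")
>   .getOrCreate()
>
> // Consumers are cached per topic partition on the executors, bounded by the capacity above.
> val df = spark.readStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
>   .option("subscribe", "events")                    // placeholder topic
>   .load()
> {code}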


