[ https://issues.apache.org/jira/browse/KAFKA-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526392#comment-16526392 ]

Bhagya Lakshmi Gummalla commented on KAFKA-6980:
------------------------------------------------

[~jlu717] Hi, does this need to be worked on? Is this still an issue?

> Recommended MaxDirectMemorySize for consumers
> ---------------------------------------------
>
>                 Key: KAFKA-6980
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6980
>             Project: Kafka
>          Issue Type: Wish
>          Components: consumer, documentation
>    Affects Versions: 0.10.2.0
>         Environment: CloudFoundry
>            Reporter: John Lu
>            Priority: Minor
>              Labels: consumer, documentation
>
> We are observing that when MaxDirectMemorySize is set too low, our Kafka 
> consumer threads are failing and encountering the following exception:
> {{java.lang.OutOfMemoryError: Direct buffer memory}}
> Is there a way to estimate how much direct memory is required for optimal 
> performance?  In the documentation, it is suggested that the amount of memory 
> required is  [Number of Partitions * max.partition.fetch.bytes].  
> When we pick a value slightly above that, we no longer encounter the error, 
> but if we double or triple the number, our throughput improves drastically.  
> So we are wondering whether there is another setting or parameter we should consider.
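For reference, the documented lower bound quoted above can be computed directly. A minimal sketch in Java; the 3x headroom multiplier is an assumption drawn from the reporter's observation that doubling or tripling the estimate improved throughput, not a documented Kafka recommendation:

```java
public class DirectMemoryEstimate {

    /** Documented lower bound: numPartitions * max.partition.fetch.bytes. */
    static long lowerBoundBytes(int numPartitions, long maxPartitionFetchBytes) {
        return (long) numPartitions * maxPartitionFetchBytes;
    }

    public static void main(String[] args) {
        int numPartitions = 50;                    // example value
        long maxPartitionFetchBytes = 1_048_576L;  // Kafka default: 1 MiB
        long lower = lowerBoundBytes(numPartitions, maxPartitionFetchBytes);

        // Assumed headroom: the reporter saw better throughput at 2-3x the estimate.
        long suggested = 3 * lower;

        System.out.println("Lower bound bytes:      " + lower);
        System.out.println("-XX:MaxDirectMemorySize=" + suggested);
    }
}
```

With 50 partitions at the default 1 MiB fetch size, the lower bound is 52,428,800 bytes (50 MiB); the sketch's suggested setting is three times that.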



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
