[ https://issues.apache.org/jira/browse/KAFKA-3892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346321#comment-15346321 ]

Noah Sloan edited comment on KAFKA-3892 at 6/23/16 12:03 PM:
-------------------------------------------------------------

I can reproduce it, but I don't have code I can share (and you would need 0.9 
brokers with a lot of topics to try it against). The only reason I know it is 
happening is that:

1. We have brokers with large metadata.
2. We have many producers/consumers in 1 VM.
3. That VM is memory constrained.

Then the VM OOMs and the heap dump shows unrequested metadata.
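
In lieu of the actual code, here's a minimal sketch of the shape of our setup 
(the broker address, topic names, and client count are made up; any 0.9 
cluster with a few thousand topics should do). Run it with a small heap, e.g. 
-Xmx128m, and inspect a heap dump once the clients have polled a few times:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch of the repro: many 0.9 clients in one JVM against a large cluster.
// Each consumer subscribes to a single topic, yet per KAFKA-3892 each
// client's internal Cluster retains metadata for topics it never asked for.
public class MetadataRepro {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++) {
            Properties cp = new Properties();
            cp.put("bootstrap.servers", "broker1:9092"); // placeholder address
            cp.put("group.id", "repro-" + i);
            cp.put("key.deserializer",
                   "org.apache.kafka.common.serialization.StringDeserializer");
            cp.put("value.deserializer",
                   "org.apache.kafka.common.serialization.StringDeserializer");
            final KafkaConsumer<String, String> consumer =
                new KafkaConsumer<>(cp);
            consumer.subscribe(Collections.singletonList("topic-" + i));

            Properties pp = new Properties();
            pp.put("bootstrap.servers", "broker1:9092");
            pp.put("key.serializer",
                   "org.apache.kafka.common.serialization.StringSerializer");
            pp.put("value.serializer",
                   "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(pp);

            // One polling thread per consumer; polling drives metadata fetches.
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        consumer.poll(1000);
                    }
                }
            }).start();
            producer.send(new ProducerRecord<String, String>("topic-" + i, "x"));
        }
        Thread.sleep(Long.MAX_VALUE); // leave clients running; heap-dump now
    }
}
{code}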

This could be happening to all 0.9 producers/consumers, but without all three 
conditions you would never notice. There isn't any way to see that this is 
happening as a client without a heap dump or a debugger.
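
For anyone wanting to check their own clients: jmap 
-dump:live,format=b,file=clients.hprof <pid> will grab a dump from a running 
VM, or you can trigger one programmatically on HotSpot (sketch below; the 
file name is arbitrary). The retained Metadata/Cluster instances then show up 
in MAT or a similar heap analyzer:

{code:java}
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// HotSpot-only helper: write a heap dump of live objects so the retained
// Metadata/Cluster instances can be inspected offline.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
            ManagementFactory.getPlatformMBeanServer(),
            "com.sun.management:type=HotSpotDiagnostic",
            HotSpotDiagnosticMXBean.class);
        bean.dumpHeap("clients.hprof", true); // true = only live objects
    }
}
{code}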




> Clients retain metadata for non-subscribed topics
> -------------------------------------------------
>
>                 Key: KAFKA-3892
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3892
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.1
>            Reporter: Noah Sloan
>
> After upgrading to 0.9.0.1 from 0.8.2 (and adopting the new consumer and 
> producer classes), we noticed services with small heaps crashing due to 
> OutOfMemoryErrors. These services contained many producers and consumers (~20 
> total) and were connected to brokers with >2000 topics and over 10k 
> partitions. Heap dumps revealed that each client retained 3.3MB of Metadata 
> in its Cluster, with references to topics that were not being produced or 
> subscribed to. While the services had run with 128MB of heap prior to the 
> upgrade, we had to increase the max heap to 200MB to accommodate all the 
> extra data. 
> While this is not technically a memory leak, it does impose a significant 
> overhead on clients when connected to a large cluster.


