[ 
https://issues.apache.org/jira/browse/STORM-339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036212#comment-15036212
 ] 

Sotos Matzanas commented on STORM-339:
--------------------------------------

We are hitting the same issue with Storm 0.10.0: the heap balloons past our -Xmx4G 
setting very quickly, and the worker is eventually killed by an OutOfMemoryError. A 
heap dump shows millions of byte[] arrays held by the Netty server, corresponding 
to our serialized tuples in flight. The issue goes away as soon as we enable 
ackers, but of course we then pay the throughput penalty.
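For reference, the workaround looks roughly like this (a sketch of the topology setup, not our production code; the acker count and pending cap are arbitrary and workload-dependent):

```java
import backtype.storm.Config;  // org.apache.storm.Config in Storm 1.x+

public class WorkaroundSketch {
    public static Config build() {
        Config conf = new Config();
        // Re-enable acking (topology.acker.executors); disabling it is what
        // removes the only flow control and triggers the leak described above.
        conf.setNumAckers(1);
        // Cap in-flight tuples per spout task (topology.max.spout.pending).
        // Note this cap only takes effect when acking is enabled.
        conf.setMaxSpoutPending(5000);
        return conf;
    }
}
```

With these two settings the spout stops emitting once 5000 tuples are unacked, which bounds what the Netty layer can buffer.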

> Severe memory leak to OOM when ackers disabled
> ----------------------------------------------
>
>                 Key: STORM-339
>                 URL: https://issues.apache.org/jira/browse/STORM-339
>             Project: Apache Storm
>          Issue Type: Bug
>          Components: storm-core
>    Affects Versions: 0.9.2-incubating
>            Reporter: Jiahong Li
>
> Without any ackers enabled, a fast component will continuously leak memory, 
> causing OOM problems when the target component is slow. The OOM problem can be 
> reproduced by running this fast-slow-topology:
> https://github.com/Gvain/storm-perf-test/tree/fast-slow-topology
> with command:
> {code}
> $ storm jar storm_perf_test-1.0.0-SNAPSHOT-jar-with-dependencies.jar 
> com.yahoo.storm.perftest.Main --spout 1 --bolt 1 --workers 2 --testTime 600 
> --messageSize 6400
> {code}
> And the worker childopts set to {{-Xms2g -Xmx2g -Xmn512m ...}}.
> At the same time, the executed count of the target component falls far behind 
> the emitted count of the source component. I guess it could be that the netty 
> client is buffering too many messages in its message_queue because the target 
> component sends back OK/Failure responses too slowly. 
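The unbounded-buffering failure mode described above can be sketched in isolation (this illustrates the mechanism only, not Storm's actual Netty client code; the queue sizes and message sizes are arbitrary):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

// Sketch: an unbounded send queue retains every message when the consumer
// is slow, so memory grows without limit; a bounded queue instead pushes
// back on the producer (backpressure).
public class UnboundedQueueSketch {

    // Unbounded queue: every produced message is retained, so footprint
    // grows linearly with the producer/consumer rate gap.
    static int fillUnbounded(int produced) {
        Queue<byte[]> q = new ArrayDeque<>();
        for (int i = 0; i < produced; i++) {
            q.add(new byte[64]); // stands in for a serialized tuple
        }
        return q.size();
    }

    // Bounded queue: offer() fails once the queue is full, forcing the
    // producer to drop, block, or slow down.
    static int fillBounded(int capacity, int produced) {
        ArrayBlockingQueue<byte[]> q = new ArrayBlockingQueue<>(capacity);
        int rejected = 0;
        for (int i = 0; i < produced; i++) {
            if (!q.offer(new byte[64])) {
                rejected++;
            }
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("retained by unbounded queue: " + fillUnbounded(100_000));
        System.out.println("rejected by bounded queue:   " + fillBounded(1_024, 100_000));
    }
}
```

With acking enabled, {{topology.max.spout.pending}} provides exactly this kind of bound at the spout; with acking disabled there is nothing to stop the fast side from filling the Netty message_queue.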



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
