Hi All,

We have a topology which is running on 16 workers with 2GB heap each.

However, we see that each worker's RES memory usage keeps piling up: it
starts around 1.1 GB and grows past the 2 GB heap mark until it overwhelms
the entire node.

This possibly indicates that

1) we have slowly consuming bolts and thus need throttling in the spout, OR
2) there is a memory leak in the ZMQ buffer allocation or in some of the JNI code.

Based on responses in other discussions, we made our topology reliable and
used MAX_SPOUT_PENDING to throttle the spouts. However, this did not yield
much value: with values of both 1000 and 100 we see the same growth in
memory usage, although a bit slower in the latter case.
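For reference, this is roughly how we wired up the throttling (a minimal sketch; the class name is our own, and the values mirror what we tried):

```java
import backtype.storm.Config;

public class ThrottleConfig {
    public static Config build() {
        Config conf = new Config();
        conf.setNumWorkers(16);      // our 16 workers, 2 GB heap each
        // Cap on unacked tuples per spout task; we tried 1000 and 100.
        conf.setMaxSpoutPending(100);
        // Acking must be enabled (ackers > 0), or the cap has no effect.
        conf.setNumAckers(16);
        return conf;
    }
}
```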

We also ran pmap on the offending PIDs and did not see much memory
attributed to the native lib*.so files.

Is there any way to identify the source of this native leak, or to fix it?
We need some urgent help on this.

[NOTE: Using Storm - 0.9.0_wip21]

Thanks,
Indra
