Hello. We hit the same issue with Storm + Trident on 0.9.0.1. In my experience, you should first check whether your tuples are being sent to other workers. If you are on 0.9.x, you can set the ZeroMQ high-water mark (the value is a message count). If you set it to a small value other than 0 (0 means unlimited), you will see that RES no longer grows so high; tuples get dropped instead. If you see RES stop climbing because tuples are being dropped, it's time to tune your configuration. We eventually gave up and set workers to 1, and the problem went away.
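In case it's useful, here is a minimal sketch of what I mean, assuming Storm 0.9.x with the default ZeroMQ transport. The "zmq.hwm" key is the one from storm.yaml (default 0 = unlimited); the class name and the value 1000 are just placeholders. I believe the key can be passed through the topology config like this, but setting it in storm.yaml on every supervisor is the surer route:

    import backtype.storm.Config;

    public class HwmConfSketch {
        public static Config buildConf() {
            Config conf = new Config();
            // "zmq.hwm" is the per-socket ZeroMQ high-water mark, in messages.
            // The default of 0 means unlimited, so outbound queues (and RES)
            // can grow without bound when downstream workers fall behind.
            conf.put("zmq.hwm", 1000);
            // Our eventual workaround: with a single worker, all tuple transfer
            // stays on in-process queues, bypassing ZeroMQ entirely.
            conf.setNumWorkers(1);
            return conf;
        }
    }

With workers set to 1 there is no inter-worker ZeroMQ traffic at all, which is why the RES growth disappeared for us.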
Hope this helps.

HeartSaVioR (Jungtaek Lim)

On Wednesday, July 9, 2014, Vladi Feigin <[email protected]> wrote:
> Hi,
>
> Our topology consumes almost 100% of the memory on the physical machines
> where it runs. We have a heavy load (5K events per second).
> The Java heap is configured with -Xmx1024m,
> but the Linux top command shows very large figures for the Storm process:
> VIRT=14G
> RES=10G !!
> Apparently the Java code (the topology code) doesn't consume it, so the
> question is: which part of Storm does consume it, and why?
> What should we check or reconfigure to avoid this? What are we doing wrong?
>
> At some point we get OOME in the bolts ...
> The Storm version is 0.8.2.
>
> Thank you in advance,
> Vladi

--
Name: Jungtaek Lim (임 정택)
Blog: http://www.heartsavior.net / http://dev.heartsavior.net
Twitter: http://twitter.com/heartsavior
LinkedIn: http://www.linkedin.com/in/heartsavior
