Hi,

We had memory issues when there was a capacity bottleneck in one of the bolts. Inter-process communication does not handle this situation well, and the inter-process buffer starts to grow.
The solution is to solve the capacity issue; the memory growth is a side effect.

10x,
Kobi

On Fri, Oct 23, 2015 at 10:05 PM, John Yost <[email protected]> wrote:

> Hi Dillian,
>
> Is the topology crashing, or is it rebalancing by restarting the workers
> and corresponding executors? I recommend posting error messages from your
> nimbus log and one of your supervisor logs.
>
> --John
>
> On Fri, Oct 23, 2015 at 2:25 PM, Javier Gonzalez <[email protected]> wrote:
>
>> That's your storm cluster. Is your topology configured to use all three
>> workers? (e.g. by using Config.setNumWorkers(int))
>>
>> On Oct 22, 2015 9:25 PM, "Dillian Murphey" <[email protected]> wrote:
>>
>>> We have 3 workers. One is on the nimbus server, which I'd like to move
>>> off of there, but I don't think that matters at all.
>>>
>>> On Thu, Oct 22, 2015 at 3:54 PM, Javier Gonzalez <[email protected]> wrote:
>>>
>>>> How many workers do you have configured? Is it possible that your whole
>>>> topology is running within that one worker?
>>>>
>>>> On Oct 22, 2015 6:15 PM, "Dillian Murphey" <[email protected]> wrote:
>>>>
>>>>> We have one worker that keeps giving us problems. First it was
>>>>> out-of-memory issues. We're thinking of spinning up a replacement
>>>>> system.
>>>>>
>>>>> But I thought Storm was supposed to be fault tolerant. If this one
>>>>> worker goes haywire, why is my entire topology going down?
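For anyone landing on this thread with the same symptom, the two fixes discussed above (make the topology actually use all configured workers, and give the bottlenecked bolt more parallelism so the inter-process buffers stop growing) can be sketched as below. This is a sketch only, not from the thread: it assumes the storm-core Java API, and the spout/bolt classes, component names, and parallelism numbers are all illustrative placeholders.

```java
// Sketch assuming the storm-core Java API; MySpout/MySlowBolt and the
// numbers below are hypothetical, to be replaced with your own topology.
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class CapacityFixSketch {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("events", new MySpout(), 1);       // hypothetical spout

        // Raise the parallelism hint on the bolt whose "capacity" metric in
        // Storm UI is near 1.0, so tuples stop piling up in transfer buffers.
        builder.setBolt("slow-bolt", new MySlowBolt(), 6)   // hypothetical bolt
               .shuffleGrouping("events");

        Config conf = new Config();
        conf.setNumWorkers(3);          // actually spread across all 3 workers
        conf.setMaxSpoutPending(1000);  // bound in-flight tuples to cap memory

        StormSubmitter.submitTopology("capacity-fix", conf,
                builder.createTopology());
    }
}
```

Setting max spout pending is the usual back-pressure guard here: it bounds how many tuples can be in flight per spout task, so a slow bolt throttles the spout instead of filling buffers until the worker runs out of memory.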
