Hello again,

After following Dima's advice, I managed to get rid of the ambiguity in IPs (thank you Dima for pointing out the "storm.local.hostname" configuration, and Jeff for pointing out the inconsistency of IPs). I then lowered the amount of state kept by the stateful operators, so that processing does not take as long.
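(For reference, the setting Dima suggested lives in storm.yaml on each worker machine. A minimal sketch, where the address is a placeholder and not an actual value from this cluster:)

```yaml
# storm.yaml on each worker (placeholder address, adjust per machine):
# advertise the public IP/hostname instead of the EC2-internal 172.x address,
# so other workers resolve this node consistently.
storm.local.hostname: "52.XX.XX.XX"
```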
It appears that the error was caused by one of the operators being overwhelmed by incoming data: (I suspect) its incoming queues filled up after a point, because its processing rate was much lower than the arrival rate of tuples. From what I understand, this caused messages to be dropped and made the downstream operator (virtually) unreachable. This problem did not occur before because my local cluster consists of more powerful machines than EC2's t2.micro instances. If you think something else might be the cause, please feel free to comment.

Thank you again,
Nick

2015-05-22 12:21 GMT-04:00 Nick R. Katsipoulakis <[email protected]>:

> Hello Dima,
>
> I checked the security groups and they are properly set. I will try
> storm.local.hostname to see if that can resolve the issue.
>
> Thank you for your contribution.
>
> Cheers,
> Nick
>
> 2015-05-22 11:53 GMT-04:00 Dima Dragan <[email protected]>:
>
>> Hello Nick,
>>
>> Please check the Security Groups for any limitations.
>>
>> You can also set "storm.local.hostname" in storm.yaml on the worker
>> machines so that only public IPs are used.
>>
>> Best regards,
>> Dmytro Dragan
>>
>> On May 22, 2015 18:38, "Nick R. Katsipoulakis" <[email protected]> wrote:
>>
>>> Hello Jeff,
>>>
>>> No. It seems that EC2 works with two types of IPs: public ones (starting
>>> with 52.XX.XX.XX) and internal ones (starting with 172.XX.XX.XX). In my
>>> storm.yaml files I use the public IPs for the nimbus and the ZooKeeper
>>> server. I have not changed the IP settings myself.
>>>
>>> Thank you,
>>> Nick
>>>
>>> 2015-05-22 11:34 GMT-04:00 Jeffery Maass <[email protected]>:
>>>
>>>> I'm just a little curious about the 2 different IP sets:
>>>> 172.31.10.201:6703
>>>> and
>>>> 52.7.165.232:2181
>>>>
>>>> Are you doing that on purpose?
>>>>
>>>> On Fri, May 22, 2015 at 10:04 AM, Nick R. Katsipoulakis <
>>>> [email protected]> wrote:
>>>>
>>>>> 52.7.165.232/52.7.165.232:2181. Will not attempt to authenticate
>>>>> using SASL (unknown error)
>>>>> 2015-05-21T20:45:11.030+0000 o.a.z.ClientCnxn [INFO] Socket connection
>>>>> established to 52.7.165.232/52.7.165.232:2181, initiating session
>>>>> 2015-05-21T20:45:11.035+0000 o.a.z.ClientCnxn [INFO] Session
>>>>> establishment complete on server 52.7.165.232/52.7.165.232:2181,
>>>>> sessionid = 0x14d72688cda0222, negotiated timeout = 40000
>>>>> 2015-05-21T20:49:06.916+0000 b.s.m.n.StormClientErrorHandler [INFO]
>>>>> Connection failed Netty-Client-ip-172-31-10-201.ec2.internal/
>>>>> 172.31.10.201:6703
>>>>
>>>> Thank you for your time!
>>>>
>>>> +++++++++++++++++++++
>>>> Jeff Maass <[email protected]>
>>>> linkedin.com/in/jeffmaass
>>>> stackoverflow.com/users/373418/maassql
>>>> +++++++++++++++++++++
>>>
>>> --
>>> Nikolaos Romanos Katsipoulakis,
>>> University of Pittsburgh, PhD candidate
>
> --
> Nikolaos Romanos Katsipoulakis,
> University of Pittsburgh, PhD candidate

--
Nikolaos Romanos Katsipoulakis,
University of Pittsburgh, PhD candidate
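(Editor's note for the archives: the overload scenario described above, where tuples arrive faster than a bolt can process them, is commonly mitigated in Storm by capping the number of un-acked tuples in flight. A sketch of the relevant storm.yaml entries; the numeric values are illustrative assumptions, not tuned recommendations:)

```yaml
# Cap the number of un-acked tuples each spout task may have in flight,
# so a slow downstream bolt exerts backpressure on the spout
# (only effective when tuples are anchored and acked):
topology.max.spout.pending: 1000

# A longer message timeout gives overloaded bolts more time before
# tuples are failed and replayed, reducing retry amplification:
topology.message.timeout.secs: 60
```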
