storm.zookeeper.servers:
- "127.0.0.1"
nimbus.host: "127.0.0.1"  (127.0.0.1 causes Nimbus to bind to
the loopback interface; use your public IP or 0.0.0.0 instead)
storm.local.dir: /tmp/storm  (I recommend moving this to a
different folder, probably /home/storm; /tmp/storm will get
deleted if your machine is restarted)
Make sure your ZooKeeper is also listening on 0.0.0.0 or the
public IP, not 127.0.0.1.
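For example, a storm.yaml along these lines on both machines
should work (a sketch, assuming 54.68.149.181 is the nimbus
machine and ZooKeeper also runs there):
storm.zookeeper.servers:
- "54.68.149.181"
nimbus.host: "54.68.149.181"
storm.local.dir: "/home/storm"
And in ZooKeeper's conf/zoo.cfg you can bind the client port
to all interfaces (clientPortAddress is a standard ZooKeeper
option; 2181 is the default client port):
clientPort=2181
clientPortAddress=0.0.0.0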
"No, I cannot ping my host which has a public ip address of
54.68.149.181"
You are not able to reach this IP from the worker node, but
you are able to access the UI using it?
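A quick way to check from the worker node (a sketch; 6627 is
Storm's default nimbus.thrift.port and 2181 is ZooKeeper's
default client port):
ping -c 3 54.68.149.181
nc -vz 54.68.149.181 6627   # nimbus thrift port
nc -vz 54.68.149.181 2181   # zookeeper client port
Note that on EC2, ping can fail even when everything is fine,
because security groups often block ICMP while still allowing
TCP ports such as the UI's.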
-Harsha
On Mon, Sep 8, 2014, at 03:34 PM, Stephen Hartzell wrote:
Harsha,
The storm.yaml on the host machine looks like this:
storm.zookeeper.servers:
- "127.0.0.1"
nimbus.host: "127.0.0.1"
storm.local.dir: /tmp/storm
The storm.yaml on the worker machine looks like this:
storm.zookeeper.servers:
- "54.68.149.181"
nimbus.host: "54.68.149.181"
storm.local.dir: /tmp/storm
No, I cannot ping my host, which has a public IP address of
54.68.149.181, although I can connect to the UI web page when
it is hosted. I don't know how I would go about connecting to
ZooKeeper on the nimbus host.
-Thanks, Stephen
On Mon, Sep 8, 2014 at 6:28 PM, Harsha <[1][email protected]>
wrote:
There aren't any errors in the worker machine's supervisor
logs? Are you using the same storm.yaml for both machines, and
are you able to ping your nimbus host or connect to ZooKeeper
on the nimbus host?
-Harsha
On Mon, Sep 8, 2014, at 03:24 PM, Stephen Hartzell wrote:
Harsha,
Thanks so much for getting back to me. I will check the
logs, but I don't seem to get any error messages. I have a
nimbus AWS machine with ZooKeeper on it and a worker AWS
machine.
On the nimbus machine I start ZooKeeper and then I run:
bin/storm nimbus &
bin/storm supervisor &
bin/storm ui
On the worker machine I run:
bin/storm supervisor
When I go to the UI page, I only see 1 supervisor (the one on
the nimbus machine). So apparently, the worker machine isn't
"registering" with the nimbus machine.
On Mon, Sep 8, 2014 at 6:16 PM, Harsha <[2][email protected]>
wrote:
Hi Stephen,
What are the issues you are seeing?
"How do worker machines "know" how to connect to nimbus? Is it
in the storm configuration file"
Yes. make sure you the supervisor(worker) , nimbus nodes are
able to connect to your zookeeper cluster.
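One way to verify the ZooKeeper connection (a sketch, assuming
the default client port 2181):
echo ruok | nc your-nimbus-host 2181
A healthy ZooKeeper server replies with "imok".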
Check your logs under storm_inst/logs/ for any errors when you
try to start nimbus or supervisors.
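For example, something like this (a rough sketch, assuming
your install directory is storm_inst):
grep -iE "error|exception" storm_inst/logs/*.log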
If you are installing it manually, try following these steps
if you have not already done so:
[3]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/
-Harsha
On Mon, Sep 8, 2014, at 03:01 PM, Stephen Hartzell wrote:
All,
I would greatly appreciate any help anyone can offer. I've
been trying to set up a Storm cluster on AWS for a few weeks
now on CentOS EC2 machines. So far, I haven't been able
to get a cluster built. I can get a supervisor and nimbus to
run on a single machine, but I can't figure out how to get
another worker to connect to nimbus. How do worker machines
"know" how to connect to nimbus? Is it in the storm
configuration file? I've gone through many tutorials and the
official documentation, but this point doesn't seem to be
covered anywhere in sufficient detail for a new guy like me.
Some of you may be tempted to point me toward storm-deploy,
but I spent four days trying to get that to work before I gave
up. I'm hitting Issue #58 on GitHub. Following the instructions
exactly, and other tutorials, on a brand-new AWS machine fails.
So I gave up on storm-deploy and decided to try to set up a
cluster manually. Thanks in advance to anyone willing to offer
me any input you can!
References
1. mailto:[email protected]
2. mailto:[email protected]
3. http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/