I'm hoping someone with practical experience can answer some questions I
have. I have already scoured the docs and watched some videos, but I
still have some unanswered questions.

1. When deploying to a cluster, do I always have to build a new jar,
manually push it to the Nimbus machine, and run "storm jar my.jar
MyClass", or can I run a jar locally whose main() calls
"StormSubmitter.submitTopology()" and have everything taken care of?
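To make question 1 concrete, this is the kind of programmatic submission I have in mind (a minimal sketch only; the package names are from the Storm 0.x API, and MyTopology/MySpout/MyBolt are placeholder names of mine, not real classes):

```java
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class MyTopology {
    public static void main(String[] args) throws Exception {
        // Wire up a trivial spout -> bolt topology (MySpout/MyBolt
        // are placeholders for my own components).
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("events", new MySpout(), 2);
        builder.setBolt("counts", new MyBolt(), 4)
               .shuffleGrouping("events");

        Config conf = new Config();
        conf.setNumWorkers(3);

        // Does this call upload the jar to Nimbus for me, or do I
        // still have to copy it over and run "storm jar" by hand?
        StormSubmitter.submitTopology("my-topology", conf,
                                      builder.createTopology());
    }
}
```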

2. Does Nimbus then push jars containing the new implementation code to
all the workers, or does that have to be handled manually?

3. Can you configure the cluster so that it runs certain bolts only on
certain machines? If so, how?
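For reference on question 3, the closest thing I've found in the docs is the isolation scheduler, which seems to pin whole topologies (not individual bolts) to dedicated machines, e.g. in storm.yaml on the Nimbus node (topology name and machine count are just examples of mine):

```yaml
# storm.yaml on Nimbus
storm.scheduler: "backtype.storm.scheduler.IsolationScheduler"
isolation.scheduler.machines:
    "my-topology": 2   # dedicate 2 machines to this topology
```

Is writing a custom IScheduler the only way to go finer-grained than that?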

4. Can you join tuple streams and emit output tuples downstream only
after all expected input tuples for a group have been received?
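To illustrate question 4: absent a built-in join, the buffering I'd expect to hand-roll inside a bolt looks roughly like this (plain Java with no Storm dependencies; StreamJoinBuffer and the two-stream setup are my own placeholders, not anything from the Storm API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Buffers values per join key until all expected input streams have
 *  contributed one value for that key, then releases the joined group. */
class StreamJoinBuffer {
    private final int expectedStreams;
    private final Map<String, List<String>> pending = new HashMap<>();

    StreamJoinBuffer(int expectedStreams) {
        this.expectedStreams = expectedStreams;
    }

    /** Returns the complete group once every stream has arrived for
     *  this key, or null while we are still waiting. */
    List<String> offer(String key, String value) {
        List<String> group =
            pending.computeIfAbsent(key, k -> new ArrayList<>());
        group.add(value);
        if (group.size() == expectedStreams) {
            pending.remove(key);   // emit once, then forget the key
            return group;
        }
        return null;
    }
}
```

In a real bolt, offer() would be called from execute() and a non-null result would be emitted downstream; presumably a timeout would also be needed so keys with missing tuples don't accumulate forever.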

Thanks in advance!

~Tim
