1. You do not have to manually push anything to Nimbus.  When you run
"storm jar", the client automatically sends everything that is needed to
Nimbus over its Thrift interface.

2. Nimbus handles this together with the supervisors: Nimbus stores the
topology code, and each supervisor downloads it before launching workers
for that topology.

3. You would need to write a custom scheduler.  See, for example,
http://xumingming.sinaapp.com/885/twitter-storm-how-to-develop-a-pluggable-scheduler/
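The heart of a custom scheduler is deciding which hosts each component's executors may land on. As a rough illustration of that placement logic only (plain Java, not the Storm IScheduler API; the component and host names are made up):

```java
import java.util.*;

// Toy sketch of scheduler placement logic: pin certain components to a
// restricted set of hosts, round-robin their executors over those hosts,
// and let everything else use the whole cluster.
public class PinnedPlacement {
    // componentName -> hosts it is allowed to run on (hypothetical pins)
    static final Map<String, List<String>> PINNED = Map.of(
        "gpu-bolt", List.of("host-a", "host-b"));

    // Returns executorIndex -> host for one component.
    public static Map<Integer, String> assign(String component,
                                              int numExecutors,
                                              List<String> allHosts) {
        List<String> candidates = PINNED.getOrDefault(component, allHosts);
        Map<Integer, String> assignment = new LinkedHashMap<>();
        for (int i = 0; i < numExecutors; i++) {
            // Round-robin executors over the allowed hosts.
            assignment.put(i, candidates.get(i % candidates.size()));
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("host-a", "host-b", "host-c");
        System.out.println(assign("gpu-bolt", 4, cluster));
        System.out.println(assign("other-bolt", 2, cluster));
    }
}
```

In a real IScheduler you would read the same kind of pinning from the topology or cluster config, then call the cluster's assignment methods instead of returning a map.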

4. Yes.  You would need to buffer the tuples in the bolt until everything
you expect for a given key has arrived, then emit the joined output tuple.
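The bookkeeping that buffering amounts to can be sketched in plain Java (not the Storm Bolt API; the per-key expected count and the emit callback are assumptions for illustration):

```java
import java.util.*;
import java.util.function.BiConsumer;

// Sketch of a joining bolt's buffer: hold partial tuples per join key and
// emit one combined result only once all expected inputs for that key
// have arrived.
public class JoinBuffer {
    private final int expectedPerKey;                    // inputs required before emitting
    private final Map<String, List<String>> pending = new HashMap<>();
    private final BiConsumer<String, List<String>> emit; // stands in for collector.emit

    public JoinBuffer(int expectedPerKey, BiConsumer<String, List<String>> emit) {
        this.expectedPerKey = expectedPerKey;
        this.emit = emit;
    }

    // Called once per incoming tuple; emits and clears the key when complete.
    public void receive(String key, String value) {
        List<String> buf = pending.computeIfAbsent(key, k -> new ArrayList<>());
        buf.add(value);
        if (buf.size() == expectedPerKey) {
            emit.accept(key, pending.remove(key));
        }
    }
}
```

In an actual bolt you would also anchor and ack the buffered tuples, and add a timeout or tick-tuple cleanup for keys whose remaining inputs never arrive, so the buffer cannot grow without bound.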

On Tue, Dec 23, 2014 at 12:31 PM, Tim Molter <[email protected]> wrote:

> I'm hoping someone with practical experience can answer some questions I
> have. I have already scoured the docs and watched some videos, but I
> still have some unanswered questions.
>
> 1. When deploying to a cluster, do I always have to build a new jar,
> manually push it to the Nimbus machine and run "storm jar my.jar
> Myclass" or can I run a jar locally that calls "StormSubmitter.submit"
> and everything is taken care of?
>
> 2. Does Nimbus then push jars with the new implementation code to all
> the workers or does that have to be manually handled?
>
> 3. Can you configure the cluster so that it only runs certain bolts on
> certain machines? How?
>
> 4. Can you join tuple streams and only send output tuples downstream
> after all expected input tuples have been received?
>
> Thanks in advance!
>
> ~Tim
>
