It depends on how the tasks (more precisely, the executors) are distributed
across your cluster. You would need to create a bolt with as many tasks as
there are workers used by the topology, and then develop a custom scheduler
to make sure exactly one task runs on each worker (an example of writing a
pluggable scheduler can be found at
https://xumingming.sinaapp.com/885/twitter-storm-how-to-develop-a-pluggable-scheduler/
). Actually, with the same number of workers and bolt tasks, I think the
default scheduler should assign one task to each worker, but that is not
guaranteed.
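
For illustration, here is a minimal sketch combining that wiring with your
allGrouping idea (this assumes the pre-1.0 backtype.storm API and Ehcache
2.x; InvalidationSpout, the component names, the "objects" cache, and the
"cacheKey" field are placeholders you would replace with your own):

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

// Bolt that evicts one entry from the JVM-local Ehcache. Eviction is
// idempotent, so if the scheduler ever places two tasks of this bolt
// in the same worker JVM, the duplicate removal is harmless.
public class CacheInvalidationBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        Object key = tuple.getValueByField("cacheKey"); // hypothetical field
        Cache cache = CacheManager.getInstance().getCache("objects");
        if (cache != null) {
            cache.remove(key);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Side effect only; this bolt emits nothing downstream.
    }

    public static void main(String[] args) throws Exception {
        int numWorkers = 4; // bolt parallelism matches the worker count
        TopologyBuilder builder = new TopologyBuilder();
        // InvalidationSpout is a hypothetical spout emitting "cacheKey" tuples.
        builder.setSpout("invalidation-spout", new InvalidationSpout());
        // allGrouping replicates every tuple to every task of this bolt.
        builder.setBolt("cache-invalidator", new CacheInvalidationBolt(), numWorkers)
               .allGrouping("invalidation-spout");
        Config conf = new Config();
        conf.setNumWorkers(numWorkers);
        StormSubmitter.submitTopology("cache-invalidation", conf, builder.createTopology());
    }
}

With the parallelism hint equal to the worker count, the even scheduler
will typically spread one executor per worker; and since the eviction is
idempotent, nothing breaks if two tasks do end up in the same JVM, as you
already noted.
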
Best regards,
Martin


On Wed, 9 Sep 2015 at 17:44, Adam Mitchell <[email protected]>
wrote:

> Is there a grouping option that will let me send spout output to each
> worker/JVM in my topology?  I see the "allGrouping" option, which would
> send the tuple to each task.
>
> Is there anything to send to each worker instead?
>
> The background here is that I have a JVM-level cache (Ehcache) and I'd
> like to invalidate certain objects in the cache based on input tuples.
>
> If I go with "allGrouping", I think it would work - the first task in a
> JVM to receive the tuple would clear the cache, and all the other tasks
> on that worker would just waste a little time trying to do the same.
>
