My problem was that I had not changed the topology.workers configuration.
It was using the default of 1.

I thought it was probably something obvious. This is resolved now.
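In case anyone else hits this: the worker count has to be raised explicitly, either via topology.workers in storm.yaml or on the Config passed at submit time. A minimal sketch of the latter, assuming the Storm 0.9-era API (backtype.storm packages) and the TridentTopology "topology" from the quoted example below:

-------------------------------------
import backtype.storm.Config;
import backtype.storm.StormSubmitter;

Config conf = new Config();
// Request 4 worker processes for the topology; the default of 1 is what
// pins every bolt to a single worker
conf.setNumWorkers(4);

// StormSubmitter ships the topology to the cluster with this conf;
// the topology name here is arbitrary
StormSubmitter.submitTopology("MyTopology", conf, topology.build());
-------------------------------------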


On Wed, Feb 26, 2014 at 4:00 PM, Sangwa Simfukwe
<[email protected]> wrote:

> I've been working on a proof of concept with Storm and got it working in
> Local Mode. I'm running into problems when I try to deploy, though. When I
> launch Storm UI, I see that all bolts are running on only one of the
> workers! I can't for the life of me figure out why. That is, when I look
> at Storm UI, I see bolts b0-b4 and spout0 under the "Bolts (All time)"
> section. When I drill down into the bolts, I see that they are all
> assigned to the same host (I have four worker machines configured).
>
> What I'm trying to do is have a distributed topology with some in memory
> state. When a DRPC request comes in, it should be distributed to all
> partitions and return an aggregated response.
>
> My topology setup looks something like this:
> -------------------------------------
> // This part of the topology should populate the state
>
> TridentState state = topology.newStream("MyTopology", myStateFeederSpout)
>         .partitionBy(new Fields("id"))
>         .partitionPersist(myStateFactory, new Fields("id", "a", "b"),
>                 new MyStateUpdater())
>         .parallelismHint(12);
>
> // Set up so DRPC requests can query the state
> Stream stream = topology.newDRPCStream("someFunction");
>
> stream.each(new Fields("args"), new JsonStringToRequestConverter(),
>                 new Fields("request")) // Convert String to Request object
>         .broadcast() // Request should be sent to every partition
>         .stateQuery(state, new Fields("request"), new MyQuery(),
>                 new Fields("result"))
>         .aggregate(new Fields("result"), new MyCombiner(),
>                 new Fields("response"))
>         .each(new Fields("response"), new ObjectToJsonStringConverter(),
>                 new Fields("responseAsString"))
>         .project(new Fields("responseAsString"));
> -------------------------------------
>
> This works locally, but why is it deployed to only one worker when run
> remotely? What I've tried:
> - set topology.optimize to false
> - stop all but one of the worker machines; the topology then deploys to
>   whichever machine is left, and this works for any of the machines
>
> What am I missing?
>
> Thanks
>
> -Sangwa
>
