[
https://issues.apache.org/jira/browse/STORM-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424776#comment-16424776
]
Roshan Naik commented on STORM-2983:
------------------------------------
[~kabhwan] I think you are again missing what I am stressing.
We need a way, in code, to check the worker count (for both internal and
user code), not to remove the code that does such checks. I am not concerned
about retaining this one particular optimization.
There is no point in removing reasonable code only to put it back again.
I would like to see why we cannot either fix topology.workers or provide
something else as a substitute.
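To make the concern concrete, here is a minimal sketch of the kind of check being discussed. It assumes the worker count is read from the topology conf map under the key "topology.workers" (the key behind Config.TOPOLOGY_WORKERS in storm-client); the helper and class names here are hypothetical, not Storm API.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class WorkerCountCheck {
    // Key under which Storm stores the configured worker count
    // (Config.TOPOLOGY_WORKERS in storm-client).
    static final String TOPOLOGY_WORKERS = "topology.workers";

    // Hypothetical helper: read the worker count from a topology conf map,
    // falling back to 1 when the key is unset.
    static int workerCount(Map<String, Object> topoConf) {
        Object v = topoConf.get(TOPOLOGY_WORKERS);
        return v == null ? 1 : ((Number) v).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> conf = new HashMap<>();
        conf.put(TOPOLOGY_WORKERS, 2);
        // Example gate: enable a single-worker optimization only when
        // the topology runs in exactly one worker.
        boolean singleWorker = workerCount(conf) == 1;
        System.out.println(singleWorker);
    }
}
{code}

The point is that both internal components and user code should be able to make a decision like {{singleWorker}} above, whatever mechanism ends up replacing topology.workers.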
> Some topologies not working properly
> -------------------------------------
>
> Key: STORM-2983
> URL: https://issues.apache.org/jira/browse/STORM-2983
> Project: Apache Storm
> Issue Type: Bug
> Reporter: Ethan Li
> Assignee: Ethan Li
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> For example,
> {code:java}
> bin/storm jar storm-loadgen-*.jar \
>   org.apache.storm.loadgen.ThroughputVsLatency \
>   --spouts 1 --splitters 2 --counters 1 \
>   -c topology.debug=true
> {code}
> is not working properly on the ResourceAwareScheduler.
> With default cluster settings, there will be only one __acker executor, and
> it will be placed on a separate worker. It looks like the __acker executor
> was not able to receive messages from the spouts and bolts, so the spouts
> and bolts kept retrying sends to the acker. That in turn led to another
> problem: STORM-2970.
> I ran the topology on Storm builds from right before
> [https://github.com/apache/storm/pull/2502] and from right after it, and
> confirmed that this bug is related to that change.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)