[ https://issues.apache.org/jira/browse/STORM-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424616#comment-16424616 ]

Roshan Naik commented on STORM-2983:
------------------------------------

As stated before, the core issue is not this specific optimization. If we 
remove this optimization, we would also have to remove all the other code that 
performs the same check. It is important to get RAS working, but it needs to be 
done correctly.

My concern is that, independent of the existence or absence of this 
optimization, the mechanism by which Storm internal code or end-user code 
checks the worker count is broken. Fixing that would address RAS and would not 
require removing the similar code elsewhere.

So I would like to ask my previous question again:

 
 - Is there a good reason why topology.workers cannot be dynamically updated to 
reflect the actual worker count?
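To make the concern concrete, here is a minimal, hypothetical sketch in plain Java (no Storm dependencies; the conf map and the `topology.workers` key mirror Storm's `Config.TOPOLOGY_WORKERS`, and `canUseLocalTransferOnly` is an invented stand-in for any code that branches on the configured worker count). It shows how a stale `topology.workers` value misleads such a check when the scheduler actually assigns a different number of workers:

```java
import java.util.HashMap;
import java.util.Map;

public class WorkerCountCheck {
    // Mirrors Storm's Config.TOPOLOGY_WORKERS key (named here for illustration).
    static final String TOPOLOGY_WORKERS = "topology.workers";

    // Hypothetical stand-in for any check that keys off the configured
    // worker count, e.g. deciding whether a single-worker optimization applies.
    static boolean canUseLocalTransferOnly(Map<String, Object> conf) {
        return ((Number) conf.getOrDefault(TOPOLOGY_WORKERS, 1)).intValue() == 1;
    }

    public static void main(String[] args) {
        Map<String, Object> conf = new HashMap<>();
        conf.put(TOPOLOGY_WORKERS, 1); // user-supplied value, never updated

        int actualWorkers = 2; // e.g. RAS places the acker on its own worker

        // The stale conf value says "1 worker", so the check is wrong...
        System.out.println("conf says local-only: " + canUseLocalTransferOnly(conf));
        System.out.println("actual workers: " + actualWorkers);

        // ...whereas if topology.workers were dynamically updated to the real
        // count, the very same check would give the right answer.
        conf.put(TOPOLOGY_WORKERS, actualWorkers);
        System.out.println("after update, local-only: " + canUseLocalTransferOnly(conf));
    }
}
```

The point of the sketch: updating the conf value fixes every such check at once, instead of hunting down and removing each one.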

 

> Some topologies not working properly 
> -------------------------------------
>
>                 Key: STORM-2983
>                 URL: https://issues.apache.org/jira/browse/STORM-2983
>             Project: Apache Storm
>          Issue Type: Bug
>            Reporter: Ethan Li
>            Assignee: Ethan Li
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> For example,
> {code:java}
> bin/storm jar storm-loadgen-*.jar 
> org.apache.storm.loadgen.ThroughputVsLatency --spouts 1 --splitters 2 
> --counters 1 -c topology.debug=true
> {code}
> does not work properly on the ResourceAwareScheduler.
> With default cluster settings, there will be only one __acker executor, and it 
> will be placed on a separate worker. It looks like the __acker executor was not 
> able to receive messages from the spouts and bolts, so the spouts and bolts 
> kept retrying sends to the acker. That in turn led to another problem:
> STORM-2970
> I ran on Storm right before 
> [https://github.com/apache/storm/pull/2502] and right after, and confirmed 
> that this bug is related to that change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
