[ https://issues.apache.org/jira/browse/STORM-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092909#comment-14092909 ]

ASF GitHub Bot commented on STORM-330:
--------------------------------------

Github user ptgoetz commented on a diff in the pull request:

    https://github.com/apache/incubator-storm/pull/220#discussion_r16061121
  
    --- Diff: storm-core/src/jvm/backtype/storm/messaging/netty/Client.java ---
    @@ -137,8 +138,9 @@ private synchronized void connect() {
                 if (channel != null && channel.isConnected()) {
                     return;
                 }
    -            
    +
                 int tried = 0;
    +            StormBoundedExponentialBackoffRetry retryPolicy = new StormBoundedExponentialBackoffRetry(base_sleep_ms, max_sleep_ms, max_retries);
    --- End diff ---
    
    Would it make sense to make `retryPolicy` an instance variable and 
instantiate it in the constructor rather than create a new one for every call 
to `connect()`?
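
A minimal sketch of the refactor the reviewer is suggesting: build the retry policy once in the constructor and reuse it on every `connect()` call. The `StormBoundedExponentialBackoffRetry` class below is an illustrative stand-in written for this example (its real implementation lives in storm-core); only the constructor signature `(base_sleep_ms, max_sleep_ms, max_retries)` is taken from the diff, and the `Client` skeleton is hypothetical.

```java
// Illustrative stand-in for StormBoundedExponentialBackoffRetry;
// the real class lives in storm-core. Signature taken from the diff.
class StormBoundedExponentialBackoffRetry {
    private final int baseSleepMs;
    private final int maxSleepMs;
    private final int maxRetries;

    StormBoundedExponentialBackoffRetry(int baseSleepMs, int maxSleepMs, int maxRetries) {
        this.baseSleepMs = baseSleepMs;
        this.maxSleepMs = maxSleepMs;
        this.maxRetries = maxRetries;
    }

    // Exponential backoff: baseSleepMs * 2^retryCount, bounded by maxSleepMs.
    int getSleepTimeMs(int retryCount) {
        long sleep = (long) baseSleepMs << Math.min(retryCount, 30);
        return (int) Math.min(sleep, maxSleepMs);
    }

    boolean allowRetry(int retryCount) {
        return retryCount < maxRetries;
    }
}

// Hypothetical Client skeleton showing the reviewer's suggestion:
// retryPolicy is an instance field created once in the constructor,
// not rebuilt on every connect() call.
class Client {
    private final StormBoundedExponentialBackoffRetry retryPolicy;

    Client(int baseSleepMs, int maxSleepMs, int maxRetries) {
        this.retryPolicy = new StormBoundedExponentialBackoffRetry(baseSleepMs, maxSleepMs, maxRetries);
    }

    synchronized void connect() {
        int tried = 0;
        while (retryPolicy.allowRetry(tried)) {
            // ... attempt connection; on failure, sleep
            // retryPolicy.getSleepTimeMs(tried) and retry ...
            tried++;
        }
    }
}
```

Since the policy is immutable configuration (base sleep, max sleep, max retries never change after construction), creating it once avoids a small allocation on every reconnect attempt and makes the configuration visible in one place.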


> storm.messaging.netty.max_retries option in config file not being used if > 30
> ------------------------------------------------------------------------------
>
>                 Key: STORM-330
>                 URL: https://issues.apache.org/jira/browse/STORM-330
>             Project: Apache Storm (Incubating)
>          Issue Type: Bug
>    Affects Versions: 0.9.1-incubating
>            Reporter: Roland Jungnickel
>            Priority: Minor
>
> I have been trying to set storm.messaging.netty.max_retries to 240 because of 
> connection issues when one worker takes a long time to start its processes. 
> But due to this line 
> https://github.com/apache/incubator-storm/blob/1a0b46e95ab4ac467525314a75819a75dec92c40/storm-core/src/jvm/backtype/storm/messaging/netty/Client.java#L73
>  max_retries is capped at 30. I am guessing this is a bug, or at least it 
> should be noted somewhere in the documentation.
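
The reported behavior can be sketched as follows. If the linked line clamps the configured value with a hard-coded ceiling (the exact expression in Client.java may differ; `Math.min` and the variable names here are assumptions for illustration), any configured value above 30 is silently discarded:

```java
// Hypothetical illustration of the reported cap: a configured value of 240
// is silently reduced to the hard-coded ceiling of 30.
int configuredMaxRetries = 240; // storm.messaging.netty.max_retries from the config file
int effectiveMaxRetries = Math.min(30, configuredMaxRetries);
// effectiveMaxRetries is 30 -- the user's setting of 240 never takes effect,
// so retries stop long before a slow-starting worker comes up.
```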



--
This message was sent by Atlassian JIRA
(v6.2#6252)