[ 
https://issues.apache.org/jira/browse/STORM-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321231#comment-15321231
 ] 

ASF GitHub Bot commented on STORM-1705:
---------------------------------------

Github user ptgoetz commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1331#discussion_r66318510
  
    --- Diff: 
external/storm-kafka/src/jvm/org/apache/storm/kafka/SpoutConfig.java ---
    @@ -37,6 +37,8 @@
         public long retryInitialDelayMs = 0;
         public double retryDelayMultiplier = 1.0;
         public long retryDelayMaxMs = 60 * 1000;
    +    public int retryLimit = Integer.MAX_VALUE;
    --- End diff --
    
    This is probably an unlikely edge case, but theoretically possible: If a 
tuple is retried `Integer.MAX_VALUE` times, it will be dropped.
    
    To truly turn off retry limits, I would suggest setting this to `-1` and 
adding a check for that sentinel value wherever the code tests whether the 
retry limit has been reached.
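
    The suggested `-1` sentinel could be sketched roughly as follows (a 
minimal illustration only, not the actual storm-kafka implementation; the 
`RetryPolicy` class and `shouldRetry` method are hypothetical names):

```java
// Hypothetical sketch of the reviewer's suggestion: -1 means "no retry cap",
// so a tuple retried Integer.MAX_VALUE times is no longer silently dropped.
public class RetryPolicy {
    // Sentinel meaning retries are never capped (per the review suggestion).
    public static final int NO_RETRY_LIMIT = -1;

    private final int retryLimit;

    public RetryPolicy(int retryLimit) {
        this.retryLimit = retryLimit;
    }

    /**
     * Returns true if a tuple that has already failed {@code failCount}
     * times should be scheduled for another retry.
     */
    public boolean shouldRetry(int failCount) {
        if (retryLimit == NO_RETRY_LIMIT) {
            return true; // limit disabled: always retry
        }
        return failCount < retryLimit;
    }
}
```

    With this check in place, `new RetryPolicy(RetryPolicy.NO_RETRY_LIMIT)` 
retries indefinitely, while `new RetryPolicy(Integer.MAX_VALUE)` would still 
(theoretically) drop a tuple after the limit is hit.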


> Cap on number of retries for a failed message in kafka spout
> ------------------------------------------------------------
>
>                 Key: STORM-1705
>                 URL: https://issues.apache.org/jira/browse/STORM-1705
>             Project: Apache Storm
>          Issue Type: New Feature
>          Components: storm-kafka
>            Reporter: Abhishek Agarwal
>            Assignee: Abhishek Agarwal
>
> The kafka-spout module based on the newer APIs has a cap on the number of 
> times a message is retried. It would be a good feature to add to the older 
> kafka spout code as well. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
