[ https://issues.apache.org/jira/browse/STORM-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363490#comment-16363490 ]
Jungtaek Lim commented on STORM-1898:
-------------------------------------
The issue doesn't look like a blocker. Lowering priority.

> MAX_BATCH_SIZE_CONF not working in Trident storm Spout
> -------------------------------------------------------
>
>                 Key: STORM-1898
>                 URL: https://issues.apache.org/jira/browse/STORM-1898
>             Project: Apache Storm
>          Issue Type: Bug
>          Components: storm-kafka
>    Affects Versions: 1.0.0
>            Reporter: Narendra Bidari
>            Priority: Major
>
> Ideally a Trident spout should process tuples in bounded batches. For example,
> https://github.com/apache/storm/blob/ab66003c18fe4f8c0926b3219408b735b2ce2adf/storm-core/src/jvm/org/apache/storm/trident/spout/RichSpoutBatchExecutor.java
> exposes a parameter called MAX_BATCH_SIZE_CONF which limits the size of each batch.
> This parameter is not honored by TridentKafkaEmitter:
> https://github.com/apache/storm/blob/1.x-branch/external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaEmitter.java
> The problem is that every time the topology restarts, it fetches all pending messages from Kafka in a single batch.
> Could anyone share some insight on this? I believe it is a bug.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
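For readers hitting this, a minimal sketch of the configuration key the issue refers to. It assumes the constant value `"topology.spout.max.batch.size"` (mirroring `RichSpoutBatchExecutor.MAX_BATCH_SIZE_CONF` in storm-core) and avoids Storm dependencies so it stands alone; per the report, `TridentKafkaEmitter` ignores this key, and `KafkaConfig.fetchSizeBytes` (a byte-based fetch limit, not an equivalent tuple-count cap) is the only related knob on the Kafka side:

```java
import java.util.HashMap;
import java.util.Map;

public class TridentBatchSizeSketch {
    // Assumption: this string mirrors RichSpoutBatchExecutor.MAX_BATCH_SIZE_CONF
    // in storm-core; it is only honored for IRichSpouts wrapped by
    // RichSpoutBatchExecutor, not by TridentKafkaEmitter (the bug reported here).
    static final String MAX_BATCH_SIZE_CONF = "topology.spout.max.batch.size";

    public static Map<String, Object> buildConf() {
        Map<String, Object> conf = new HashMap<>();
        // Caps tuples per Trident batch for wrapped rich spouts.
        conf.put(MAX_BATCH_SIZE_CONF, 1000);
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(buildConf().get(MAX_BATCH_SIZE_CONF));
    }
}
```

With the Kafka Trident spout, the practical workaround is tuning `fetchSizeBytes` on `KafkaConfig` to bound how much data each fetch (and hence each batch) can contain.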