[ https://issues.apache.org/jira/browse/STORM-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013963#comment-15013963 ]

ASF GitHub Bot commented on STORM-1219:
---------------------------------------

Github user arunmahadevan commented on the pull request:

    https://github.com/apache/storm/pull/893#issuecomment-158126825
  
    @dossett it appears that the component configuration is merged with the
    global storm config when the topology is constructed and serialized. The
    merged config is then passed to `prepare`, so updating the map passed to
    `prepare` does not seem to have any effect.
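
    A minimal sketch of the per-component alternative (the bolt name, the
    15-second default, and the `org.apache.storm` package names are
    illustrative assumptions, not the actual patch): the tick interval is
    returned from `getComponentConfiguration()`, which Storm reads when the
    topology is built, instead of being written into the map that `prepare`
    receives, which is already a copy of the merged config.

        import java.util.HashMap;
        import java.util.Map;

        import org.apache.storm.Config;
        import org.apache.storm.task.OutputCollector;
        import org.apache.storm.task.TopologyContext;
        import org.apache.storm.topology.OutputFieldsDeclarer;
        import org.apache.storm.topology.base.BaseRichBolt;
        import org.apache.storm.tuple.Tuple;
        import org.apache.storm.utils.TupleUtils;

        // Illustrative bolt: the tick frequency is declared in
        // getComponentConfiguration(), which is merged into the component's
        // config at topology-build time, so the ticks actually arrive.
        public class TickAwareBolt extends BaseRichBolt {
            private static final int DEFAULT_TICK_FREQ_SECS = 15; // assumed default
            private OutputCollector collector;

            @Override
            public Map<String, Object> getComponentConfiguration() {
                Map<String, Object> conf = new HashMap<>();
                conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, DEFAULT_TICK_FREQ_SECS);
                return conf;
            }

            @Override
            public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
                this.collector = collector;
                // Putting the tick frequency into 'conf' here is too late:
                // this map is a copy of the already-merged topology config.
            }

            @Override
            public void execute(Tuple tuple) {
                if (TupleUtils.isTick(tuple)) {
                    // periodic tick: flush buffered writes, then ack pending tuples
                } else {
                    // regular tuple: buffer it for a batched write
                }
                collector.ack(tuple);
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                // this sketch emits nothing downstream
            }
        }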


> Fix HDFS and Hive bolt flush/acking
> -----------------------------------
>
>                 Key: STORM-1219
>                 URL: https://issues.apache.org/jira/browse/STORM-1219
>             Project: Apache Storm
>          Issue Type: Bug
>            Reporter: Arun Mahadevan
>            Assignee: Arun Mahadevan
>
> The HDFS and Hive bolts set the default tick tuple interval in the
> prepare() method, which does not take effect. This needs to be fixed so
> that the tuples are acked on time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
