[ https://issues.apache.org/jira/browse/STORM-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076282#comment-14076282 ]

ASF GitHub Bot commented on STORM-376:
--------------------------------------

Github user danehammer commented on the pull request:

    https://github.com/apache/incubator-storm/pull/168#issuecomment-50350332
  
    > It might be worth it to try to detect whether the incoming data is
    > compressed, and only decompress it if it is.

    I don't think the recent commit is tenable. It should be all or nothing, as
    it was originally, or we'll have to add the detection smarts you mentioned.
    This makes me wonder - what happens when someone wants compression but not
    gzip? That line of thought leads me toward adding said smarts and making the
    compression somewhat pluggable.
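As a rough illustration of those smarts (purely a sketch, not the code in this pull request; the StateCodec and GzipStateCodec names are made up), a gzip codec could pass through any payload that does not carry the GZIP magic bytes, and a small interface would leave room for codecs other than gzip:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical pluggable codec interface; an implementation other than gzip
// could be selected via configuration.
interface StateCodec {
    byte[] compress(byte[] raw) throws IOException;
    byte[] decompress(byte[] stored) throws IOException;
}

// Gzip codec that only decompresses payloads starting with the GZIP magic
// bytes (0x1f 0x8b), so uncompressed data written by older daemons passes
// through untouched. (Note: a raw payload could, in principle, also start
// with those two bytes, so this check is a heuristic.)
class GzipStateCodec implements StateCodec {

    private static boolean looksGzipped(byte[] data) {
        return data != null && data.length >= 2
                && (data[0] & 0xff) == 0x1f
                && (data[1] & 0xff) == 0x8b;
    }

    @Override
    public byte[] compress(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    @Override
    public byte[] decompress(byte[] stored) throws IOException {
        if (!looksGzipped(stored)) {
            return stored;  // not gzip: hand back the raw bytes unchanged
        }
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(stored));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }
}

Detecting compression by magic bytes is what makes a gradual rollout possible; the all-or-nothing alternative avoids the heuristic entirely but requires every reader and writer to switch at once.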


> Add compression to data stored in ZK
> ------------------------------------
>
>                 Key: STORM-376
>                 URL: https://issues.apache.org/jira/browse/STORM-376
>             Project: Apache Storm (Incubating)
>          Issue Type: Improvement
>            Reporter: Robert Joseph Evans
>            Assignee: Robert Joseph Evans
>         Attachments: storm-2000.png
>
>
> If you run ZooKeeper with -Dzookeeper.forceSync=no, the ZooKeeper disk is no
> longer the bottleneck for scaling Storm.  For us, on a Gigabit Ethernet
> scale-test cluster, the bottleneck becomes the aggregate reads by all of the
> supervisors and workers trying to download the compiled topology assignments.
> To reduce this load we took two approaches.  First, we compressed the data
> being stored in ZooKeeper (this JIRA), which has the added benefit of
> increasing the size of the topology you can store in ZK.  Second, we used the
> ZK version number to see whether the data had changed, to avoid downloading
> it again needlessly (STORM-375).  (A sketch of both ideas follows below.)
> With these changes we were able to scale to a simulated 1965 nodes (5
> supervisors running on each of 393 real nodes, with each supervisor
> configured to have 10 slots).  We also filled the cluster with 131 topologies
> of 100 workers each.  (We are going to 200 topologies, and may try to scale
> the cluster even larger, but it takes forever to launch topologies once the
> cluster is under load; we may try to address that shortly too.)
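A minimal sketch of how the two ideas could fit together, assuming the hypothetical GzipStateCodec from the earlier sketch and using the plain ZooKeeper client API (the CachedAssignmentReader name and its caching fields are made up for illustration; Storm's actual code differs):

import java.io.IOException;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Illustrative only: caches the last payload and skips the download when the
// znode's data version has not moved (the STORM-375 idea), while the stored
// bytes stay gzip-compressed (this JIRA).
class CachedAssignmentReader {

    private final ZooKeeper zk;
    private final StateCodec codec = new GzipStateCodec();

    private int lastSeenVersion = -1;
    private byte[] lastPayload;

    CachedAssignmentReader(ZooKeeper zk) {
        this.zk = zk;
    }

    byte[] read(String path) throws KeeperException, InterruptedException, IOException {
        Stat stat = zk.exists(path, false);          // metadata-only round trip
        if (stat == null) {
            return null;                             // znode does not exist
        }
        if (stat.getVersion() != lastSeenVersion) {  // version moved: fetch fresh data
            byte[] raw = zk.getData(path, false, stat);
            lastPayload = codec.decompress(raw);
            lastSeenVersion = stat.getVersion();
        }
        return lastPayload;                          // otherwise reuse the cached copy
    }
}

The version check trades a cheap exists() round trip for skipping the much larger data transfer, which is where the aggregate read load on the scale-test cluster came from.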



--
This message was sent by Atlassian JIRA
(v6.2#6252)
