[
https://issues.apache.org/jira/browse/STORM-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092962#comment-14092962
]
ASF GitHub Bot commented on STORM-376:
--------------------------------------
Github user danehammer commented on the pull request:
https://github.com/apache/incubator-storm/pull/168#issuecomment-51805865
I have deployed this in a development cluster and intermittently see
errors around the dreaded "IllegalStateException: unread block data", as well
as cases where my custom deserialization code appears to receive incomplete
blocks. It didn't happen for every topology; several ran fine, even after
several redeploys, but occasionally I would hit deserialization problems.
This only happens with the gzip implementation. Since configuring the
DefaultSerializationDelegate I have not seen any issues.
@revans2 I believe you said you've been running this at your place for a
while now (before the pull request); did you have any similar experiences?
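
A minimal sketch of that workaround, assuming the delegate is selected via a
storm.meta.serialization.delegate entry in storm.yaml (the key and class names
here are taken from this pull request and may differ in the final patch):

    # storm.yaml: fall back to the uncompressed delegate
    storm.meta.serialization.delegate: "backtype.storm.serialization.DefaultSerializationDelegate"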
> Add compression to data stored in ZK
> ------------------------------------
>
> Key: STORM-376
> URL: https://issues.apache.org/jira/browse/STORM-376
> Project: Apache Storm (Incubating)
> Issue Type: Improvement
> Reporter: Robert Joseph Evans
> Assignee: Robert Joseph Evans
> Attachments: storm-2000.png
>
>
> If you run ZooKeeper with -Dzookeeper.forceSync=no, the ZooKeeper disk is
> no longer the bottleneck for scaling Storm. For us, on a Gigabit Ethernet
> scale-test cluster, the bottleneck becomes the aggregate reads by all of the
> supervisors and workers downloading the compiled topology assignments.
> To reduce this load we took two approaches. First, we compressed the data
> stored in ZooKeeper (this JIRA), which has the added benefit of increasing
> the size of the topology you can store in ZK. Second, we used the ZK version
> number to check whether the data had changed and avoid downloading it again
> needlessly (STORM-375). (Sketches of both approaches appear below the quoted
> description.)
> With these changes we were able to scale to a simulated 1965 nodes (5
> supervisors running on each of 393 real nodes, with each supervisor
> configured to have 10 slots). We also filled the cluster with 131 topologies
> of 100 workers each. (We are going to 200 topologies, and may try to scale
> the cluster even larger, but it takes a long time to launch topologies once
> the cluster is under load; we may try to address that shortly as well.)
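
A minimal sketch of the compression approach described above: Java-serialize
the object, then gzip the bytes before writing them to ZooKeeper, and reverse
the process on read. Class and method names are illustrative only; this is not
the code from the pull request.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class GzipSerializationSketch {

        public static byte[] serialize(Object obj) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos));
            oos.writeObject(obj);
            // Closing flushes the object stream and finishes the gzip trailer;
            // skipping this is one way to end up with truncated blocks.
            oos.close();
            return bos.toByteArray();
        }

        public static Object deserialize(byte[] bytes)
                throws IOException, ClassNotFoundException {
            ObjectInputStream ois = new ObjectInputStream(
                    new GZIPInputStream(new ByteArrayInputStream(bytes)));
            Object ret = ois.readObject();
            ois.close();
            return ret;
        }
    }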
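
And a sketch of the STORM-375 idea of using the znode version to skip needless
downloads: remember the version from the last read and only fetch the
(potentially large) assignment data when it has changed. The path handling and
caching fields here are illustrative assumptions, and a real implementation
would also handle the node changing between the two ZooKeeper calls.

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class VersionedDownloadSketch {
        private int lastVersion = -1;
        private byte[] cachedData = null;

        public byte[] getAssignments(ZooKeeper zk, String path) throws Exception {
            // exists() returns only the Stat, which is cheap compared to the data.
            Stat stat = zk.exists(path, false);
            if (stat == null) {
                return null; // node does not exist
            }
            if (stat.getVersion() == lastVersion && cachedData != null) {
                return cachedData; // unchanged, skip the download
            }
            Stat readStat = new Stat();
            cachedData = zk.getData(path, false, readStat);
            lastVersion = readStat.getVersion();
            return cachedData;
        }
    }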
--
This message was sent by Atlassian JIRA
(v6.2#6252)