Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/2241

@harshach For ThroughputVsLatency, the throttling spout is intentional: we set a desired throughput and look at the latency histogram and other metrics (CPU, GC, etc.). There are 3 components in the topology, each with parallelism set to 4 * worker count, so 12 executor threads per worker in total. I think we can parameterize the magic number 4 and adjust it while testing too.

I have also run some performance tests without modifying the TVL topology. The reason is that we should also care about topologies that are not tuned for maximum performance. For benchmarking a performance-maximized topology we already have ConstSpoutIdBoltNullBoltTopo, so let's not modify TVL and instead verify that this patch works in all cases. Since this patch doesn't seem to handle inter-worker communication properly, the set of tests we can run for now is very limited.

Here's the machine spec I used for the performance tests:

```
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
```

```
OS: Ubuntu 17.04
CPU: AMD Ryzen 5 1600 3.2GHz 6 cores (with hyper-threading = 12 logical cores)
RAM: Samsung DDR4 32G 19200
SSD: Samsung 850 Evo
```

And here are my numbers (pasted as raw numbers): https://docs.google.com/spreadsheets/d/1J3S4R68CsazlINF60rQn4XCy2Hx5QNzqVoWSO9qo2tc/edit?usp=sharing

My observation is that this patch looks impressive with a performance-maximized topology, but it looks really bad (not acceptable) with a relatively idle topology. I observed everything @revans2 observed in the TVL tests. However, the patch looks stable with ConstSpoutIdBoltNullBoltTopo, and CPU usage even seems lower than stock in that test. While we often publicize micro-benchmark results, in practice users run much more idle topologies.
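As an aside on the parallelism point above, here is a minimal sketch of what parameterizing the magic number 4 could look like. All names (`parallelismPerComponent`, `executorsPerWorker`, the class name) are illustrative only and do not come from the actual ThroughputVsLatency source; the per-worker count assumes even scheduling of executors across workers.

```java
// Hypothetical sketch only: these helper names are not from the real
// ThroughputVsLatency topology; they just make the arithmetic explicit.
public class TvlParallelism {

    /** Current magic number; could become a CLI/config parameter. */
    static final int DEFAULT_MULTIPLIER = 4;

    /** Each component's parallelism hint: multiplier * total worker count. */
    static int parallelismPerComponent(int multiplier, int workerCount) {
        return multiplier * workerCount;
    }

    /** Executor threads landing on each worker, assuming even scheduling. */
    static int executorsPerWorker(int componentCount, int multiplier) {
        return componentCount * multiplier;
    }

    public static void main(String[] args) {
        // Allow overriding the multiplier instead of hard-coding 4.
        int multiplier = args.length > 0
                ? Integer.parseInt(args[0]) : DEFAULT_MULTIPLIER;
        // With 3 components and the default multiplier:
        // 3 * 4 = 12 executor threads per worker, as described above.
        System.out.println(executorsPerWorker(3, multiplier));
    }
}
```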
I'm OK if things can be stabilized by adjusting parameters (in that case I think the stable values should become the defaults), but if not, this should be addressed before accepting the patch. I would be -1 if the TVL result is not stable.