[ https://issues.apache.org/jira/browse/STORM-756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024961#comment-15024961 ]

ASF GitHub Bot commented on STORM-756:
--------------------------------------

Github user HeartSaVioR commented on the pull request:

    https://github.com/apache/storm/pull/897#issuecomment-159354731
  
    Modified ThroughputVsLatency test result
    (SplitSentence replaced with the multilang implementation)
    
    Parameters: ratePerSecond: 3500, parallelism: 3 (for 3 machines), test duration: 5 mins
    
    * master branch
    
    ```
    uptime: 30 acked: 10,060 acked/sec: 335.33 failed: 0 99%: 0 99.9%: 0 min: 0 max: 0 mean: 0.00 stddev: 0.00 user: 0 sys: 0 gc: 0 mem: 0.00
    uptime: 61 acked: 55,440 acked/sec: 1,788.39 failed: 0 99%: 22,045,261,823 99.9%: 23,286,775,807 min: 1,729,101,824 max: 24,628,953,087 mean: 11,432,404,508.43 stddev: 4,368,826,946.24 user: 198,130 sys: 29,340 gc: 3,656 mem: 124.56
    uptime: 91 acked: 137,980 acked/sec: 4,599.33 failed: 0 99%: 20,703,084,543 99.9%: 22,850,568,191 min: 9,191,424 max: 24,494,735,359 mean: 7,061,035,402.37 stddev: 6,645,146,118.94 user: 90,500 sys: 28,790 gc: 3,088 mem: 125.47
    uptime: 122 acked: 118,320 acked/sec: 3,816.77 failed: 0 99%: 7,864,319,999 99.9%: 8,640,266,239 min: 8,765,440 max: 8,782,872,575 mean: 1,387,484,861.22 stddev: 1,946,172,626.31 user: 84,680 sys: 30,710 gc: 2,080 mem: 125.94
    uptime: 152 acked: 104,280 acked/sec: 3,476.00 failed: 0 99%: 9,269,411,839 99.9%: 9,428,795,391 min: 8,765,440 max: 9,487,515,647 mean: 1,755,618,080.92 stddev: 2,708,794,434.21 user: 83,980 sys: 32,180 gc: 2,141 mem: 126.13
    uptime: 182 acked: 109,440 acked/sec: 3,648.00 failed: 0 99%: 11,391,729,663 99.9%: 12,373,196,799 min: 8,912,896 max: 12,507,414,527 mean: 2,468,952,333.95 stddev: 3,853,055,496.32 user: 84,560 sys: 32,350 gc: 2,130 mem: 126.24
    uptime: 212 acked: 99,780 acked/sec: 3,326.00 failed: 0 99%: 13,967,032,319 99.9%: 14,269,022,207 min: 9,207,808 max: 14,562,623,487 mean: 4,100,379,004.64 stddev: 4,838,616,992.68 user: 87,170 sys: 34,340 gc: 2,594 mem: 126.44
    uptime: 243 acked: 94,280 acked/sec: 3,041.29 failed: 0 99%: 16,886,267,903 99.9%: 17,330,864,127 min: 1,125,122,048 max: 17,465,081,855 mean: 7,729,820,542.63 stddev: 4,380,190,964.09 user: 91,780 sys: 36,070 gc: 3,770 mem: 126.53
    uptime: 273 acked: 111,800 acked/sec: 3,726.67 failed: 0 99%: 20,283,654,143 99.9%: 20,736,638,975 min: 9,535,488 max: 20,954,742,783 mean: 8,742,006,020.17 stddev: 6,569,779,263.81 user: 90,150 sys: 32,600 gc: 3,496 mem: 126.58
    uptime: 303 acked: 125,580 acked/sec: 4,186.00 failed: 0 99%: 22,699,573,247 99.9%: 23,051,894,783 min: 9,035,776 max: 23,102,226,431 mean: 6,929,826,240.64 stddev: 8,474,118,522.65 user: 84,990 sys: 30,100 gc: 2,583 mem: 126.65
    ```
    
    * STORM-756
    
    ```
    uptime: 30 acked: 10,100 acked/sec: 336.67 failed: 0 99%: 14,461,960,191 99.9%: 14,814,281,727 min: 442,236,928 max: 14,973,665,279 mean: 7,939,956,844.68 stddev: 3,841,389,551.14 user: 109,710 sys: 11,110 gc: 513 mem: 119.48
    uptime: 62 acked: 106,440 acked/sec: 3,326.25 failed: 0 99%: 26,256,343,039 99.9%: 28,387,049,471 min: 2,054,160,384 max: 29,678,895,103 mean: 13,926,608,768.77 stddev: 5,725,311,173.69 user: 108,090 sys: 24,990 gc: 4,869 mem: 125.12
    uptime: 92 acked: 131,340 acked/sec: 4,378.00 failed: 0 99%: 21,760,049,151 99.9%: 23,974,641,663 min: 9,068,544 max: 25,165,823,999 mean: 7,675,575,835.18 stddev: 6,221,018,932.96 user: 89,030 sys: 31,020 gc: 3,325 mem: 125.41
    uptime: 123 acked: 106,320 acked/sec: 3,429.68 failed: 0 99%: 22,095,593,471 99.9%: 22,884,122,623 min: 9,486,336 max: 23,471,325,183 mean: 4,663,389,393.35 stddev: 7,215,966,720.18 user: 86,270 sys: 32,410 gc: 2,577 mem: 125.95
    uptime: 153 acked: 130,440 acked/sec: 4,348.00 failed: 0 99%: 22,917,677,055 99.9%: 24,041,750,527 min: 9,453,568 max: 24,108,859,391 mean: 4,941,133,334.61 stddev: 6,703,138,537.34 user: 84,630 sys: 29,780 gc: 2,220 mem: 126.18
    uptime: 184 acked: 105,800 acked/sec: 3,412.90 failed: 0 99%: 2,885,681,151 99.9%: 3,460,300,799 min: 7,655,424 max: 3,690,987,519 mean: 613,777,739.77 stddev: 805,471,793.38 user: 80,580 sys: 30,160 gc: 1,771 mem: 126.37
    uptime: 214 acked: 112,520 acked/sec: 3,750.67 failed: 0 99%: 5,205,131,263 99.9%: 5,326,766,079 min: 8,962,048 max: 5,440,012,287 mean: 1,416,728,576.68 stddev: 1,510,976,383.57 user: 85,820 sys: 33,500 gc: 2,335 mem: 126.45
    uptime: 244 acked: 78,000 acked/sec: 2,600.00 failed: 0 99%: 5,498,732,543 99.9%: 5,725,224,959 min: 9,805,824 max: 5,792,333,823 mean: 1,915,742,867.54 stddev: 1,687,924,931.63 user: 86,790 sys: 33,830 gc: 2,589 mem: 126.62
    uptime: 275 acked: 111,300 acked/sec: 3,590.32 failed: 0 99%: 5,981,077,503 99.9%: 6,345,981,951 min: 9,125,888 max: 6,371,147,775 mean: 581,477,509.76 stddev: 1,445,777,350.87 user: 77,860 sys: 29,150 gc: 1,663 mem: 126.62
    uptime: 305 acked: 112,040 acked/sec: 3,734.67 failed: 0 99%: 3,179,282,431 99.9%: 3,485,466,623 min: 8,798,208 max: 3,527,409,663 mean: 503,179,535.38 stddev: 743,603,669.21 user: 84,440 sys: 33,880 gc: 1,844 mem: 126.70
    ```
    
    There's no noticeable difference on the subprocess side: CPU and memory usage are similar.
    
    From the results, we can see that STORM-756 shows lower latency and less GC time compared to master.


> [multilang] Introduce overflow control mechanism
> ------------------------------------------------
>
>                 Key: STORM-756
>                 URL: https://issues.apache.org/jira/browse/STORM-756
>             Project: Apache Storm
>          Issue Type: Improvement
>          Components: storm-multilang
>    Affects Versions: 0.10.0, 0.9.4, 0.11.0
>            Reporter: Jungtaek Lim
>            Assignee: Jungtaek Lim
>
> It's from STORM-738, 
> https://issues.apache.org/jira/browse/STORM-738?focusedCommentId=14394106&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14394106
> A. ShellBolt-side control
> We can modify ShellBolt to keep a list of sent tuple ids and stop sending tuples
> when the list exceeds a configured max value. In order to achieve this, the
> subprocess should notify ShellBolt that a tuple id is complete.
> * It introduces a new multi-lang command, "proceed" (or a better name).
> * ShellBolt stores a list of in-progress tuples.
> * Its overhead could be significant, since the subprocess must notify ShellBolt
> every time a tuple is processed.
> B. Subprocess-side control
> We can modify the subprocess to check its pending queue after reading a tuple.
> If it exceeds a configured max value, the subprocess can request a "delay" from
> ShellBolt to slow things down.
> When ShellBolt receives the "delay", BoltWriterRunnable should stop polling the
> pending queue and resume polling later.
> How long should ShellBolt wait before resuming? The unit would be either "delay
> time" or "tuple count"; I don't know which is better yet.
> * It introduces a new multi-lang command, "delay" (or a better name).
> * I don't think it would be introduced soon, but the subprocess could request a
> delay based on its own statistics (e.g. pending tuple count * average tuple
> processing time for the time unit, or average pending tuple count for the count unit).
> ** We can leave when and how much "delay" to request up to the user, who can then
> implement his/her own algorithm to control flooding.
> In my opinion, B seems more natural because the current issue originates on the
> subprocess side, so it would be better to let the subprocess overcome it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
