Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-16 Thread Chamil Elladeniya
Hi all,

We have integrated the metrics/statistics feature into the GW as a separate
handler by implementing the newly introduced MessagingHandler interface. The
handler is invoked for each transaction with the CarbonMessage and the engaged
location. With the current implementation, the GW core stays independent of the
statistics feature because the handler is added through an OSGi service.
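
For reference, the wiring looks roughly like the sketch below. This is only an
illustrative sketch: the component and handler class names are placeholders of
mine, not the actual carbon-transport classes, and the imports for
MessagingHandler and the handler implementation are omitted because their
packages are not shown in this thread.

    import org.osgi.framework.BundleContext;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;

    // Placeholder component that publishes the statistics handler as an OSGi
    // service, so the GW core has no compile-time dependency on the feature.
    @Component(name = "latency.metrics.handler.component", immediate = true)
    public class LatencyMetricsHandlerComponent {

        @Activate
        protected void activate(BundleContext bundleContext) {
            // The GW discovers the handler through the MessagingHandler
            // service interface and invokes it per transaction.
            bundleContext.registerService(MessagingHandler.class,
                    new LatencyMetricsHandler(), null);
        }
    }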

@IsuruP, thank you for the JFR analysis; I'll make the necessary changes to
the metric names.

The following load testing results [1] were obtained after the above changes.
The tests were done in the remote testing environment (BOA) using the wrk tool
with 0.5 KB messages, where the back-end was "echo-backEnd-with-delay".
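
For anyone repeating the runs, the wrk invocations had roughly the following
shape; the host, port, path, duration and Lua payload script below are
placeholders I am assuming, not the exact values used:

    wrk -t10 -c500 -d60s -s post-0.5k.lua http://<gw-host>:<port>/<service>

Here -t is the number of client threads, -c the number of open connections,
-d the test duration, and -s a small Lua script that sets the 0.5 KB request
body.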


[1]
https://docs.google.com/a/wso2.com/spreadsheets/d/18-tin2iNdnv93MX8AUq5Foe794jx-aJGjqbpB_5y3rk/edit?usp=sharing




Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-10 Thread Isuru Perera
Hi,

It's great to see that Carbon Metrics has very little impact on the Gateway's
performance.

The histogram update [1] method is synchronous and it is used by the Timer.
(A timer is a histogram of durations). The Timer by default uses
ExponentiallyDecayingReservoir
[2] and its update method [3] is synchronous. It also uses locks.
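
To make that concrete, here is a minimal, self-contained sketch against the
Dropwizard Metrics 3.1.x API referenced in [1]-[3]; the metric name is just an
example:

    import com.codahale.metrics.ExponentiallyDecayingReservoir;
    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.Timer;

    public class TimerSketch {
        public static void main(String[] args) throws InterruptedException {
            MetricRegistry registry = new MetricRegistry();

            // registry.timer(...) returns a Timer backed by an
            // ExponentiallyDecayingReservoir by default, equivalent to
            // new Timer(new ExponentiallyDecayingReservoir()).
            Timer requestTimer = registry.timer("gw.request.latency");

            Timer.Context context = requestTimer.time(); // start timing
            Thread.sleep(5);                             // simulated work
            long elapsedNanos = context.stop();          // synchronous update of the
                                                         // underlying histogram/reservoir

            System.out.println(requestTimer.getCount() + " sample(s), last one took "
                    + elapsedNanos + " ns");
        }
    }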

I also analysed the JFR dumps. For example, see the Hot Methods below (filtered
with "*metrics*") for a Flight Recording taken during a 1-minute test with 900
concurrency. (Chamil helped me get the JFR dumps.)

Stack Trace                                                                                    Sample Count   Percentage (%)
com.codahale.metrics.LongAdder.increment()                                                     32             0.7
com.codahale.metrics.Timer.time()                                                              30             0.657
org.wso2.carbon.metrics.impl.TimerImpl.start()                                                 19             0.416
com.codahale.metrics.ExponentiallyDecayingReservoir.lockForRegularUsage()                      5              0.109
com.codahale.metrics.Meter.mark(long)                                                          4              0.088
org.wso2.carbon.transport.http.netty.latency.metrics.RequestMetricsHolder.startTimer(String)   2              0.044
org.wso2.carbon.transport.http.netty.latency.metrics.ResponseMetricsHolder.stopTimer(String)   1              0.022
com.codahale.metrics.ExponentiallyDecayingReservoir.unlockForRegularUsage()                    1              0.022
com.codahale.metrics.EWMA.update(long)                                                         1              0.022
com.codahale.metrics.Timer$Context.stop()                                                      1              0.022
org.wso2.carbon.metrics.manager.MetricManager.append(StringBuilder, String)                    1              0.022
com.codahale.metrics.ExponentiallyDecayingReservoir.update(long)                               1              0.022
com.codahale.metrics.Meter.tickIfNecessary()                                                   1              0.022
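
For completeness, a recording like the one above can be taken with jcmd while
the test is running; the PID, duration and file name below are placeholders,
and on Oracle JDK 7/8 the GW JVM also needs to be started with
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder:

    jcmd <gw-pid> JFR.start duration=60s filename=gw-900-concurrency.jfr
    jcmd <gw-pid> JFR.check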


It's good to see that less than 1% of CPU time is spent on metrics-related
methods.

@Chamil, please use static final String variables for the metric names instead
of building the metric name in the constructor. Then we should be able to avoid
the repeated calls to the "MetricManager.append" method, along the lines of the
sketch below.
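
This is only a minimal sketch of what I mean: the class and metric names are
stand-ins for the actual holder classes, and it goes straight to the underlying
Dropwizard registry for brevity rather than through Carbon Metrics'
MetricManager:

    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.Timer;

    public class RequestMetricsHolder {

        // Resolve the metric name once, at class-load time, instead of
        // concatenating/appending it in every constructor call.
        private static final String REQUEST_LIFE_TIMER_NAME =
                "gw.latency.source.request.life.timer";

        private final Timer requestLifeTimer;

        public RequestMetricsHolder(MetricRegistry registry) {
            this.requestLifeTimer = registry.timer(REQUEST_LIFE_TIMER_NAME);
        }

        public Timer.Context startTimer() {
            return requestLifeTimer.time();
        }
    }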

The DAS reporter is already available, and we can configure it to send events
to WSO2 DAS. We will have to develop custom gadgets to visualize the data.

Thanks!

Best Regards,

[1]
https://github.com/dropwizard/metrics/blob/v3.1.2/metrics-core/src/main/java/com/codahale/metrics/Histogram.java#L37-L40
[2]
https://github.com/dropwizard/metrics/blob/v3.1.2/metrics-core/src/main/java/com/codahale/metrics/Timer.java#L55
[3]
https://github.com/dropwizard/metrics/blob/v3.1.2/metrics-core/src/main/java/com/codahale/metrics/ExponentiallyDecayingReservoir.java#L86-L115



Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-07 Thread Isuru Ranawaka
Hi Chamil,

Great. It seems there is no impact on performance from integrating
carbon-metrics, and it may operate in an asynchronous manner. Shall we look
into how we can visualize this data and how we can publish it to different
services like DAS?

Thanks


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-07 Thread Chamil Elladeniya
Hi all,

As the previous test results were inconsistent, I re-ran the tests for the
above scenario using wrk.

Remote environment testing using wrk

                            Original GW                            Integrated GW
# of threads  Connections   Req/sec [#/sec]   Time/req [ms]        Req/sec [#/sec]   Time/req [ms]
10            100           13721.18          7.98                 14802.58          8.55
10            200           11884.68          18.24                14997.99          15.74
10            300           11735.99          26.98                13769.9           26.99
10            400           13257.39          32.68                14569.7           33.23
10            500           12124.66          44.14                14016.72          42.47
10            600           13575.55          50.63                13117.01          55.74
10            700           12891.08          66.33                12848.27          66.64
10            800           13045.18          71.42                13391.08          70.43
10            900           12990.17          86.4                 13607.67          88.85
10            1000          7631.25           87.14                8390.08           91.55

The above average values are based on these individual test results [1].

[1]
https://docs.google.com/a/wso2.com/spreadsheets/d/1LhOjuaVFlv3AVDN-gYOW0S9rhv5kyEqczPhQ106bTZE/edit?usp=sharing


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-04 Thread Nadeeshaan Gunasinghe
Hi Chamil,

It would be better if we can run another round of perf testing before we
come to a conclusion here.


*Nadeeshaan Gunasinghe*
Software Engineer, WSO2 Inc. http://wso2.com
+94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-04 Thread Isuru Ranawaka
Hi Chamila,

Results should be consistent, and we have seen consistent results while doing
perf testing. Here we need to check whether there are other processes running
on the server that could degrade performance during load testing, and, as
Kasun mentioned, we can run the perf tests with JFR enabled.


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-04 Thread Kasun Indrasiri
Yes. We need to run this in the perf testing environments and do a detailed
analysis of the JFRs. Also, we'd better use 'wrk' instead of ab.

I guess the results were quite consistent with GW's performance test suite.
@Ranawaka, can you please comment on the behavior of the perf testing results?


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-03 Thread Chamil Elladeniya
Adding the team


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-03 Thread Chamil Elladeniya
Hi all,
These are the perf test results, run in order to compare the
metrics-integrated GW with the pure GW.

                             Original GW                           Integrated GW
# of requests  Concurrency   Req/sec [#/sec]   Time/req [ms]       Req/sec [#/sec]   Time/req [ms]
100            100           15543.07          6.434               15219.26          6.571
100            200           15633.71          12.793              12739.49          15.699
100            300           14406.71          20.824              15312.92          19.591
100            400           15116.58          26.461              12272.92          32.592
100            500           7053.54           70.886              6697.34           74.92
100            600           9497.65           63.173              11559.41          59.57
100            700           14322.05          48.876              10049.72          102.6
100            800           11374.63          70.332              10578.78          93.599
100            900           4838.48           186.009             3123.29           288.158
100            1000          9270.24           150.28              9181.02           153.23
100            1100          13958.91          78.803              13573.83          81.038
100            1300          14110.01          92.133              5978.51           217.446
100            1400          13769.77          101.672             timeout
100            1500          5573.45           404.5               timeout


The above values are averages over the tests for each case. The integrated GW
times out when the concurrency level is above 1400.
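
Assuming these runs used ab (the later suggestion to switch to wrk implies
that), a representative invocation is shown below; the request count,
concurrency and URL are placeholders rather than the exact values used:

    ab -k -n 100000 -c 500 http://<gw-host>:<port>/<service>

Here -n is the total number of requests, -c the concurrency level, and -k
enables HTTP keep-alive.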

Thank you!


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-02-03 Thread Isuru Perera
The results are not consistent. For example, in the Original GW, the RPS is
very low at 500 and 900 concurrency. Therefore, we cannot tell for sure whether
there is a considerable performance impact when using Carbon Metrics.

We first need to make sure there are no issues in the performance testing
environment.

During the previous Gateway testing, did anyone notice such inconsistent
behaviour in different concurrency levels?


Re: [Dev] Latency Calculation Feature in WSO2 GW

2016-01-25 Thread Chamil Elladeniya
Hi all,

Currently I'm tasked with implementing the latency metrics calculation feature
according to the proposed architecture [1]. So far I have integrated
carbon-metrics, and I am working on load testing to check whether there is any
performance degradation in the GW.

[1] [Architecture] Implementing Latency Metrics Calculation Feature in GW

Thank you!



-- 
Chamil Elladeniya
Software Engineering Intern
Mobile : +94 71 6181154
cham...@wso2.com


Re: [Dev] Latency Calculation Feature in WSO2 GW

2015-12-16 Thread Viraj Senevirathne
Hi All,

In ESB 4.10.0 we are introducing a new statistics feature which lets users
drill down into service-level statistics.

So for higher-level statistics we can include:

   - Avg, min, max mediation times for each service
   - Statistics for each endpoint
   - Allow users to enable and disable statistics for each component
   - Faults encountered during mediation for each service

These are some extra parameters that exist in the current transport latency
parameters. I think it would be better to incorporate the following parameters
too:

   - Parameters
   - Messages Received
   - Requests Received
   - Responses Sent
   - Faults in Receiving
   - Faults in Sending
   - Min, Max, Avg message size sent
   - Min, Max, Avg message size received
   - Bytes Received
   - Bytes Sent
   - Timeouts in Receiving
   - Timeouts in Sending
   - Active Thread Count
   - Last Reset Time
   - Statistics Views for Daily, Hourly, by minutes (this may be optional)


*Operations*

   - Reset Statistics


Thank You,




-- 
Viraj Senevirathne
Software Engineer; WSO2, Inc.

Mobile : +94 71 958 0269
Email : vir...@wso2.com


[Dev] Latency Calculation Feature in WSO2 GW

2015-12-16 Thread Nadeeshaan Gunasinghe
Hi all,
It has been a requirement to implement a feature for keeping track of the
various types of latency metrics in WSO2 GW. At the moment I am involved in
implementing this latency metrics calculation feature according to the
architecture proposed at [1].
As the first step, I am capturing the raw data required for calculating the
various latency values. This raw data is being collected as follows at the
moment:

*Server Side*

   - Source Connection Creation time
   - Source Connection life time
   - Request header read time
   - Request body read time
   - Request read time


*Client Side*

   - Client connection creation time
   - Client Connection life time
   - Response header read time
   - Response body read time
   - Response read time


As the initial step, I am going to keep track of this raw data and then
transport it through the carbon message. Then a latency calculation engine is
going to be implemented to calculate the various types of latency values, such
as:

   - Average Throughput of a connection
   - Average Latency of a connection
   - Average jitter of a connection
   - Message build time
   - Message encoding time
   - Message mediation time
   - etc

Then a data publisher component is going to be implemented for publishing
data to JMX and DAS.
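
As a rough illustration of that first step only (not the final design; the
property key, field names and derived value below are placeholders of mine):

    import java.util.concurrent.TimeUnit;

    // Placeholder sketch: raw timing marks captured at the transport level and
    // carried along with the carbon message so the latency calculation engine
    // can derive values from them later.
    public final class LatencyRawData {

        public static final String PROPERTY_KEY = "latency.raw.data"; // assumed key

        private long connectionCreatedNanos;
        private long headerReadNanos;
        private long bodyReadNanos;

        public void markConnectionCreated() { connectionCreatedNanos = System.nanoTime(); }
        public void markHeaderRead()        { headerReadNanos = System.nanoTime(); }
        public void markBodyRead()          { bodyReadNanos = System.nanoTime(); }

        // Example derived value: elapsed time between the header-read and
        // body-read marks.
        public long bodyReadTimeMillis() {
            return TimeUnit.NANOSECONDS.toMillis(bodyReadNanos - headerReadNanos);
        }
    }

The transport would attach an instance like this to the carbon message (for
example as a message property keyed by PROPERTY_KEY), and the calculation
engine would read it off on the other side.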

During the implementation, additional raw data will need to be captured
depending on the type of metrics we are going to calculate. In such
situations, I will update this thread with the latest status and findings.

[1] [Architecture] Implementing Latency Metrics Calculation Feature in GW

Regards

*Nadeeshaan Gunasinghe*
Software Engineer, WSO2 Inc. http://wso2.com
+94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe


Re: [Dev] Latency Calculation Feature in WSO2 GW

2015-12-16 Thread Kasun Indrasiri
We may also need a bit of high-level stats too, for instance the things we
have included in ESB 4.10.




-- 
Kasun Indrasiri
Software Architect
WSO2, Inc.; http://wso2.com
lean.enterprise.middleware

cell: +94 77 556 5206
Blog : http://kasunpanorama.blogspot.com/