Our initial assumptions were wrong, and we were able to narrow this issue
down to an incorrect configuration parameter name. Because of that, we were
always running with a single Disruptor event handler thread instead of a
thread pool. Now we are seeing better perf values with the Disruptor as
opposed to the …
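For anyone following along, here is roughly the difference in LMAX Disruptor
DSL terms. This is an illustrative sketch, not our actual carbon-transport
code; the event type, pool size, and class names are made up:

import com.lmax.disruptor.WorkHandler;
import com.lmax.disruptor.dsl.Disruptor;
import java.util.concurrent.Executors;

public class HandlerVsPoolSketch {
    // Hypothetical event type standing in for the transport's message carrier.
    static class HttpEvent { Object payload; }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Disruptor<HttpEvent> disruptor = new Disruptor<>(
                HttpEvent::new, 1024, Executors.defaultThreadFactory());

        // What the bad parameter name silently gave us: a single EventHandler,
        // i.e. exactly one consumer thread processing every event in turn.
        //   disruptor.handleEventsWith(
        //           (event, sequence, endOfBatch) -> process(event));

        // What was intended: a pool of N WorkHandlers, each backed by its own
        // thread, competing for events from the same ring buffer.
        int handlerThreads = 8; // illustrative; the real value comes from config
        WorkHandler<HttpEvent>[] pool = new WorkHandler[handlerThreads];
        for (int i = 0; i < handlerThreads; i++) {
            pool[i] = HandlerVsPoolSketch::process;
        }
        disruptor.handleEventsWithWorkerPool(pool);
        disruptor.start();
    }

    static void process(HttpEvent event) { /* application logic goes here */ }
}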
On Mon, Mar 14, 2016 at 1:53 PM, Srinath Perera wrote:
> I talked to Ranawaka and Isuru in detail.
>
> The Disruptor helps a lot when tasks are CPU bound. In such cases, it can
> work with very few threads and reduce the overhead of communication
> between threads.
>
> However, when threads block for I/O this advantage is reduced a lot. In
> those cases, we need to …
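To make the blocking-I/O point concrete, here is a minimal, self-contained
demo (illustrative only; Thread.sleep stands in for a blocking backend call)
of the head-of-line blocking you get with a single Disruptor event handler:

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import java.util.concurrent.Executors;

public class BlockingHandlerDemo {
    static class Event { long id; }

    public static void main(String[] args) throws Exception {
        Disruptor<Event> disruptor = new Disruptor<>(
                Event::new, 8, Executors.defaultThreadFactory());

        // One handler = one consumer thread. While it sleeps, every later
        // event waits behind it, and once the 8-slot ring buffer fills,
        // the publishing side stalls as well.
        EventHandler<Event> handler = (event, sequence, endOfBatch) -> {
            Thread.sleep(100); // simulated I/O wait
            System.out.println("processed " + event.id);
        };
        disruptor.handleEventsWith(handler);
        RingBuffer<Event> ring = disruptor.start();

        long start = System.nanoTime();
        for (long i = 0; i < 32; i++) {
            final long id = i;
            ring.publishEvent((event, sequence) -> event.id = id);
        }
        // Far more than the cost of 32 publishes: publishing itself blocked
        // behind the slow consumer once the buffer filled.
        System.out.printf("published 32 events in %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
        disruptor.shutdown();
    }
}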
Hi,
Let me try to clarify a few things here.
- Initially we implemented the Netty HTTP transport with the conventional
thread model (workers), and at the same time we also tested the Disruptor
based model for the same Gateway/header-based routing use case. The
Disruptor based approach gave us around ~20k of …
Hi Azeez,
In the GW, Disruptor threads are not used to make calls to backends.
Backends are called by the Netty worker pool, and those calls are
non-blocking. So if a backend responds after a delay it won't be a problem.
In MSF4J, if the operation includes I/O or delayed work then it causes …
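A rough sketch of the non-blocking call pattern being described, using plain
Netty client APIs (the hostname and handler bodies are made up; this is not
the actual GW source):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.http.*;

public class NonBlockingBackendCall {
    public static void main(String[] args) {
        EventLoopGroup workerPool = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(workerPool)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(new HttpClientCodec());
                        ch.pipeline().addLast(
                                new SimpleChannelInboundHandler<HttpObject>() {
                            @Override
                            protected void channelRead0(ChannelHandlerContext ctx,
                                                        HttpObject msg) {
                                // Runs later, on a worker thread, whenever the
                                // backend answers; no thread waited for it.
                            }
                        });
                    }
                });

        // connect() returns a future immediately; the request is written from
        // the callback once the channel is up, so a slow backend never parks
        // a thread on our side.
        bootstrap.connect("backend.example.com", 8080)
                .addListener((ChannelFutureListener) future -> {
                    if (future.isSuccess()) {
                        future.channel().writeAndFlush(new DefaultFullHttpRequest(
                                HttpVersion.HTTP_1_1, HttpMethod.GET, "/"));
                    }
                });
    }
}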
Kasun et al,
In MSF4J, the threads from the Disruptor pool itself are used for
processing the MSF4J operation. In the case of the GW pass-through & HBR
scenarios, are those Disruptor threads used to make the calls to the actual
backends? Is that a blocking call? What if the backend service is …
Hi,
We have made the Disruptor optional, and Samiyuru will continue the testing
as well.
Thanks
On Fri, Mar 11, 2016 at 12:08 PM, Isuru Ranawaka wrote:
> Hi Azeez,
>
> I am currently working on making the Disruptor optional and I will finish
> it by EOD today. After that we will do the tests that Kasun has mentioned
> and figure out the best thread model for MSF4J to be used.
> thanks
On Fri, Mar 11, 2016 at 12:00 PM, Afkham Azeez wrote:
> Shall we aim to get to the bottom of this by EoD today?
On Fri, Mar 11, 2016 at 11:44 AM, Samiyuru Senarathne wrote:
Transport config used for the above JFR:
  -
    id: "jaxrs-http"
    host: "0.0.0.0"
    port: 8080
    bossThreadPoolSize: 1
    workerThreadPoolSize: 16
    parameters:
      # -
      #   name: "execThreadPoolSize"
      #   value: 2048
      -
        name: "disruptor.buffer.size"
        value: 1024
      -
        name: …
Hi,
Please find the results of the tests I have done so far:
https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing
The MSF4J 1.0.0 section of these tests gives a good insight into the Netty
thread behaviour. I focused a bit more on that because …
Yes, Azeez.
So we are currently testing the following approaches, with some processing
at the engine side (rather than doing just pass-through); a rough sketch of
the first model is below the list.
- Transport with a Netty worker pool (IO) -> a message processing worker
pool which does the dispatching/processing of the messages: (No
Disruptor/making the …
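A minimal sketch of that first model, assuming a plain ExecutorService as
the processing pool (the class name and the dispatch() stub are hypothetical,
not the actual carbon-transport code):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    // Separate message-processing pool: the Netty worker (IO) threads only
    // decode bytes and hand the request over, so they are never blocked.
    private static final ExecutorService PROCESSING_POOL =
            Executors.newFixedThreadPool(16);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        // retain() keeps the buffer valid after SimpleChannelInboundHandler's
        // automatic release, since we use the request on another thread.
        FullHttpRequest retained = request.retain();
        PROCESSING_POOL.execute(() -> {
            try {
                FullHttpResponse response = dispatch(retained);
                ctx.writeAndFlush(response); // safe to call from any thread
            } finally {
                retained.release();
            }
        });
    }

    private FullHttpResponse dispatch(FullHttpRequest request) {
        // engine-side dispatching/processing happens here
        throw new UnsupportedOperationException("illustrative stub");
    }
}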
I think Samiyuru tested with that new worker pool & still the performance
is unacceptable.
On Mar 11, 2016 9:45 AM, "Isuru Ranawaka" wrote:
> Hi,
>
> We have already added a native worker pool for the Disruptor and Samiyuru
> is doing testing on that. We will make the Disruptor optional as well.
> thanks
On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri wrote:
> Yes, we can make the Disruptor optional. Also, we should try using the …
If we have issues with the Disruptor, those issues equally affect the GW
and GW-based servers for most real use cases. So fixing that in the proper
way and continuing to use the same architecture for both the GW and MSF4J
looks ideal to me.
On Thu, Mar 10, 2016 at 10:26 AM, Afkham Azeez wrote:
> No; from day 1 we have decided that GW & MSF4J will use the same Netty
> transport component, so that the config file will be the same and
> improvements made to that transport will be automatically available for
> both products. So now, at least for MSF4J, we have issues in using the
> Netty …
When we discussed Carbon transports for MSF4J last week, the main rationale
we identified was that moving to the Carbon transport would decouple
transport threads from worker threads through the Disruptor and provide a
lot of flexibility and manageability. If the Disruptor can't give better
performance we should …
After upgrading to the new transport, we are seeing a significant drop in
performance for any service that takes some time to execute. We have tried
with the configuration used for the gateway, which gave the best figures on
the same hardware. We have also noted that using a separate dedicated …