I mean, nowadays everyone doing networking is using SEDA-based
solutions, but, you know, writing event-based code is much more
complex than writing plain old synchronous code.
So I asked myself: will javaflow be performant enough in a similar
use case? I decided to find out by writing a small proof of concept.
Cool
I want to take a synchronous network client and make it run
asynchronously without altering its code, because the code is much
cleaner to read than a SEDA/event-based approach.
I know continuation libraries have to do a lot of work with the
stack/local memory and may be a performance bottleneck, but I wanted
to give it a try and understand the real overhead.
Here is a Proof of Concept:
My dummy synchronous client:
http://people.apache.org/~bago/netflow/site/xref/org/apache/james/netflow/rewrite/TesterClient.html
The network abstraction is the Transport: basically it allows you to
read/write lines in a blocking way.
http://people.apache.org/~bago/netflow/site/xref/org/apache/james/netflow/Transport.html
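Roughly, it is something like the sketch below (the method names here are just my guesses for this mail; check the xref above for the real signatures):

import java.io.IOException;

// Minimal sketch of the blocking, line-oriented network abstraction.
public interface Transport {

    // Send one line to the peer; blocks until the write has happened.
    void writeLine(String line) throws IOException;

    // Read the next line from the peer; blocks until one is available.
    String readLine() throws IOException;

    // Tear down the underlying connection.
    void close() throws IOException;
}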
As a first step I implemented an "in VM" test that implements the
above transport using MINA against a MINA-backed echo server running
over the VmPipe transport.
Here is the MINA transport:
http://people.apache.org/~bago/netflow/site/xref/org/apache/james/netflow/MinaTransport.html
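For reference, a blocking transport over MINA boils down to something like the sketch below. The class name, the handler wiring and the MINA 2 style imports are my own assumptions for this mail (the real MinaTransport in the xref may differ, e.g. in the MINA version used); it also assumes a line codec is installed on the session so messages arrive as Strings:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

// Sketch: a blocking line transport on top of a MINA session. The handler
// queues incoming lines; readLine() blocks on the queue and writeLine()
// blocks on the WriteFuture. For simplicity the transport doubles as the
// IoHandler of its single session.
public class BlockingMinaTransportSketch extends IoHandlerAdapter {

    private final BlockingQueue<String> incoming = new LinkedBlockingQueue<String>();
    private volatile IoSession session;

    @Override
    public void sessionOpened(IoSession session) {
        this.session = session;
    }

    @Override
    public void messageReceived(IoSession session, Object message) {
        // With a text line codec every message is one decoded line.
        incoming.add(message.toString());
    }

    public void writeLine(String line) {
        // Block until MINA has actually flushed the line.
        session.write(line).awaitUninterruptibly();
    }

    public String readLine() throws InterruptedException {
        // Block until the handler has queued the next line.
        return incoming.take();
    }

    public void close() {
        session.close(true);
    }
}

In the PoC the server side is an echo run over MINA's VmPipe transport, which is why the numbers below contain no real network latency.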
Now I want to make it asynchronous without changing the TesterClient
source code.
I wrote a ContinuingMinaTransport:
http://people.apache.org/~bago/netflow/site/xref/org/apache/james/netflow/rewrite/ContinuingMinaTransport.html
This transport simply takes care of suspending the execution of the
program each time an IoFuture is submitted. It stores the current
future in the continuation context
(http://people.apache.org/~bago/netflow/site/xref/org/apache/james/netflow/MinaFutureContext.html)
and then suspends.
At the same time it also adds the MinaFutureContext as a listener of
the future. The MinaFutureContext will notify its parent
MinaContinuationContext object when this future is ready, so that the
continuation executor can "continue" the processing for this session.
The MinaContinuationContext simply has a map of futures and their
associated suspended continuations, plus a list of futures that are
ready to be resumed. MinaFutureContext.getReadyContext() returns if
the processing is completed; otherwise it blocks until a future is
completed and ready to be processed.
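Put together, the whole trick is roughly the sketch below. The names (FutureContext, suspendOn, getReadyFuture, park) are simplified stand-ins I made up for this mail rather than the real MinaFutureContext/MinaContinuationContext API, the imports assume MINA 2 style packages, and I'm relying on the javaflow Continuation.startWith/continueWith variants that take a context object:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.commons.javaflow.Continuation;
import org.apache.mina.core.future.IoFuture;
import org.apache.mina.core.future.IoFutureListener;

// Sketch of the continuation plumbing: a single-threaded executor keeps a map
// future -> suspended continuation plus a list of completed futures; each
// session gets a context that remembers its pending future and reports back
// when MINA completes it.
public class ContinuationExecutorSketch {

    // Per-session context, handed to javaflow as the continuation context.
    // The instrumented transport reaches it via (FutureContext) Continuation.getContext().
    public static class FutureContext implements IoFutureListener<IoFuture> {

        private final ContinuationExecutorSketch executor;
        private IoFuture pending;

        public FutureContext(ContinuationExecutorSketch executor) {
            this.executor = executor;
        }

        // Remember the future, listen for its completion, then hand the thread
        // back to the executor. Everything on the call stack here must have
        // been instrumented by javaflow for suspend() to work.
        public void suspendOn(IoFuture future) {
            pending = future;
            future.addListener(this);
            Continuation.suspend();
        }

        public IoFuture getPending() {
            return pending;
        }

        // MINA calls this (possibly on an I/O thread) once the future is done.
        public void operationComplete(IoFuture future) {
            executor.futureCompleted(future);
        }
    }

    private final Map<IoFuture, Continuation> suspended = new HashMap<IoFuture, Continuation>();
    private final Map<IoFuture, FutureContext> contexts = new HashMap<IoFuture, FutureContext>();
    private final List<IoFuture> ready = new ArrayList<IoFuture>();

    private synchronized void futureCompleted(IoFuture future) {
        ready.add(future);
        notifyAll();
    }

    // Blocks until some future is ready to resume; null once every session is done.
    private synchronized IoFuture getReadyFuture() throws InterruptedException {
        while (ready.isEmpty()) {
            if (suspended.isEmpty()) {
                return null;
            }
            wait();
        }
        return ready.remove(0);
    }

    // Remember a suspended continuation; null means the protocol ran to completion.
    private synchronized void park(Continuation continuation, FutureContext context) {
        if (continuation != null) {
            suspended.put(context.getPending(), continuation);
            contexts.put(context.getPending(), context);
        }
    }

    // Run all sessions on this single thread: start each one until its first
    // suspend, then keep resuming whichever session has a completed future.
    public void run(List<Runnable> protocols) throws InterruptedException {
        for (Runnable protocol : protocols) {
            FutureContext context = new FutureContext(this);
            park(Continuation.startWith(protocol, context), context);
        }
        IoFuture readyFuture;
        while ((readyFuture = getReadyFuture()) != null) {
            Continuation continuation;
            FutureContext context;
            synchronized (this) {
                continuation = suspended.remove(readyFuture);
                context = contexts.remove(readyFuture);
            }
            park(Continuation.continueWith(continuation, context), context);
        }
    }
}

The nice property is that TesterClient itself never sees any of this: it just keeps calling the blocking-looking Transport methods.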
Here is the main test class:
http://people.apache.org/~bago/netflow/site/xref/org/apache/james/netflow/Test.html
Note that I have to use a ContinuationClassLoader and a factory in
order to let javaflow instrument my protocol classes and make them
suspendable.
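The loading side is more or less the sketch below. Treat the ContinuationClassLoader package and constructor as assumptions on my part; they depend on the javaflow snapshot you build against:

import java.net.URL;
// Assumption: the exact package of ContinuationClassLoader may differ per javaflow snapshot.
import org.apache.commons.javaflow.utils.ContinuationClassLoader;

// Sketch of the factory: load the protocol classes through javaflow's
// instrumenting class loader so Continuation.suspend() can capture and
// restore their stack frames, then instantiate them reflectively.
public class SuspendableProtocolFactorySketch {

    public static Runnable newSuspendableProtocol(URL[] classpath, String className)
            throws Exception {
        // Assumption: ContinuationClassLoader(URL[], parent) rewrites the
        // bytecode of every class it loads.
        ClassLoader loader = new ContinuationClassLoader(
                classpath, SuspendableProtocolFactorySketch.class.getClassLoader());

        // The protocol class has to come from the instrumenting loader, not
        // from the application class loader, or suspend() will fail at runtime.
        Class<?> protocolClass = loader.loadClass(className);
        return (Runnable) protocolClass.newInstance();
    }
}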
Very cool!
The test simply runs the protocol either in standard mode or in
SEDAted/continuation mode. It runs 1000 transactions in threadNumber
threads.
And here are the results for a single thread:
1) synchronous, sequential connections: 1000ms
2) SEDAted classes: 4000ms
1000 connections means 10000 reads + 10000 writes, so a total of 20000
events (and continuations).
This would mean that the overhead for 20000 continuations is less
than 3000 ms, so nearly 150 microseconds per continuation.
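(That is: 4000 ms - 1000 ms = 3000 ms of extra time, and 3000 ms / 20000 continuations = 0.15 ms, i.e. roughly 150 microseconds each.)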
I think this is an interesting result, because it should show the
overhead of javaflow in the worst-case scenario. In fact I have an
in-JVM protocol, so there is no network delay at all; but if I simply
add 1 ms of delay to every read/write action, the synchronous runner
will of course take 20 seconds more, while the SEDAted runner will run
smoothly and without using one thread per connection.
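(20000 blocking read/write events x 1 ms each = 20 extra seconds for the sequential runner.)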
If anyone wants to play with the code I uploaded it to my p.o.a home:
http://people.apache.org/~bago/netflow/
Nice. You should write a blog post about this :)
WDYT? Is this a useless approach? Is it worth using continuations and
SEDAified protocols instead of one thread per connection, or is the
continuation overhead greater than the overhead of standard threading?
Well, the continuation overhead really comes only from the depth of
the call stack. If the stack is not too deep it should not be too much
overhead indeed. Jetty6 features something like that. While they call
it continuations, I would not; but it uses roughly the same idea,
I guess. Of course it's not as transparent.
I also thought that in a similar use case the suspension could be
triggered only after a given delay, so that we don't spend time
suspending/resuming when the answer is fast, and only do it when there
is real wait time to be spent on something else.
Good thinking.
I tried to conditionally suspend only after "await"-ing 1 millisecond,
to see the performance impact of running instrumented code without
suspending it. I added this 1 ms wait to every blocking call, and the
time is now 1500ms.
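Concretely it is just a guard in front of the suspend, something like the sketch below (again my own names, and assuming a MINA 2 style IoFuture.await(long); the context bookkeeping stays exactly as before):

import org.apache.commons.javaflow.Continuation;
import org.apache.mina.core.future.IoFuture;
import org.apache.mina.core.future.IoFutureListener;

// Sketch: give the future a short grace period before paying the
// suspend/resume price; only park the continuation if it is still pending.
public class ConditionalSuspendSketch {

    private static final long GRACE_MILLIS = 1;

    public static void awaitOrSuspend(IoFuture future, IoFutureListener<IoFuture> sessionContext)
            throws InterruptedException {
        // Fast path: the answer arrives within 1 ms, no continuation involved.
        if (future.await(GRACE_MILLIS)) {
            return;
        }
        // Slow path: exactly the old behaviour, suspend until the listener
        // (the per-session context) reports the future as ready.
        future.addListener(sessionContext);
        Continuation.suspend();
    }
}

With the in-VM echo answering well within 1 ms, basically no call ever suspends, so the 1500 ms vs 1000 ms difference is the cost of the bytecode instrumentation alone.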
Considering that this is only a PoC I set up in a few hours, I found
it very interesting and decided to share it with you!
Thanks a lot for that!!
Any volunteer is more than welcome to help out. I guess that might
also trick me into working on it a bit more again ;)
cheers
--
Torsten