Ashish wrote:
On Tue, Nov 4, 2008 at 10:33 PM, Emmanuel Lecharny <[EMAIL PROTECTED]> wrote:
Ashish wrote:
2) Where in the chain do you put this filter ?
Multiple places to implement SEDA (before ProtocolDecoder, before
IoHandler)
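For illustration, a minimal sketch of what "multiple places" could look like with the MINA 2.x API; the pool sizes, port, and TextLine codec are placeholders, not taken from the actual application:

    import java.net.InetSocketAddress;

    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.filter.codec.ProtocolCodecFilter;
    import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
    import org.apache.mina.filter.executor.ExecutorFilter;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class SedaStyleServer {
        public static void main(String[] args) throws Exception {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();

            // Stage 1: decoding runs in its own pool (ExecutorFilter before the codec).
            acceptor.getFilterChain().addLast("decodePool", new ExecutorFilter(16));

            // Placeholder codec; the real application plugs in its own factory.
            acceptor.getFilterChain().addLast("codec",
                    new ProtocolCodecFilter(new TextLineCodecFactory()));

            // Stage 2: business processing runs in another pool (before the IoHandler).
            acceptor.getFilterChain().addLast("handlerPool", new ExecutorFilter(16));

            acceptor.setHandler(new IoHandlerAdapter() {
                @Override
                public void messageReceived(IoSession session, Object message) {
                    // Stage 3: persist the decoded message (DB write stubbed out here).
                }
            });

            acceptor.bind(new InetSocketAddress(8080));
        }
    }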
I would like to know if you are not at risk of having a hell of a lot of threads if
you do so. And I'm not sure that it has any added value, as the idea is to
use a thread to handle a costly operation; no need to spawn a new thread
when it's already been done... Am I missing something?
Well, we needed SEDA to attain a very high processing rate.
But SEDA means that each stage manages its own tasks, communicating with
the other stages through queues. Here, what you do is multiplex the
incoming messages in one stage, then multiplex again at the next
stage, and so on. What are the odds that the first multiplexing isn't
enough?
Since it's one-way, there is no response to be sent back. That's why we chose
this approach. Also, at each stage we do something meaningful: in the
ProtocolCodec we convert raw bytes into objects, next these objects
get converted into OSSJ objects, and then the handler dumps them into the DB.
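As an illustration of those three stages in MINA terms (the filter and handler below are invented names, not the actual OSSJ code):

    import org.apache.mina.core.filterchain.IoFilterAdapter;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;

    // Middle stage: converts the decoded message into the application-level
    // (e.g. OSSJ-style) object before it reaches the handler.
    class OssjConversionFilter extends IoFilterAdapter {
        @Override
        public void messageReceived(NextFilter nextFilter, IoSession session, Object message)
                throws Exception {
            Object converted = convert(message);        // hypothetical mapping step
            nextFilter.messageReceived(session, converted);
        }

        private Object convert(Object decoded) {
            return decoded;                             // real conversion goes here
        }
    }

    // Final stage: the handler dumps the converted object into the DB.
    class DbDumpHandler extends IoHandlerAdapter {
        @Override
        public void messageReceived(IoSession session, Object message) {
            // dao.save(message);                       // DB write stubbed out
        }
    }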
Each of these stages is decoupled,
but they are chained and executed in the same JVM, so as soon as you
limit the number of threads at some point, you won't process the messages
faster than the slowest stage. You can have two executors with, say, N and
M threads; if processing events is slower in the second executor, the
queue between them will fill up and, at some point, the first executor's
pool will be exhausted as soon as the second one is.
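The point can be reproduced outside MINA with two chained thread pools; in this toy sketch (pool sizes and the 1 ms "DB write" are invented), the slower second stage ends up gating the first one:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ChainedStagesDemo {
        public static void main(String[] args) {
            // Stage 1: N threads with a bounded hand-off queue. CallerRunsPolicy makes
            // the producer do the work itself once the queue is full, instead of dropping it.
            ThreadPoolExecutor stage1 = new ThreadPoolExecutor(4, 4, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<Runnable>(1000),
                    new ThreadPoolExecutor.CallerRunsPolicy());

            // Stage 2: M threads, deliberately slower than stage 1.
            ThreadPoolExecutor stage2 = new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<Runnable>(1000),
                    new ThreadPoolExecutor.CallerRunsPolicy());

            for (int i = 0; i < 100000; i++) {
                stage1.execute(() -> {
                    // cheap work in stage 1, then hand off to stage 2
                    stage2.execute(() -> {
                        try {
                            Thread.sleep(1);   // stage 2 is the slow stage (e.g. a DB write)
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                });
            }
            // Once stage 2's queue is full, stage 1's threads end up running stage 2's
            // tasks themselves, so the whole chain runs no faster than the slowest stage.
            stage1.shutdown();
            stage2.shutdown();
        }
    }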
Once passed, there is no need to look back.
Processing in one thread would have meant blocking it; instead, let the messages
sit in a queue and then consume them. This is quite helpful for a high burst, or a
sustained load for a while.
There is already a backlog in the first executor...
I hope I have used MINA correctly. Though I haven't benchmarked my
application for sustained load, for a peak load it did fairly well,
handling 5000 packets/sec, though I had to set a slightly higher
value for the receive buffer.
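For reference, the receive buffer can be raised through the session config; a sketch with an arbitrary 64 KB value (not the figure actually used):

    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class BufferTuning {
        public static void main(String[] args) {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();

            // Larger socket receive buffer (SO_RCVBUF) helps absorb bursts.
            acceptor.getSessionConfig().setReceiveBufferSize(64 * 1024);

            // MINA's own per-read buffer can be raised as well.
            acceptor.getSessionConfig().setReadBufferSize(8 * 1024);
        }
    }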
The problem here is to know when your server hits its limit. If it's
around 10000 req/s, then that's just fine: any architecture will work
:) Now, if you are targeting a higher rate, this is where things start
to get interesting :)
As much as I like the SEDA architecture, in a single VM, I don't know if
it's a valuable approach...
I would be very interested to know if it sustains a better load than a
standard approach with a single executor :)
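For comparison, the "standard approach with a single executor" would be a chain like this (again a sketch, with an invented pool size and a placeholder codec):

    import java.net.InetSocketAddress;

    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.filter.codec.ProtocolCodecFilter;
    import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
    import org.apache.mina.filter.executor.ExecutorFilter;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class SingleExecutorServer {
        public static void main(String[] args) throws Exception {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();

            // Codec stays on the I/O processor threads; one ExecutorFilter after it
            // offloads everything downstream (conversion + DB write) to a single pool.
            acceptor.getFilterChain().addLast("codec",
                    new ProtocolCodecFilter(new TextLineCodecFactory()));
            acceptor.getFilterChain().addLast("exec", new ExecutorFilter(32));

            acceptor.setHandler(new IoHandlerAdapter() {
                @Override
                public void messageReceived(IoSession session, Object message) {
                    // convert and persist here, all inside the single pool
                }
            });

            acceptor.bind(new InetSocketAddress(8080));
        }
    }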
--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org