Chris,

Your implementation is impressive, much more advanced than what I had planned to implement. Basically, I thought of persistent queues, modeled on Oracle AQs, so that each complete message received by the server is guaranteed to be processed.
The flow was something like:
1. Get message
2. Decode message
3. Enqueue to the queuing system
4. Queue processors take the job and send responses back
(A rough sketch of what I had in mind is at the very end of this mail.)

What I missed was that this could be done with another server.

A couple of queries:
1. Won't each client getting its own queue be costly from a management/implementation
perspective? That means something like 120K queues to maintain, each one taking care of
its registered listener(s).
2. Was there any noticeable delay in processing with this architecture?
3. Will this work for protocols like SMPP, which send a message and expect a response
with an ID? (It could probably work; it's just that the client needs to be a bit smarter.)

thanks
ashish

On Thu, Aug 27, 2009 at 9:21 PM, Christopher Popp<[email protected]> wrote:
> Emmanuel Lecharny wrote:
>> Interesting.
>>
>> Which version of MINA are you using ?
>>
>> Also, I want to know if you had a chance to test the latest select-fix
>> branch ?
>
> We're currently using 2.0.0 M6. This particular solution is still in a bit
> of an experimental stage, so we've just been moving up with the new versions
> of MINA as they become available. I haven't had a chance to try out the
> select-fix branch at all, as I haven't noticed any particular roadblocks in
> my development that have stopped me from just testing with M6.
>
> I think the largest test we've done so far involved 1 front end server, 6
> MINA client servers, and the Terracotta server which manages the clustering
> (and is itself mirror-able, and all that good stuff, for those concerned with
> it being a single point of failure). We simulated about 20,000 clients
> connecting to each server, for a grand total of around 120,000 clients
> with long-lived TCP connections. We then submitted various jobs to our front
> end server, which then propagated through the cluster to the appropriate
> clients. I believe the guy who ran this test accidentally took down client
> servers, and was surprised when everything happily settled back to normal,
> without any special care, after restarting them. This particular test was
> done on EC2. The results showed our implementation could benefit from some
> performance analysis of how we did some message passing through the
> Terracotta clustering, but from a stability perspective we did not encounter
> any problems.
>
> Chris Popp

--
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
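
P.S. To make the flow I described above a bit more concrete, here is a rough, untested
sketch of what I had in mind, written against the MINA 2.x IoHandler API. It uses an
in-memory BlockingQueue as a stand-in for a real persistent (AQ-style) queue, assumes a
ProtocolCodecFilter earlier in the filter chain handles the decoding step, and the class
name QueueingHandler, the worker count, and the process() method are all just placeholders
for illustration.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

public class QueueingHandler extends IoHandlerAdapter {

    // A decoded message paired with the session it arrived on, so the
    // worker that picks it up can write the response to the right client.
    static final class Job {
        final IoSession session;
        final Object message;
        Job(IoSession session, Object message) {
            this.session = session;
            this.message = message;
        }
    }

    // In-memory stand-in for the persistent (Oracle AQ-style) queue.
    private final BlockingQueue<Job> queue = new LinkedBlockingQueue<Job>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public QueueingHandler() {
        // Step 4: queue processors take jobs and send responses back.
        for (int i = 0; i < 4; i++) {
            workers.execute(new Runnable() {
                public void run() {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            Job job = queue.take();
                            job.session.write(process(job.message));
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    // Steps 1-3: the message arrives here already decoded (a ProtocolCodecFilter
    // earlier in the filter chain is assumed to have done step 2) and is enqueued.
    @Override
    public void messageReceived(IoSession session, Object message) throws Exception {
        queue.put(new Job(session, message));
    }

    // Placeholder for the real business logic.
    private Object process(Object message) {
        return message;
    }
}

In the real thing the queue would of course be the persistent store, and the queue
processors could live in a separate server, which is the part I had missed.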
