Very sophisticated question. :P

If you only focus on the HTTP protocol, perhaps the following experience can 
help you.

As you mentioned, the limit on the number of ports one process can open really 
exists. Besides that, please also note that not every process can handle up to 
50,000 connections simultaneously, because in a Linux environment each 
established session (connection) also occupies a file descriptor of the server 
process, so usually once about 2,000 connections are established, a "Too many 
open files" exception will be thrown unless you raise the ulimit of that 
process.
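
Just to illustrate (my own sketch, not something from this thread): on a 
Sun/Oracle JVM running on Unix you can log the file-descriptor budget at 
startup, so a low ulimit is noticed before the server starts refusing 
connections. The UnixOperatingSystemMXBean cast is JVM-specific.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Prints how many file descriptors the process may open and how many it
// already uses; only works on JVMs that expose the Unix-specific MXBean.
public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
            System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("file-descriptor counts not available on this JVM/OS");
        }
    }
}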

Obviously, as Emmanuel said, for the application itself we usually start 
several application instances on the same physical machine, but that's not the 
best solution, since too many connections (even when HTTP request handling is 
asynchronous) can cause excessive context switching, particularly when the OS 
is not good at task scheduling and not enough CPU is present. In that case, 
please follow the rule Emmanuel stated: make sure your application can process 
incoming requests as quickly as possible, or consider rejecting flood HTTP 
requests from heavy clients. Please remember, excessive contention can reduce 
the TPS of the server.
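
As an illustration of that rule (again my own sketch, not from the thread, and 
assuming the MINA 2.x API): hand the real processing to an ExecutorFilter so 
the I/O threads stay responsive, and shed load by closing new sessions once a 
limit is reached. The MAX_SESSIONS value, port 8080 and the echo-style handler 
are made-up example values.

import java.net.InetSocketAddress;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

// Sketch: process messages in a worker pool and reject flood connections
// before they exhaust the file-descriptor budget.
public class ThrottledServer {

    private static final int MAX_SESSIONS = 20000; // made-up limit, tune to your ulimit

    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();

        // Move request processing off the I/O threads.
        acceptor.getFilterChain().addLast("executor", new ExecutorFilter());

        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void sessionOpened(IoSession session) {
                // Shed load: close the session if too many are already open.
                if (session.getService().getManagedSessionCount() > MAX_SESSIONS) {
                    session.close(true);
                }
            }

            @Override
            public void messageReceived(IoSession session, Object message) {
                // Do the real (quick!) work here; this sketch just echoes.
                session.write(message);
            }
        });

        acceptor.bind(new InetSocketAddress(8080));
    }
}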

However, the solutions mentioned above are not by themselves a truly scalable 
answer, because to support more requests the application must eventually be 
spread over multiple physical machines. That is why, in the telecom area, 
load-balancing hardware (e.g. an F5 device) is so common: it spreads all HTTP 
requests over the many business servers sitting behind the LB. Once the 
workload exceeds the processing capacity, you just add more machines behind 
the LB, as long as the LB can distribute the flood of requests. The diagram 
below describes the network structure:

Client --->          ---> application server
               LB
Client --->          ---> application server
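
Conceptually the LB just picks the next backend for every incoming request. A 
real F5 does much more (health checks, session persistence, ...), but a toy 
round-robin picker shows the idea; the backend names below are made up.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy round-robin selection, purely to illustrate how requests get spread
// over the servers sitting behind the LB.
public class RoundRobinPicker {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinPicker(List<String> backends) {
        this.backends = backends;
    }

    public String pick() {
        // The mask keeps the index non-negative even after integer overflow.
        return backends.get((next.getAndIncrement() & 0x7fffffff) % backends.size());
    }

    public static void main(String[] args) {
        RoundRobinPicker lb = new RoundRobinPicker(Arrays.asList(
                "app-server-1:8080", "app-server-2:8080", "app-server-3:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + lb.pick());
        }
    }
}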

In extreme cases you may not believe a single LB can handle 200k+ requests; 
then remember that you can also use DNS to spread the HTTP requests over 
several LBs. Please take a look at the picture below:

Client --->
               DNS ---> LB ---> application servers
Client --->
               DNS ---> LB ---> application servers
Client --->
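
From the client side, DNS-based spreading simply means one hostname resolving 
to several addresses, each fronting its own LB. A quick way to see what a name 
resolves to (my own example; www.example.com is just a placeholder):

import java.net.InetAddress;

// Prints every address a hostname resolves to; with round-robin DNS each
// address would be a separate load balancer.
public class DnsSpreadDemo {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "www.example.com"; // placeholder
        for (InetAddress address : InetAddress.getAllByName(host)) {
            System.out.println(host + " -> " + address.getHostAddress());
        }
    }
}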

Best regards

anderson

-----Original Message-----
From: Emmanuel Lecharny [mailto:[EMAIL PROTECTED] 
Sent: Sunday, September 14, 2008 3:41 PM
To: [email protected]
Subject: Re: Load Balancing Socket connections

Stephane Rainville wrote:
> I don't think so.
>  
> Although you are right that a socket can listen and accept multiple connections on 
> one port (like a web server on port 80), when the server responds it responds on 
> a RANDOM port between 1000 and 65000.
>   
It responds using a random port associated with the client IP address. If a 
specific client tries to open more than 64536 connections to a server (very 
unlikely, if even possible!), then you will reach the limit. But as you will 
have many clients, everything is fine. Remember that a socket is defined by a 
pair: an IP address and a port.
>  
> So my question remains: using MINA and long-lived socket connections, am I 
> limited to 65000 long-lived socket connections, and how can I load balance 
> that?
>   
You are not limited to 65000 long-lived sessions when there is more than _one_ client.
>  
> Although the question is mostly for comprehension's sake, I'm curious about the 
> experience of people using MINA under HEAVY load and how they reacted.
>   
Some guys have tested MINA with 200K connections established, but the load was 
null (they just established the sessions without sending any bytes). The main 
problem will be the data processing. If you have a multi-processor server with 
a lot of memory, that could scale, up to a point.

-- 
--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org

