Hi Wenrui,

Wenrui Guo wrote:
As you mentioned, the limit on the number of ports one process can open really
does exist. Besides that, please also note that not every process can allow up
to 50,000 simultaneous connections, because in a Linux environment each
established session (connection) also occupies a file descriptor of the server
process. So usually, once about 2000 connections are established, a "Too many
open files" exception will be thrown, unless you modify the ulimit value for
that process.
Modifying this parameter is absolutely mandatory, and I think everyone running a server in production knows that :) But it won't be a solution to your load-balancing problem anyway!
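As a rough illustration of the descriptor limit being discussed (a plain Python sketch, not anything MINA-specific), a process can inspect its own limit and raise the soft limit up to the hard limit at runtime; raising the hard limit itself is what requires root or a ulimit/limits.conf change:

```python
import resource

# Read the current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
# (Guard against an unlimited hard limit, which we cannot request as soft.)
new_soft = hard if hard != resource.RLIM_INFINITY else soft
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

With a default soft limit around 1024, this is exactly why "about 2000 connections" blows up without tuning.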
Obviously, as Emmanuel said, for the application itself we usually start several application instances on the same physical machine, but that's not the best solution, since too many connections (even when HTTP request handling is asynchronous) can cause excessive context switching, particularly when the OS is not good at task management and not enough CPU is available.
The Apache HTTP server (httpd) itself has a limited number of threads to deal with incoming requests. But handling a request is a different thing from managing the connections: we may have hundreds of thousands of open connections, but only a few hundred requests being processed simultaneously. If you don't have a high ratio of processed requests to open connections, that should be OK.
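The distinction above (many open connections, few active requests) is exactly what a selector-based server gives you. A minimal sketch using Python's selectors module (not MINA, but the same non-blocking idea): one thread keeps several connections registered, yet only touches the one that actually has data ready:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Non-blocking listening socket on an ephemeral loopback port.
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))
server.listen(100)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)
port = server.getsockname()[1]

# Five long-lived client connections; only the first one sends anything.
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(5)]
clients[0].sendall(b"ping")

echoed = None
while echoed is None:
    for key, _ in sel.select(timeout=1):
        if key.fileobj is server:
            # New connection: register it and go back to sleep.
            conn, _ = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            # Only the connection with pending data wakes us up.
            data = key.fileobj.recv(1024)
            if data:
                key.fileobj.sendall(data)       # echo it back
                echoed = clients[0].recv(1024)  # client sees the echo

for c in clients:
    c.close()
server.close()
```

The four idle connections cost only a file descriptor each; no thread is parked on them, which is why the fd limit, not the thread count, is the binding constraint.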
In that case, please follow the rule Emmanuel stated: make sure your
application can process incoming requests as quickly as possible, or consider
rejecting flood HTTP requests from heavy clients. Please remember, excessive
contention can reduce the TPS of the server.
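One common way to reject flood traffic from a heavy client is a per-client token bucket. A simple Python sketch (the class, rates, and capacity here are illustrative, not part of MINA or httpd):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # flood: reject (e.g. with an HTTP 429 response)

# A burst of 8 back-to-back requests against a bucket holding 5 tokens:
bucket = TokenBucket(rate=10.0, capacity=5.0)
results = [bucket.allow() for _ in range(8)]
```

The first five requests in the burst pass; the rest are shed cheaply before they can add contention to the real processing path.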
The initial question was about long-lived sockets, not about handling a high number of incoming requests. If your server is supposed to receive thousands of requests per second, with a pretty expensive processing step for each request, then you had better define a very good architecture, and MINA won't help you in this area...
However, the solutions mentioned above are definitely scalable, because the application must be spread across multiple physical machines to support more requests. That's why, in the telecom area, load-balancing hardware is so common: it can spread all HTTP requests across the many business servers sitting behind the LB, e.g. an F5 device. Once the workload exceeds the processing capacity, you just need to add more machines behind the LB, as long as the LB can distribute the flood of traffic.
Assuming that you manage the sessions (all the LB boxes have some kind of sticky-session capability), because hopping from one server to another during a session is really painful to manage from the application's point of view! Generally speaking, unless you are delivering static content, scalability is not only a matter of dropping new boxes in the rack... :)
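The sticky-session idea can be sketched in a few lines. This is only an illustration of the principle, not how an F5 implements it, and the backend names are made up: hash a stable session key so the same client always lands on the same backend (as long as the backend list doesn't change):

```python
import hashlib

BACKENDS = ["app-1", "app-2", "app-3"]  # hypothetical servers behind the LB

def pick_backend(session_id: str, backends=BACKENDS) -> str:
    # Stable hash of the session key -> stable backend choice.
    digest = hashlib.sha256(session_id.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

# The same session id is always routed to the same backend.
a = pick_backend("session-42")
b = pick_backend("session-42")
```

This is also where the "not just dropping boxes in the rack" caveat bites: adding a backend reshuffles the modulo, which is why real deployments use cookies or consistent hashing to keep sessions pinned.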

But I'm afraid this thread is going too far away from your initial point, and if we continue, it will become a complete review of all possible architectures ;)

Thanks !

--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org

