>> So the questions remain open:
>> - How many concurrent users do you have to support ?
> I'm afraid I still do not understand what exactly you are asking, taking
> into account that you wrote:
>> I don't want answer related to processors.
> Then for what part of the application is the "concurrency" assumed? For the
> internal application servers which produce custom data, or for the socket
> layer on the hypothetical "http-gateway"? If all connections have the
> keep-alive option enabled, we could anticipate 6000*N open connections on
> the gateway at every moment, where N is the number of backend web-servers.

Sorry, but you are again making assumptions about the solution without
giving the requirements. I can't give you correct advice if you don't tell
me the numbers I asked for. Don't think about servers, CPU, timeouts,
sessions and the like. Think only about what the application requires, seen
from the highest level possible.

> The question of scalability itself assumes that we want the application to
> work under _any_ ever-growing load, simply by adding new backend servers.
> If that turns out not to be the case, then it is not scalability, imho.

You can buy devices to load-balance your application to several servers. For 
example: http://www.cisco.com/en/US/products/hw/contnetw/ps792/

With such a device and ICS, the question becomes: how do I design my HTTP 
application using ICS so that it can handle many concurrent users ?

>> - What is the maximum allowed latency time ?
> 1 second.

This is quite a low latency for a web application. Transport over the 
internet alone usually makes this time much larger.

>> This being said, 10 users per server is normally a very small number
>> unless the users are asking for a lot of data or number-crunching
>> processing.
> Yes. The application involves on-the-fly analytical processing.

This processing is not the web server application. You have to use high-end 
servers to do the processing. This processing alone requires selecting a 
server with suitable CPU power. It is completely independent of the HTTP 
transport if your application is designed properly.

>> > If session timeout is set to 10 min,
>>
>> The question is: why would a session be 10 minutes ? Why not 10 seconds
>> or 10 hours ?
> This number is deduced as an average value from the use cases and the
> ergonomics of the application.
> The 10 min is given, and there is no place for "why".

OK. Then we probably have a language problem (note that English is not my 
native language). I would have said "since the session timeout is set to 
10 min...".


>> You have to tell me more about your application if you want good advice.
> This is a kind of world-wide context searching service. In the future it
> must process 2 000 000 requests a day.

This is 23 requests per second on average. If each request must be executed 
within one second, that means you have, on average, 23 simultaneous 
connections, with peaks of probably 2-3 times that number. You just need a 
good server.

> 50Kb

You need 1.2 Mbps bandwidth. Not a big deal.
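
For what it's worth, here is the back-of-envelope arithmetic spelled out as a
small Python sketch. It only restates the figures above; the assumptions are
that "50Kb" means 50 kilobits per response and that peaks run at about 3x the
average, as mentioned earlier:

# Rough sizing from the figures discussed above (assumptions, not
# measurements): 2 000 000 requests/day, 1 s per request, ~50 kilobits
# per response, peaks assumed at 3x the average.
REQUESTS_PER_DAY = 2_000_000
SECONDS_PER_DAY  = 24 * 60 * 60        # 86 400
RESPONSE_KBITS   = 50                  # assuming "50Kb" = 50 kilobits
PEAK_FACTOR      = 3

avg_rps          = REQUESTS_PER_DAY / SECONDS_PER_DAY   # ~23.1 req/s
avg_connections  = avg_rps * 1.0                        # 1 s/request -> ~23 concurrent
peak_connections = avg_connections * PEAK_FACTOR        # ~70 concurrent at peak
bandwidth_mbps   = avg_rps * RESPONSE_KBITS / 1000      # ~1.2 Mbps

print(f"{avg_rps:.1f} req/s, ~{avg_connections:.0f} connections on average "
      f"(~{peak_connections:.0f} at peak), {bandwidth_mbps:.1f} Mbps")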

> The problem is that this is not a pure web-server. We have internal
> application servers which make it impossible to pack all the stuff into an
> ordinary PC. I understand that we could possibly configure a single
> web-server to deliver incoming requests to different application servers,
> but this is another issue to be investigated. I don't know yet which
> approach is easier to implement.

Using ICS, it is easy and probably better to have a single HTTP server 
application (no problem with 23 requests per second) which talks to your 
application servers, probably using basic TCP (socket) communication.
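
If it helps to picture that layout, here is a minimal sketch, in Python rather
than ICS only to keep it short and self-contained (with ICS you would
typically build the front end with THttpServer and the backend links with
TWSocket). The host names, ports and the little length-prefixed wire format
are invented for the example:

# Sketch only: one thin HTTP front end relaying every request to a backend
# application server over a plain TCP socket. Host names, ports and the
# 4-byte length-prefixed framing are invented for illustration.
import socket
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BACKENDS = [("app1.internal", 9000), ("app2.internal", 9000)]   # hypothetical
_next = 0   # naive round-robin index (ignoring thread-safety in this sketch)

def ask_backend(payload: bytes) -> bytes:
    """Send the request to one application server and return its reply."""
    global _next
    host, port = BACKENDS[_next % len(BACKENDS)]
    _next += 1
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(len(payload).to_bytes(4, "big") + payload)
        size = int.from_bytes(conn.recv(4), "big")
        reply = b""
        while len(reply) < size:
            chunk = conn.recv(size - len(reply))
            if not chunk:
                break
            reply += chunk
        return reply

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ask_backend(self.path.encode("utf-8"))
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), Gateway).serve_forever()

The point of the sketch is simply that the HTTP front end stays thin and
stateless: the heavy analytical processing lives in the application servers,
and the front end can later sit behind a load-balancing device like the one
mentioned above.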

--
Contribute to the SSL Effort. Visit http://www.overbyte.be/eng/ssl.html
--
[EMAIL PROTECTED]
http://www.overbyte.be


