On Sunday 07 July 2002 07:45, Sanjiva Weerawarana wrote:
> Hi Scott,
>
> Thanks for the useful experiment. It seems to me that if we were
> to implement HTTP 1.1 keep-alive then this problem would go away,
> right? That is, if the same TCP connection is used for a series
> of requests then it's not an issue, right?
>
> I wonder how browsers do it - when I'm using my Internet banking
> stuff, does it keep re-negotiating keys?? Or does it keep a single
> socket connection open for the 30 mins, say, that I'm using it? The
> latter seems extremely resource heavy on the server.
The "former" sounds a lot more resource (CPU + network) heavy for the
server. The "latter" is heavy on memory only, which is cheap nowadays.

The "banking application" is most probably not re-negotiating the keys,
since that is network intensive (something like three rounds of TCP
traffic). Since SSL sits on top of TCP, it comes down to two issues,
connection and memory resources:

1. The connection itself stays alive and the server will not "share"
   connections between requests, so to speak. The server probably has
   other scalability problems before it runs out of connections, so
   that is probably not an issue either.

2. The SSL session takes up some memory, maybe as much as a couple of
   tens of KB (I think more like a couple of KB), so with 10,000
   connected users they have to throw in an extra couple of hundred MB.
   Probably acceptable as well.

Niclas
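P.S. To make the keep-alive point concrete, here is a rough Java sketch
of two HTTP/1.1 requests sent over a single TCP connection, assuming the
server honours keep-alive. The host name and paths are just placeholders;
a real client would also have to handle bodies, timeouts and
"Connection: close" responses.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

// Two HTTP/1.1 HEAD requests over one TCP connection.
// HEAD is used so each response is headers only, which keeps the
// read loop simple; "example.org" and the paths are placeholders.
public class KeepAliveSketch {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("example.org", 80);
        try {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), "US-ASCII");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "US-ASCII"));
            String[] paths = { "/", "/index.html" };
            for (int i = 0; i < paths.length; i++) {
                out.write("HEAD " + paths[i] + " HTTP/1.1\r\n"
                        + "Host: example.org\r\n"
                        + "Connection: keep-alive\r\n\r\n");
                out.flush();
                // Read up to the blank line that ends the headers; the
                // same socket is then reused for the next request.
                String line;
                while ((line = in.readLine()) != null && line.length() > 0) {
                    System.out.println(line);
                }
                System.out.println("---- same socket, next request ----");
            }
        } finally {
            socket.close();
        }
    }
}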
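And a similar sketch for the key-negotiation point: with JSSE the SSL
session is cached per socket factory, so a second connection to the
same server normally resumes the old session instead of doing a full
key exchange. Comparing the session ids is a cheap way to check.
Again, the host is a placeholder, and whether the session is actually
resumed depends on the server's SSL setup.

import java.util.Arrays;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Two SSL handshakes against the same host through one
// SSLSocketFactory. JSSE caches the SSL session, so the second
// handshake can be the abbreviated "resume" form: same session id,
// no fresh public-key exchange. "example.org" is a placeholder.
public class SslSessionReuseSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
        byte[] first = handshake(factory);
        byte[] second = handshake(factory);
        System.out.println("session resumed: " + Arrays.equals(first, second));
    }

    private static byte[] handshake(SSLSocketFactory factory) throws Exception {
        SSLSocket socket = (SSLSocket) factory.createSocket("example.org", 443);
        try {
            socket.startHandshake();
            return socket.getSession().getId();
        } finally {
            socket.close();
        }
    }
}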