On 04/25/13 17:02, Michael Tiernan wrote:
> Please tell me if I get this wrong, but as I see it, each one of these
> connections (individually) has four stages to it.
> Stage 1, network request & handshake to establish the connection so we
> can now talk to "ssh".
TCP handshake in the kernel plus buffer allocation; very small cost.
> Stage 2, ssh negotiation and "overhead" to establish a secured channel
> of communications.
PKI authentication is very CPU intensive.
> Stage 3, the bidirectional exchange of payload.
Symmetrical session encryption. Not as bad as PKI key
negotiation/authentication, but not zero.
> Stage 4, the closing of the secured channel, ssh cleanup, and network
> connection shutdown.
Not much here, mainly the TCP fin.
> My question is, isn't the worst part of this process (when multiplied
> by X clients) going to be stage 2, where the most computationally
> expensive work occurs? (I'm assuming this is where the compute cycles
> will be used most.) With that said, if the assumption (*cough*) is
> right, then couldn't stages 1 & 2 be staggered to make the weight of
> ~1K clients manageable, with the open connections then just kept open
> until you're done with them?
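The staggering proposed above amounts to rate-limiting stages 1 and 2. A minimal sketch, where `connect` is a hypothetical stand-in for the real ssh establishment (not anything from the thread):

```python
import time

def staggered_start(n_clients, window_seconds, connect):
    """Spread n_clients connection setups evenly over a window
    instead of firing them all at once.  `connect` is a hypothetical
    callable standing in for the expensive stage-1/stage-2 work."""
    interval = window_seconds / n_clients  # gap between launches
    for i in range(n_clients):
        connect(i)
        time.sleep(interval)

# e.g. 1000 clients over a 300-second window means one new
# handshake roughly every 0.3 seconds.
```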
The problem with scheduling is that you are trying to stagger the peak
load spike for ~1k connections. If you had a window of 5 minutes, that
works out to 3.33 connections a second just to even it out. What do you
do at ~3k connections? You start making your window of convergence
larger and larger. At a certain point it becomes unacceptable.
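Put as arithmetic (the per-second handshake rate the server can absorb is the assumed fixed quantity here):

```python
def ramp_window_seconds(clients, rate_per_sec):
    """Seconds needed to spread `clients` handshakes (the CPU-heavy
    stage-2 key exchanges) at a fixed `rate_per_sec`."""
    return clients / rate_per_sec

rate = 1000 / 300  # ~3.33 handshakes/s: 1k clients in a 5-minute window
# Holding that rate, 3k clients stretch the window to 900 s (15 minutes):
print(ramp_window_seconds(3000, rate))
```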
-- Mr. Flibble King of the Potato People
_______________________________________________
Discuss mailing list
Discuss@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/