> "Chef works atop ssh, which – while the gold standard for cryptographically
> secure systems management – is computationally expensive to the point where
> most master servers fall over under the weight of 700-1500 clients."

Can I ask a side question about this statement? On the whole I can
believe it, but I'd like a bit more clarification, not to challenge
the statement itself but to learn more about the overall process.

Please tell me if I get this wrong, but as I see it each of these
connections (individually) has four stages (sketched in code just
below the list):
Stage 1: network request and handshake to establish the connection and
start talking to "ssh".
Stage 2: ssh negotiation and "overhead" to establish the secured
channel of communications.
Stage 3: the bidirectional exchange of payload.
Stage 4: the closing of the secured channel, ssh cleanup, and network
connection shutdown.
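
To make sure I'm picturing it correctly, here's roughly how I'd map
those four stages onto a scripted SSH client. This is just a minimal
sketch using Python's paramiko library; the host name and user are
placeholders, and I'm assuming key-based auth is already set up:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # Stage 1 + 2: TCP connect, then key exchange and authentication
    client.connect("node01", username="admin")

    # Stage 3: payload over the now-encrypted channel
    _, stdout, _ = client.exec_command("uptime")
    print(stdout.read().decode().strip())

    # Stage 4: tear down the channel and the TCP connection
    client.close()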

My question is: isn't the worst part of this process (when multiplied
by X clients) going to be stage 2, where the heaviest computation, the
key exchange and authentication, occurs? (I'm assuming that's where
most of the compute cycles will be spent.)
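
If it helps to put numbers on that, something like the following
(same paramiko sketch, same placeholder host and user) would separate
the handshake cost from the payload cost on a single connection:

    import time
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    t0 = time.perf_counter()
    client.connect("node01", username="admin")   # stages 1-2
    t1 = time.perf_counter()
    _, stdout, _ = client.exec_command("true")   # stage 3
    stdout.channel.recv_exit_status()            # wait for the command to finish
    t2 = time.perf_counter()
    client.close()                               # stage 4

    print("handshake: %.3fs, payload: %.3fs" % (t1 - t0, t2 - t1))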

With that said, if the assumption (*cough*) is right, then couldn't
stages 1 and 2 be staggered to keep the weight of ~1K clients
manageable, with the connections then simply kept open until you're
done with them?
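
Something along these lines is what I have in mind: throttle the
expensive handshakes with a semaphore, but leave every connection
open for reuse. Again just a sketch, with a placeholder inventory,
user, and concurrency numbers:

    import threading
    from concurrent.futures import ThreadPoolExecutor
    import paramiko

    HOSTS = ["node%03d" % i for i in range(1, 1001)]  # placeholder inventory
    handshake_gate = threading.Semaphore(25)  # at most 25 key exchanges at once

    def connect(host):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        with handshake_gate:               # stagger the expensive stages 1-2
            client.connect(host, username="admin", timeout=30)
        return client                      # connection stays open for reuse

    with ThreadPoolExecutor(max_workers=100) as pool:
        clients = dict(zip(HOSTS, pool.map(connect, HOSTS)))

    # Stage 3 can now run repeatedly over the already-open channels...
    for host, client in clients.items():
        _, stdout, _ = client.exec_command("uptime")
        print(host, stdout.read().decode().strip())

    # ...and stage 4 is deferred until we're actually done.
    for client in clients.values():
        client.close()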
-- 
    << MCT >>   Michael C Tiernan.
    http://www.linkedin.com/in/mtiernan
    Non Impediti Ratione Cogatationis