Issue #6135 has been updated by Trevor Vaughan.

I suppose the PKI issue isn't very large as long as you get it right the first 
time. We *have* to have what was talked about previously in terms of 
auto-rekeying clients, though; as you get more systems, manually rekeying them 
gets extremely painful. There also needs to be an easy provision for switching 
out CA keys in the case of a loss, breach, or scheduled rekey.

Right now, you have to manually blow away the SSL keys from each client to get 
them to talk to a new server with the same name properly. That's not OK (and 
I'm not quite sure what a sane solution for it is).

For #1, it's a bit amusing to hear you talking about pushed updates ;-). I was 
thinking about perhaps taking a hash of any updated manifest/module set 
(everything under the search path), combining it with the timestamp, and 
storing that in some redundant query source. At that point, each Puppet server 
would have to have an authorization key provided by the query source (CA 
server?) before it would provide compiled manifests to any client. This would 
also provide a nice method whereby you could disable all puppetmasters for 
whatever reason (bad update, whatever). It would, however, necessitate a query 
against that source before each compile.
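
The fingerprinting idea above could look roughly like the following Ruby sketch. Everything here is illustrative (none of these names are actual Puppet APIs): hash every file under the search path, and only serve compiles if the query source has blessed that fingerprint.

```ruby
require 'digest'
require 'find'

# Illustrative sketch, not Puppet's real API: fingerprint the entire
# manifest/module set so a central query source can authorize it.
def manifest_fingerprint(search_paths)
  digest = Digest::SHA256.new
  search_paths.sort.each do |root|
    next unless File.directory?(root)
    Find.find(root).sort.each do |path|
      next unless File.file?(path)
      digest.update(path)           # include paths so renames change the hash
      digest.update(File.read(path))
    end
  end
  digest.hexdigest
end

# A master would refuse to compile unless the query source (CA server?)
# has handed out an authorization for this exact fingerprint.
def authorized?(fingerprint, authorized_fingerprints)
  authorized_fingerprints.include?(fingerprint)
end
```

Any change under the search path produces a new fingerprint, so revoking the old one at the query source would disable every master still running the bad update.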

Just some thoughts.
----------------------------------------
Feature #6135: Elastic Compiler Pool
https://projects.puppetlabs.com/issues/6135#change-74157

Author: Luke Kanies
Status: Accepted
Priority: Normal
Assignee: eric sorenson
Category: 
Target version: 
Affected Puppet version: 
Keywords: backlog
Branch: 


Puppet deals poorly with the 'thundering herd' problem today, where a given 
system has sufficient capacity to respond to typical usage but falls down when 
usage goes above that (e.g., 10% of machines check in every minute normally, 
but 100% check in sometimes).

The architecture should be switched to a model where client requests are 
queued, and then a pool of compilers respond to those requests as they're able 
to.  The compiler pool could grow and shrink as needed, and at worst the queue 
would just be longer than one might like, rather than too many clients 
destroying a server.

This has the added benefit of providing visibility of how busy the pool is, by 
assessing queue length.
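
The queued-compile model can be sketched in a few lines of Ruby. This is a minimal in-process illustration (the class and method names are made up, not Puppet internals): requests land on a shared queue, a pool of compiler workers drains it at whatever rate it can sustain, and the queue length doubles as the load metric mentioned above.

```ruby
# Minimal sketch of a compiler pool fed by a request queue.
# A thundering herd just lengthens the queue instead of killing a server.
class CompilerPool
  def initialize(workers:, &compile)
    @queue   = Queue.new
    @workers = workers.times.map do
      Thread.new do
        # Each worker blocks on the shared queue; a nil request means shutdown.
        while (request = @queue.pop)
          request[:reply_to].push(compile.call(request[:node]))
        end
      end
    end
  end

  # Queue depth is the visibility win: a direct measure of how busy the pool is.
  def backlog
    @queue.length
  end

  # Enqueue a request and block until some worker answers it.
  def compile(node)
    reply_to = Queue.new
    @queue.push(node: node, reply_to: reply_to)
    reply_to.pop
  end

  def shutdown
    @workers.size.times { @queue.push(nil) }
    @workers.each(&:join)
  end
end
```

Growing or shrinking the pool is then just starting or stopping workers against the same queue, driven by the backlog.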

There are two basic models that could be used:

* Clients use the message bus directly - they queue requests and wait for a 
response to get sent back to them.  This is preferred, but won't always be 
possible because of client architectural limitations (e.g., firewalls).  The 
benefit of this is that there aren't even open sockets on the server as 
requests come in.

* Clients still connect with HTTP to the server, and the server handles 
queueing requests and returning compiled catalogs.  This provides nearly as 
much scalability, and in some ways is preferable because it is fully compatible 
with existing Puppet installations.
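
The first model is essentially a request/reply exchange over the bus. A toy Ruby sketch, with in-process queues standing in for a real message broker (names are illustrative): the client publishes a compile request carrying a private reply queue and blocks on it, so no socket to the master stays open while it waits.

```ruby
# Toy stand-in for a message bus: clients publish compile requests and wait
# on a private reply queue; compilers pop requests and answer them.
class Bus
  def initialize
    @requests = Queue.new
  end

  # Client side: publish a request and block until a catalog comes back.
  def request_catalog(node)
    reply_to = Queue.new
    @requests.push(node: node, reply_to: reply_to)
    reply_to.pop
  end

  # Compiler side: take one request off the bus and reply to it.
  def serve_one(&compile)
    request = @requests.pop
    request[:reply_to].push(compile.call(request[:node]))
  end
end
```

The second (HTTP) model is the same exchange with the master's front end doing the publish-and-wait on the client's behalf, which is why it costs only a little scalability while staying wire-compatible with existing agents.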


This should be accomplishable with just indirector plugins - a new terminus for 
queueing requests and returning catalogs, and a 


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
http://projects.puppetlabs.com/my/account

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Bugs" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/puppet-bugs?hl=en.
