Issue #6135 has been updated by Luke Kanies.

Trevor Vaughan wrote:
> Why not do both?
> 
> Option #1 is 'better' IMO since you can then do cross-site loading, etc... 
> but, as Luke points out, Option #2 is already compatible with existing 
> installations.
> 
> The real issues that have to be tackled are:
> 
> 1. How do you ensure that ALL PMs in the cluster have the exact same version 
> of a manifest given a particular time?

I agree, this can be challenging.  It becomes a bit easier if we move to a world 
where updates are pushed to the PMs, rather than each PM deciding on its own 
when to update its code.

> 2. How do you federate your Puppet PKI without breaking autosigning or having 
> some magic that pre-places a key?
> 3. How do you ensure that all PM to PM communications are fully protected and 
> verified? (yeah, we have a PKI for that, but then you get into a full PKI 
> hierarchy, etc...)
> 
> I think that this should be dependent on OCSP/SCVP support so that you don't 
> have to rely on CRL pushes to stop trusting a rogue/malfunctioning PM.

I'm not sure why #2 and #3 suddenly become problems, but I'd plan to move to a 
central cert service that everyone relies on.  What would change in the new 
world vs. the existing one, in terms of PKI?

----------------------------------------
Feature #6135: Elastic Compiler Pool
https://projects.puppetlabs.com/issues/6135#change-74100

Author: Luke Kanies
Status: Accepted
Priority: Normal
Assignee: eric sorenson
Category: 
Target version: 
Affected Puppet version: 
Keywords: backlog
Branch: 


Puppet deals poorly with the 'thundering herd' problem today: a given system 
has sufficient capacity to handle typical usage but falls over when usage 
spikes above that (e.g., 10% of machines normally check in every minute, but 
sometimes 100% do at once).

The architecture should be switched to a model where client requests are 
queued, and a pool of compilers responds to those requests as it is able.  The 
compiler pool could grow and shrink as needed, and at worst the queue would 
just be longer than one might like, rather than too many clients taking down a 
server.

This has the added benefit of making pool load visible: queue length is a 
direct measure of how busy the pool is.
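To make that concrete, a pool manager could size the compiler pool directly 
from queue depth.  A minimal sketch, assuming purely illustrative thresholds 
(none of these names or numbers come from any Puppet API):

```ruby
require 'thread'

# Decide how many compiler workers the pool should run, given the current
# queue depth.  One worker per `per_worker` queued requests, clamped to
# the [min, max] range.  All thresholds here are illustrative.
def desired_pool_size(queue_length, min: 2, max: 16, per_worker: 10)
  [[queue_length / per_worker + min, min].max, max].min
end

queue = Queue.new
50.times { |i| queue << "catalog-request-#{i}" }

puts desired_pool_size(queue.size)  # => 7 (50/10 + the minimum of 2)
```

The same function run periodically would shrink the pool back toward the 
minimum as the queue drains.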

There are two basic models that could be used:

* Clients use the message bus directly - they queue requests and wait for a 
response to be sent back to them.  This is preferred, but won't always be 
possible because of client architectural limitations (e.g., firewalls).  The 
benefit is that the server doesn't even hold open sockets as requests come in.

* Clients still connect with HTTP to the server, and the server handles 
queueing requests and returning compiled catalogs.  This provides nearly as 
much scalability, and in some ways is preferable because it is fully compatible 
with existing Puppet installations.
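Either model reduces to the same request/response pattern over queues: the 
client (or the HTTP front end acting on its behalf) enqueues a request tagged 
with a reply destination, and whichever compiler picks it up sends the catalog 
back there.  A self-contained sketch, using in-process Ruby queues as a 
stand-in for a real message bus (all names here are illustrative):

```ruby
require 'thread'

requests = Queue.new  # shared request queue; stands in for a bus destination

# Compiler worker: pop requests, "compile", and reply on the queue the
# client named in the request.
compiler = Thread.new do
  while (req = requests.pop)
    req[:reply_to] << "catalog for #{req[:node]}"  # real code would compile here
  end
end

# Client side: enqueue a request carrying a private reply queue, then block
# until the catalog comes back.
reply = Queue.new
requests << { node: "web01.example.com", reply_to: reply }
result = reply.pop
puts result  # => catalog for web01.example.com

requests << nil  # signal the worker to exit
compiler.join
```

With a real bus the reply destination would be a per-client (or per-request) 
reply queue rather than an in-process object, but the correlation works the 
same way.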


This should be accomplishable with just indirector plugins - a new terminus for 
queueing requests and returning catalogs, and a 
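As a rough shape for that terminus, the catalog `find` on the front end would 
enqueue the compile request and block until a member of the pool posts the 
catalog back.  The sketch below stubs the plumbing with plain Ruby queues; a 
real implementation would subclass Puppet's indirector terminus base class and 
talk to an actual bus, so every name here is hypothetical:

```ruby
require 'thread'

REQUESTS = Queue.new  # stands in for the shared request bus

# Hypothetical queueing terminus: `find` enqueues the compile request and
# blocks until some member of the compiler pool posts the catalog back.
class QueueingCatalogTerminus
  def find(node_name)
    reply = Queue.new
    REQUESTS << { node: node_name, reply_to: reply }
    reply.pop  # blocks until a compiler responds
  end
end

# One pool member, handling requests as it is able to.
worker = Thread.new do
  while (req = REQUESTS.pop)
    req[:reply_to] << "compiled catalog for #{req[:node]}"
  end
end

catalog = QueueingCatalogTerminus.new.find("db01.example.com")
puts catalog  # => compiled catalog for db01.example.com

REQUESTS << nil  # shut the worker down
worker.join
```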

