Issue #6135 has been updated by Luke Kanies.

The thundering herd problem is fairly easy to fix by switching to queued 
compile requests, rather than synchronous ones.

If each compile request were queued, with a callback to retrieve the result, 
then the queue could get much longer without reducing the performance of the 
pipeline.  The thundering herd is only a problem because 10x the clients 
results in 1/10th or 1/100th the per-client performance (e.g., compile time 
goes from 3s to 30s to 300s to a crash).  With a queue, compile time stays 
constant, and response time increases linearly with the number of systems in 
the queue.
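The difference can be sketched with some back-of-the-envelope arithmetic. The 3s compile time comes from the example above; the capacity of 10 concurrent compiles is an assumption for illustration:

```ruby
# Illustrative numbers only: the 3s compile time is from the example above;
# a capacity of 10 concurrent compiles is an assumption.
BASE_COMPILE = 3.0  # seconds per catalog compile at normal load
CAPACITY     = 10   # compiles the server can run at full speed

# Synchronous model: every client holds a connection, so per-client
# compile time degrades with the overload factor.
def synchronous_compile_time(clients)
  overload = [clients.to_f / CAPACITY, 1.0].max
  BASE_COMPILE * overload
end

# Queued model: compile time stays constant; clients instead wait their
# turn, so response time grows linearly with position in the queue.
def queued_response_time(position)
  (position / CAPACITY.to_f).ceil * BASE_COMPILE
end

synchronous_compile_time(100)   # => 30.0 (each compile is 10x slower)
queued_response_time(100)       # => 30.0 (a wait, but each compile still takes 3s)
synchronous_compile_time(1000)  # => 300.0 (and eventually a crash)
```

The total wall-clock time is similar, but in the queued model the server never does more work than it can handle, so it degrades gracefully instead of falling over.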

The other benefit is that it's much easier to spin up more servers to pull 
from a queue than it is to add more behind a load balancer.
----------------------------------------
Feature #6135: Elastic Compiler Pool
https://projects.puppetlabs.com/issues/6135#change-74253

Author: Luke Kanies
Status: Accepted
Priority: Normal
Assignee: eric sorenson
Category: 
Target version: 
Affected Puppet version: 
Keywords: backlog
Branch: 


Puppet deals poorly with the 'thundering herd' problem today, where a given 
system has sufficient capacity for typical usage but falls over when usage 
spikes above it (e.g., 10% of machines check in every minute normally, but 
sometimes 100% check in at once).

The architecture should be switched to a model where client requests are 
queued, and a pool of compilers responds to them as it is able.  The compiler 
pool could grow and shrink as needed, and at worst the queue would just be 
longer than one might like, rather than too many clients taking down a server.

This has the added benefit of providing visibility into how busy the pool is, 
simply by checking the queue length.

There are two basic models that could be used:

* Clients use the message bus directly - they queue requests and wait for a 
response to get sent back to them.  This is preferred, but won't always be 
possible because of client architectural limitations (e.g., firewalls).  The 
benefit of this is that there aren't even open sockets on the server as 
requests come in.

* Clients still connect with HTTP to the server, and the server handles 
queueing requests and returning compiled catalogs.  This provides nearly as 
much scalability, and in some ways is preferable because it is fully compatible 
with existing Puppet installations.
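The second model can be sketched in-process, with Ruby's stdlib Queue standing in for a real message bus; compile_catalog and the node name are placeholders:

```ruby
require 'thread'

requests = Queue.new  # stands in for a real message bus / job queue

# Placeholder compile function; the real work would be a full catalog
# compilation for the named node.
def compile_catalog(node)
  "catalog for #{node}"
end

# The compiler pool: each worker pulls requests as it's able to.
# Growing the pool is just starting more workers (threads here;
# processes or hosts in practice).
4.times do
  Thread.new do
    loop do
      node, reply = requests.pop
      reply << compile_catalog(node)  # "call back" with the result
    end
  end
end

# Per client request, the HTTP front end enqueues the work along with a
# reply channel, then blocks until the catalog arrives.
reply = Queue.new
requests << ['node1.example.com', reply]
catalog = reply.pop  # => "catalog for node1.example.com"

# Queue length gives direct visibility into how busy the pool is.
requests.size
```

The client-facing HTTP handling is unchanged, which is why this variant stays fully compatible with existing installations: only the server's internals switch from compile-on-accept to enqueue-and-wait.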


This should be accomplishable with just indirector plugins - a new terminus for 
queueing requests and returning catalogs, and a 
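Such a terminus might look roughly like the following. This is an illustrative shape only: the class name, the find signature, and the shared queue are assumptions, not the actual Puppet indirector API.

```ruby
require 'thread'

# Illustrative only -- not the real Puppet indirector API; the class,
# the find signature, and the shared queue are all assumptions.
class QueuedCompiler
  REQUESTS = Queue.new

  # Enqueue the compile request and block until a worker in the pool
  # sends the finished catalog back on the reply channel.
  def find(node_name)
    reply = Queue.new
    REQUESTS << [node_name, reply]
    reply.pop
  end
end

# A worker in the compiler pool drains the shared queue:
Thread.new do
  loop do
    node, reply = QueuedCompiler::REQUESTS.pop
    reply << "compiled catalog for #{node}"  # placeholder for a real compile
  end
end

QueuedCompiler.new.find('web01')  # => "compiled catalog for web01"
```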

