> 1. Make low-end scaling easier: currently when a customer's deployment
> gets too large to be handled by a single master running WEBrick, they
> have to install Apache/Passenger or Mongrel.  This can be difficult to
> do, especially since Passenger has limited package support on some OSes
> (notably RHEL/CentOS 5).  It would be nice to give people a less painful
> way of scaling up beyond what WEBrick is capable of handling.

For a moment I thought you were seriously saying that a message bus
was easier to set up than Mongrel. :)

> 2. Make medium-to-high-end scaling easier: currently when a customer's
> deployment gets too large to be handled by a single physical machine, they
> have to set up a load balancing infrastructure to distribute HTTPS requests
> from client machines to a suite of puppet masters.  It would be nice to give
> people a way of adding CPUs to the problem (essentially creating a "catalog
> compiler farm") without forcing them to add a layer of infrastructure.

There's also the statefulness issue: distributing requests doesn't
suffice without a way to distribute the information needed to satisfy
them, so this load balancing is not as straightforward as it might
sound.
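
(To make that concrete, here's a toy sketch, in Python rather than
anything Puppet-shaped, of the usual workaround: pin each client to one
master so whatever per-node state that master holds stays warm.  The
master names and the routing scheme are invented for illustration.)

    import hashlib

    MASTERS = ["master-a", "master-b", "master-c"]  # hypothetical pool

    def route(certname: str) -> str:
        """Pin each client to a single master so its state stays warm there."""
        digest = hashlib.sha1(certname.encode()).hexdigest()
        return MASTERS[int(digest, 16) % len(MASTERS)]

    # Naive round-robin would scatter node42's requests across all three
    # masters, each of which would then have to rebuild node42's state.
    print(route("node42.example.com"))  # always the same master for this node

(And even this breaks down when the pool changes size, which is where
consistent hashing and friends come in.)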

> 3. Allow customers who already have a queueing system as part of their
> infrastructure to use it to scale Puppet, so they don't have to implement a
> special Puppet-specific piece of infrastructure.

I'm not convinced this is viable; queuing systems are presently in
something like the lovely state of anarchy that email was in the
mid-'80s; the fact that someone already runs *a* queueing system
doesn't mean it's one Puppet could sensibly support.

> 4. Make Puppet handle load spikes more robustly.  Currently I understand
> from Luke that there is an avalanche effect once the master reaches 100%
> capacity, wherein the machine starts thrashing and actually loses
> throughput, causing further load increases.  It would be nice if we could
> guarantee that Puppet didn't try to serve more simultaneous requests than
> its processors/memory could handle, so that performance would degrade more
> gracefully in times of high load.

This is a really strong point, IMHO.
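
(Concretely, the fix amounts to admission control: cap the number of
in-flight compilations at what the hardware can actually take, and make
the rest wait in line.  A rough Python sketch, with a stubbed-out
compile step standing in for the real work:)

    import threading
    import time

    MAX_CONCURRENT = 4  # roughly one compilation per core; tune per machine
    _slots = threading.BoundedSemaphore(MAX_CONCURRENT)

    def compile_catalog(node: str) -> str:
        """Stand-in for the real (CPU- and memory-hungry) compilation."""
        time.sleep(0.1)
        return "catalog for " + node

    def handle_catalog_request(node: str) -> str:
        # Excess requests block here, in line, rather than all running
        # at once and pushing the box into swap (the avalanche above).
        with _slots:
            return compile_catalog(node)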

> 5. Allow customers to prioritize compilations for some client machines over
> others, so that mission critical updates aren't delayed.

> 6. Allow a "push model", where changing the manifest causes catalogs to be
> recompiled, and those catalogs are then pushed out to client machines rather
> than waiting for the client machines to contact the master and request
> catalogs.

I'd also suspect there are significant gains to be had from structured
"bulk" recompilations, where the cost of being smart can be amortized
over enough hosts to reap the potential benefits.  (That is, with a
total cost function of c*n + k for n hosts, refactors that decrease c
at the expense of k can pay off once n is larger than our present
value of "always 1".)

> 7. Allow inter-machine dependencies to be updated faster (i.e. if machine
> A's configuration depends on the stored configuration of machine B, then
> when a new catalog gets sent to machine B, push an update to machine A ASAP
> rather than waiting for it to contact the master and request a catalog).

Yeah.  Though extending the message bus to the client side, as some of
these imply... hmm.

> 8. Allow the fundamental building blocks of Puppet to be decomposed more
> easily by advanced customers so that they can build in their own
> functionality, especially with respect to caching, request routing, and
> reporting.  For example, a customer might decide that instead of building a
> brand new catalog in response to every catalog request, they might want to
> send a standard pre-built catalog to some clients.  Customers should be able
> to do things like this (and make other extensions to puppet that we cannot
> anticipate) by putting together the building blocks of puppet in their own
> unique ways.

I've got some concerns here in that we've presently got 3+
not-quite-congruent decompositions in the works, but yeah, in
principle...

> 9. Allow for staged rollouts--a customer may want to update a manifest on
> the master but have the change propagate to client machines in a controlled
> fashion over several days, rather than automatically deploying each machine
> whenever it happens to contact the puppet master next.
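
This one may be simpler than it sounds.  A cheap sketch (not a design;
the names and the three-day window are invented): hash each certname
into [0, 1) and only hand a node the new manifest once the rollout's
elapsed fraction has passed its bucket.

    import hashlib
    import time

    ROLLOUT_START = time.time()      # hypothetical: when the manifest changed
    ROLLOUT_SECONDS = 3 * 24 * 3600  # spread the change over three days

    def bucket(certname: str) -> float:
        """Deterministically map a node to a value in [0, 1)."""
        h = hashlib.sha1(certname.encode()).hexdigest()
        return int(h, 16) / 16 ** len(h)

    def use_new_manifest(certname: str, now: float) -> bool:
        elapsed = (now - ROLLOUT_START) / ROLLOUT_SECONDS
        return bucket(certname) < elapsed  # early buckets switch first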

> 10. Allow for faster file serving by allowing a client machine to request
> multiple files in parallel rather than making a separate REST request for
> every single file.

We could do parallel requests via REST just as easily, no?
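
Something like this on the client side, against the existing REST file
content endpoints (the URLs here are schematic, not the exact API paths):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = [
        "https://master:8140/production/file_content/modules/app/one.conf",
        "https://master:8140/production/file_content/modules/app/two.conf",
    ]

    def fetch(url: str) -> bytes:
        with urlopen(url) as resp:
            return resp.read()

    with ThreadPoolExecutor(max_workers=8) as pool:
        bodies = list(pool.map(fetch, urls))  # files come down in parallel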

> 11. Allow for fail-over: if one puppet master crashes, allow other puppet
> masters to transparently take over the work it was doing.

And if one message bus crashes?

> E. We don't want to sacrifice error handling, and we want the system to be
> at least as robust as 0.25 and 2.6 if a puppet master crashes.

We may actually see some improvement in error handling, in that we'll
have opportunities to detect problems in more sensible places.

-- Markus
-----------------------------------------------------------
The power of accurate observation is
commonly called cynicism by those
who have not got it.  ~George Bernard Shaw
------------------------------------------------------------
