We have a similar setup, minus the SRV records (although that looks quite 
interesting, gotta get off of 2.7). And we push SVN checkouts instead of git, 
but that's not a big difference.
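For anyone following along, the SRV-record approach (a Puppet 3.x feature, hence needing to get off 2.7) looks roughly like this; the domain and hostnames are placeholders:

```
# puppet.conf on each agent (requires Puppet >= 3.0)
[main]
use_srv_records = true
srv_domain = example.com

# DNS zone for example.com: agents prefer the site-local master (priority 0),
# falling back to the other site (priority 10) if it is down.
_x-puppet._tcp.example.com.    IN SRV 0  5 8140 pm-east01.example.com.
_x-puppet._tcp.example.com.    IN SRV 10 5 8140 pm-west01.example.com.
_x-puppet-ca._tcp.example.com. IN SRV 0  5 8140 puppetca01.example.com.
```

The separate `_x-puppet-ca` record is what lets you keep a single CA while spreading the catalog-compiling masters across sites.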

I have been thinking about the CA, and how to make it more available. My first 
thought is, do we have to save the generated client certs at all? I brought 
this up a few weeks ago and the general answer was "there is no technical 
reason to keep the certs", so I am considering deleting them immediately. Now I 
don't have to worry about backing up the puppetca!
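If the certs really don't need to be retained, the cleanup is trivial; a minimal sketch, demoed against a scratch directory standing in for the usual /var/lib/puppet/ssl/ca/signed (your ssldir layout may differ):

```shell
# Stand-in for the CA's signed-cert directory.
signeddir=$(mktemp -d)

# Pretend the CA just signed a couple of agents.
touch "$signeddir/foo-east01.pem" "$signeddir/foo-east02.pem"

# Purge the signed client certs right after signing --
# nothing left on the puppetca worth backing up.
find "$signeddir" -type f -name '*.pem' -delete

ls "$signeddir"    # prints nothing: no signed certs left to lose
```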

Next, and this is where my SSL weakness will shine: could you have all of your 
HA-puppetmasters run as CAs, too, and then have multiple CA certs on the 
trusted list on the puppet masters? Something like this:
1. foo-east01 comes up, and gets an auto-signed cert from pm-east01.
2. pm-east01 is hit by an asteroid, so foo-east01 automatically fails over to 
pm-west01.
3. pm-west01 knows to trust the pm-east01-signed cert.
4. We stand up a new pm-east01.new, generate a new cert for it, and append said 
cert to the trusted list for all clients/PMs.
5. foo-east01 starts using pm-east01.new.
6. foo-east02 comes up, gets a cert from pm-east01.new.
(This is starting to feel like a certificate rotation strategy in some weird 
way).
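The "trusted list" part of the steps above is just a concatenation of CA certs into one PEM bundle. A self-contained sketch using throwaway openssl-generated CAs in place of the real PM CAs (all names are made up to match the example hosts):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in CAs for pm-east01 and pm-west01 (in reality, each PM's CA cert).
for ca in pm-east01 pm-west01; do
  openssl req -x509 -newkey rsa:2048 -nodes -keyout "$ca.key" \
    -out "$ca.pem" -subj "/CN=$ca-ca" -days 365
done

# Issue a client cert from pm-east01's CA (what foo-east01 would hold).
openssl req -newkey rsa:2048 -nodes -keyout foo-east01.key \
  -out foo-east01.csr -subj "/CN=foo-east01"
openssl x509 -req -in foo-east01.csr -CA pm-east01.pem -CAkey pm-east01.key \
  -CAcreateserial -out foo-east01.pem -days 365

# The trusted list: every PM CA cert appended into one bundle.
cat pm-east01.pem pm-west01.pem > ca_bundle.pem

# pm-west01 can now verify the east-signed cert against the bundle.
openssl verify -CAfile ca_bundle.pem foo-east01.pem
```

Step 4 in the list is then just re-generating one CA cert and re-running the `cat` on every client/PM.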

One thing I wonder is whether I'd actually be a little more secure. Instead of 
having a single CA with a huge FW configuration (we have a lot of independent 
networks across the 'net), each PM/CA needs only a very specific FW ruleset.
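Concretely, I'm imagining something per-PM like this (subnets are made up for illustration):

```
# pm-east01 only needs to accept 8140 from its own site's networks,
# instead of every network that could ever reach one central CA.
iptables -A INPUT -p tcp --dport 8140 -s 10.10.0.0/16 -j ACCEPT   # east site agents
iptables -A INPUT -p tcp --dport 8140 -s 10.20.5.0/24 -j ACCEPT   # east mgmt hosts
iptables -A INPUT -p tcp --dport 8140 -j DROP
```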
 
On May 14, 2013, at 7:35 AM, Erik Dalén <erik.gustav.da...@gmail.com> wrote:

> 
> 
> 
> On 10 May 2013 19:52, Ramin K <ramin-l...@badapple.net> wrote:
> 
>         In any case I'd like to see more discussion on highly available 
> Puppet regardless of the way it's implemented.
> 
> We are using SRV records for running multiple puppetmasters and selecting a 
> site local but allowing fallback to others in case it is down.
> We have 6 puppetmasters for the production environment running in this way 
> currently. Each normally handling 500-1000 nodes. The git repository is push 
> replicated to each one of them.
> 
> But only one is the CA, and it is backed up. If it were to crash we are fine 
> with an outage on installing new nodes until we have restored that part to 
> another node. But we have looked into some solutions for maybe making it 
> more resilient.
> 
> For PuppetDB we have two service nodes and a master and hot standby for the 
> postgres database.
> 
> -- 
> Erik Dalén
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Puppet Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to puppet-users+unsubscr...@googlegroups.com.
> To post to this group, send email to puppet-users@googlegroups.com.
> Visit this group at http://groups.google.com/group/puppet-users?hl=en.
> For more options, visit https://groups.google.com/groups/opt_out.
>  
>  
