On Jul 16, 2010, at 1:42 AM, R.I.Pienaar wrote:


----- "Luke Kanies" <[email protected]> wrote:


Store configs just won't work for me: masters distributed across different continents, machines hitting any one of those, networks often down between them, etc.

Don't you have the same problem with your mongo db?

Yes and no; it's less of a problem the way I do it:

- the puppet masters do not write, they just read. The data comes from mcollective.

This is basically how I think masters should be done - the goal should always be to have shared-nothing state on the masters.

- the mcollective daemon on every node will send a request to any machine that runs a 'registration' agent, so I just install such an agent on every puppet master and the data arrives. The interesting thing with this model is that I can have many uses for the registration data and can have different types of receivers all on the same collective: one to feed puppet, one to provide nagios data, one to feed something a web UI wants like memcache, all sucking in this data.
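The fan-out described above (one registration stream, many independent consumers) can be sketched roughly as follows. This is an illustrative sketch only; the receiver classes and payload shape are hypothetical, not real mcollective agents:

```ruby
# Sketch: a registration payload fanned out to several independent
# receivers, the way multiple agents on one collective could each
# consume the same data. All class names here are hypothetical.

class PuppetFeed
  attr_reader :nodes
  def initialize; @nodes = {}; end
  def handle(reg)
    # store facts for the masters to read at compile time
    @nodes[reg[:identity]] = reg[:facts]
  end
end

class NagiosFeed
  attr_reader :hosts
  def initialize; @hosts = []; end
  def handle(reg)
    @hosts << reg[:identity]  # e.g. to generate host definitions later
  end
end

class Dispatcher
  def initialize(receivers); @receivers = receivers; end
  def publish(reg)
    @receivers.each { |r| r.handle(reg) }  # every receiver sees every message
  end
end

puppet = PuppetFeed.new
nagios = NagiosFeed.new
bus = Dispatcher.new([puppet, nagios])

bus.publish(identity: "web01", facts: { "osfamily" => "Debian" })

puppet.nodes["web01"]  # => {"osfamily"=>"Debian"}
nagios.hosts           # => ["web01"]
```

The point of the pattern is that adding another consumer (the memcache feed for a web UI, say) is just one more receiver on the same bus; the nodes sending registrations never change.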

- the middleware can build meshed networks. If the UK can't see the US, but both can see DE, then the updates will still propagate everywhere.
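That mesh-routing point can be illustrated with a toy reachability check; the broker names and links below are assumed for illustration only:

```ruby
# Toy illustration: even if the UK and US brokers can't reach each other
# directly, a message still propagates via DE because the middleware
# forwards across any connected path. A breadth-first search over the
# broker links shows which sites an update reaches.
require "set"

LINKS = {
  "UK" => ["DE"],        # UK <-> DE link is up
  "DE" => ["UK", "US"],  # DE <-> US link is up
  "US" => ["DE"],        # direct UK <-> US link is down
}

def reachable(from, links)
  seen = Set.new([from])
  queue = [from]
  until queue.empty?
    node = queue.shift
    (links[node] || []).each do |peer|
      next if seen.include?(peer)
      seen << peer
      queue << peer
    end
  end
  seen
end

reachable("UK", LINKS).to_a.sort  # => ["DE", "UK", "US"]
```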

- updates happen very quickly; mine is set to a 2 minute interval. After a split brain or a data corruption issue, all the data will be re-created within 2 minutes. At my node count I could probably set that to something crazy like 30 seconds.

So there is still a level of eventual consistency to the data, but it's resolved very quickly should that happen.
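The self-healing property described above (stale or corrupted data overwritten within one registration cycle) can be sketched like this. The store and timestamps are illustrative, not the actual mongo schema:

```ruby
# Sketch: nodes re-register every INTERVAL seconds, so the store can
# ignore anything it hasn't heard from recently. After a split brain or
# corruption, one registration cycle rebuilds the live data set.
INTERVAL = 120 # seconds, matching the 2 minute setting mentioned above

class RegistrationStore
  def initialize; @entries = {}; end

  def register(identity, facts, now)
    @entries[identity] = { facts: facts, seen_at: now }
  end

  # only entries refreshed within the last interval count as live
  def live(now)
    @entries.select { |_, e| now - e[:seen_at] <= INTERVAL }.keys
  end
end

store = RegistrationStore.new
store.register("web01", { "os" => "Debian" }, 0)
store.register("db01",  { "os" => "CentOS" }, 0)

store.live(60)    # => ["web01", "db01"]  both fresh

# db01 is partitioned away and misses a cycle; web01 re-registers
store.register("web01", { "os" => "Debian" }, 130)
store.live(200)   # => ["web01"]  stale entry aged out

# the partition heals; db01's next registration restores it
store.register("db01", { "os" => "CentOS" }, 210)
store.live(250)   # => ["web01", "db01"]
```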

Very nice.

That is, isn't this a relatively intractable problem when using databases, and couldn't any solution used to mitigate it also be used for storeconfigs?

You could certainly extend puppet to use more of the ideas in how I built this, but the barrier to entry to doing that at the plugin layer is too big. And the barrier to actually extending puppet is massive.

I was able to build all of this out, from initial idea to integrated and working across my masters, in roughly 5 hours, so for me doing something entirely different was the only viable option.

Besides, I don't think the resource-level abstraction in exported resources works well with my way of thinking, so I just won't use it :P

Hrm, well, hopefully I can change your mind on this at some point, and if not, hopefully we can find a more structured way of sharing data that will work for both of us.

I hate the 'query data from the db' model, because you lose all dependency information. If you use exported resources, we could draw a graph of the entire network and give you dependency information across the whole thing; but if you're just pulling data from a database, then you have no idea who's using the data.

Do you have a model for these queries that can provide that kind of dependency information?
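One way the question could be answered in code: if each read of shared data is recorded as an explicit edge, the "who's using the data" graph falls out naturally even in a query model. This is an illustrative sketch under assumed names, not a proposal for either storeconfigs or the mcollective registration format:

```ruby
# Sketch: a shared-data store that records an (exporter -> consumer)
# edge on every query, so dependency information survives even in a
# "query data from the db" model. All names here are hypothetical.
class SharedData
  def initialize
    @data  = {}   # key => { value:, exporter: }
    @edges = []   # [exporter, consumer] pairs
  end

  def export(exporter, key, value)
    @data[key] = { value: value, exporter: exporter }
  end

  def query(consumer, key)
    entry = @data.fetch(key)
    @edges << [entry[:exporter], consumer]  # remember who used it
    entry[:value]
  end

  # everyone who depends on data exported by a given node
  def dependents_of(exporter)
    @edges.select { |e, _| e == exporter }.map { |_, c| c }.uniq
  end
end

db = SharedData.new
db.export("nagios-server", "nagios::port", 5666)
db.query("web01", "nagios::port")
db.query("web02", "nagios::port")

db.dependents_of("nagios-server")  # => ["web01", "web02"]
```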

--
To have a right to do a thing is not at all the same as to be right
in doing it. -- G. K. Chesterton
---------------------------------------------------------------------
Luke Kanies  -|-   http://puppetlabs.com   -|-   +1(615)594-8199

--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/puppet-dev?hl=en.
