I'm using puppet (0.24, working on the 0.25 migration) to do rolling
upgrades across our datacenter.

I'm running puppet as a daemon.

To change an application version, I modify a database, which in turn
changes the data that my puppet_node_classifier presents. I then ssh
to the nodes I want to upgrade and force a puppet run with
puppetd --server=foo --test --report.
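
Roughly, the forced runs look like this (the hostnames are just
placeholders for however the list of nodes to upgrade gets picked):

    # force a catalog run on each node slated for the new version
    for node in web01 web02 web03; do    # placeholder hostnames
        ssh "$node" 'puppetd --server=foo --test --report'
    done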

The problem I'm running into is that, fairly often, the node is
already in the middle of a run of its own, so I get back a message
like this:

Lock file /var/lib/puppet/state/puppetdlock exists; skipping catalog run

I can work around this by detecting that message and re-sshing into
the node to run puppetd again (roughly sketched below), but that
doesn't seem very elegant. What are other people doing to avoid this
sort of situation?
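
Here's roughly what the retry hack looks like (the retry count and the
60-second sleep are arbitrary numbers I picked, not anything
puppet-specific):

    #!/bin/sh
    # retry a forced run if the node is already mid-run
    node="$1"
    for attempt in 1 2 3 4 5; do
        output=$(ssh "$node" 'puppetd --server=foo --test --report' 2>&1)
        # puppetd prints the lockfile message when a run is in progress
        if echo "$output" | grep -q 'puppetdlock exists'; then
            echo "$node is mid-run (attempt $attempt), sleeping 60s" >&2
            sleep 60
        else
            echo "$output"
            break
        fi
    done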

Pete
