Yes, Puppet is perfect for your file-copy-and-hook scenario. In Puppet
speak it's notify and subscribe between resources. Here's a very quick
example that will restart Some Daemon if /etc/resolv.conf changes:
node 'somehost' {
  class { 'resolv': }
}
class resolv {
  # completed sketch (the original post was truncated here); the source
  # path and daemon name are illustrative
  $resolv_conf = '/etc/resolv.conf'
  file { $resolv_conf:
    source => 'puppet:///modules/resolv/resolv.conf',
  }
  service { 'some_daemon':
    ensure    => running,
    subscribe => File[$resolv_conf],  # refresh (restart) on file change
  }
}
I use Puppet Commander, an MCollective tool:
http://projects.puppetlabs.com/projects/mcollective-plugins/wiki/ToolPuppetcommander
Rather than have my Puppet agents check in on their own, no Puppet
service runs on any server; MCollective is running instead. Puppet
Commander uses the MCollective framework to trigger agent runs on the
nodes, so you can control when they run and how many run at once.
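A minimal sketch of the "no Puppet service" part, managed from Puppet
itself (assuming the agent package installs a service named 'puppet'):

# keep the regular agent daemon off; runs are triggered over MCollective
service { 'puppet':
  ensure => stopped,
  enable => false,
}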
So, we got access back. It turned out to be a malformed fact causing
the problem. The fact in question used a Ruby exec to get LVM free
space and called chomp on the result without checking whether there was
a result at all. Since the x7 server didn't have LVM installed to begin
with, that led to problems.
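For illustration, here's a sketch of that failure mode as a custom fact
(the fact name and the vgs command are my assumptions, not the original
code). Facter::Util::Resolution.exec returns nil when the command isn't
there, and calling chomp on nil raises, which takes the fact down:

# hypothetical reconstruction, not the original fact
Facter.add(:lvm_freespace) do
  setcode do
    out = Facter::Util::Resolution.exec('vgs --noheadings -o vg_free')
    # guard the nil case: exec returns nil if vgs isn't installed
    out.chomp unless out.nil?
  end
end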
On 28 April 2012 15:11, Walter Heck walterh...@gmail.com wrote:
So, just to make sure I understand this correctly: in this case just
removing all the puppet code didn't help, since the mere existence of
the module with the custom fact in our module path made the fact
execute on the agent?
Hi,
[...]
$valid_ensure_values = [ 'present', 'absent' ]
if ! ($ensure in $valid_ensure_values) {
  $valid_list = inline_template('<%= scope.lookupvar("apt-cacher-ng::params::valid_ensure_values").join(", ") %>')
  fail("${module_name}::server - Invalid ensure value [currently: ${ensure}], valid values: ${valid_list}")
}
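A shorter version of the same guard, as a sketch assuming the
puppetlabs-stdlib module is on your modulepath:

# equivalent check via stdlib's validate_re (value, regex, message)
validate_re($ensure, '^(present|absent)$',
  "${module_name}::server - Invalid ensure value [currently: ${ensure}]")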
When we started with splay, over time we found that puppet runs would flock
together. If there were network or system load issues causing multiple
puppet runs to be slow, they would seem to clear at the same time, then be
on the same schedule from then on. As other clients hit a slow run, they
would join the flock, and the clumping only got worse over time.
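One common way around that drift (my sketch, not from the original
post) is to skip splay and drive the agent from cron with a per-host
pseudo-random minute via fqdn_rand, so every host keeps a fixed, evenly
spread slot that can't migrate:

# each node gets a stable minute derived from its fqdn, so slow runs
# don't shift the schedule and runs can't drift into lockstep
cron { 'puppet-agent':
  command => '/usr/bin/puppet agent --onetime --no-daemonize',
  user    => 'root',
  minute  => fqdn_rand(60),
}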