Issue #11135 has been updated by Joshua Lifton.

Status changed from Unreviewed to Needs More Information

The agents returning the message you're not expecting were likely in the middle 
of a Puppet run when you executed the kick command. Since only one Puppet run 
can occur at a time, the kick returns immediately for those hosts. You can test 
that this is the case by waiting a couple of minutes and trying to kick the 
failed hosts again. This time, the kick should succeed for those hosts, unless 
yet another Puppet run blocks it, which is unlikely given the default 
configuration of running Puppet every 30 minutes. You should also notice that 
the set of hosts that fails differs from one kick to the next. Please confirm 
whether this is the case.
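
The re-try suggested above can be scripted. Below is a minimal sketch of such a
retry loop; the function name, host names, and intervals are illustrative
assumptions, not part of Puppet itself. The kick command is passed in as a
string so the loop works with any command; in practice it would be something
like "puppet kick voatlas182.cern.ch --foreground".

```shell
# retry_kick: run a command until it succeeds, waiting between attempts.
# Hypothetical helper -- not part of Puppet. Intended use:
#   retry_kick "puppet kick voatlas182.cern.ch --foreground" 5 120
retry_kick() {
    cmd="$1"      # command to run, e.g. a puppet kick invocation
    tries="$2"    # how many attempts before giving up
    wait_s="$3"   # seconds to wait between attempts
    n=0
    while [ "$n" -lt "$tries" ]; do
        if $cmd; then
            return 0   # the kick succeeded
        fi
        n=$((n + 1))   # e.g. exit code 3: a run was already in progress
        sleep "$wait_s"
    done
    return 1           # still failing after all attempts
}
```

If the "already running" explanation is correct, each failed host should
eventually succeed within a couple of iterations.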
----------------------------------------
Bug #11135: puppet kick / puppet agent 
https://projects.puppetlabs.com/issues/11135

Author: Mario Lassnig
Status: Needs More Information
Priority: Normal
Assignee: 
Category: 
Target version: 
Affected Puppet version: 2.7.6
Keywords: 
Branch: 


Hi,

I'm managing several machines with a 2.7.6 master and a mix of 2.7.1 and 2.7.6 
clients (not yet all fully upgraded).

On the puppet master, I want to kick a node, but the command exits like this, 
which doesn't seem right. The agent seems to return the wrong message.

    voatlas226:~$ puppet kick voatlas182.cern.ch --trace --debug --foreground
    Triggering voatlas182.cern.ch
    Getting status
    status is running
    Host voatlas182.cern.ch is already running
    voatlas182.cern.ch finished with exit code 3
    Failed: voatlas182.cern.ch

An example of a good one:

    voatlas226:~$ puppet kick voatlas114.cern.ch --trace --debug --foreground
    Triggering voatlas114.cern.ch
    Getting status
    status is success
    voatlas114.cern.ch finished with exit code 0
    Finished
    
The really weird thing is that I have several other nodes where this works 
perfectly (2.7.1 or 2.7.6, it doesn't matter), and a few others where it 
doesn't (again, regardless of version). The nodes are configured exactly the 
same (auth.conf, puppet.conf, empty namespace.auth), and there really is 
nothing fancy going on. Several reinstalls of the gems and wiping 
/var/lib/puppet didn't help. It's a mystery to me.

That's the auth.conf I'm using for testing...

    path /
    auth any
    allow *
    
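(For reference, kick only needs the agent's run endpoint to be reachable by the 
master, so a narrower rule than the wide-open one above should also work. A 
sketch, assuming the master certname from the puppet.conf below:)

    path /run
    auth any
    allow atlas-puppet.cern.ch
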
And here's the puppet.conf (the same on master and all agents)

    [main]
    logdir = /var/log/puppet
    rundir = /var/run/puppet
    ssldir = $vardir/ssl
    [agent]
    server = atlas-puppet.cern.ch
    classfile = $vardir/classes.txt
    report = true
    listen = true
    localconfig = $vardir/localconfig
    preferred_serialization_format = yaml
    [master]
    certname = atlas-puppet.cern.ch
    reports = http, store
    reporturl = http://atlas-puppet.cern.ch:3000/reports/upload
    pluginsync = true
    storeconfigs = true
    dbadapter = mysql
    dbuser = root
    dbpassword =
    dbserver = localhost
    dbsocket = /var/lib/mysql/mysql.sock

If you need any more information, please tell me what you need.

Thanks,
Mario

