Issue #16121 has been updated by Chris Henry.
Creating a simple manifest with the two cron jobs to mimic my manifest shows slightly different behavior. Specifically, it does not re-add the root cron job over and over, but it still does not delete the www-data user's entry before creating the root user's entry on the first run after flipping the 'user' parameter on the cron (see the workaround sketch after the transcript):
<pre>
verify no cron jobs:
[user@p-HOSTNAME tmp]$ sudo -u www-data crontab -l |grep -A 2 "s3_logger_supervisord$"
[user@p-HOSTNAME tmp]$ sudo -u root crontab -l |grep -A 2 "s3_logger_supervisord$"
--
test manifest adds www-data user cron job:
[user@p-HOSTNAME tmp]$ cat test.pp
cron {
    "s3_logger_supervisord":
        command => "/var/has/s3_logrotate/bin/s3_logrotate.py -b bucket -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key' -k 'key'",
        user    => www-data,
        minute  => 40,
        hour    => 23;
    "s3_logger_supervisord_cleanup":
        command => "/bin/find /var/has/log/s3_logrotate -name '${hostname}_supervisord.*' -mtime +7 -exec rm {} \\;",
        user    => root,
        minute  => 35,
        hour    => 23;
}
--
apply test manifest (creates www-data user cron job)
[user@p-HOSTNAME tmp]$ sudo puppet apply -v -d ./test.pp
debug: Creating default schedules
debug: Failed to load library 'rubygems' for feature 'rubygems'
debug: Puppet::Type::User::ProviderUser_role_add: file rolemod does not exist
debug: Puppet::Type::User::ProviderLdap: true value when expecting false
debug: Puppet::Type::User::ProviderDirectoryservice: file /usr/bin/dscl does not exist
debug: Puppet::Type::User::ProviderPw: file pw does not exist
debug: Failed to load library 'ldap' for feature 'ldap'
debug: /File[/var/lib/puppet/lib]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/certs/p-HOSTNAME.use01.plat.priv.pem]: Autorequiring File[/var/lib/puppet/ssl/certs]
debug: /File[/var/lib/puppet/ssl/public_keys/p-HOSTNAME.use01.plat.priv.pem]: Autorequiring File[/var/lib/puppet/ssl/public_keys]
debug: /File[/var/lib/puppet/state/graphs]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/ssl]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/client_yaml]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/state.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/ssl/crl.pem]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/private_keys]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/public_keys]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/state/last_run_report.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/client_data]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/last_run_summary.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/state]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/certs]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/facts]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/private]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/certs/ca.pem]: Autorequiring File[/var/lib/puppet/ssl/certs]
debug: /File[/var/lib/puppet/state/resources.txt]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/clientbucket]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/certificate_requests]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/private_keys/p-HOSTNAME.use01.plat.priv.pem]: Autorequiring File[/var/lib/puppet/ssl/private_keys]
debug: Finishing transaction 69953800229580
debug: Loaded state in 0.02 seconds
debug: Loaded state in 0.02 seconds
info: Applying configuration version '1346099302'
debug: /Schedule[daily]: Skipping device resources because running on a host
debug: /Schedule[monthly]: Skipping device resources because running on a host
debug: /Schedule[hourly]: Skipping device resources because running on a host
debug: /Schedule[never]: Skipping device resources because running on a host
debug: /Schedule[weekly]: Skipping device resources because running on a host
debug: Prefetching crontab resources for cron
notice: /Stage[main]//Cron[s3_logger_supervisord]/ensure: created
debug: Flushing cron provider target www-data
debug: /Stage[main]//Cron[s3_logger_supervisord]: The container Class[Main] will propagate my refresh event
debug: /Schedule[puppet]: Skipping device resources because running on a host
debug: Class[Main]: The container Stage[main] will propagate my refresh event
debug: Finishing transaction 69953799442880
debug: Storing state
debug: Stored state in 0.31 seconds
notice: Finished catalog run in 0.48 seconds
debug: Finishing transaction 69953800562740
debug: Received report to process from p-HOSTNAME.use01.plat.priv
debug: Processing report from p-HOSTNAME.use01.plat.priv with processor Puppet::Reports::Store
--
replace www-data user with root in test manifest
[user@p-HOSTNAME tmp]$ sed -i 's/www-data/root/g' test.pp
--
test manifest now adds root user cron job:
[user@p-HOSTNAME tmp]$ cat test.pp
cron {
    "s3_logger_supervisord":
        command => "/var/has/s3_logrotate/bin/s3_logrotate.py -b bucket -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key' -k 'key'",
        user    => root,
        minute  => 40,
        hour    => 23;
    "s3_logger_supervisord_cleanup":
        command => "/bin/find /var/has/log/s3_logrotate -name '${hostname}_supervisord.*' -mtime +7 -exec rm {} \\;",
        user    => root,
        minute  => 35,
        hour    => 23;
}
--
apply test manifest (should delete www-data user cron job and create root user cron job - but doesn't)
[user@p-HOSTNAME tmp]$ sudo puppet apply -v -d ./test.pp
debug: Creating default schedules
debug: Failed to load library 'rubygems' for feature 'rubygems'
debug: Puppet::Type::User::ProviderUser_role_add: file rolemod does not exist
debug: Puppet::Type::User::ProviderLdap: true value when expecting false
debug: Puppet::Type::User::ProviderDirectoryservice: file /usr/bin/dscl does not exist
debug: Puppet::Type::User::ProviderPw: file pw does not exist
debug: Failed to load library 'ldap' for feature 'ldap'
debug: /File[/var/lib/puppet/ssl/certs]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/facts]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/resources.txt]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/client_yaml]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/client_data]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/certs/p-HOSTNAME.use01.plat.priv.pem]: Autorequiring File[/var/lib/puppet/ssl/certs]
debug: /File[/var/lib/puppet/ssl/certs/ca.pem]: Autorequiring File[/var/lib/puppet/ssl/certs]
debug: /File[/var/lib/puppet/state]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/graphs]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/ssl/public_keys/p-HOSTNAME.use01.plat.priv.pem]: Autorequiring File[/var/lib/puppet/ssl/public_keys]
debug: /File[/var/lib/puppet/state/state.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/clientbucket]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/private]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/private_keys/p-HOSTNAME.use01.plat.priv.pem]: Autorequiring File[/var/lib/puppet/ssl/private_keys]
debug: /File[/var/lib/puppet/ssl]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/last_run_summary.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/state/last_run_report.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/ssl/private_keys]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/lib]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/crl.pem]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/certificate_requests]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/ssl/public_keys]: Autorequiring File[/var/lib/puppet/ssl]
debug: Finishing transaction 69884944804720
debug: Loaded state in 0.02 seconds
debug: Loaded state in 0.02 seconds
info: Applying configuration version '1346099316'
debug: /Schedule[daily]: Skipping device resources because running on a host
debug: /Schedule[monthly]: Skipping device resources because running on a host
debug: /Schedule[hourly]: Skipping device resources because running on a host
debug: /Schedule[never]: Skipping device resources because running on a host
debug: /Schedule[weekly]: Skipping device resources because running on a host
debug: Prefetching crontab resources for cron
notice: /Stage[main]//Cron[s3_logger_supervisord]/ensure: created
debug: Flushing cron provider target root
debug: /Stage[main]//Cron[s3_logger_supervisord]: The container Class[Main] will propagate my refresh event
debug: /Schedule[puppet]: Skipping device resources because running on a host
debug: Class[Main]: The container Stage[main] will propagate my refresh event
debug: Finishing transaction 69884944021460
debug: Storing state
debug: Stored state in 0.29 seconds
notice: Finished catalog run in 0.42 seconds
debug: Finishing transaction 69884945144080
debug: Received report to process from p-HOSTNAME.use01.plat.priv
debug: Processing report from p-HOSTNAME.use01.plat.priv with processor Puppet::Reports::Store
--
verify www-data user crontab after:
[user@p-HOSTNAME tmp]$ sudo -u www-data crontab -l |grep -A 2 "s3_logger_supervisord$"
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key' -k 'key'
--
verify root user crontab after:
[user@p-HOSTNAME tmp]$ sudo -u root crontab -l |grep -A 2 "s3_logger_supervisord$"
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key' -k 'key'
[user@p-HOSTNAME tmp]$
</pre>
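Until this is fixed, a possible interim cleanup for the orphaned entry is a two-step apply. This is only a sketch - I have not verified it against 2.7.13, and since resource titles must be unique the absent version cannot coexist with the live one in the same catalog:
<pre>
# Step 1 (temporary): point the resource at the OLD user with
# ensure => absent and apply once, so the stale www-data record
# (matched via its "# Puppet Name" comment) gets removed.
cron {
    "s3_logger_supervisord":
        ensure => absent,
        user   => www-data;
}
# Step 2: restore the real definition with user => root and apply
# again, leaving the entry only in root's crontab.
</pre>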
----------------------------------------
Bug #16121: Cron user change results in duplicate entries on target user
https://projects.puppetlabs.com/issues/16121#change-69919
Author: Chris Henry
Status: Needs More Information
Priority: Normal
Assignee: Chris Henry
Category: cron
Target version:
Affected Puppet version:
Keywords:
Branch:
Running puppetmaster 2.7.13 and puppetd 2.7.13 on CentOS 6.
We have a Puppet module that installs some scripts and creates a cron job to pick up gzipped logs and upload them to S3. I mistakenly created the cron job as the user 'www-data' initially, but later found out that the supervisord daemon logs as 'root', so I changed the user of the Puppet cron job from 'www-data' to 'root'.
On a puppetd run the client successfully detects the change and returns a notice that it is changing users for the cron job, but the end result is that the cron job is not removed for the www-data user and a duplicate job is created in the root user's crontab.
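(For context on how these entries are tracked: as the autogenerated crontab headers below note, Puppet marks each record it owns with a '# Puppet Name' comment in the crontab of the user given by 'user'/'target', so one resource should map to exactly one marked record. A sketch of the mapping, using a hypothetical job name:)
<pre>
cron {
    "example_job":              # hypothetical title, for illustration
        command => "/bin/true",
        user    => root,
        minute  => 40,
        hour    => 23;
}
# ...corresponds to this marked record in root's crontab:
#   # Puppet Name: example_job
#   40 23 * * * /bin/true
</pre>
When only 'user' changes, the record under the old target apparently stops being visited, which is consistent with the orphaned www-data entry shown below.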
I didn't notice this for about a day, but when I looked I saw that there were many duplicate entries in the root user's crontab for this job, presumably one for each puppetd run.
The only crontab jobs on this server are managed by Puppet; no manual edits or jobs have ever been created.
As you can see in the output below, every client run results in the cron job still existing for the www-data user and another entry being generated in the root user's crontab:
<pre>
client puppetd run:
[user@HOSTNAME ~]$ date
Sat Aug 25 07:52:20 UTC 2012
[user@HOSTNAME ~]$ sudo puppetd -t
info: Caching catalog for HOSTNAME
info: Applying configuration version '1345880642'
notice: /Stage[main]/S3_logrotate::Supervisord/Cron[s3_logger_supervisord]/user: user changed 'www-data' to 'root'
notice: /Stage[main]/S3_logrotate::Supervisord/Cron[s3_logger_supervisord]/target: target changed 'www-data' to 'root'
notice: Finished catalog run in 31.34 seconds
[user@HOSTNAME ~]$
---------------
'www-data' crontab after run:
[user@HOSTNAME ~]$ date
Sat Aug 25 07:53:36 UTC 2012
[user@HOSTNAME ~]$ sudo -u www-data crontab -l
# HEADER: This file was autogenerated at Fri Aug 24 20:09:06 +0000 2012 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: s3_logger_crond
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/crond -l /var/has/log/s3_logrotate/HOSTNAME_crond -s secret_key_here -k key_here
# Puppet Name: s3_logger_nginx
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/nginx -l /var/has/log/s3_logrotate/HOSTNAME_nginx -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_api
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /data/log/api -l /var/has/log/s3_logrotate/HOSTNAME_api -s 'secret_key_here' -k 'key_here'
[user@HOSTNAME ~]$
------------------
'root' crontab after run:
[user@HOSTNAME ~]$ date
Sat Aug 25 07:54:15 UTC 2012
[user@HOSTNAME ~]$ sudo -u root crontab -l
# HEADER: This file was autogenerated at Sat Aug 25 07:53:08 +0000 2012 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: puppet_clientbucket_cleanup
15 1 * * * /usr/bin/find /var/lib/puppet/clientbucket/ -type f -mtime +14 -exec rm {} \;
# Puppet Name: s3_logger_php-fpm_cleanup
35 23 * * * /bin/find /var/has/log/s3_logrotate -name 'HOSTNAME_php-fpm.*' -mtime +7 -exec rm {} \;
# Puppet Name: s3_logger_nginx_cleanup
35 23 * * * /bin/find /var/has/log/s3_logrotate -name 'HOSTNAME_nginx.*' -mtime +7 -exec rm {} \;
# Puppet Name: s3_logger_crond_cleanup
35 23 * * * /bin/find /var/has/log/s3_logrotate -name 'HOSTNAME_crond.*' -mtime +7 -exec rm {} \;
# Puppet Name: s3_logger_supervisord_cleanup
35 23 * * * /bin/find /var/has/log/s3_logrotate -name 'HOSTNAME_supervisord.*' -mtime +7 -exec rm {} \;
# Puppet Name: s3_logger_api_cleanup
35 23 * * * /bin/find /var/has/log/s3_logrotate -name 'HOSTNAME_api.*' -mtime +7 -exec rm {} \;
# Puppet Name: s3_logger_php-fpm
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/php-fpm -l /var/has/log/s3_logrotate/HOSTNAME_php-fpm -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
# Puppet Name: s3_logger_supervisord
40 23 * * * /var/has/s3_logrotate/bin/s3_logrotate.py -b bucket_name -p /var/has/log/supervisord -l /var/has/log/s3_logrotate/HOSTNAME_supervisord -s 'secret_key_here' -k 'key_here'
[user@HOSTNAME ~]$
------------------
manifest:
class s3_logrotate::supervisord ($secret, $key, $bucket, $path = '/var/has/log/supervisord') {
    include s3_logrotate
    Cron {
        require => Class["s3_logrotate"]
    }
    cron {
        "s3_logger_supervisord":
            command => "/var/has/s3_logrotate/bin/s3_logrotate.py -b $bucket -p $path -l /var/has/log/s3_logrotate/${hostname}_supervisord -s '${secret}' -k '${key}'",
            user    => root, # this used to be www-data but supervisord logs as root
            minute  => 40,
            hour    => 23;
        "s3_logger_supervisord_cleanup":
            command => "/bin/find /var/has/log/s3_logrotate -name '${hostname}_supervisord.*' -mtime +7 -exec rm {} \\;",
            user    => root,
            minute  => 35,
            hour    => 23;
    }
}
</pre>
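One side note on the cleanup jobs above: in a double-quoted Puppet string, "\\;" yields a literal "\;" in the installed crontab line, which is what find's -exec terminator has to look like once cron hands the job to the shell. A sketch mirroring the cleanup resource, with a hypothetical title and path:
<pre>
cron {
    "cleanup_example":          # hypothetical title and path
        # "\\;" in the manifest becomes "\;" in the crontab line, so
        # the shell passes a bare ";" through to find as the -exec
        # terminator.
        command => "/bin/find /var/log/example -mtime +7 -exec rm {} \\;",
        user    => root,
        minute  => 35,
        hour    => 23;
}
</pre>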