Thanks all for responses. Does seem like Puppet isn't really the tool for
the job but can be persuaded to do it. I found Puppet WSUS, which leverages
PoshWSUS to control WSUS... but you still need WSUS. I am not really sure
why we are not using it... something to discuss with the management!
Ken / Deepak,
first of all thanks a ton for the great help.
It seems there is some issue with the puppet facts upload command.
I already have the certname server configuration directives declared in
puppet.conf, but for some reason they are not being recognized by
puppet facts.
That's why I'm upgrading (otherwise 3.4.3 works for me), but I will have a hard
time justifying any upgrade that leads to breakage. They're in profile::base
for me too.
Well, on with my testing!
On Thu, Jun 12, 2014 at 10:36:07AM +1000, Pete Brown wrote:
I decided to put puppet, hiera and
Hi!
I try to activate the yum repository for SUSE Linux Enterprise Server 11:
myserver:~ # zypper --non-interactive addrepo -t YUM
http://yum.puppetlabs.com puppet
Adding repository 'puppet' [done]
Repository 'puppet' successfully added
Enabled: Yes
Autorefresh: No
URI:
Which means you're running with an invalid config for up to 30 minutes
before your services are restarted back with your desired config?
On Wed, 11 Dec 2013, Dan White wrote:
I am using Puppet on RHEL systems.
I do not use Puppet to patch the servers. I use Red Hat Network and yum
update.
first of all thanks a ton for the great help.
No problem, I'm glad Deepak chimed in about 'facts upload'; it's a much
better way to do it.
It seems there is some issue with the puppet facts upload command
I already have the certname server configuration directives declared in
the
Trying to use the puppetlabs-puppetdb module to set up my puppet master to
use stored configs with puppetdb alongside Foreman.
In a config group I dropped in the classes puppetdb and puppetdb::master::config.
Here's the error I'm seeing:
Error: Could not retrieve catalog from remote server:
Torsten,
The puppetlabs yum repository is for EL releases only. Because SLES
is 'special' you need to use a repo with RPMS (and metadata) tailored
for zypper and its ilk.
https://build.opensuse.org/project/show/systemsmanagement:puppet
- Jesse
On Thu, Jun 12, 2014 at 7:12 AM, Torsten Kleiber
*puppet.conf on master *
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The
The files above are my current master and agent configs. I have updated the
agent config after your suggestion and it seems to be working great...
But now I will need to update my puppet agent config (for the above change
and to include postrun_command for puppet facts upload) on almost 2000
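If the agents already check in, one option is to let Puppet manage its own config rather than touching all of them by hand. A minimal sketch using the ini_setting type from the puppetlabs-inifile module; the path and command values below are assumptions, adjust for your layout:

```puppet
# Sketch: assumes puppetlabs-inifile is on the modulepath; the path and
# the upload command are example values, not a definitive setup.
ini_setting { 'agent postrun_command':
  ensure  => present,
  path    => '/etc/puppet/puppet.conf',
  section => 'agent',
  setting => 'postrun_command',
  value   => '/usr/bin/puppet facts upload',
}
```

Put that in a class every agent already gets (profile::base or similar) and the setting rolls out on the next run.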
Using the puppetdb and postgresql master branches from github and this is
all I've got:
class { 'puppetdb': }
class { 'puppetdb::master::config':
  puppet_service_name => 'apache2',
  require             => Class['puppetdb'],
}
Works fine with Foreman. Like Ken
I need to use certnames as we are an IDC and need to handle large number of
instances and have a unique naming convention for each device. We cannot
force hostnames on servers(belonging to customers) so our unique device
name is forced on the certname.
Anyways.. Can you also shed some light on
I need to use certnames as we are an IDC and need to handle large number of
instances and have a unique naming convention for each device. We cannot
force hostnames on servers(belonging to customers) so our unique device name
is forced on the certname.
Fair enough.
Anyways.. Can you also
Also, try running puppet facts upload --debug --trace and see if that
gives us more info.
On Thu, Jun 12, 2014 at 2:08 PM, Ken Barber k...@puppetlabs.com wrote:
I need to use certnames as we are an IDC and need to handle large number of
instances and have a unique naming convention for each
I am in the process of rolling out 20 new servers and all have to have some
commonalities. I am shockingly new to puppet and am supposed to have this
up and running in 2 weeks. I have the ssh class written and prepped, but my
ldap config is weird and I'm not sure how to use puppet to set it up.
Thank you!
But now I get another error:
vdu10272:~ # zypper --non-interactive refresh
Retrieving repository 'puppet' metadata [/]
Download (curl) error for
'https://build.opensuse.org/project/show/systemsmanagement:puppet/repodata/repomd.xml':
Error code: Unrecognized error
Error message: SSL
You can do it through the Foreman UI.
Select the node in question, and then hit the 'YAML' button under
'Properties' - 'Details'.
HTH
Gav
On Thursday, 12 June 2014 13:57:42 UTC+1, Ken Barber wrote:
Using the puppetdb and postgresql master branches from github and this
is
all I've
This doesn't only happen in the case of puppet facts upload but also in
the case of puppet agent --test.
And yes, the master configs that I have provided are the correct ones that
I have been using.
On Thu, Jun 12, 2014 at 6:39 PM, Ken Barber k...@puppetlabs.com wrote:
Also, try running puppet
Hi,
I have open source Puppet server 3.3.1 and agent 2.7.25.
I installed puppet-dashboard.
After a break, I am returning to the Puppet server and would like to try it
with an ENC.
I do not understand the basics of how it works.
1. In the external_node file, which certnames do I need to put?
I
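For what it's worth, an ENC is just an executable that receives a certname as its first argument and prints YAML on stdout. A minimal sketch; the class name, environment, and fallback certname here are only placeholders:

```shell
#!/bin/sh
# Minimal ENC sketch: Puppet invokes this with the node's certname as $1
# (the certname from the agent's certificate, not necessarily its
# hostname) and expects a YAML document on stdout.
enc() {
  # $1: certname; a real ENC would look it up in a database or flat file.
  cat <<EOF
---
classes:
  - profile::base
environment: production
EOF
}
enc "${1:-agent01.example.com}"  # placeholder certname when run by hand
```

You would then point node_terminus = exec and external_nodes at the script's path in the [master] section of puppet.conf.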
Apparently I had a syntax error in the class.
On Thursday, June 12, 2014 9:08:32 AM UTC-4, bryan@tma1.com wrote:
I am in the process of rolling out 20 new servers and all have to have
some commonalities. I am shockingly new to puppet and am supposed to have
this up and running in 2
Sort of. The normal Puppet+Passenger configuration still crashes, but
for some odd reason if I add the following to the Puppet config.ru
(after the --confdir and --vardir lines) the crashes stop...
ARGV << "--debug"
ARGV << "--trace"
ARGV << "--profile"
ARGV << "--logdest" << "/var/log/puppet/puppetmaster.log"
I
No issue with connection to puppetdb on port 8081. Overall connectivity
looks good.
[root@hostname conf.d]# telnet puppetdb 8081
Trying XXX.XXX.XXX.XXX...
Connected to puppetdb.
Escape character is '^]'.
conf.d]# netstat -tpane | grep 80
tcp        0      0 127.0.0.1:5432
I deployed a Puppet Enterprise Master and three Agent nodes in AWS. At the
time I did not allocate Elastic IP to the the instances. I got everything
up and running and was enjoying Puppet until I stopped the instances.
Now my console shows the nodes as unresponsive.
Here is my attempt thus far
No issue with connection to puppetdb on port 8081. Overall connectivity
looks good.
[root@hostname conf.d]# telnet puppetdb 8081
Trying XXX.XXX.XXX.XXX...
Connected to puppetdb.
Escape character is '^]'.
I'm not sure this is the correct hostname, so I wouldn't trust these
test results.
conf.d]# netstat -tpane | grep 80
tcp        0      0 127.0.0.1:5432           127.0.0.1:58512
ESTABLISHED 26 126802 19611/postgres
tcp        0      0 :::172.16.43.151:8080    :::*
LISTEN 496 126760 19343/java
tcp        0      0 :::80
+1 here.
We see this kind of problem on Puppet 3.4.3 and Scientific Linux
6.5.
Only the files causing the segfault are different from your case:
/usr/lib/ruby/site_ruby/1.8/puppet/util/autoload.rb:88: [BUG]
Segmentation fault
ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
On Thu, Jun 12, 2014 at 3:00 AM, Pskov Shurik pskovshu...@gmail.com wrote:
Thanks all for responses. Does seem like Puppet isn't really the tool for
the job but can be persuaded to do it. I found Puppet WSUS, which leverages
PoshWSUS to control WSUS... but you still need WSUS. I am not really
Hi,
First I imported my puppet code from the puppet server to an svn repository.
2nd I checked out the code from the svn repository to a remote server, where
I am developing the code. After editing the code I committed it, and did
svn up puppetlabs on the remote server.
3rd After I did this, the code is
build.opensuse.org is the OBS project location where the packages are
managed/built. Successfully built packages are published at
download.opensuse.org, so for SLES use the below repo URL.
http://download.opensuse.org/repositories/systemsmanagement:/puppet/SLE_11_SP3/
--
Later,
Darin
On Thu,
Without the complete error output, it is difficult to diagnose. Can
you post that? Use puppet module install --debug
puppetlabs/openstack. What is the output of puppet config print
modulepath?
Thanks,
Kurt
On Wed, Jun 11, 2014 at 10:35 PM, devaki prabhu
devakiprabhu...@gmail.com wrote:
Hi,
I
On 6/12/14, 12:01 PM, Supriya Uppalapati wrote:
Hi,
First I imported my puppet code from the puppet server to an svn repository.
2nd I checked out the code from the svn repository to a remote server, where
I am developing the code. After editing the code I committed it, and
did svn up puppetlabs in
Verdict: Went fine with the usual upgrade teething troubles. (Once I figured
those out I reverted to my pre-upgrade VM snapshot on the first upgraded host
and there was no hassle the second time.) I went from 3.4.3 to 3.6.2.
The procedure was to upgrade the following rpms on each host (daemon
Can we get a fix for Facter 1 as well since Puppet 2.7 requires Facter 2?
Or correct the Puppet RPM if that works.
Thanks,
Trevor
On Tue, Jun 10, 2014 at 2:20 PM, Sam Kottler s...@kottlerdevelopment.com
wrote:
Announce: Puppet 2.7.26 Available [ Security Release ]
Puppet 2.7.26 is a
Hi list,
I'm working on a little addition to an internal module we use to ensure
our puppet clients have a consistent configuration, to also ensure the
correct version of puppet is installed on the system, and I ran into a bit
of a semantics issue as I thought that ensure => 'version.num' was a
more
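For reference, a version pin in the package resource looks like the sketch below (the version string is only an example); whether an exact pin or latest is "better" is exactly the semantics question in play:

```puppet
# Pins the agent to an exact package version (example value only);
# ensure => latest would instead track the newest available package,
# and ensure => installed accepts whatever version is already there.
package { 'puppet':
  ensure => '3.6.2',
}
```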
Package updates to RHEL systems do not touch config files if they have been
changed, so it's rare that a simple update would cause any configuration to
become invalid (of course, anything is possible). And you tested updates
and their possible config changes before deployment, right?
❧ Brian
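As a quick check after an update: RPM leaves the packaged copy of a locally modified config beside it as *.rpmnew (or preserves your edited copy as *.rpmsave), so listing those files shows what needs a manual merge. A sketch, demonstrated against a scratch directory rather than a real /etc:

```shell
# RPM writes config conflicts as *.rpmnew / *.rpmsave next to the
# original file; finding them shows which configs need manual attention.
dir=$(mktemp -d)                     # scratch dir standing in for /etc
touch "$dir/yum.conf.rpmnew"         # simulate a leftover packaged copy
find "$dir" -name '*.rpmnew' -o -name '*.rpmsave'
```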
Very nice report; we use mcollective, so it will be very useful!
Have you found any performance issues? I'm worried about that because our
puppet master is currently a bit overloaded.
Thanks,
On 12/06/2014 18:38, Christopher Wood christopher_w...@pobox.com
wrote:
Verdict: Went fine
No performance issues, but I'm running one CA and one non-CA puppetmaster
behind a load balancer. My reflex action in case of load is to throw more
servers in and then think about what's wrong.
I did see one of these for each new agent checkin, but things worked fine:
Jun 12 11:45:07
Hi all,
I'm trying to set up something that will have multiple puppet masters
(with one as the CA) and multiple puppet db's (they will be
geographically dispersed).
The multi-masters stuff all works fine, but I'm struggling with multiple
puppet db's.
Ideally I'd like puppet db to live on
Nice report.
Thanks.
I have a dev environment I build with vagrant every day. I even
managed to bootstrap a puppetmaster in there for testing exported
resources.
Highly recommend using Vagrant because it forces you to fix those
dependency errors you would normally not see on live servers that