Hot-deploy can be risky. The biggest issue I've seen is a low PermGen setting. Hot
deploy roughly doubles the PermGen requirement.
On Dec 23, 2011 7:58 PM, Kenneth Lo k...@paydiant.com wrote:
“What are the problems this requirement is intended to solve?”
What I was told from my eng team is that
This smells like you have a second copy of facter or some other facts
somewhere in your RUBYLIB, as the latest version no longer uses
Facter::IPAddress. Are you sure you haven't got an RPM or local copy
installed somewhere else?
Try running facter --trace as well so we can see the output. The
What are you actually trying to do with the YAML file today Marek
whereby the links are causing such problems? This is a semi-loaded
question ... call me curious :-).
On Wed, Feb 15, 2012 at 9:55 PM, Marek Dohojda chro...@gmail.com wrote:
::: sigh :::
Back to the ol' drawing board. LOL.
Well
This ordering behaviour is as you state: the numbers in the namevar
are ultimately for how they get ordered in the file ruleset - but not
what order they are _inserted_. Ideally it would be great to have
insertion order and order in the firewall list be the same - but this
You said:
the numbers in the namevar are ultimately for how they get
ordered in the file ruleset as you state - but not what order
they are _inserted_.
Which makes me still think that the order in which various modules kick in can
affect the firewall rules. Thus, a stage after main is still needed to
Since the whole fwpre class is run before everything else, is it necessary
to define each resource with dependencies, like firewall { '002 testing':
... } -> firewall { ... }, as in your gist?
No, it's not.
Anyway, works great for us now. Thanks much!
Good to hear - I'll get the documentation fixed then.
Hi Amos - it's been a long long long time mate :-).
example for (1): Our vagrant (http://vagrantup.com/) dev base boxes
still come with Puppet 2.6.3 while the manifest depends on Puppet 2.7
features. I can upgrade Puppet manually (and that's what I do
on dev), but when the time to
3. More forethought and discussion on the dev list prior to making a
pull request/patch.
That'd be really great. And I noticed some attempts lately in this
direction, which is really good.
I've been moving more discussions onto puppet-dev in the last few
weeks, as I've been delving more
(I don't have a direct tool - but it's an interesting conceptual problem)
You would ordinarily want an intermediate system for this, either
something custom - or a CI system like Jenkins (and perhaps Travis CI
does this as well) where you would put your 'publish' logic.
The problem of course,
A pm2rpm tool perhaps Todd? :-).
On Mon, Apr 16, 2012 at 7:36 PM, Todd Zullinger t...@pobox.com wrote:
Michael Stahnke wrote:
For the next major Puppet version, code-named Telly, we have some changes
coming. This is the first in a series of emails around these changes and
may require some
I'm going to review this now. It's destined for master, so someone from
the release team can probably comment on the next major release
schedule for stdlib.
On Tue, Apr 17, 2012 at 7:35 PM, Geoff Davis gada...@ucsd.edu wrote:
That's what I'm looking for. I'll fold that branch into my testing
Don't stress, I'm sure it's topical :-).
On May 8, 2012 5:49 AM, Brian Gupta brian.gu...@brandorr.com wrote:
My apologies, this was supposed to go to the puppet-nyc mailing list. :(
-Brian
On Tue, May 8, 2012 at 12:42 AM, Brian Gupta brian.gu...@brandorr.com
wrote:
Ohad did a great job
Hi all,
Just FYI. I've just renamed some of the modules that are stored as
github repos in our Github organisation
(http://github.com/puppetlabs/) to use the puppetlabs-module_name
convention so we can get a bit of consistency. The following changes
were made:
puppet-lvm -> puppetlabs-lvm
Perhaps look at the Puppet Dashboard or Foreman schemas as a starting
point? These are both ENCs that are already working.
ken.
On Wed, May 30, 2012 at 9:13 PM, erkan yanar er...@linsenraum.de wrote:
Moin,
I am thinking of using an RDBMS as a best practice.
I am missing some info/examples
Why don't you try using PuppetDB for stored configs instead? It's
asynchronous, uses ActiveMQ behind the scenes and supports Postgres.
https://github.com/puppetlabs/puppetdb
On Thu, May 31, 2012 at 10:32 AM, Svein sv...@soleim.at wrote:
How can I set up both Storeconfig and mcollective using
Turns out yes, it's the leap second, but boy was the fix I found
easier than that:
http://artipc10.vub.ac.be/wordpress/2012/07/01/leap-second-causing-ksoftirqd-and-java-to-use-lots-of-cpu-time/
$ sudo date -s `date`
Cleared it right up.
Huh. What a weird fix :-).
ken.
I dealt with a case that had a Cray XT4 running Red Hat Linux
specifically. Since the hardware and OS were just like any other
platform we support, it wasn't a problem and we supported it.
If you are using Cray Linux - I believe it's based on the SuSE Linux
platform (and we do support SLES 11sp1/2) - so if you have trouble I'm
sure it wouldn't be hard to adapt, although it isn't a platform we
have specifically targeted in the past. Some Facter patches would
probably be needed to
As far as the rdoc thing, it's fine with me, but it would be nice if there
was a way to scrape params from the classes w/o having to list out each in
the comments, which I think is part of the actual Ruby rdoc functionality.
Yeah - repeating yourself is awful. /me has written quite a few
I am able to install my RPM via this puppet code ...
[root@agent1 ~]# puppet apply -v install_named_conf.pp
info: Loading facts in
/opt/puppet/share/puppet/modules/stdlib/lib/facter/facter_dot_d.rb
info: Loading facts in
/opt/puppet/share/puppet/modules/stdlib/lib/facter/puppet_vardir.rb
+1
On Fri, Jul 27, 2012 at 4:10 PM, Trevor Vaughan tvaug...@onyxpoint.com wrote:
Best.Post.Ever
On Fri, Jul 27, 2012 at 10:58 AM, Christopher Wood
christopher_w...@pobox.com wrote:
On Wed, Jul 25, 2012 at 04:34:34PM -0700, Stuart Cracraft wrote:
Hey, Chris: so that begs the
Hope someone can help (not sure if this is the right place ... as I'm new
to this group)
Yes - this is the correct place.
I have created a new module (see snippet from the tree below) but it's not
used and no error is reported. It looks like it's not recognized by the puppetd
Probably a silly
I don't think Alfresco is shipped as a formal OS package. It's just a
tarball with an installer script, last time I looked. I remember some
third-party efforts back in the 2.x days but nothing formal from the
company itself (at least not when I worked on Alfresco). Live
management only deals with
My immediate gut feeling on this would be that the network, not Puppet,
is the issue, especially if another client is having success at
doing the sync.
It's a virt, so it could be hypervisor drivers or some other issue; it's
an old version of the kernel as well - it's more likely to happen -
I am currently trying to get the puppetdb dashboard and the puppet dashboard
working on the same system. Puppet dashboard is working great but after
successfully installing puppetdb following puppet's opensource
instructions the puppetdb dashboard just doesn't seem to exist (according to
some
Damn, I thought John had it :-(.
Here's a question I hadn't asked - what's your plugin sync performance
like on the puppetmaster node itself? i.e. clear all synced files and
run it locally and time it, comparing to the other nodes.
On Wed, Jan 9, 2013 at 4:02 PM, Kirk Steffensen
Normally /etc/puppetlabs/puppet/manifests would contain the main entry
point 'site.pp'. These days though, with External Node Classification
(i.e. the dashboard ENC), not everyone uses it and it's now become
optional.
If you don't use an ENC, you can create a site.pp for example - and
include a
Ken, thanks. Unfortunately, (from a troubleshooting standpoint), it only
took one or two seconds to sync stdlib on the local box.
rm -rf /var/lib/puppet/lib/*
puppet agent --test
I saw the same stream of File notices, but they streamed by in real time,
instead of taking 10 seconds per
I am trying to add a property to the User type in order to be able to turn
off the screen saver of the managed users. Everything I have found on custom
types has been around creating an entirely new type rather than extending an
existing one -- except for one sentence on the Custom Types page;
Do you get a core dump? Does it seriously just silently 'stop' with no
SEGV or anything - even in the foreground?
On Wed, Jan 9, 2013 at 11:07 PM, Cody Robertson codyhawkh...@gmail.com wrote:
There is nothing in the logs as previously noted. It simply crashed quietly.
This is the same for when
I have no core dumps however I need to make sure I have it set to allow
them.
Yeah, check 'ulimit -a' for the puppetdb user; you might need 'ulimit -c
unlimited' or some such.
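As a quick sketch (service-user specifics like the puppetdb init script or limits.conf entries are assumptions, and raising a hard limit needs root), checking and enabling core dumps for the current shell looks like:

```shell
# Show the current core dump size limit for this shell
before=$(ulimit -c)
echo "core limit before: $before"

# Raise the soft limit so a crashing process can leave a core file.
# This only sticks for the current shell; for a service user it would
# normally go in the init script or /etc/security/limits.conf.
ulimit -c unlimited 2>/dev/null || echo "could not raise limit (needs root?)"

after=$(ulimit -c)
echo "core limit after: $after"
```

After that, a crash should leave a core file (location depends on the kernel's core_pattern setting) that you can inspect with gdb.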
It literally just goes kaput - very strange. I've not had time to
strace it today, however I did it briefly and it was
UTC-5, Ken Barber wrote:
Ken, thanks. Unfortunately, (from a troubleshooting standpoint), it
only
took one or two seconds to sync stdlib on the local box.
rm -rf /var/lib/puppet/lib/*
puppet agent --test
I saw the same stream of File notices, but they streamed by in real
You probably want an audit field in your file resource, try this
pattern on for size:
# cat /tmp/zzz.pp
file { '/tmp/foo':
  ensure => file,
  notify => Exec['foo'],
  audit  => 'content',
}
exec { 'foo':
  command     => '/usr/bin/true',
  refreshonly => true,
}
# echo foobar > /tmp/foo
# ./bin/puppet apply
Sorry, forgot the doc link:
http://docs.puppetlabs.com/references/latest/metaparameter.html#audit
On Mon, Jan 14, 2013 at 6:24 PM, Ken Barber k...@puppetlabs.com wrote:
You probably want an audit field in your file resource, try this
pattern on for size:
# cat /tmp/zzz.pp
file { '/tmp/foo'
I believe you'll get an assertion failure by setting it to zero.
In a way you probably don't want it to be pinned to 1 node, because if
it is removed from the cluster or goes down you'll lose the garbage
collection. However we don't have a nice way today of splaying the
interval or anything
Well I currently have 7,000 nodes in my PuppetDB instance and 8 puppetdb
servers in various geographic locations. The garbage collect seems to be a
pretty intensive operation on the Postgres DB server in the current design.
That's a shame to hear. What's the impact for you? I see from your
this risk ... we only need a heavy GC run, it probably
doesn't need to be actively receiving requests and such ...
ken.
On Tue, Jan 15, 2013 at 4:12 PM, Chuck cssc...@gmail.com wrote:
On Tuesday, January 15, 2013 9:55:48 AM UTC-6, Ken Barber wrote:
Well I currently have 7,000 nodes in my
On Tuesday, January 15, 2013 1:27:58 PM UTC-6, Ken Barber wrote:
Hey Chuck,
I've had a chat with my colleagues and they raised some concerns about
your gc performance. I wouldn't mind drilling into that problem if
that's okay with you?
For starters - what are your settings looking like? In particular
, January 9, 2013 6:43:05 PM UTC-5, Ken Barber wrote:
Do you get a core dump? Does it seriously just silently 'stop' with no
SEGV or anything - even in the foreground?
On Wed, Jan 9, 2013 at 11:07 PM, Cody Robertson codyha...@gmail.com
wrote:
There is nothing in the logs as previously noted
(0,102) of
relation 16811 of database 16513 after 1000.283 ms
On Tuesday, January 15, 2013 4:07:11 PM UTC-6, Chuck wrote:
On Tuesday, January 15, 2013 3:19:29 PM UTC-6, Ken Barber wrote:
So that looks like it's taking ages ... some questions for you:
* What version of PostgreSQL are you
Hi Chris,
I regenerated the puppetdb certs according to the instructions here:
Step 3, Option B
https://docs.puppetlabs.com/puppetdb/0.9/install_from_source.html#step-3-option-b-manually-create-a-keystore-and-truststore
And can verify the cert manually using openssl client
#echo QUIT |
Yes, if you want to use exported resources this is the currently endorsed path.
On Sat, Jan 19, 2013 at 3:01 AM, Jakov Sosic jso...@srce.hr wrote:
On 01/19/2013 03:57 AM, Jakov Sosic wrote:
I've tried connecting through localhost and everything works fine...
This is my node manifest:
I've seen a couple of instances where a service resource has failed with an
error because it's
been evaluated before its corresponding package is installed. I can fix this
by adding an explicit
require to the service resource, or by just running puppet again, but I
thought that there would
I would like to ask what to pay attention to, when someone wants to set up
multiple PuppetDB instances (and point them to the same DB), and put them
behind a proxy/load balancer, as mentioned in the documentation.
Can this cause some concurrency issues with the message queues?
What do you
This sounds like a sensible workaround, I will definitely have a look. I
haven't yet had enough time to look at the issue properly, but it seems that
this very long time is indeed consumed by catalog construction. Puppetdb
fails after this is finished, so it seems that it dies when nagios host
at 3:19 PM, Daniel Siechniewicz
dan...@nulldowntime.com wrote:
On Tue, Jan 22, 2013 at 3:04 PM, Ken Barber k...@puppetlabs.com wrote:
This sounds like a sensible workaround, I will definitely have a look. I
haven't yet had enough time to look at the issue properly, but it seems that
this very
Does this happen across all nodes? This is an indication you might
have a resource that affects a large set of nodes that suddenly
changes every 4 days.
In the catalogs table, the 'hash' is just a hash of the catalogue
data, if anything in the catalogue changes - it changes. And new
entries are
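Conceptually (PuppetDB's real hashing differs in detail; this is just to illustrate the idea), the dedup works like digesting a canonical form of the catalogue, so any changed resource produces a new hash while an unchanged catalogue maps to the same row:

```shell
# Two "catalogues" differing in one resource title: different digests.
a=$(printf '%s' '{"resources":[{"type":"File","title":"/tmp/foo"}]}' | sha1sum | cut -d' ' -f1)
b=$(printf '%s' '{"resources":[{"type":"File","title":"/tmp/bar"}]}' | sha1sum | cut -d' ' -f1)

# Hashing the identical input again yields the identical digest, so an
# unchanged catalogue does not create a new entry.
c=$(printf '%s' '{"resources":[{"type":"File","title":"/tmp/foo"}]}' | sha1sum | cut -d' ' -f1)

echo "a=$a"
echo "b=$b"
```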
I assume that each PuppetDB instance maintains its own message queue, and
commands sent by the master wait in these queues.
Yes, today this is true.
In that case, is the following scenario possible:
- the master sends facts for a node to PuppetDB through the load balancer
- the load
more
nodes at a time than we have in the past.
On Tuesday, January 22, 2013 1:27:15 PM UTC-6, Ken Barber wrote:
Does this happen across all nodes? This is an indication you might
have a resource that affects a large set of nodes that suddenly
changes every 4 days.
In the catalogs table
We didn't notice anything on Sunday. We have a decent number of resources
that affect all nodes. This may explain the occasional performance issues.
Sure, more specifically you'll get catalog replaces in the database if
you have resources that are always 'changing'. This might be a dynamic
I think John is on to something here, Daniel - I haven't seen your
full content yet, but are you creating a file for each nagios_host, so
that you can use that as the 'target'? Thus creating a single file for
each nagios_host entry?
If this is the case, then John is spot-on ... and you're creating
to the following people who contributed patches to this release:
Chris Price
Deepak Giridharagopal
Jeff Blaine
Ken Barber
Kushal Pisavadia
Matthaus Litteken
Michael Stahnke
Moses Mendoza
Nick Lewis
Pierre-Yves Ritschard
Notable features:
Enhanced query API
A substantially improved
Here is the documentation for wiring up the puppetmaster to puppetdb:
http://docs.puppetlabs.com/puppetdb/1.1/connect_puppet_master.html
Make sure you follow all of these steps and see how you go. Also
remember to restart your puppetmaster :-). It sounds like at least
your routes.yaml isn't
tmpwatch is also a good approach:
http://linux.about.com/library/cmd/blcmdl8_tmpwatch.htm. Probably
requires less scripting, and it's already on most distros, probably
cleaning your /tmp directories already.
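If tmpwatch isn't available, plain find(1) does much the same job; here's a self-contained demo against a throwaway directory (the paths and the 10-day threshold are just for illustration):

```shell
# Make a demo directory with one fresh and one stale file
demo=$(mktemp -d)
touch "$demo/fresh.txt"
touch -d '15 days ago' "$demo/stale.txt"   # GNU touch relative date

# Delete regular files not modified in the last 10 days - roughly
# what 'tmpwatch 240 <dir>' would do (240 hours = 10 days)
find "$demo" -type f -mtime +10 -delete

ls "$demo"
# rm -rf "$demo" when done
```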
On Sat, Feb 2, 2013 at 6:07 PM, Dan White y...@comcast.net wrote:
I take back my
I followed the howto, but exported resources just don't get applied on
agents...
The documentation is fairly precise, looking at your configuration snippet:
# cat /etc/puppet/puppet.conf
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
if you follow it step by step:
http://docs.puppetlabs.com/puppetdb/1.1/connect_puppet_master.html
ken.
On Sun, Feb 3, 2013 at 10:11 PM, Ken Barber k...@puppetlabs.com wrote:
I followed the howto, but exported resources just don't get applied on
agents...
The documentation is fairly precise
Also your routes.yaml doesn't match the documentation I gave you
either, nor should you be setting thin_storeconfigs any more as stated
in another section.
Yeah, but if I remove the section for catalog, I don't get replacing
catalogs in puppetdb log... But since my initial problem was wrong
Perhaps you want to use 'puppet node deactivate' for now:
http://docs.puppetlabs.com/puppetdb/1.1/maintain_and_tune.html#deactivate-decommissioned-nodes
This will at least remove it from being collected immediately, but it
leaves it in the database.
So beyond that we are working on an automated
I would recommend moving it over now. There are magic numbers I could
make up about when to switch etc., but if you want to get it right
from the beginning switch now.
The problem is HSQLDB stores everything in RAM in the JVM, and as
that gets bigger your JVM garbage collection consumes more.
On Wed, Feb 6, 2013 at 11:22 PM, Jakov Sosic jso...@srce.hr wrote:
On 02/06/2013 09:48 PM, Ken Barber wrote:
I would recommend moving it over now. There are magic numbers I could
make up about when to switch etc., but if you want to get it right
from the beginning switch now.
The problem
That has been changed now on docs.puppetlabs.com Jakov, again ... thanks.
On Wed, Feb 6, 2013 at 8:56 PM, Ken Barber k...@puppetlabs.com wrote:
Also, please note that puppetmasterd section is still in the official
documentation, like for example here:
http://docs.puppetlabs.com/guides
Hi Ronald,
I presume you're talking about switching to PuppetDB from using the
ActiveRecord sqlite3 setup?
One methodology is to:
* Do a quick change freeze, i.e. don't allow puppet to run properly on
any host while you do the rest.
* Simply make the switch to the PostgreSQL PuppetDB configuration in
I've alerted operations. Thanks guys.
On Mon, Feb 11, 2013 at 1:38 PM, Gregory B.
gregorybec...@notonthehighstreet.com wrote:
+1 the repository is down for me too. Is there any known mirror?
--
You received this message because you are subscribed to the Google Groups
Puppet Users group.
To
Hi Heena,
Is your puppetdb system separate from your puppetmaster? If so, you
probably need to make sure that PuppetDB is listening on the correct
interface. By default we only listen on 127.0.0.1.
Take a look at:
/etc/puppetdb/conf.d/jetty.ini
And set the 'ssl-host' entry to your public
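For reference, a minimal sketch of that file (the address is a placeholder; the key names are the jetty.ini host/port settings):

```ini
# /etc/puppetdb/conf.d/jetty.ini (excerpt)
[jetty]
# cleartext listener, kept on loopback
host = 127.0.0.1
port = 8080

# SSL listener on the interface the puppetmaster can actually reach
ssl-host = 192.0.2.10
ssl-port = 8081
```

Restart the puppetdb service after changing it so the new listener takes effect.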
12, 2013 at 10:55 AM, Heena rush2h...@gmail.com wrote:
Hi,
Thanks for the reply.
I am using same system for puppetmaster and puppetdb.
On Tuesday, February 12, 2013 4:02:18 PM UTC+5:30, Ken Barber wrote:
Hi Heena,
Is your puppetdb system separate from your puppetmaster? If so, you
Hi all,
I've been looking at a potential problem, as documented here:
http://projects.puppetlabs.com/issues/19241
To do with a leak within the KahaDB persistence layer of ActiveMQ.
Specifically, there are reports of the db.data file growing unbounded:
My biggest concern is that nodes can access other nodes' resources stored in
PuppetDB, which effectively means that parameters like passwords and other
sensitive information are exposed.
If the data is not exported this shouldn't be the case ordinarily.
Obviously though if your content is
My biggest concern is that nodes can access other nodes' resources stored in
PuppetDB, which effectively means that parameters like passwords and other
sensitive information are exposed.
If the data is not exported this shouldn't be the case ordinarily.
It actually is the case. For
of the module.
2013-01-31 - jv j...@jeffvier.com
* Fix typo in README.pp for postgresql::db example
2013-02-03 - Ken Barber k...@bob.sh
* Add unit tests and travis-ci support
2013-02-02 - Ken Barber k...@bob.sh
* Add locale parameter support to the 'postgresql' class
2013-01-21 - Michael Arnold git
So for anyone running RHEL or CentOS 5, we've found a bug - but
already have a fix for you all in master:
https://github.com/puppetlabs/puppet-postgresql/issues/130
We'll do a follow up minor release soon to cover this. Thanks!
ken.
On Wed, Feb 20, 2013 at 6:02 PM, Ken Barber k
the
`include` directive in `postgresql.conf` was not compatible. As a
work-around we have added checks in our code to make sure systems
running PostgreSQL 8.1 or older do not have this directive added.
Detailed Changes
2013-01-21 - Ken Barber k...@bob.sh
* Only install `include` directive
I do this kind of thing here:
https://github.com/puppetlabs/puppetlabs-kwalify/blob/master/lib/puppet/parser/functions/validate_resource.rb#L24
ken.
On Fri, Feb 22, 2013 at 6:05 PM, Matt W m...@nextdoor.com wrote:
I'm trying to create a function that I can call in a manifest like this:
If you clear the queue and roll back to the original version, does the
problem disappear? If you're having processing problems at the latest
version that's what I would do, as I presume we're talking production
here right?
Can this be somehow related to the KahaDB leak thread?
No - it doesn't
Okay. Did you clear the ActiveMQ queues after doing this? I usually
just move the old KahaDB directory out of the way when I do this.
I hadn't thought about it myself, but it makes sense, so I just flushed the
queue again while the puppetdb service was stopped. Since this last restart it
seems
] [replace catalog] puppetdb2.vm
If at all possible - I wouldn't mind a full copy of your puppetdb.log
... to dig a bit deeper. And I know I told you to clear the KahaDB
queue (I always make this mistake) but I don't suppose you kept an old
copy of it?
ken.
On Thu, Feb 28, 2013 at 3:55 PM, Ken Barber k
is would be handy.
I can organise a secure space on a Puppetlabs support storage area to
upload this data if you are willing. Just contact me privately to
organise it.
ken.
On Fri, Mar 1, 2013 at 2:25 PM, Ken Barber k...@puppetlabs.com wrote:
So I've been pondering this issue of yours, and I keep
Any progress today?
On Fri, Mar 1, 2013 at 9:00 AM, ak0ska akos.he...@gmail.com wrote:
Yes, maybe not. The next step will be to recreate it from scratch.
On Friday, March 1, 2013 5:47:06 PM UTC+1, Ken Barber wrote:
Well, I don't think a vacuum will help you - I imagine something is
wrong
It sounds like the dashboard Javascript can't access the HTTP
end-points which is strange. The way it works is that it hits a series
of REST end-points on the web server.
As the dashboard is updated using background Javascript, it can still
keep trying to access backend data even though the web
Vacuum full was running for the whole weekend, so we didn't yet have time to
rebuild indexes, because that would require more downtime, and we're not
sure how long it would take. The size of the database didn't drop that much;
it's now ~370 GB.
Wow. That's still way too large for the amount of
Indexes seem bloated.
Totally agree, you should organise re-indexes starting from the biggest.
               relation                |  size
---------------------------------------+--------
 public.idx_catalog_resources_tags_gin | 117 GB
 public.idx_catalog_resources_tags     | 96
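For example, from psql against the puppetdb database (index names are taken from the size listing above; note REINDEX locks writes to the table while it runs, so schedule a quiet window):

```sql
-- Rebuild the biggest offenders first
REINDEX INDEX idx_catalog_resources_tags_gin;
REINDEX INDEX idx_catalog_resources_tags;
```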
After dropping the obsolete index, and rebuilding the others, the database
is now ~ 30 GB. We still get the constraint violation errors when garbage
collection starts.
Okay - can you please send me the puppetdb.log entry that shows the
exception? Including surrounding messages?
Also the
A new release of the puppetlabs/puppetdb module is now available on the Forge:
http://forge.puppetlabs.com/puppetlabs/puppetdb/1.1.5
This is a minor bug-fix release.
Changelog
2013-02-13 - Karel Brezina
* Fix database creation so database_username, database_password and
database_name are
I think most people are implementing either an Apache or NGinx proxy
in front of PuppetDB for this purpose.
For Apache, it should be pretty easy to do with proxy-based RewriteRules,
and within the same virtualhost definition you should be
able to enforce authentication. For example:
ProxyPass / http://localhost:8080/
<Location />
  AuthType basic
  AuthName "Restricted Files"
  AuthBasicProvider file
  AuthUserFile /etc/apache2/passw
  Require valid-user
</Location>
</VirtualHost>
On Tuesday, March 12, 2013 10:40:01 AM UTC-7, Ken Barber wrote:
I think most
you ask for. However, I feel like I should ask
whether you think this problem is worth your efforts, if rebuilding the
database might solve the issue?
Cheers,
ak0ska
On Thursday, March 14, 2013 8:05:59 AM UTC+1, Ken Barber wrote:
Hi ak0ska,
So I've been spending the last 2 days trying
Hey all,
I'm hoping I can get some information from other users on the list in
relation to ak0ska's problem listed below. I thought I would start a
new thread so more users would see this message and not lose it in
the original thread, which is already pretty long:
way with PuppetDB
directly, so the statement log will still be helpful if you can supply
it.
ken.
On Thu, Mar 14, 2013 at 2:12 PM, Ken Barber k...@puppetlabs.com wrote:
So I have this sinking feeling that all of your problems (including
the constraint side-effect) are related to general
Hi ak0ska,
How are things going? Anything to report?
ken.
On Fri, Mar 15, 2013 at 5:00 AM, Ken Barber k...@puppetlabs.com wrote:
Hi ak0ska,
FWIW - with the help of some of my colleagues we've managed to
replicate your constraint issue in a lab style environment now:
https
Russel: Can you confirm the same error message that Hugh is receiving
in your own puppetdb.log?
Hugh: I'd suggest raising a bug with all the details:
http://projects.puppetlabs.com/projects/puppetdb/issues/new ...
Russell, if the problem looks the same I'd confirm it in the same
ticket so we can
. As Ken
mentioned, it would be most helpful if we could get the Ruby/OpenSSL/JDK
versions from your masters and puppetdb servers. Thanks!
On Sat, Mar 23, 2013 at 2:04 AM, Ken Barber k...@puppetlabs.com wrote:
Russel: Can you confirm the same error message that Hugh is receiving
in your own
Thanks Hugh, can you confirm if switching to openjdk-6 fixes it?
On Mon, Mar 25, 2013 at 1:35 PM, Hugh Cole-Baker h...@fanduel.com wrote:
I've filed a bug report http://projects.puppetlabs.com/issues/19884 with
some info on the OpenJDK / Ruby / OpenSSL versions we're using.
now been re-enabled with a fix for the regression. Can you try
upgrading to 1.0.1-4ubuntu5.8 (combined with openjdk-7) to see if this
helps?
ken.
On Mon, Mar 25, 2013 at 1:59 PM, Ken Barber k...@puppetlabs.com wrote:
Thanks Hugh, can you confirm if switching to openjdk-6 fixes it?
On Mon, Mar 25
Try:
curl -vv -G -H 'Accept: application/json' \
  'http://localhost:8080/v2/commands' --data-urlencode \
  'payload={"command":"deactivate node","version":1,"payload":"\"yournodename\""}'
The command needs to be submitted with the form parameter 'payload'.
The 'payload' part of the command is itself a JSON
Here is a better working example as a gist, with what you should see
in the puppetdb.log if it was successful:
https://gist.github.com/kbarber/5254512
On Wed, Mar 27, 2013 at 2:19 PM, Ken Barber k...@puppetlabs.com wrote:
Try:
curl -vv -G -H 'Accept: application/json'
'http://localhost:8080/v2
Puppet (err): Could not retrieve catalog from remote server: execution
expired
Puppet (notice): Using cached catalog
/File[/etc/security/http/key.pem] (err): Could not evaluate: SSL_connect
SYSCALL returned=5 errno=0 state=SSLv2/v3 read server hello A Could not
retrieve file metadata for
So I have some questions, as the error could mean a number of things:
What version of PuppetDB are you running? And what exact version of
Java is it using?
Can you take a look at puppetdb.log and tell me if you see any
meaningful error messages?
Without trying to compile a catalog in this
, is there a ~/.puppet directory for that user at all?
ken.
On Thu, Mar 28, 2013 at 1:17 PM, Mohit Chawla
mohit.chawla.bin...@gmail.com wrote:
Hello Ken,
Thanks for the response.
On Thu, Mar 28, 2013 at 6:42 PM, Ken Barber k...@puppetlabs.com wrote:
So I have some questions, as the error could mean
at these masters). And afaik
right now, there wasn't any ~/.puppet dir for root, however I need to
confirm this.
On Thu, Mar 28, 2013 at 7:07 PM, Ken Barber k...@puppetlabs.com wrote:
I'm just trying to run up the same environment so I can try to
replicate it, as yet I can't replicate it on the newer
/alcy/5283712.
On Fri, Mar 29, 2013 at 3:14 AM, Ken Barber k...@puppetlabs.com wrote:
Yeah, it does seem very odd though ... if the agent works - and the master
is able to talk to PuppetDB no problem, then it's weird that running
puppet master on the command line doesn't seem to work.
What
This is me installing a puppetmaster for Ubuntu with the bog-standard
apt repos the other day - took less than 5 minutes:
https://gist.github.com/kbarber/5209267
As you can see it was pretty straight-forward (apt-get install
puppetmaster-passenger, more or less) - note this was a clean Ubuntu