The appropriate attribute in this case should be 'content' instead of
'source' ... so instead of this:
file { '/etc/openldap/ldap.conf':
  ensure  => file,
  source  => template('ldap/ldap.conf.erb'),
  require => Class["ldap::install"],
}
Use this:
file { '/etc/openldap/ldap.conf':
  ensure  => file,
  content => template('ldap/ldap.conf.erb'),
  require => Class["ldap::install"],
}
So at a high level
The resources are populated from successful Puppet runs that submit
their catalogs to PuppetDB. So, step 1: run puppet agent -t (or
whatever) and check that you are seeing something like this in your
puppetdb.log:
2016-05-18 14:12:28,855 INFO [command-proc-3055] [p.p.command]
>> Model wise, in an ideal world, the proxied/virtual address would be a
>> 'node' of sorts, and have that entry, but if no box exists to compile
>> that catalog, well then we're just talking crazy :-).
>>
>
>
> Well no, if the proxied / virtual address is not a property specific to any
>
Exported resources won't handle any resource de-duplication. You can
get around this by simply not using that to collect data, dalen's
puppetdb-query will help with this, and in PDB 4.0 we're introducing a
function for this purpose into core as well. Once you have the data, then
you can do anything
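For context, a collection function of that sort boils down to an HTTP query against the resources endpoint. A minimal sketch of building such a query, assuming a PuppetDB reachable on localhost:8080 and the v4 query API (both are assumptions, not details from the thread):

```python
import json
from urllib.parse import urlencode

def resources_query_url(host, port, resource_type):
    """Build a PuppetDB v4 resources-endpoint URL that fetches
    exported resources of one type, bypassing <<| |>> collection."""
    query = ["and",
             ["=", "type", resource_type],
             ["=", "exported", True]]
    return "http://%s:%d/pdb/query/v4/resources?%s" % (
        host, port, urlencode({"query": json.dumps(query)}))

url = resources_query_url("localhost", 8080, "Nagios_host")
```

Once the rows are back as JSON, de-duplication or any other reshaping can happen in ordinary code before the data is fed back into the catalog.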
> Repo-pkgs: 49
>
> And indeed :
> # find ...mirrors/puppetlabs/6/PC1/x86_64 -name '*.rpm' | wc -l
> 51
>
> 2 rpms are missing, did someone forget a createrepo? Running it on my local
> mirror solved the problem.
>
> No one installed puppetdb 3.2.3, I don't know wher
Migration number 40 is for version 3.2.3, not version 3.2.2. Looks
like someone has previously installed version 3.2.3 and pointed it at
your database.
See:
https://github.com/puppetlabs/puppetdb/blob/3.2.3/src/puppetlabs/puppetdb/scf/migrate.clj#L1553
versus
Perhaps you might want to modify the tagmail reporter to handle this
yourself. A report handler gets passed the environment, so it
should be possible to massage it into shape. FWIW, tagmail has been
removed from the latest source of Puppet; a full explanation is here:
information on the specifics of the
release can be found in the official release notes:
https://docs.puppetlabs.com/puppetdb/3.2/release_notes.html
Contributors
---
Andrew Roetker, Ken Barber, Nick Fagerlund, Rob Browning, Russell Mull,
Ryan Senior, Tim Skirvin, Wayne Warren and Wyatt Alt
> I have a running puppet installation (version 3.8.3)
> I installed and configured a puppetdb node (2.3.8 with postgresql).
> Configured puppet master to use the new puppetdb node.
>
> When I run puppet agent from any of the nodes I get a 'Invalid
> relationship Class doesn't seem to be in
On Fri, Oct 9, 2015 at 4:35 AM, Dan wrote:
> Hi Wyatt,
>
> Thanks for the pointer! I found the full stack trace which gives a better
> error:
>
> I just need to workout how to configure the SSL configuration now.
Try `puppetdb ssl-setup` on the command line. It requires that
Can you help with this query? I am trying to get 2 facts from all of our
puppet clients in PuppetDB.
I tried variations of the following, but no luck: '["or", ["=", "name",
"kernelversion"], ["=", "name", "instance_uuid"]]'
For me this query works. Here is the full curl example in the latest
PDB (I
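The AST form needs its strings quoted as JSON; a sketch of constructing that two-fact query and the equivalent curl call (host and port are assumptions):

```python
import json

# AST query terms must be real JSON strings (quoted)
query = ["or",
         ["=", "name", "kernelversion"],
         ["=", "name", "instance_uuid"]]

# equivalent curl invocation; host and port are assumptions
curl = ("curl -G http://localhost:8080/v3/facts "
        "--data-urlencode 'query=%s'" % json.dumps(query))
```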
I may try to query the PuppetDB from a parser function to get the list
of paths on the client; I am reading the docs at the moment.
Here is what I came up with, and it works for me. It assumes the
PuppetDB is on localhost:8080 as seen from the Puppet master, though; I
don't know if it would
I'm attempting a migration from a PuppetDB 2.x and rack Puppet 3.8.1
install over to the all new 'pc1' puppetserver puppet-agent PuppetDB v3
stack[0].
On the two nodes I've tried so far (the master itself and a test node)
I'm getting the following error:
Error 400 on SERVER: Attempt to
I can run 'puppet resource package' on a node to get a list of installed
packages and version numbers.
are those version numbers available through a PuppetDB(2.3) API query?
We don't currently store the version of pre-existing/unmanaged
resources. If you define a version explicitly,
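To illustrate the "define a version explicitly" case: when a manifest pins ensure to a version, that value shows up in the catalog's Package resources and is queryable via the resources endpoint. A sketch over an illustrative (not real) response shape:

```python
import json

# Illustrative shape of /v3/resources rows for type Package; only
# packages whose ensure was pinned to a version carry one.
rows = json.loads("""[
  {"certname": "web1", "type": "Package", "title": "openssl",
   "parameters": {"ensure": "1.0.1e-30"}},
  {"certname": "web1", "type": "Package", "title": "ntp",
   "parameters": {"ensure": "installed"}}
]""")

def managed_versions(rows):
    # skip symbolic states; only explicit versions are meaningful
    symbolic = {"installed", "present", "latest", "absent"}
    return {r["title"]: r["parameters"]["ensure"]
            for r in rows
            if r["parameters"].get("ensure") not in symbolic}

versions = managed_versions(rows)
```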
Hello list.
I'm trying to use PuppetDB to link a webserver cluster to an EJB cluster.
Each webserver host need to reference a comma separated list of IPs provided
by each EJB host
Exporting and collecting resources works pretty nice but I think this isn't
the better approach. I just need a
I have the need to run puppet in a 'stripped down' state, essentially
turning off a large portion of an orchestration module. The problem occurs
when puppetdb receives a catalog that does not have any of the exported
resources that the orchestration module would generate. This obviously
I am trying to use PuppetDB with a Puppet 4 server that I am testing. I have
set it up as per the official docs but now I get this error when trying to
do a Puppet run (it worked before adding PuppetDB):
# puppet agent -t --noop
Warning: Unable to fetch my node definition, but the agent run
So Rob I've managed to do a successful install on a clean Ubuntu 14.04
box, you can see the full transcript from here:
https://gist.github.com/kbarber/837ff7e55e8940a7d1c8
What variations from that installation process do you think there are?
In regards to your other points yesterday:
I
Even using the “embedded” database is apparently useless, as puppet is still
not able to connect to puppetdb.
In addition, puppetdb is very obviously not creating its firewall rules
even though I haven’t disabled that feature.
This is interesting/surprising, but it sounds like the main
I figured out the issue with the embedded database. For some reason letting
the system default to ::cert for the “ssl_listen_address” had it binding to
localhost instead of the actual interface it should have. Specifying
“0.0.0.0” for that option made puppetdb work just fine when using the
Documentation says
configtimeout
How long the client should wait for the configuration to be retrieved before
considering it a failure. This setting is deprecated and has been replaced
by http_connect_timeout and http_read_timeout. This setting can be a time
interval in seconds (30 or
When I make event request to PuppetDB, it seems that PuppetDB only keep 2
weeks worth of events. How can I change how long this 2 weeks to longer
period? Will there be any consequences other than disk space if I do that?
Change this setting and restart:
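The setting in question is presumably report-ttl in PuppetDB's [database] configuration section, which defaults to 14 days; a sketch of a longer retention (the path and the 180-day value are for illustration):

```ini
# /etc/puppetdb/conf.d/database.ini
[database]
# keep reports and their events for ~6 months instead of the 14-day default
report-ttl = 180d
```

Beyond disk space, expect larger tables and somewhat slower report/event queries and garbage-collection runs.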
So will bugfixes then land against the stable branch and get new bugfix
releases there?
Absolutely, the 4.x module branch (now stable) is still alive and
well for now and will continue to take patches and releases from that
branch. It doesn't even preclude minor feature releases (non breaking
Here is my setup:
puppetdb-terminus-2.2.2-1.el6.noarch
puppetdb-2.2.2-1.el6.noarch
postgresql93-libs-9.3.6-1PGDG.rhel6.x86_64
postgresql93-server-9.3.6-1PGDG.rhel6.x86_64
postgresql93-contrib-9.3.6-1PGDG.rhel6.x86_64
postgresql93-9.3.6-1PGDG.rhel6.x86_64
Centos 6.6
I try to upgrade to the
we run puppet 3.6.2 on SLES 11 SP3 and downloaded puppetdb 2.2.0 from
http://download.opensuse.org/repositories/systemsmanagement:/puppet:/devel/SLE_11_SP3/x86_64/puppetdb-2.2.0-14.34.x86_64.rpm
.
Trying to start puppetdb produces nothing more than this message: Error:
Could not find or load
I know this has been discussed several times, but I did not find the
information/fix I’m looking for, so here I go…
I have a farm of ~370 servers.
I have a single puppet master (12 physical cores, lots of RAM) on which I
deployed puppet 3.7.5 this week (not using r10k, that’s on our huge
, the puppet-users mailing list and the
#puppet IRC channel are watched by a number of us in the PuppetDB team
(not to mention, by other avid community users who are also helpful),
so we can help where necessary with any problems.
Regards
Ken Barber
PuppetDB Team
Puppet Labs Inc.
--
You received
I have a 270MB puppetdb-oom.hprof.prev file in /var/log/puppetdb
This isn't unexpected behaviour per se, although it appears as such if
you haven't dealt much with Java applications. Memory usage is a hard
thing to predict, and if the heap is too low, yes, the JVM will crash and
drop that hprof file.
16850 puppetdb 20 0 12.697g 418684 14848 S 0.9 0.4 4:32.74 java
That's top now since it began running around 10.30 this morning (GMT). 12G
of ram? It's the only proc in the list having a 'g' against it. Seems
excessive..?
So, there is a difference in the columns here ... the column
In short, yes - yes 2.2.2 does support 9.2, although in the next major
release (3.x) we will be dropping that support. We are generally
telling people to utilise the PGDG set of packages to obtain the
latest PostgreSQL version:
http://yum.postgresql.org/repopackages.php
It might be that PuppetDB is running out of heap? Check
/var/log/puppetdb for a file 'puppetdb-oom.hprof' for an indication that
this is happening.
You can find instructions for how to adjust your heap space here:
https://docs.puppetlabs.com/puppetdb/2.2/configure.html#configuring-the-java-heap-size
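On package-based installs the heap is typically raised via JAVA_ARGS in the service defaults file; a sketch (the 2g figure is an arbitrary example, size it to your node count):

```
# /etc/sysconfig/puppetdb  (Debian/Ubuntu: /etc/default/puppetdb)
JAVA_ARGS="-Xmx2g"
```

Restart the puppetdb service after changing it.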
I am using puppetdb and exported resources to manage automatic nagios setup.
It works very well. Now I want to set up another nagios server for another
set of machines using the same puppetdb and puppet master.
As far as I understand, a client exports @@nagios_host and the nagios server
collects it by
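The usual way to split collection between two Nagios servers is to tag the exports and constrain each collector; a sketch in Puppet 3-era syntax, with made-up tag names:

```puppet
# on each monitored client in the first set (tag value is made up)
@@nagios_host { $::fqdn:
  address => $::ipaddress,
  use     => 'generic-host',
  tag     => 'nagios-set1',
}

# on the first nagios server: collect only that set
Nagios_host <<| tag == 'nagios-set1' |>> {
  target => '/etc/nagios/conf.d/hosts.cfg',
}
```

The second server collects on its own tag, so both can share one puppetdb without seeing each other's hosts.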
I have installed Puppet on a Raspberry pi (running Raspbian) and it seems to
work (sort of). I managed to add it to a Puppetmaster and sign its
certificate but a puppet run fails:
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve local
in the meantime I've added RAM and extended the heap to 2GB. But still I'm
getting crashes of PuppetDB.
Last time it was the kernel OOM that killed the java process as I saw in
/var/log/messages
kernel: Out of memory: Kill process 10146 (java) score 158 or sacrifice
child
kernel: Killed
I recently upgraded from puppetdb 1.6 to 2.2 but now it seems like I'm
having issues with the puppetdb report processor.
I keep getting the errors below:
puppet-master[23130]: Report processor failed: Environment is nil, unable
to submit report. This may be due a bug with Puppet. Ensure you
We have entirely-gem based Puppet masters (no Ubuntu packages installing
Puppet)... we're trying to add in the puppetdb-terminus gemfile. We have it
configured, and installed:
# gem list | grep -i puppet
hiera-puppet (1.0.0)
puppet (3.7.3)
puppet-catalog-test (0.3.1)
puppet-lint (1.0.1)
Nearly everything is in the title.
When I manually run a puppet agent --test on a host (let's say host1) that
exports @@something, I can get something on another host (let's say host2)
and I'm happy.
Later, when the daemon on host1 runs, it still exports @@something but
custom facts are
Would like to use PuppetDB to find out more about the estate inventory.
Specifically, at the moment I am trying to find out the number of servers
per productname, and the physicalprocessorcount for each - both with
totals.
Is this doable from the API at all, or easier from inside PSQL?
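Those totals can be computed client-side from a facts-endpoint response; a sketch over an illustrative (not real) set of rows:

```python
import json
from collections import Counter

# Illustrative rows in the shape of a /v3/facts productname query
rows = json.loads("""[
  {"certname": "a", "name": "productname", "value": "PowerEdge R620"},
  {"certname": "b", "name": "productname", "value": "PowerEdge R620"},
  {"certname": "c", "name": "productname", "value": "ProLiant DL380"}
]""")

# servers per product, plus a grand total
totals = Counter(r["value"] for r in rows)
grand_total = sum(totals.values())
```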
I recently changed over to a postgres back end. Now puppetdb.log seems to be
awash in these errors.
I'm pretty sure everything is up to date:
# rpm -qa | grep puppet
puppetlabs-release-6-11.noarch
puppet-server-3.7.2-1.el6.noarch
puppetdb-2.2.2-1.el6.noarch
puppet-3.7.2-1.el6.noarch
We are running PuppetDB 1.6.0. We have fact 'a' in puppetdb that has large
numbers occasionally, such as 2930266584. When we launch a query /v3/facts/a
with filter '[>, value, 1950341121]' it returns code 200 with an empty
body, while puppetdb.log shows a new error:
2014-10-15 11:20:41,293
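The comparison operator in the filter appears to have been stripped by the archive; a sketch of building a properly quoted numeric comparison for the fact-specific endpoint, assuming the intended operator was >:

```python
import json
from urllib.parse import quote

# numeric comparison on /v3/facts/<name>; strings quoted as JSON
query = [">", "value", 1950341121]
path = "/v3/facts/a?query=" + quote(json.dumps(query))
```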
I'm using this snippet to build my icinga configuration out of my exported
facts
#Collect the nagios_host resources
Nagios_host <<| |>> {
  target  => '/etc/icinga/puppet.d/hosts.cfg',
  require => File['/etc/icinga/puppet.d/hosts.cfg'],
  notify  => Service['icinga'],
}
If I now
Nope it should work in theory, are you using PuppetDB for this? If so
in the puppetdb.log you should see a corresponding log entry for the
deactivate command for that node. Can you grep your
puppetdb.log to see whether this arrives when you send the `puppet node
deactivate {foo}` command?
Wait, are you actually purging the resources somewhere? If it becomes
unmanaged, that doesn't mean it cleans up after itself unless you are
purging also.
I did this:
Not sure if I can follow you though?!
What happens if I manually add a (fake) host to my hosts.cfg file? The host
No not necessarily, you need to enable resource purging with resources
like nagios_host:
resources { 'nagios_host':
  purge => true,
}
Oh, I just did not know that. My manifest now looks like this:
resources { ['nagios_host', 'nagios_service']:
  purge => true,
}
#Collect the
Are you pushing reports into puppetdb or only into foreman?
Ok so I missed that. Sorry dude. And yeah as you point out I originally had
reports only going to foreman. But I changed the puppet.conf to this:
[root@puppet:/etc/puppet] # egrep -i 'reports|storeconfigs' puppet.conf
reports
A cursory google search hasn't turned up much on this topic.
Is there a puppet-jobs list for jobs oriented around Puppet (not to be
confused with "Work at Puppet Labs!" style stuff)?
Other projects seem to have similar lists, but I can't find one for Puppet;
I suspect it'd be constructive to
The default.yaml is in the gist I supplied. I used the el6 and centos6 64
from your sample project.
Taking a look again, it's even more confusing than that: the platform
name you have used is pointing at one of the very, very old AMIs. The
newer ones we actually use are private, but I don't
I was just testing the host config file from puppetdb coupled with the
Beaker documentation.
Those docs honestly look old; they are still mentioning blimpy, which I
effectively deprecated/superseded with the aws_sdk driver.
I was actually going to omit the error message.
That's great Ken.
I'll have a look. My .fog file was correct but I was missing that
ec2.yaml.
I get the user experience thing, it'll evolve and I'll help if I can.
Would I be right to assume you built your images with packer?
All of those images predate packer, but we're using packer
When I have a look at the logs I see that I'm getting password
authentication failures for the puppetdb user:
[root@puppet:/etc/puppet] #tail -30 /var/log/puppetdb/puppetdb.log
2014-10-05 16:25:36,339 ERROR [c.j.b.h.AbstractConnectionHook] Failed to
acquire connection Sleeping for 7000ms
Thanks again for your help. I changed the password on a temporary basis to
an absurdly simple one. I'm both happy to say that puppetdb is working now.
And sad to have taken up your time with this. Sorry about that.
No problem mate, glad it was something simple in the end :-).
ken.
--
You
I've seen how the puppetdb module uses ec2 to execute beaker tests. I've
tried setting this up as well and am getting some errors.
Is there a working example of using the different hypervisors?
I see this:
https://github.com/puppetlabs/beaker/wiki/Creating-A-Test-Environment#ec2-support
I've installed puppetdb on my puppetmaster. I have puppet-server-3.7.1,
puppetdb-2.2 and puppetdb-terminus-2.2.
I've setup puppetdb like this:
[root@puppet:/etc/puppet] #cat /etc/puppetdb/conf.d/database.ini
[database]
classname = org.postgresql.Driver
subprotocol = postgresql
subname =
We do this, but could probably live without it. But we do it using the facts
indirector and setting it up to cache to puppetdb.
So in both cases you use 'puppet facts upload'?
ken.
--
You received this message because you are subscribed to the Google Groups
Puppet Users group.
To
I tried the same thing and got the error below. Any ideas?
puppetdb=# create extension pg_trgm;
ERROR: could not open extension control file
/usr/share/postgresql/9.3/extension/pg_trgm.control: No such file or
directory
Seems odd, pg_trgm should be shipped with PostgreSQL. Maybe it's a bug
More information along these lines, highlighting ease of use and
tools for users to see their catalogs, will go a long way towards soothing us
touchy sysadmins.
Totally understand, I was a very touchy admin myself before working at
Puppet Labs and when the tools let you down it can be
And further, I'd really like to see non-Ruby scripting languages enabled to
participate as first-class citizens for the extension points - this (coupled
with better definition of core APIs) would really make the on-ramp for new
puppet users much lower friction.
Python support would be lovely.
I'm new to puppet and hope somebody can help me. I have a master server with
a few Mac and Windows agents. I created a custom fact for Windows that will
show the current logged-in user's SID. I have a manifest that will edit the
HKEY_USERS registry keys. My issue is that when I run the agent it fails
Hmm... I didn't even know this existed. Ironically, given your question, it
sounds like something I'd want to use. But if it's going away, I guess I'll
just totally forget that I heard it...
Oh, to be clear Jason, the functionality is of course still staying for
PuppetDB :-). It's just the
(1) at my current shop, there's an immense hatred of everything JVM. That's
going to be a hard transition. Not to mention Puppet is the only place we
run Ruby, so it's nice and easy to let puppet do whatever it wants with Ruby.
Not so much for installing JVMs that may break production
The requirements for puppetdb specify that it supports 1.7 from either
openjdk or oracle. I've got oracle installed (RHEL6) but the rpm insists on
openjdk (which I can't install for other reasons). Anyone know of a way
around this, or am I going to have to hack the package?
Huh, I guess you
Just wondering. I was messing about with some queries this morning.
Asking for server/v3/reports --data-urlencode
'query=["=", "certname", "client_name"]' only returned about 2 weeks worth of
reports. This system has been up and running for 6-8 months.
Whatever this value is set to:
Do many people use or care about the ability to upload facts out of
band to PuppetDB from a machine without the need for a full catalog
compilation? It's not a highly documented facility, i.e. it doesn't work
out of the box without configuration changes, but I know some people
have asked me about this on IRC
detailed information and upgrade advice, consult the detailed
release notes here:
https://docs.puppetlabs.com/puppetdb/2.2/release_notes.html
Contributors
Brian Cain, Eric Timmerman, Justin Holguin, Ken Barber, Nick
Fagerlund, Ryan Senior and Wyatt Alt.
Changelog
-
Brian Cain
Using multiple Puppet masters behind SRV records is working well although I
suspect the low duplication rates I am seeing is down to the fact the load
balancing is split between the nodes and the servername/serverip being
recorded is different when hitting the other Puppet master in the pool ?
I'm trying to sign this new github linked CLA and it's saying the my
email address is already taken, which I'm guessing is because my
puppetlabs and github accounts share a common email address. How can I
get around this annoyance?
Can you try logging a ticket here?
Thanks, but I'm asking about a query like this:
["and",
  ["and",
    ["=", "type", "Class"],
    ["=", "title", "Php"]],
  ["and",
    ["=", "type", "Class"],
    ["=", "title", "Nginx"]]]
Think about what this does behind the scenes on the resources endpoint
(see the query here:
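On the resources endpoint, "and" is evaluated per resource row, and no single resource is both Class[Php] and Class[Nginx], so a query of that shape can never match. The usual rewrite pushes each class test into a node-level subquery; a sketch in the v3 operator style (the API version in play here is an assumption):

```python
import json

def has_class(title):
    # nodes whose catalog contains Class[title], via v3 subquery operators
    return ["in", "certname",
            ["extract", "certname",
             ["select-resources",
              ["and", ["=", "type", "Class"],
                      ["=", "title", title]]]]]

# nodes that have BOTH classes, usable against the nodes endpoint
query = ["and", has_class("Php"), has_class("Nginx")]
serialized = json.dumps(query)
```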
I might be thinking about this the wrong way, but I think the API can
only do so much on the server side to achieve this. In particular I
want to do a distinct aggregation but we don't support that.
Fortunately, this is achievable on the command line with a tool like
JGrep:
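As a sketch of the kind of client-side distinct aggregation meant here, applied to illustrative (not real) fact rows:

```python
import json

# Illustrative fact rows; compute the distinct value set client-side,
# which is what you'd otherwise pipe through a tool like JGrep
rows = json.loads("""[
  {"certname": "a", "name": "operatingsystem", "value": "CentOS"},
  {"certname": "b", "name": "operatingsystem", "value": "CentOS"},
  {"certname": "c", "name": "operatingsystem", "value": "Debian"}
]""")

distinct = sorted({r["value"] for r in rows})
```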
Hi Maxim,
This is not directly reproducible by myself today:
https://gist.github.com/kbarber/c6941099bea07096361e ...
Perhaps something in your puppet.conf is doing this, I could imagine
something like:
usecacheonfailure = true
Causing this to happen, but I can't reproduce the exact same
/hiera.yaml
storeconfigs = true
storeconfigs_backend = puppetdb
I found discussion about this bug:
http://projects.theforeman.org/issues/3851
But I want to know if there is any workaround.
If catalog fails on client side - I can see error reports.
On Monday, July 21, 2014 4:01:43 PM UTC+3, Ken
Does this sound like your issue?
https://tickets.puppetlabs.com/browse/PDB-762
We found it recently and have already fixed it in source, but not
shipped a fix yet. We were holding off for someone complaining loud
enough or just shipping it with 2.2.0 (which should be out in a few
weeks or so).
client
configuration and use PEM files instead of JKS, and it takes the same
arguments; some documentation can be found at:
http://www.postgresql.org/docs/8.4/static/libpq-connect.html#LIBPQ-CONNECT-SSLMODE
On 16 July 2014 at 17:05, Ken Barber k...@puppetlabs.com wrote:
I wrote
I wrote that document, at the time client based certificates weren't
really supported or something like that.
Specifically not supporting client auth is hinted in the JDBC driver
details here: http://jdbc.postgresql.org/documentation/head/ssl-factory.html
I seem to recall there being a problem
version and
makes the necessary changes to use the coerce feature.
PuppetDB 2.1.0 Contributors
---
Chris Price, Eric Timmerman, Ken Barber, Melissa Stone, Niels Abspoel,
Ryan Senior, Wyatt Alt
PuppetDB 2.1.0 Changelog
---
Chris Price (7):
c7628d2
So puppetlabs-firewall is an active provider, whenever it 'runs' in
the catalog it applies the rule straight away. You are probably seeing
this because you're applying a blocking rule (like a DROP or default
DROP for the table) before the SSH allowance rule gets applied.
Take a close look at the
Is there some way to force an error while applying the catalog? We need
to check some facts against the configuration and force an error if they don't
match, so we could have a report from the node in the puppetdb with the failed state.
(we cannot use a compilation/evaluation error because it doesn't
This is probably a long shot ... but we have this ticket here for PuppetDB:
https://tickets.puppetlabs.com/browse/PDB-675
It describes a scenario where when the PID file is missing, the
service script is unable to stop the running process. Now our solution
is simple, we have a fallback to kill
Thanks mate :-).
On Tue, Jul 1, 2014 at 7:57 PM, Mathew Crane mathew.cr...@gmail.com wrote:
No problem :-).
Can you raise a bug on the original exec {} issue for me?
https://tickets.puppetlabs.com/browse/PDB
ken.
https://tickets.puppetlabs.com/browse/PDB-742
--
You received this
Been fighting with this now for a bit, and IRC didn't seem to have any
answers, so I thought I would ask here.
I have 2 platforms, one prod, one pre-prod. The pre-prod runs basically the
latest versions, puppetdb 2 and puppet 3.6.2 for clients and master. This
platform is also where I am having
Did you ever fix this. I am having the same problem.
It is an issue with SSL. Chrome gives this error code:
ERR_SSL_PROTOCOL_ERROR.
Curl gives this error: curl: (35) Unknown SSL protocol error in connection
If I figure more out, I will post the fix.
These two errors are unrelated to
Did you figure this out Sans?
My only next logical step would be to have you show me a full git
repository with a working copy of the code/vagrantfile/etc. that is
actually breaking.
ken.
On Wed, Jun 18, 2014 at 7:17 AM, Sans r.santanu@gmail.com wrote:
Thanks Rakesh!
But, as you probably
Just started using PuppetDB (using the Puppetlabs' module) and getting
issues with connection. First it was giving me server Not Found:
Error: Unable to connect to puppetdb server (puppet.internal:8081): [404]
Not Found
Notice: Failed to connect to puppetdb; sleeping 2 seconds before retry
The support for environments in PDB is for storing the environment
where a catalog/factset/report came from ... and you can certainly
query on it, but currently with ordinary resource collection you
cannot constrain on environment. There is an open ticket in the Puppet
queue to do this in the
At first glance this all seems correct. Hrm.
Can you do the telnet test?
telnet puppet.internal 8081
Also, are you destroying and rebuilding these VMs each time and then
it's failing? Or are you doing all of this _after_ the VMs are
launched. It's quite possible there is a race
Oh ... and let's see the output of:
iptables -vnL
Perhaps there is a firewall here? It's worth double checking.
On Tue, Jun 17, 2014 at 11:06 AM, Ken Barber k...@puppetlabs.com wrote:
At first glance this all seems correct. Hrm.
Can you do the telnet test?
telnet puppet.internal 8081
Also
Right now I'm creating only one VM, co-locating PuppetMaster and PuppetDB to
make it simple - destroying and rebuilding. But it always fails - during the
provisioning/building and also even after if I login to the machine and run
puppet apply. Telnet works fine:
root@puppet:~# telnet
I use puppetdb + puppetboard, which are very useful to see the current state
of my environment. Puppetboard also provides a very nice representation of
each agent's most recent reports. However, I want to take it to the next
level and create custom historical reports for business intelligence
It's very strange: Until I run puppetdb ssl-setup -f, I get
Error: Unable to connect to puppetdb server (puppet.internal:8081): [404]
Not Found
but after that, I get
Notice: Unable to connect to puppetdb server (puppet.internal:8081):
#<Errno::ECONNREFUSED: Connection refused - connect(2)>
Thanks, good to know. While the REST API would be the method to get at the
data, my issue is that I'm not capable of writing a web app + data
repository that can generate web-based reports, etc. I've actually gotten
into the habit of running one-off queries using the API with curl to get
Sorry, do you mean Pentaho?
On Tue, Jun 17, 2014 at 9:53 PM, Ken Barber k...@puppetlabs.com wrote:
Thanks, good to know. While the REST API would be the method to get at the
data, my issue is that I'm not capable of writing a web app + data
repository that can generate web-based reports, etc
at 10:01 PM, Ken Barber k...@puppetlabs.com wrote:
Ryan,
What about something like this?
http://wiki.pentaho.com/display/EAI/Rest+Client
This page seems to mix in general actions with integration steps, but
there are more integration types available here:
http://wiki.pentaho.com/display/EAI
Alex,
The more complete idea would be to trigger when resources have
actually been applied. So I would probably consider a report listener
for this kind of thing, as it shows when a resource has changed rather
than compiled.
I think Chris Spence has a tool for this kind of thing that uses MCO
to