[Puppet Users] Re: -Configuring 1000 servers to install software packages in one go.

2016-08-24 Thread Peter Faller
You can use wildcarded node names in the manifest, if you have many hosts 
that require the same configuration.
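In Puppet's main manifest, node definitions can be regular expressions, which is the usual way to "wildcard" many similarly named hosts. A minimal sketch (hostnames and the profile class are placeholders, not from the thread):

```puppet
# site.pp -- the regex node definition matches every agent whose certname
# fits the pattern, e.g. web001.example.com through web999.example.com.
node /^web\d+\.example\.com$/ {
  include profile::webserver   # hypothetical shared configuration
}
```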

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/8ba153c7-a1e2-4701-89d6-3fc4a6d368bf%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Rob Nelson
Just tell them you wanted to make sure you were satisfying the external pen
testing requirements of PCI ;)

On Wednesday, August 24, 2016, Luke Bigum  wrote:

>  if I gave out the module the Security team would throttle me for
> releasing what is part of a map of internal network architecture ;-)
>


-- 

Rob Nelson
rnels...@gmail.com



[Puppet Users] Regenerated new master certs after altering DNS aliases, Puppet Server not starting

2016-08-24 Thread mike r
Getting checksum errors with PuppetDB.

I had to regenerate certs for the master and agents after altering DNS 
aliases.

It doesn't like the checksum:

 
at 
org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
 
~[puppet-server-release.jar:na]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_101]
2016-08-24 11:10:28,271 WARN  [qtp1808023046-59] [puppetserver] Puppet 
Error connecting to MASTERNAME on 8081 at route 
/pdb/cmd/v1?checksum=e31c9a403e4e76da070b6193aea5a4bab93618f7=4=MASTERNAME=replace_facts, 
error message received was 'Error executing http request'. Failing over to 
the next PuppetDB server_url in the 'server_urls' list
2016-08-24 11:10:28,272 ERROR [qtp1808023046-59] [puppetserver] Puppet 
Failed to execute 
'/pdb/cmd/v1?checksum=e31c9a403e4e76da070b6193aea5a4bab93618f7=4=MASTERNAME=replace_facts'
 
on at least 1 of the following 'server_urls': https://MASTERNAME:8081
2016-08-24 11:10:28,273 ERROR [qtp1808023046-59] [puppetserver] Puppet 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb/http.rb:115:in 
`raise_request_error' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb/http.rb:156:in 
`failover_action' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb/http.rb:214:in 
`action' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb/command.rb:63:in
 
`submit' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in
 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb.rb:101:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb/command.rb:62:in
 
`submit' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb.rb:64:in 
`submit_command' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in
 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb.rb:101:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb.rb:61:in 
`submit_command' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/facts/puppetdb.rb:37:in
 
`save' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in
 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/puppetdb.rb:101:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/facts/puppetdb.rb:20:in
 
`save' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/indirection.rb:285:in
 
`save' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/node/facts.rb:21:in 
`save' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/catalog/compiler.rb:42:in
 
`extract_facts_from_request' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in
 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/catalog/compiler.rb:23:in
 
`extract_facts_from_request' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/catalog/compiler.rb:48:in
 
`find' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/indirection.rb:194:in
 
`find' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:132:in
 
`do_find' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:48:in
 
`call' /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:65:in 
`override' /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:240:in 
`override' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:47:in
 
`call' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:82:in 
`process' org/jruby/RubyArray.java:1613:in `each' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:81:in 
`process' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:87:in 
`process' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:87:in 
`process' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:60:in
 
`process' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in
 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in 
`profile' 
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:58:in
 
`process' 
file:/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/puppetserver-lib/puppet/server/master.rb:42:in
 
`handleRequest' Puppet$$Server$$Master_80906857.gen:13:in `handleRequest' 
request_handler_core.clj:281:in 
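One likely culprit behind the failover above (an inference, not confirmed in the thread) is that the regenerated master cert doesn't carry the new DNS aliases as Subject Alternative Names, so Puppet Server's TLS connection to PuppetDB fails hostname verification. A self-contained sketch of the check, using a throwaway cert rather than the real one:

```shell
# Throwaway demonstration cert carrying DNS aliases as SANs (requires
# OpenSSL >= 1.1.1 for -addext); names here are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-san.key -out /tmp/demo-san.crt \
  -subj "/CN=master.example.com" \
  -addext "subjectAltName=DNS:master.example.com,DNS:puppet"

# The same inspection you would run against the regenerated master cert,
# e.g. /etc/puppetlabs/puppet/ssl/certs/<certname>.pem:
openssl x509 -in /tmp/demo-san.crt -noout -text | grep -A1 'Subject Alternative Name'
```

In Puppet 4.x-era setups the SANs on a regenerated master cert come from the `dns_alt_names` setting at cert-generation time, so it must include every alias the agents and PuppetDB use.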

[Puppet Users] Puppet Enterprise can't use fqdn

2016-08-24 Thread Freddy Paxton
Basically, my installation of Puppet ran fine once I used my IP on port 
3000 for the web installation.
However, when I try to add an agent using the curl script, it fails because 
it doesn't seem to associate the FQDN with the IP (despite my specifying 
this in the /etc/hosts file).
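For the curl-based agent install to reach the master by name, the agent needs the master's certname to resolve, typically via an /etc/hosts line like the following (address and names are placeholders, not from the post):

```
# /etc/hosts on the agent -- hypothetical address and names
192.0.2.10   puppet.example.com puppet
```

If `getent hosts puppet.example.com` on the agent doesn't return that address, the hosts entry isn't being picked up by the resolver.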



Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Luke Bigum
Now that I think about it, I might be able to post a sanitised version of 
the module online with most of the internal stuff stripped out. It might 
prove useful for educating our own staff in the concepts, as well as other 
people. It's not a 5 minute job though so if/when it's done, I'll write a 
new Group post instead of continuing to hijack this one :-)

On Wednesday, 24 August 2016 17:05:47 UTC+1, LinuxDan wrote:
>
> It is a starting point.
> Many thanks for sharing what you can.
>
> Dan White | d_e_...@icloud.com 
> 
> “Sometimes I think the surest sign that intelligent life exists elsewhere in 
> the universe is that none of it has tried to contact us.”  (Bill Waterson: 
> Calvin & Hobbes)
>
>
> On Aug 24, 2016, at 12:03 PM, Luke Bigum  
> wrote:
>
> No, not really :-( It's a very "internal" module that I forked from 
> someone's Google Summer of Code project over 5 years ago (way before 
> voxpupuli/puppet-network). You know all those Hiera keys about vlan tags I 
> mentioned? The defaults are in this module and are the default VLAN 
> interfaces for all of our networks. If I gave out the module, the Security 
> team would throttle me for releasing what is part of a map of internal 
> network architecture ;-)
>
> I can however, just post the bit that does the UDEV rules...
>
> *
> $ cat ../templates/etc/udev/rules.d/70-persistent-net.rules.erb
> # Managed by Puppet
>
> <% if @interfaces.is_a?(Hash) -%>
> <%   @interfaces.sort.each do |key,val| -%>
> <%     if val['hwaddr'] -%>
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= 
> val['hwaddr'] -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= key -%>"
> <%     end # if val['hwaddr'] -%>
> <%   end # @interfaces.sort.each -%>
> <% end -%>
> <% if @extra_udev_static_interface_names.is_a?(Hash) -%>
> <%   @extra_udev_static_interface_names.sort.each do |interface,hwaddr| -%>
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= 
> hwaddr.downcase -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= interface 
> -%>"
> <%   end -%>
> <% end -%>
> *
>
> The template will create udev rules from two sources. The first is 
> @interfaces, which is the giant multi-level hash of network interfaces that 
> our old designs use. A VM might look like this in Hiera:
>
> networking::interfaces:
>   eth0:
>     ipaddr: 1.1.1.1
>     hwaddr: 52:54:00:11:22:33
>
> The second source of udev rules is also a Hash and also from Hiera, but 
> rather than it be embedded in the giant hash of networking information, it 
> is there to complement the newer role/profile approach where we don't 
> specify MAC addresses. This is purely a cosmetic thing for VMs to make our 
> interface names look sensible. Here is a sanitised Hiera file for a VM with 
> the fictitious "database" profile:
>
> profile::database::subnet_INTERNAL_slaves:
>   - 'eth100'
> profile::database::subnet_CLIENT_slaves:
>   - 'eth214'
> networking::extra_udev_static_interface_names:
>   eth100: '52:54:00:11:22:33'
>   eth214: '52:54:00:44:55:66'
>
>
>
>
>
> On Wednesday, 24 August 2016 16:41:28 UTC+1, LinuxDan wrote:
>>
>> Very nice, Luke.
>>
>> Does the code that lets you custom-name your interfaces live in github or 
>> puppet-forge anywhere ?
>>
>> If not, would you be willing to share ?  I can bring brownies and/or beer 
>> to the collaboration :)
>>
>> Dan White | d_e_...@icloud.com
>> 
>> “Sometimes I think the surest sign that intelligent life exists elsewhere in 
>> the universe is that none of it has tried to contact us.”  (Bill Waterson: 
>> Calvin & Hobbes)
>>
>>
>> On Aug 24, 2016, at 11:36 AM, Luke Bigum  wrote:
>>
>> Here we have very strict control over our hardware and what interface 
>> goes where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is 
>> PCI slot 2, Port 1, and don't try to rename it. We have a 3rd party patch 
>> manager tool (patchmanager.com), LLDP on our switches, and a Nagios 
>> check that tells me if an interface is not plugged into the switch port it 
>> is supposed to be plugged into (according to patchmanager). This works 
>> perfectly on Dell hardware because the PCI name mapping works. On really 
>> old HP gear it doesn't work, so we fall back on always assuming eth0 is the 
>> first onboard port, etc. If the kernel scanned these devices in a different 
>> order we'd get the same breakage you describe, but that's never happened on 
>> its own, it's only happened if an engineer has gone and added or re-arranged 
>> cards.
>>
>> We still need some sort of "glue record" that says "this interface should 
>> be up and have this IP". In our older designs this was managed entirely in 
>> Hiera - so there's a giant multi-level hash that we run create_resources() 
>> over to define every single network interface. You can imagine the amount 
>> of Hiera data we have. In 

Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Dan White

It is a starting point.
Many thanks for sharing what you can.
Dan White | d_e_wh...@icloud.com

“Sometimes I think the surest sign that intelligent life exists elsewhere in the 
universe is that none of it has tried to contact us.”  (Bill Waterson: Calvin & 
Hobbes)

On Aug 24, 2016, at 12:03 PM, Luke Bigum  wrote:

No, not really :-( It's a very "internal" module that I forked from someone's 
Google Summer of Code project over 5 years ago (way before voxpupuli/puppet-network). You 
know all those Hiera keys about vlan tags I mentioned? The defaults are in this module 
and are the default VLAN interfaces for all of our networks. If I gave out the module, the 
Security team would throttle me for releasing what is part of a map of internal network 
architecture ;-)

I can however, just post the bit that does the UDEV rules...

*
$ cat ../templates/etc/udev/rules.d/70-persistent-net.rules.erb
# Managed by Puppet

<% if @interfaces.is_a?(Hash) -%>
<%   @interfaces.sort.each do |key,val| -%>
<%     if val['hwaddr'] -%>
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= val['hwaddr'] -%>", ATTR{type}=="1", 
KERNEL=="eth*", NAME="<%= key -%>"
<%     end # if val['hwaddr'] -%>
<%   end # @interfaces.sort.each -%>
<% end -%>
<% if @extra_udev_static_interface_names.is_a?(Hash) -%>
<%   @extra_udev_static_interface_names.sort.each do |interface,hwaddr| -%>
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= hwaddr.downcase -%>", 
ATTR{type}=="1", KERNEL=="eth*", NAME="<%= interface -%>"
<%   end -%>
<% end -%>
*

The template will create udev rules from two sources. The first is @interfaces, 
which is the giant multi-level hash of network interfaces that our old designs 
use. A VM might look like this in Hiera:

networking::interfaces:
  eth0:
    ipaddr: 1.1.1.1
    hwaddr: 52:54:00:11:22:33

The second source of udev rules is also a Hash and also from Hiera, but rather than it be 
embedded in the giant hash of networking information, it is there to complement the newer 
role/profile approach where we don't specify MAC addresses. This is purely a cosmetic 
thing for VMs to make our interface names look sensible. Here is a sanitised Hiera file 
for a VM with the fictitious "database" profile:

profile::database::subnet_INTERNAL_slaves:
  - 'eth100'
profile::database::subnet_CLIENT_slaves:
  - 'eth214'
networking::extra_udev_static_interface_names:
  eth100: '52:54:00:11:22:33'
  eth214: '52:54:00:44:55:66'





On Wednesday, 24 August 2016 16:41:28 UTC+1, LinuxDan wrote:
Very nice, Luke.

Does the code that lets you custom-name your interfaces live in github or 
puppet-forge anywhere ?

If not, would you be willing to share ?  I can bring brownies and/or beer to 
the collaboration :)
Dan White | d_e_...@icloud.com

“Sometimes I think the surest sign that intelligent life exists elsewhere in the 
universe is that none of it has tried to contact us.”  (Bill Waterson: Calvin & 
Hobbes)

On Aug 24, 2016, at 11:36 AM, Luke Bigum  wrote:

Here we have very strict control over our hardware and what interface goes 
where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI slot 
2, Port 1, and don't try to rename it. We have a 3rd party patch manager tool 
(patchmanager.com), LLDP on our switches, and a Nagios check that tells me if 
an interface is not plugged into the switch port it is supposed to be plugged 
into (according to patchmanager). This works perfectly on Dell hardware because 
the PCI name mapping works. On really old HP gear it doesn't work, so we fall 
back on always assuming eth0 is the first onboard port, etc. If the kernel 
scanned these devices in a different order we'd get the same breakage you 
describe, but that's never happened on its own, it's only happened if an 
engineer has gone and added or re-arranged cards.

We still need some sort of "glue record" that says "this interface should be up and have this IP". In our 
older designs this was managed entirely in Hiera - so there's a giant multi-level hash that we run create_resources() over to 
define every single network interface. You can imagine the amount of Hiera data we have. In the newer designs which are a lot 
more of a role/profile approach I've been trying to conceptualise the networking based on our profiles. So if one of our servers 
is fulfilling function "database" there will be a Class[profile::database]. This Class might create a bonded interface 
for the "STORAGE" network and another interface for the "CLIENT" network. Through various levels of Hiera I 
can define the STORAGE network as VLAN 100, because it might be a different vlan tag at a different location. Then at the Hiera 
node level (on each individual server) I will have something like:

profile::database::bond_storage_slaves: [ 'p2p1', 'p2p2' ]

That's the 

Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Luke Bigum
No, not really :-( It's a very "internal" module that I forked from 
someone's Google Summer of Code project over 5 years ago (way before 
voxpupuli/puppet-network). You know all those Hiera keys about vlan tags I 
mentioned? The defaults are in this module and are the default VLAN 
interfaces for all of our networks. If I gave out the module, the Security 
team would throttle me for releasing what is part of a map of internal 
network architecture ;-)

I can however, just post the bit that does the UDEV rules...

*
$ cat ../templates/etc/udev/rules.d/70-persistent-net.rules.erb
# Managed by Puppet

<% if @interfaces.is_a?(Hash) -%>
<%   @interfaces.sort.each do |key,val| -%>
<%     if val['hwaddr'] -%>
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= 
val['hwaddr'] -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= key -%>"
<%     end # if val['hwaddr'] -%>
<%   end # @interfaces.sort.each -%>
<% end -%>
<% if @extra_udev_static_interface_names.is_a?(Hash) -%>
<%   @extra_udev_static_interface_names.sort.each do |interface,hwaddr| -%>
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= 
hwaddr.downcase -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= interface 
-%>"
<%   end -%>
<% end -%>
*

The template will create udev rules from two sources. The first is 
@interfaces, which is the giant multi-level hash of network interfaces that 
our old designs use. A VM might look like this in Hiera:

networking::interfaces:
  eth0:
    ipaddr: 1.1.1.1
    hwaddr: 52:54:00:11:22:33

The second source of udev rules is also a Hash and also from Hiera, but 
rather than it be embedded in the giant hash of networking information, it 
is there to complement the newer role/profile approach where we don't 
specify MAC addresses. This is purely a cosmetic thing for VMs to make our 
interface names look sensible. Here is a sanitised Hiera file for a VM with 
the fictitious "database" profile:

profile::database::subnet_INTERNAL_slaves:
  - 'eth100'
profile::database::subnet_CLIENT_slaves:
  - 'eth214'
networking::extra_udev_static_interface_names:
  eth100: '52:54:00:11:22:33'
  eth214: '52:54:00:44:55:66'
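As a sanity check, the second half of that template can be rendered outside Puppet with plain ERB. This is a local sketch, not how the module invokes it; the hash mirrors the `networking::extra_udev_static_interface_names` data above:

```ruby
require 'erb'

# Stand-in for the second half of the module's udev template, rendered
# locally against the sample Hiera data from the post.
template = <<~'TPL'
  <% extra.sort.each do |interface, hwaddr| -%>
  SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= hwaddr.downcase %>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= interface %>"
  <% end -%>
  TPL

extra = {
  'eth100' => '52:54:00:11:22:33',
  'eth214' => '52:54:00:44:55:66',
}

# trim_mode '-' honours the -%> markers, so the loop lines emit nothing
# and the output is exactly one udev rule per interface.
puts ERB.new(template, trim_mode: '-').result(binding)
```

The rendered output is one `SUBSYSTEM=="net", ... NAME="eth100"` line per interface, which is what lands in 70-persistent-net.rules.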




On Wednesday, 24 August 2016 16:41:28 UTC+1, LinuxDan wrote:
>
> Very nice, Luke.
>
> Does the code that lets you custom-name your interfaces live in github or 
> puppet-forge anywhere ?
>
> If not, would you be willing to share ?  I can bring brownies and/or beer 
> to the collaboration :)
>
> Dan White | d_e_...@icloud.com 
> 
> “Sometimes I think the surest sign that intelligent life exists elsewhere in 
> the universe is that none of it has tried to contact us.”  (Bill Waterson: 
> Calvin & Hobbes)
>
>
> On Aug 24, 2016, at 11:36 AM, Luke Bigum  
> wrote:
>
> Here we have very strict control over our hardware and what interface goes 
> where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI 
> slot 2, Port 1, and don't try to rename it. We have a 3rd party patch manager 
> tool (patchmanager.com), LLDP on our switches, and a Nagios check that 
> tells me if an interface is not plugged into the switch port it is supposed 
> to be plugged into (according to patchmanager). This works perfectly on 
> Dell hardware because the PCI name mapping works. On really old HP gear it 
> doesn't work, so we fall back on always assuming eth0 is the first onboard 
> port, etc. If the kernel scanned these devices in a different order we'd 
> get the same breakage you describe, but that's never happened on its own, 
> it's only happened if an engineer has gone and added or re-arranged cards.
>
> We still need some sort of "glue record" that says "this interface should 
> be up and have this IP". In our older designs this was managed entirely in 
> Hiera - so there's a giant multi-level hash that we run create_resources() 
> over to define every single network interface. You can imagine the amount 
> of Hiera data we have. In the newer designs which are a lot more of a 
> role/profile approach I've been trying to conceptualise the networking 
> based on our profiles. So if one of our servers is fulfilling function 
> "database" there will be a Class[profile::database]. This Class might 
> create a bonded interface for the "STORAGE" network and another interface 
> for the "CLIENT" network. Through various levels of Hiera I can define the 
> STORAGE network as VLAN 100, because it might be a different vlan tag at a 
> different location. Then at the Hiera node level (on each individual 
> server) I will have something like:
>
> profile::database::bond_storage_slaves: [ 'p2p1', 'p2p2' ]
>
> That's the glue. At some point I need to tell Puppet that on this specific 
> server, the storage network is a bond of p2p1 and p2p2. If I took that 
> profile to a HP server, I'd be specifying a different set of interface 
> names. In some situations I even just put in one bond interface member, 
> 

[Puppet Users] generating puppet catalog throws error "Error: cannot load such file -- md5"

2016-08-24 Thread Joseph Lorenzini
Hi all,

I am migrating a puppet master from CentOS 5 to CentOS 7. I performed a 
fresh install of Puppet 3.7.4 on CentOS 7. On the puppet master, I copied 
my puppet data to /etc/puppet etc. Whenever puppet agent runs on another 
node though, I get this exception.


Error: cannot load such file -- md5


This led me to think that the md5.rb script did not exist. But it does. I 
found it here. 

/usr/share/ruby/vendor_ruby/puppet/parser/functions/md5.rb

Note I am on ruby 2.0. Is it possible that the puppet daemon is not 
properly setting the load path to pick up that file? If so, how do I 
configure puppet  master to pick up that ruby script? according to facter, 
here's the rubysitedir.

/usr/local/share/ruby/site_ruby/

Any guidance would be greatly appreciated.

Thanks,
Joe 



[Puppet Users] Re: How to handle predictable network interface names

2016-08-24 Thread Luke Bigum
Here we have very strict control over our hardware and what interface goes 
where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI 
slot 2, Port 1, and don't try to rename it. We have a 3rd party patch manager 
tool (patchmanager.com), LLDP on our switches, and a Nagios check that 
tells me if an interface is not plugged into the switch port it is supposed 
to be plugged into (according to patchmanager). This works perfectly on 
Dell hardware because the PCI name mapping works. On really old HP gear it 
doesn't work, so we fall back on always assuming eth0 is the first onboard 
port, etc. If the kernel scanned these devices in a different order we'd 
get the same breakage you describe, but that's never happened on its own, 
it's only happened if an engineer has gone and added or re-arranged cards.

We still need some sort of "glue record" that says "this interface should 
be up and have this IP". In our older designs this was managed entirely in 
Hiera - so there's a giant multi-level hash that we run create_resources() 
over to define every single network interface. You can imagine the amount 
of Hiera data we have. In the newer designs which are a lot more of a 
role/profile approach I've been trying to conceptualise the networking 
based on our profiles. So if one of our servers is fulfilling function 
"database" there will be a Class[profile::database]. This Class might 
create a bonded interface for the "STORAGE" network and another interface 
for the "CLIENT" network. Through various levels of Hiera I can define the 
STORAGE network as VLAN 100, because it might be a different vlan tag at a 
different location. Then at the Hiera node level (on each individual 
server) I will have something like:

profile::database::bond_storage_slaves: [ 'p2p1', 'p2p2' ]

That's the glue. At some point I need to tell Puppet that on this specific 
server, the storage network is a bond of p2p1 and p2p2. If I took that 
profile to a HP server, I'd be specifying a different set of interface 
names. In some situations I even just put in one bond interface member, 
which is useless, but in most situations I find less entropy is worth more 
than having a slightly more efficient networking stack.

I have bounced around the idea of removing this step and trusting the 
switch - ie: write a fact to do an LLDP query for the VLAN of the switch 
port each interface is connected to, that way you wouldn't need the glue, 
there'd be a fact called vlan_100_interfaces. Two problems with this 
approach: we end up trusting the switch to be our source of truth (it may 
not be correct, and, what if the switch port is down?). Secondly the 
quality and consistency of LLDP information you get out of various 
manufacturers of networking hardware is very different, so relying on LLDP 
information to define your OS network config is a bit risky for me.
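For illustration, the parsing half of such a fact might look like the sketch below. Inside Facter it would wrap this around something like `Facter::Core::Execution.execute('lldpctl -f keyvalue')`; the keyvalue format shown is an assumption for illustration, not taken from the post:

```ruby
# Return the interface names whose LLDP-reported VLAN matches vlan_id.
# The `lldp.<iface>.vlan.vlan-id=<id>` line format is assumed.
def vlan_interfaces(keyvalue_output, vlan_id)
  keyvalue_output.lines
                 .select { |l| l.strip.match?(/\Alldp\.[^.]+\.vlan\.vlan-id=#{vlan_id}\z/) }
                 .map { |l| l[/\Alldp\.([^.]+)\./, 1] }
end

sample = <<~OUT
  lldp.eth0.vlan.vlan-id=100
  lldp.eth1.vlan.vlan-id=200
OUT

p vlan_interfaces(sample, 100)   # => ["eth0"]
```

As the post notes, the catch is that this makes the switch the source of truth, and real `lldpctl` output varies by switch vendor far more than this tidy sample suggests.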

It's a different story for our VMs. Since they are Puppet defined we 
specify a MAC address and so we "know" which MAC will be attached to which 
VM bridge. We drop a MAC based udev rule into the guest to name them 
similarly, ie: eth100 is on br100. I could technically use the same Puppet 
code to write udev rules for my hardware, but the PCI based naming scheme 
is fine so far.

That's what we do, but it's made easy by an almost homogeneous hardware 
platform and strict physical patch management.

When I read about your problem, it sounds like you are missing a "glue 
record" that describes your logical interfaces to your physical devices. If 
you were to follow something along the lines of our approach, you might 
have something like this:

class profile::some_firewall(
  $external_interface_name = 'eth0',
  $internal_interface_name = 'eth1',
  $perimiter_interface_name = 'eth2'
) {
  firewall { '001_allow_internal':
    chain   => 'INPUT',
    iniface => $internal_interface_name,
    action  => 'accept',
    proto   => 'all',
  }

  firewall { '002_some_external_rule':
    chain   => 'INPUT',
    iniface => $external_interface_name,
    action  => 'accept',
    proto   => 'tcp',
    dport   => '443',
  }
}

That very simple firewall profile probably already works on your HP 
hardware, and on your Dell hardware you'd need to override the 3 parameters 
in Hiera:

profile::some_firewall::internal_interface_name: 'em1'
profile::some_firewall::external_interface_name: 'p3p1'
profile::some_firewall::perimiter_interface_name: 'p1p1'

Hope that helps,

-Luke

On Wednesday, 24 August 2016 14:55:38 UTC+1, Marc Haber wrote:
>
> Hi, 
>
> I would like to discuss how to handle systemd's new feature of 
> predictable network interface names. This is a rather hot topic in the 
> team I'm currently working in, and I'd like to solicit your opinions 
> about that. 
>
> On systems with more than one interface, the canonical way to handle 
> this issue in the past was "assume that eth0 is connected to network 
> foo, eth1 is connected to network bar, and eth2 is connected to 
> network baz" and to accept 

Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Dan White

Very nice, Luke.

Does the code that lets you custom-name your interfaces live in github or 
puppet-forge anywhere ?

If not, would you be willing to share ?  I can bring brownies and/or beer to 
the collaboration :)
Dan White | d_e_wh...@icloud.com

“Sometimes I think the surest sign that intelligent life exists elsewhere in the 
universe is that none of it has tried to contact us.”  (Bill Waterson: Calvin & 
Hobbes)

On Aug 24, 2016, at 11:36 AM, Luke Bigum  wrote:

Here we have very strict control over our hardware and what interface goes 
where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI slot 
2, Port 1, and don't try to rename it. We have a 3rd party patch manager tool 
(patchmanager.com), LLDP on our switches, and a Nagios check that tells me if 
an interface is not plugged into the switch port it is supposed to be plugged 
into (according to patchmanager). This works perfectly on Dell hardware because 
the PCI name mapping works. On really old HP gear it doesn't work, so we fall 
back on always assuming eth0 is the first onboard port, etc. If the kernel 
scanned these devices in a different order we'd get the same breakage you 
describe, but that's never happened on its own, it's only happened if an 
engineer has gone and added or re-arranged cards.

We still need some sort of "glue record" that says "this interface should be up and have this IP". In our 
older designs this was managed entirely in Hiera - so there's a giant multi-level hash that we run create_resources() over to 
define every single network interface. You can imagine the amount of Hiera data we have. In the newer designs which are a lot 
more of a role/profile approach I've been trying to conceptualise the networking based on our profiles. So if one of our servers 
is fulfilling function "database" there will be a Class[profile::database]. This Class might create a bonded interface 
for the "STORAGE" network and another interface for the "CLIENT" network. Through various levels of Hiera I 
can define the STORAGE network as VLAN 100, because it might be a different vlan tag at a different location. Then at the Hiera 
node level (on each individual server) I will have something like:

profile::database::bond_storage_slaves: [ 'p2p1', 'p2p2' ]

That's the glue. At some point I need to tell Puppet that on this specific 
server, the storage network is a bond of p2p1 and p2p2. If I took that profile 
to a HP server, I'd be specifying a different set of interface names. In some 
situations I even just put in one bond interface member, which is useless, but 
in most situations I find less entropy is worth more than having a slightly 
more efficient networking stack.

I have bounced around the idea of removing this step and trusting the switch - 
ie: write a fact to do an LLDP query for the VLAN of the switch port each 
interface is connected to, that way you wouldn't need the glue, there'd be a 
fact called vlan_100_interfaces. Two problems with this approach: first, we
end up trusting the switch to be our source of truth (it may not be correct,
and what if the switch port is down?); second, the quality and consistency of
the LLDP information you get out of various manufacturers of networking
hardware varies a lot, so relying on LLDP information to define your OS
network config is a bit risky for me.

It's a different story for our VMs. Since they are Puppet defined we specify a MAC 
address and so we "know" which MAC will be attached to which VM bridge. We drop 
a MAC based udev rule into the guest to name them similarly, ie: eth100 is on br100. I 
could technically use the same Puppet code to write udev rules for my hardware, but the 
PCI based naming scheme is fine so far.
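A minimal sketch of that MAC-based udev approach, with a defined type, MAC address, and rule path of my own invention (the real module surely differs):

```puppet
# Hypothetical defined type: pin a guest NIC's name to its MAC address
# so that the interface attached to br100 is always called eth100.
define guest_nic (
  $mac,
  $index,
) {
  file { "/etc/udev/rules.d/70-${title}.rules":
    ensure  => file,
    content => "SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{address}==\"${mac}\", NAME=\"eth${index}\"\n",
  }
}

guest_nic { 'vm_storage_nic':
  mac   => '52:54:00:aa:bb:cc',  # made-up MAC
  index => 100,                  # names the interface eth100
}
```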

That's what we do, but it's made easy by an almost homogeneous hardware 
platform and strict physical patch management.

When I read about your problem, it sounds like you are missing a "glue record" 
that describes your logical interfaces to your physical devices. If you were to follow 
something along the lines of our approach, you might have something like this:

class profile::some_firewall(
  $external_interface_name = 'eth0',
  $internal_interface_name = 'eth1',
  $perimiter_interface_name = 'eth2'
) {
  firewall { '001_allow_internal':
    chain   => 'INPUT',
    iniface => $internal_interface_name,
    action  => 'accept',
    proto   => 'all',
  }

  firewall { '002_some_external_rule':
    chain   => 'INPUT',
    iniface => $external_interface_name,
    action  => 'accept',
    proto   => 'tcp',
    dport   => '443',
  }
}

That very simple firewall profile probably already works on your HP hardware, 
and on your Dell hardware you'd need to override the 3 parameters in Hiera:

profile::some_firewall::internal_interface_name: 'em1'
profile::some_firewall::external_interface_name: 'p3p1'
profile::some_firewall::perimiter_interface_name: 'p1p1'

Hope that helps,

-Luke


[Puppet Users] RE:-Configuring 1000 servers to install software packages in one go.

2016-08-24 Thread Mo Green


Hi Everyone,

I am a developer new to Puppet; could I please ask a few questions
regarding how Puppet works.

In a scenario where Puppet is configured to run on approximately 1000
servers, so that one can, for example, install emacs in one go:

Would the manual overhead only occur when:

a) Configuring the Puppet master on one server and the Puppet agent on the
1000 servers, placing all 1000 agent hostnames/IP addresses on the master
and the master's hostname/IP address on each agent server?

b) Writing, on the master, a manifest that installs a software package on
the 1000 servers. What other details are required in the catalogue compiled
for each individual server before the agent installs the new software?

Does anyone have examples you could send, or could you point me in the
right direction?

E.g. would you have to declare (1000) nodes in the manifest?

class screen {
  package { 'screen':
    ensure => 'installed',
  }
}
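For what it's worth, you would not normally declare 1000 node blocks; a single default node definition (or, as noted elsewhere in the thread, a wildcarded/regex-matched one) applies the class to every agent that checks in. A sketch, with a made-up naming convention:

```puppet
# Applies to every agent that has no more specific node definition:
node default {
  include screen
}

# Or scope it to hosts matching a naming convention:
node /^web\d+\.example\.com$/ {
  include screen
}
```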

Many Thanks in advance.



Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Rob Nelson
Marc,

We use VMware's vSphere, which still results in "random" but predictable
interface names - eth0 is now eno16780032, eth1 is now eno3359296, etc.
We've stuck with that because while it's somewhat painful (eth# is so
much easier to comprehend), it's far less painful to memorize that than to
maintain udev rules that may need to be tweaked over time. However, if we
were on bare metal, it might be worth disabling the rules to get the older
style back. That's probably still less optimal than customized names, but
it's well documented at least. For example, http://carminebufano.com/?p=108
or
http://amolkg.blogspot.in/2015/05/centos-7-change-network-interface-name.html
- though there are multiple ways to do it even then.
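If you do decide to disable the rules on bare metal, the change those links describe can itself be puppetized. A hedged sketch (file_line comes from the puppetlabs-stdlib module; the GRUB path, command location, and the exact kernel arguments are assumptions that vary by distro):

```puppet
# Revert to the old eth# scheme by adding net.ifnames=0 biosdevname=0
# to the kernel command line, then rebuild the grub config.
file_line { 'disable_predictable_ifnames':
  path   => '/etc/default/grub',
  line   => 'GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"',
  match  => '^GRUB_CMDLINE_LINUX=',
  notify => Exec['rebuild_grub_cfg'],
}

exec { 'rebuild_grub_cfg':
  command     => '/usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg',
  refreshonly => true,
}
```

Note the change only takes effect on the next reboot, and clobbering the whole GRUB_CMDLINE_LINUX line as shown would discard any other arguments already set there.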


Rob Nelson
rnels...@gmail.com

On Wed, Aug 24, 2016 at 9:55 AM, Marc Haber wrote:

> Hi,
>
> I would like to discuss how to handle systemd's new feature of
> predictable network interface names. This is a rather hot topic in the
> team I'm currently working in, and I'd like to solicit your opinions
> about that.
>
> On systems with more than one interface, the canonical way to handle
> this issue in the past was "assume that eth0 is connected to network
> foo, eth1 is connected to network bar, and eth2 is connected to
> network baz" and to accept that things fail horribly if the order in
> which network interfaces are detected changes.
>
> While upstream's focus is as usual on desktop machines where Ethernet,
> USB and WWAN interfaces come and go multiple times a day (see
> upstream's reasoning in
> https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/),
> this seldom happens in our happy server environment, which reduces
> the breakage potential to disruptive kernel updates or vendor firmware
> changes meddling with the order in which network interfaces are
> enumerated.
>
> This happens rather seldom in my experience.
>
> I would, however, like to stay with the new scheme since I see its
> charms.
>
> But, how would I handle this in a puppetized environment?
>
> Currently, the code that is already, for example for a firewall
> assumes that eth0 is the external interface, eth1 the internal one and
> eth2 the perimeter networks.
>
> In the new setup, all those interfaces can have different names
> depending on different hardware being used. That means that the same
> puppet code cannot be used on one firewall instance running on Dell
> hardware and a second one running on HP hardware because BIOS indices
> and/or PCI paths will vary. If I used the MAC scheme, things are even
> worse since interface names will be different even on different pieces
> of otherwise identical hardware.
>
> Many of my team members think that one should simply turn off
> predictable network interface names altogether so that our old code
> continues to work. I think that this would be a bad idea, but don't
> have any logical arguments other than my gut feeling.
>
> Generating udev rules to fix the network names (and assign names like
> ext1, int1, per1) already in postinst of the OS does not work since we
> don't know how the machine is going to be wired and even used.
>
> Any ideas? How do _you_ do this?
>
> Greetings
> Marc
>
> --
> Marc Haber | "I don't trust Computers. They | Mailadresse im Header
> Leimen, Germany|  lose things."Winona Ryder | Fon: *49 6224 1600402
> Nordisch by Nature |  How to make an American Quilt | Fax: *49 6224 1600421



[Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Marc Haber
Hi,

I would like to discuss how to handle systemd's new feature of
predictable network interface names. This is a rather hot topic in the
team I'm currently working in, and I'd like to solicit your opinions
about that.

On systems with more than one interface, the canonical way to handle
this issue in the past was "assume that eth0 is connected to network
foo, eth1 is connected to network bar, and eth2 is connected to
network baz" and to accept that things fail horribly if the order in
which network interfaces are detected changes.

While upstream's focus is as usual on desktop machines where Ethernet,
USB and WWAN interfaces come and go multiple times a day (see
upstream's reasoning in
https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/),
this seldom happens in our happy server environment, which reduces
the breakage potential to disruptive kernel updates or vendor firmware
changes meddling with the order in which network interfaces are
enumerated.

This happens rather seldom in my experience.

I would, however, like to stay with the new scheme since I see its
charms.

But, how would I handle this in a puppetized environment?

Currently, the code that is already, for example for a firewall
assumes that eth0 is the external interface, eth1 the internal one and
eth2 the perimeter networks.

In the new setup, all those interfaces can have different names
depending on different hardware being used. That means that the same
puppet code cannot be used on one firewall instance running on Dell
hardware and a second one running on HP hardware because BIOS indices
and/or PCI paths will vary. If I used the MAC scheme, things are even
worse since interface names will be different even on different pieces
of otherwise identical hardware.

Many of my team members think that one should simply turn off
predictable network interface names altogether so that our old code
continues to work. I think that this would be a bad idea, but don't
have any logical arguments other than my gut feeling.

Generating udev rules to fix the network names (and assign names like
ext1, int1, per1) already in postinst of the OS does not work since we
don't know how the machine is going to be wired and even used.

Any ideas? How do _you_ do this?

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Leimen, Germany|  lose things."Winona Ryder | Fon: *49 6224 1600402
Nordisch by Nature |  How to make an American Quilt | Fax: *49 6224 1600421



[Puppet Users] Announce: Puppet-agent 1.6.1 is available

2016-08-24 Thread Geoff Nichols
Puppet Agent 1.6.1 is now available. This release includes Puppet 4.6.1
(containing a critical bug fix, as well as a number of smaller fixes).


Yesterday we removed Puppet 4.6.0 (and puppet-agent 1.6.0) from our
repositories after users found a critical issue (PUP-6608) affecting variables
defined in a class not being in scope after resource-like declaration of
that class.


Users who had installed Puppet 4.6.0 (puppet-agent 1.6.0) should upgrade to
Puppet 4.6.1 (puppet-agent 1.6.1).


This release fixes the critical issue and several smaller bugs in Puppet
and Facter.


Release notes are linked from the puppet-agent 1.6.1 note:
https://docs.puppet.com/puppet/4.6/reference/release_notes_agent.html.


To install or upgrade puppet-agent, follow the getting started directions:
http://docs.puppet.com/puppet/latest/reference/index.html.
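If your agents are already under Puppet management and the package repository is configured, the upgrade can also be expressed as an ordinary resource (the version pin is shown purely for illustration):

```puppet
# Pin the agent package to the fixed release:
package { 'puppet-agent':
  ensure => '1.6.1',
}
```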


-- 
Geoff Nichols
Puppet Release Engineering

PuppetConf 2016, 19-21 October, San Diego, California
Summer Savings - Register by 15 September and save $240



[Puppet Users] Re: Noop metaparameter in class not working as expected

2016-08-24 Thread jcbollinger


On Monday, August 22, 2016 at 9:08:20 AM UTC-5, Julio Guevara wrote:
>
> Last time I bump this email :/
> Anyone has any idea?
>
>
The docs specify that:

In addition to class-specific parameters, you can also specify a value for 
> any metaparameter. In such cases, every resource contained in the class 
> will also have that metaparameter:
>

noop is a metaparameter; in fact, it is the example used in the section of 
the docs I quoted.  Your expectation is therefore correct, and Puppet's 
behavior is buggy in this regard.  I'm uncertain whether this is a 
manifestation of PUP-3630, 
or whether it is a different issue.  You could consider asking about that 
over on puppet-developers, or in PUP-3630's comment thread.  If it is a new 
issue, then I'm sure Puppet would appreciate a bug report.
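For anyone following along, the behaviour in question is the metaparameter form of a resource-like class declaration (the class name below is illustrative):

```puppet
# Per the docs, noop here should propagate to every resource
# contained in the class; the reported bug is that it does not.
class { 'myapp::config':
  noop => true,
}
```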


John



Re: [Puppet Users] Can't connect agent to master using curl

2016-08-24 Thread Freddy Paxton
Yes I'm pretty sure I have.
Would it make a difference if my hostname has 2 fullstops/periods in it?
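If name resolution turns out to be the problem, a common stopgap is to pin the master in /etc/hosts on the agent; once the agent is installed, Puppet's built-in host type can manage the entry (the name and IP below are placeholders):

```puppet
# Static hosts entry so the agent can resolve the master by name:
host { 'puppet.corp.example.com':
  ip           => '192.0.2.10',
  host_aliases => ['puppet'],
}
```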

On Tuesday, August 23, 2016 at 6:17:39 PM UTC, Lowe Schmidt wrote:
>
> Hey,
>
> do you have DNS configured for your domain? 
>
>
> --
> Lowe Schmidt | +46 723 867 157
>
On 23 August 2016 at 15:49, Freddy Paxton wrote:
>
>> Hi,
>> I recently installed Puppet Enterprise on my master using the guide 
>> provided on the Puppet website. First of all, it wanted me to navigate to 
>> https://<hostname>:3000 to use the web installer; however, using my 
>> hostname to use the web installer didn't work. Instead I used the IP 
>> address and this did work. After completing the installation I then tried 
>> to connect an agent to my master. I used the curl script provided (curl -k 
>> https://<master hostname>:8140/packages/current/install.bash | sudo bash) and yet 
>> again the hostname failed to work, so I used the IP address again for this 
>> and a connection was made. But I was then presented with this message "curl 
>> failed to get 
>> https://<master hostname>:8140/packages/2016.2.1/ubuntu-14.04-amd64.bash. The 
>> agent packages needed to support ubuntu-14.04-amd64 are not present on your 
>> master. To add them, apply the pe_repo::platform::ubuntu_1404_amd64 
>> class to your master node and then run Puppet. The required agent packages 
>> should be retrieved when puppet runs on the master, after which you can run 
>> the install.bash script again." I checked to see if 
>> 'pe_repo::platform::ubuntu_1404_amd64' was already applied on my master 
>> node and it was (as I am using the same operating system for the master and 
>> the agent). I believe this is because the install.bash script uses the 
>> hostname to retrieve the classes from the master, and as I said before, 
>> this doesn't seem to want to work for me.
>>
>> This could well be an issue outside of Puppet, but any guidance on this 
>> would be appreciated, I can't seem to find any answers anywhere.
>>
>> Thanks
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Puppet Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to puppet-users...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/puppet-users/d0255831-d0ae-4c2f-b29e-e801e755b9d3%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
