On Wednesday, October 12, 2016 at 7:59:25 AM UTC-5, jcbollinger wrote:
> On Tuesday, October 11, 2016 at 3:54:25 PM UTC-5, re-g...@wiu.edu wrote:
>> On Monday, October 10, 2016 at 9:07:35 AM UTC-5, jcbollinger wrote:
>>> On Friday, October 7, 2016 at 10:39:58 AM UTC-5, re-g...@wiu.edu wrote:
>>>> Well... The node I have been testing the duplicate declaration on uses
>>>> a puppet secondary-master server, as it is on a remote network segment. It
>>>> does not connect directly to the puppet primary-master on which The
>>>> Foreman is running.
>>>> So I did some work to get this particular "server1" node to use the
>>>> puppet primary-master that The Foreman is running on. When I run a puppet
>>>> update, it completes without error. When I switch back to the puppet
>>>> secondary-master, I get the duplicate class error.
>>>> They are both running puppet 3.8.7-1 on CentOS 6.
>>>> The YAML produced by both is identical, so I can assume the YAML
>>>> structure is not the issue.
>>>> Would this suggest that the puppet secondary-master server is the
>>>> issue, or that the client connecting to it is perhaps not always
>>>> getting what it wants from the secondary?
>>>> Remember that the puppet updates will complete correctly for many
>>>> hours, then magically change to this error. And vice-versa: be in error
>>>> many hours, and then magically change to completing correctly. Also,
>>>> sometimes changing configuration in The Foreman can trigger the error
>>>> to occur, AND other times trigger the error to stop occurring.
>>>> Also note, I only have this problem with the saz-rsyslog module - NEVER
>>>> with any other puppet module.
>>>> When I remove the saz-rsyslog module, all issues disappear.
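For anyone finding this thread later: a "duplicate declaration" error means the same class was declared twice during a single catalog compilation. A minimal, generic illustration (not saz-rsyslog's actual code):

```puppet
include rsyslog        # first declaration of Class[Rsyslog] -- fine
include rsyslog        # include is idempotent -- still fine

class { 'rsyslog': }   # a resource-like declaration of a class that is
                       # already declared aborts the compile with
                       # "Duplicate declaration: Class[Rsyslog] ..."
```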
>>> I am not prepared to believe that identical implementations of Puppet's
>>> catalog builder running on substantially identical platforms with identical
>>> inputs behave differently. Since you do see variations in behavior,
>>> therefore, I conclude that those differences can be attributed to
>>> differences in implementation, platform, or (most likely) inputs.
>>>> I have made sure the puppet modules are 100% in sync between primary
>>>> and secondary master server.
>>>> And I have restarted the puppet processes on the secondary-master
>>>> server, but the error will continue on the nodes.
>>> Those are good steps. To really troubleshoot this thoroughly, however,
>>> I think you'll need to be systematic: capture the ENC output for each
>>> catalog request for a given node (or for all nodes), with timestamps and
>>> associated success / failure of catalog compilation. Compare the ENC
>>> output for successful catalog builds with that for failed builds and look
>>> for systematics.
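jcbollinger's suggestion above could be sketched roughly like this: wrap the ENC so every classification request is saved to a timestamped file for later comparison. `REAL_ENC` and `LOGDIR` are assumptions here; point them at your actual Foreman ENC script (often `/etc/puppet/node.rb`) and a writable log directory.

```shell
# capture_enc NODE: call the real ENC and tee its output to a timestamped
# file, so output from successful and failing runs can be diffed later.
capture_enc() {
  node="$1"
  stamp=$(date +%Y%m%dT%H%M%S)
  mkdir -p "$LOGDIR"
  # tee logs a copy but still hands Puppet exactly what the ENC produced
  "$REAL_ENC" "$node" | tee "$LOGDIR/${node}-${stamp}.yaml"
}
```

You would then point the master's `external_nodes` setting at a small script that calls this wrapper, and diff the saved YAML from successful catalog builds against the YAML from failed ones.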
>>> Either at the same time or separately, you should look into whether
>>> Puppet's environment cache has an impact here. Some of the behaviors you
>>> describe make me rather suspicious of this. Supposing that you are using
>>> directory environments, you should experiment both with disabling caching
>>> altogether (set the environment_timeout configuration option to 0 (its
>>> default)) and with caching indefinitely (set environment_timeout to
>>> "unlimited"). Note that Puppet recommends against using any other setting
>>> for that option. You could also try turning on the ignorecache option at
>>> the master.
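Concretely, the two settings suggested above would look something like this in puppet.conf (placing them in a `[master]` section is an assumption; adjust for your own layout):

```ini
# /etc/puppet/puppet.conf on each master -- test each setting separately
[master]
# 1) Disable environment caching entirely (the Puppet 3.x default):
environment_timeout = 0

# 2) Or, in a separate test, cache indefinitely and restart the master
#    whenever manifests change:
# environment_timeout = unlimited
```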
>> So I ran a `puppet agent -t` on one of the problem nodes against the
>> primary master puppet server (which was successful), and then afterwards
>> the secondary master puppet server (which produces the duplicate
>> declaration error for Class[Rsyslog]).
>> The size of both catalog files is exactly the same (I am referring to
>> this file:
>> /var/lib/puppet/client_data/catalog/server1.mydomain.example.com.json ).
>> The only difference inside the file is the order of items in the JSON.
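As an aside, key-order differences can be factored out before diffing the two catalog copies. A sketch (assumes python3 is on the masters; array element order is deliberately preserved, since ordering inside arrays can be meaningful):

```shell
# normalize_catalog FILE: recursively sort JSON object keys so that pure
# key-order differences between the two catalogs disappear from a diff.
normalize_catalog() {
  python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))' < "$1"
}

# Usage sketch:
#   normalize_catalog primary-copy.json   > primary.norm
#   normalize_catalog secondary-copy.json > secondary.norm
#   diff primary.norm secondary.norm
```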
> A duplicate declaration error is a parse (i.e. catalog building) error.
> If the master encounters one then it does not emit a catalog -- or if it
> did, it could not be one based on the failed catalog-building attempt. I'm
> not sure what you're looking at, but it does not have the significance you
> are attributing to it.
>> So the only difference I can see between the two puppet servers is the
>> order of the overall elements in the catalogs' json hashes and arrays.
>> Could this be a cause of the duplicate declaration error?
> No. Any catalog you have is the result of a catalog-building run in which
> no such error was produced.
> The appearance of duplicate declaration errors can be sensitive to the
> order of declarations in your manifests or ENC output, but a catalog does
> not directly inform about that.
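A minimal sketch of the order sensitivity mentioned above (generic Puppet, not saz-rsyslog's code):

```puppet
# Order A compiles: a later include of an already-declared class is allowed.
class { 'rsyslog': }
include rsyslog

# Order B fails: once include has declared the class, the resource-like
# declaration after it is a duplicate.
#   include rsyslog
#   class { 'rsyslog': }
# => "Duplicate declaration: Class[Rsyslog] is already declared ..."
```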
You were right: that catalog file
was from a successful build/run. There is no new catalog file when the
server1 node receives the duplicate declaration error.
I did some further testing, and I hope I made a little progress on
narrowing down the issue - perhaps this adds a clue:
1. Both rsyslog and rsyslog::client are assigned to the node server1 via a
class group in The Foreman
2. When I remove the rsyslog::client assignment, the server1 node can
perform a successful puppet update, without a duplicate declaration error
3. When I added the rsyslog::client assignment back into the group, I
received the duplicate declaration error
4. Then, one parameter at a time, I configured The Foreman to use the
"default value" (checked the "use default" box) of each parameter
configured for the node server1 via its override matcher
5. When all parameters are set to be "default value" for this node, even
though the class is still assigned to the node via The Foreman, and even
though the parameters are still set to a value for other nodes via The
Foreman, the puppet update runs successfully without a duplicate
declaration error on node server1
6. After this, going back and setting just one parameter in the
rsyslog::client class for this node brings back the duplicate declaration
error
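Step 6 would be consistent with the ENC output changing shape once a parameter override exists for the node. I don't have the actual node.rb output in front of me, so the YAML below is only a hypothetical sketch of the two shapes to look for when comparing captured ENC output (the server value is made up):

```yaml
# Shape seen when all parameters are at "use default":
classes:
  rsyslog:
  rsyslog::client:

# Shape once one parameter is overridden for the node:
classes:
  rsyslog:
  rsyslog::client:
    server: loghost.mydomain.example.com
```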
In The Foreman, I have a Puppet Class config group that I assign to the
group of hosts in this remote location. I assign this same config group for
every node in the network, even ones in the local network and other remote
networks that do not experience duplicate class errors. Let me call this
config group DefG1. The DefG1 config group has my default puppet module
classes defined in it, including the Rsyslog puppet module class.
However, I am including a subclass rsyslog::client in addition to the main
class rsyslog in this group DefG1, because I need to configure the rsyslog
client parameters (namely the remote syslog server host) for every node.
Now, including sub-classes alongside the main class is not an issue with
other puppet modules (for example, I also use both foreman-puppet and
foreman-puppet::config together without issue). Nor has this been an issue
for nodes performing puppet updates against the primary-master puppet
server and all the other secondary-master puppet servers (I have 3
secondary-masters, 1 local network, and 3 remote networks with
secondary-puppet servers).
I cannot configure the class parameters in The Foreman in the puppet
module's class for a node unless the puppet class is assigned to the node.
I can assign the class (1) directly to the node, or (2) to the node group
the node is in, or (3) to a class config group that I can assign to the
node or the node group. I have done #3, the config group, and I assign the
config group to the highest parent node group that all nodes in all
networks are members of.
So when any rsyslog::client class parameters are defined, I receive the
duplicate declaration error for nodes only in this one remote network.
Yesterday I was experiencing this duplicate declaration error in another
one of my remote networks, but it resolved itself magically within 24 hours
- no one touched The Foreman, and all I did was log in and monitor the
You received this message because you are subscribed to the Google Groups
"Puppet Users" group.