TL;DR: I have started analyzing this issue and think I understand the root cause. I have created a simple patch which fixes the issue for me.
Details
Preparation steps
First, let me add an even shorter reproducer which works using only puppet apply:
/tmp/test.pp:

file { '/tmp/target':
  ensure  => 'directory',
  source  => '/tmp/source',
  recurse => true,
  tag     => 'foo',
} ~>
exec { '/bin/echo "changed"':
  refreshonly => true,
  tag         => 'foo',
}
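For completeness (the steps below do not spell this out): /tmp/source must exist beforehand, and the debug output further down implies /tmp/target had already been populated by a previous run. A plausible setup would be:

$ mkdir -p /tmp/source
$ puppet apply /tmp/test.pp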
Current behaviour
$ date > /tmp/source/date
$ puppet apply test.pp --tags foo |& grep Exec
# No output
Expected behaviour
$ date > /tmp/source/date
$ puppet apply test.pp --tags foo |& grep Exec
Notice: /Stage[main]/Main/Exec[/bin/echo "changed"]: Triggered 'refresh' from 1 events
Analysis
Event propagation works properly. This is confirmed by running puppet apply without --tags:
$ puppet apply /tmp/test.pp |& grep Exec
Notice: /Stage[main]/Main/Exec[/bin/echo "changed"]: Triggered 'refresh' from 1 events
When running with --debug, something looks suspicious:
$ date > /tmp/source/date
$ puppet apply test.pp --debug --tags foo |& grep /tmp/target
Debug: Adding relationship from File[/tmp/target] to Exec[/bin/echo "changed"] with 'notify'
Debug: /Stage[main]/Main/File[/tmp/target]/notify: subscribes to Exec[/bin/echo "changed"]
Info: Computing checksum on file /tmp/target/date
Info: /Stage[main]/Main/File[/tmp/target/date]: Filebucketed /tmp/target/date to puppet with sum 56b9d3e3936f73efc18618b440ef1302
Notice: /Stage[main]/Main/File[/tmp/target/date]/content: content changed '{md5}56b9d3e3936f73efc18618b440ef1302' to '{md5}ad4f97525ef4db3e5b22a5cd9d907c69'
Debug: /Stage[main]/Main/File[/tmp/target/date]: The container /tmp/target will propagate my refresh event
Debug: /tmp/target: Not tagged with foo
Debug: /tmp/target: Resource is being skipped, unscheduling all events
Info: /tmp/target: Unscheduling all events on /tmp/target
One of the last Debug lines says that the /tmp/target resource is not tagged with the expected tag. By adding debug code in several places, I was able to confirm that the File[/tmp/target] resource does have the necessary tag, as do all generated resources (File[/tmp/target/date] in this case). The Debug line in question does not refer to the File resource at all; instead, it seems to refer to a "whit" resource with this name. That instance does indeed seem to lack the relevant tag.
To my understanding, the code that integrates generated resources (transaction/additional_resource_generator.rb) modifies the relationship graph: it connects the generated resources (File[/tmp/target/date]) and the generating resource (File[/tmp/target]) through an internal-only intermediate node called a "whit". However, the whit does not get the generating resource's tags copied onto it. Therefore, a puppet apply/agent run with --tags set breaks this chain, because the untagged whit is excluded from the run (transaction.rb: skip?).
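To illustrate the effect (this is only a conceptual sketch with made-up tag sets, not Puppet's actual skip logic): when --tags is given, any vertex in the relationship graph whose tags do not intersect the requested set is skipped and its queued events are discarded, so an untagged whit sitting between the generated child and the subscribed Exec swallows the refresh:

require 'set'

# Hypothetical model of the chain seen in the debug output above.
requested_tags = Set['foo']

chain = [
  { name: 'File[/tmp/target/date]',      tags: Set['foo', 'file'] }, # generated child resource
  { name: 'Whit[completed_/tmp/target]', tags: Set[] },              # sentinel without copied tags
  { name: 'Exec[/bin/echo "changed"]',   tags: Set['foo', 'exec'] },
]

chain.each do |vertex|
  if requested_tags.disjoint?(vertex[:tags])
    puts "#{vertex[:name]}: not tagged, skipped, unscheduling all events"
    break # the refresh event never reaches the remaining vertices
  end
  puts "#{vertex[:name]}: in scope, propagates refresh"
end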
Fix
The obvious fix seems to be to copy the tags of the generating resource (File[/tmp/target]) onto the whit. I have done exactly that and it seems to work. However, I do not know what unwanted side effects this may have.
Patch against 4.7.0-1 (Arch Linux)
transaction/additional_resource_generator.rb:

--- transaction/additional_resource_generator.rb.orig 2016-11-13 00:51:22.087080025 +0100
+++ transaction/additional_resource_generator.rb 2016-11-13 00:51:25.953746592 +0100
@@ -74,6 +74,9 @@
 
   def contain_generated_resources_in(resource, made)
     sentinel = Puppet::Type.type(:whit).new(:name => "completed_#{resource.title}", :catalog => resource.catalog)
+    if resource.respond_to?(:tags)
+      sentinel.tags = resource.tags
+    end
     priority = @prioritizer.generate_priority_contained_in(resource, sentinel)
     @relationship_graph.add_vertex(sentinel, priority)
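With the patch applied, the reproducer above gives me the expected refresh (same commands as under "Expected behaviour"):

$ date > /tmp/source/date
$ puppet apply test.pp --tags foo |& grep Exec
Notice: /Stage[main]/Main/Exec[/bin/echo "changed"]: Triggered 'refresh' from 1 events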
Disclaimer: While I have now spent some time reading Puppet code, I have no prior experience with Puppet internals, so I may be wrong in some or all of these explanations. Nevertheless, I hope this information aids in creating an official fix.
It would be great if you were able to review the information in this comment and the attached patch in the near future.