I've been stumbling over this, too. According to this:

http://docs.ansible.com/playbooks_roles.html#task-include-files-and-encouraging-reuse

and specifically, this:

[[[
Note

You can not conditionally path the location to an include file, like you 
can with vars_files. If you find yourself needing to do this, consider how 
you can restructure your playbook to be more class/role oriented.
]]]

dynamic includes like this can't be done. (The very helpful halberom on 
#ansible suggested this had been removed due to "security concerns".)

Let me explain why I was trying to do this, and why something like this 
would be useful. I've broken the following bits of the explanation down 
into small chunks; without that it's something of a wall of text.



I've got a bunch of development teams who are writing playbooks. They're 
each deploying multiple services which, taken as a whole, comprise a large 
and complex application.

(We may very likely be using multiple git repos to manage those playbooks, 
for administrative reasons such as managing the gerrit review workload.)



Each service component will likely be deployed into some kind of 
"container". Those containers might be Python venvs, or chroot 
environments, or docker containers, etc.

Now, I'd like my developer groups to be able to lean upon 
centrally-managed tasks to do things like "prepare a container", "install 
software X into container Y", and so on.

So, there's kind of a high-level operational/deployment API we're talking 
about, that development teams can lean on.

The use of task inclusion (with parameterised tasks that take an 
appropriate set of variables) looks ideal to achieve this.
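As a rough sketch of what that might look like (file names, hosts and 
parameters here are invented for illustration, not our real ones):

[[[
# deploy_service_a.yml - a developer-facing playbook leaning on
# centrally-managed task files
---
- hosts: service_a_hosts
  tasks:
    - name: prepare a container for service A
      include: central/prepare_container.yml container_name=svc_a

    - name: install service A into its container
      include: central/install_into_container.yml container_name=svc_a package=service-a
]]]

The developer-facing playbook never mentions venvs, chroots or docker; 
only the central task files know.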



The advantages of this are that:

- we then have a single place to update our choice of container technology 
  implementation;
- developer groups can write playbooks to that abstract API;
- vitally, we can make a change to the central implementation and all 
  developer-supplied playbooks inherit the new implementation;
- there's no dodging around like fools, trying some kind of staged-commit 
  morris-dance to get a change like that through the CI gating of, 
  potentially, multiple git repos.



So far, so reasonable, I think; it requires a bit of up-front planning 
(although that's not a bad thing) to understand what kinds of operations 
the lifecycle comprises - and it's one way to tackle the considerable 
complexity involved in something like this.



But, when it comes to an upgrade of an *already-installed* system, the 
situation is a little more interesting: in particular, we'll know about 
the version we were going _from_, and the version we're moving _to_. It's 
quite natural to write common playbooks that create and destroy 
containers, or deploy software into them, etc, as a set.



An upgrade, then, needs to pick an appropriate version of the central API 
implementation for different tasks, eg, as a quick sketch:

[[[
# This abstract playbook "delegates" to two separate concrete
# implementations! In practice, I'd expect to have high-level playbooks
# that wrap a single conceptual task like "do X"/"undo X".
---
- name: undeploy a container
  include: undeploy_v{{ versions.old }}.yml

- name: deploy a container
  include: deploy_v{{ versions.new }}.yml
]]]

By manipulating the variables versions.old and versions.new, we can 
potentially manage a change of container technology (I'm using this as a 
concrete example; there are others) without component development teams 
needing to touch their own playbook repos.
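(For concreteness, that versions dict could live somewhere centrally 
managed - file name and values illustrative:)

[[[
# group_vars/all - drive the upgrade of every component from one place
versions:
  old: "1.2"
  new: "2.0"
]]]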



The situation is still complex, but it's a manageable level of complexity.

However, without j2 expansion of the includes, we're left with this, which 
is something of a sprawling mess:

[[[
# A rather more clunky version of the abstract playbook:
# delegate out to the correct implementations by hand.
# (Versions quoted as strings - comparing them as bare floats would
# bite as soon as, say, 1.10 comes along.)
---
- name: undeploy a container, v1.1
  include: undeploy_v1.1.yml
  when: versions.old == "1.1"

- name: undeploy a container, v1.2
  include: undeploy_v1.2.yml
  when: versions.old == "1.2"

- name: undeploy a container, v2.0
  include: undeploy_v2.0.yml
  when: versions.old == "2.0"
]]]

This is rather more error-prone and laborious.



Now, the helpful note in the documentation suggests that I should be 
looking to restructure my playbooks to be more class/role oriented. I'm 
not clear what could be more class-oriented than supplying multiple 
concrete implementations of the lifecycle API; and achieving this through 
roles would seem to involve jumping through several mental hoops about 
what a particular role represents ("hosts that were running service X 
version Y, which needs to be removed from them" - yuck).
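For what it's worth, the closest role-shaped reading I can see (names 
again illustrative) just moves the dispatch problem inside the role:

[[[
# site.yml - a parameterised "container" role; tasks/main.yml inside
# the role would still need the same when: version == "..." ladder
# as the clunky example above
- hosts: app_servers
  roles:
    - { role: container, action: undeploy, version: "{{ versions.old }}" }
    - { role: container, action: deploy,   version: "{{ versions.new }}" }
]]]

which doesn't seem to buy anything over the when-ladder.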


Suggestions are very welcome!

Cheers,
jan


-- 
[email protected]  http://ioctl.org/jan/ Short, dark, ugly: pick any three
Goth is the new black.
