We have a use case similar to one I've seen in a couple of places, where
(let's say) a web server has a template that needs to contain information
about (let's say) all the app servers, drawn from their Ansible facts. If
you run 'ap -t web.yml', with one play to gather facts from everyone and
another to put the template on the web servers, that works great. But if
you add '--limit web-01', it fails, because --limit also restricts the
scope of the fact-gathering play.

The couple of places I've seen this suggest fact caching as a solution, so
that seems good. This is the first time we're using fact caching, so we
have some questions.
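
(For reference, here's roughly the caching setup we have in mind, using
the jsonfile backend; the path and timeout below are just placeholders:)

```ini
[defaults]
# smart: skip re-gathering for hosts that already have a fresh cache entry
gathering = smart
fact_caching = jsonfile
# placeholder path; any directory Ansible can write to
fact_caching_connection = /tmp/ansible_fact_cache
# cache entries older than this many seconds are considered stale
fact_caching_timeout = 86400
```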

One is to confirm that, for this to work with --limit, we have to gather
facts in a separate playbook run without --limit. (Especially if, say, our
hosts are somewhat dynamic and we want to make sure the cache is fresh.)
All the plays in a playbook are always affected by --limit, right?

(As an aside, being able to say 'ignore_limit: True' for a play would be a
really nice way to solve this, and possibly useful other times when you
want a limit to apply to most of a playbook but not to one play, for
whatever reason.)

So ok, a simple fact-gathering playbook, like this:

  - hosts: all
    tasks: []

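(For concreteness: the plan is to run that playbook without --limit to
refresh the cache, then run 'ap -t web.yml --limit web-01', with the
template pulling the other hosts' facts out of hostvars; something like
this, with 'appservers' standing in for whatever the real group is called:)

```jinja
{# hypothetical fragment of the web template: one line per app server #}
{% for host in groups['appservers'] %}
server {{ hostvars[host]['ansible_default_ipv4']['address'] }};
{% endfor %}
```
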
Two is to ask about a strange behavior we saw the first time we tried
this. One of our hosts said

  failed: [app-01] => {"failed": true, "parsed": false} 
  {"verbose_override": true, "changed": false, "ansible_facts": [* stuff *]
  OpenSSH_6.6.1, OpenSSL 1.0.1k-fips 8 Jan 2015^M 
  debug1: Reading configuration data /home/jsmift/.ssh/config^M 
  debug1: Reading configuration data /etc/ssh/ssh_config^M 
  debug1: auto-mux: Trying existing master^M 
  debug1: mux_client_request_session: master session id: 2^M 
  Shared connection to 172.16.1.1 closed.^M 
 
That happens sometimes, intermittent SSH glitch, whatever; but the curious
thing was that the cache for this host got written, with

  {"module_setup": true}

as the only thing in it. When I ran the fact-gathering playbook again, it
didn't hit any SSH errors for this host, but it also didn't update the
fact cache. Is that expected? Either way, is there a way to bust the
cache, other than a task to remove the cache files or something, so we can
at least detect the failure and act accordingly?
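
(In case it helps anyone else: here's the sort of cache-busting step we're
imagining, as a shell sketch. It assumes the jsonfile backend, and the
exact on-disk formatting of a half-written entry may vary by version, so
the grep pattern is a guess based on what we saw:)

```shell
# Sketch: remove cache entries that contain only {"module_setup": true},
# which is what a half-failed gather left behind for us.
prune_stale_facts() {
  dir=$1
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    # -x matches the whole line, so entries with real facts are untouched;
    # adjust the pattern to whatever your stale entries actually look like.
    if grep -qx '{"module_setup": true}' "$f"; then
      echo "removing stale cache entry: $f"
      rm -f "$f"
    fi
  done
}

# e.g. prune_stale_facts /tmp/ansible_fact_cache  # fact_caching_connection path
```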

                                      -Josh ([email protected])


-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.