I have a question about using Ansible Tower isolated groups. I have the
following setup:
• Nodes 1-3 are in the “tower” group and are also the controller nodes for
nodes 4-5, which are in the isolated group “restricted.”
• I have two inventories: a “localhost” inventory with its instance group
set to “tower,” and a “restricted” inventory with its instance group set to
“restricted.”
• I then have two playbooks that run “hostname” on localhost (a minimal
sketch follows below). Playbook “non restricted hostname” uses the localhost
inventory and has its instance group set to “tower.” Playbook “restricted
hostname” uses the “restricted” inventory and has its instance group set to
“restricted.”
To put a finer point on it, my installation inventory looked something like
this:
[tower]
tower-1
tower-2
tower-3

[instance_group_restricted]
tower-1
tower-2
tower-3

[isolated_group_restricted]
tower-4
tower-5

[isolated_group_restricted:vars]
controller=restricted

[database]
tower-db
When I run the playbook “non restricted hostname,” it works fine: it runs
against node 1, 2, or 3. When I run the playbook “restricted hostname,” it
hangs. Tower itself appears to hang forever, but if I retrieve the job
through the API (the exact call is shown after the output below), it turns
out the job actually finished, but timed out. What’s really weird is that
it appears to have attempted to run the job on the controller node (node 1):
"result_stdout": "
PLAY [Prepare data, dispatch job in isolated environment.]
*********************
TASK [create a proot/bwrap temp dir (if necessary)]
****************************
fatal: [tower-1]: FAILED! => {\"changed\": false, \"cmd\": \"/usr/bin/rsync
--delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o
StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
--out-format=<<CHANGED>>%i %n%L /tmp/awx_proot_lHxuQd awx@tower-1:/tmp\",
\"failed\": true, \"msg\": \"Warning: Permanently added
'tower-1,192.168.1.108' (ECDSA) to the list of known hosts.\\r\\nPermission
denied (publickey,gssapi-keyex,gssapi-with-mic).\\r\\nrsync: connection
unexpectedly closed (0 bytes received so far) [sender]\\nrsync error:
unexplained error (code 255) at io.c(605) [sender=3.0.9]\\n\", \"rc\": 255}
PLAY RECAP
*********************************************************************
tower-1 : ok=0 changed=0 unreachable=0 failed=1
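For completeness, this is roughly how I retrieved the job record above (the
host, credentials, and job id are placeholders):

curl -sk -u admin:password https://tower.example.com/api/v1/jobs/42/ | python -m json.tool

The “result_stdout” quoted above is from that response.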
What I expected was for one of the nodes in the controller group (nodes
1-3) to pick up the job and dispatch it to one of the isolated nodes (nodes
4-5). Instead, node 1 picked up the job and tried to send it to itself.
Have I set something up wrong?
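In case it helps, I can also list what Tower thinks the instance and
isolated groups look like; on my install this endpoint is under the v2 API
(host and credentials are placeholders again):

curl -sk -u admin:password https://tower.example.com/api/v2/instance_groups/ | python -m json.tool

Happy to post that output if it would be useful.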