I am creating a playbook that is used for multiple hosts. This might
not be the best way to do it, but I still want to figure it out. I have two
hosts right now: one is CentOS 7 and one is an Amazon EC2 instance. Obviously
one uses systemd and the other does not. All the tasks apply to both hosts
except for stopping firewalld:
---
- hosts: main
  user: root
  tasks:
    - name: Ensure firewalld is stopped
      systemd:
        name: firewalld
        state: stopped
        masked: yes
    - name: Disable SELinux
      selinux:
        state: disabled
    - name: Ensure we have latest updates
      yum:
        name: "*"
        state: latest
Once the play gets to the firewalld task, one host is "ok" and the other host
(Amazon EC2) fails, for obvious reasons. There are about 10 other tasks after
this one, which then only run on the local CentOS 7 server; the Amazon EC2
instance does not get included in any of them.
TASK [Ensure firewalld is stopped] *********************************************
ok: [ip of CentOS 7]
fatal: [ip of amazon ec2]: FAILED! => {"changed": false, "cmd": "None show
firewalld", "failed": true, "msg": "[Errno 2] No such file or directory",
"rc": 2}
Right after that you see this:
TASK [Disable SELinux] *********************************************************
ok: [ip of CentOS 7]
and of course nothing for the Amazon EC2 instance. How can I keep the
remaining tasks running on the failed host?
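For reference, one approach I'm considering (just a sketch, assuming fact
gathering is enabled, which is the default, so the `ansible_service_mgr` fact
is populated): guard the systemd-only task with a `when:` condition, so
non-systemd hosts are marked "skipping" instead of "fatal" and the rest of the
play still runs on them.

```yaml
---
- hosts: main
  user: root
  tasks:
    # Only run where the init system is systemd; other hosts skip this
    # task instead of failing, so the tasks below still run everywhere.
    - name: Ensure firewalld is stopped
      systemd:
        name: firewalld
        state: stopped
        masked: yes
      when: ansible_service_mgr == "systemd"
```

An alternative would be `ignore_errors: yes` on the task, but that still shows
the task as failed in the output, so the conditional seems cleaner.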
--
You received this message because you are subscribed to the Google Groups
"Ansible Project" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/ansible-project/017f9eec-9526-423b-a39f-5720a8ae9f46%40googlegroups.com.