Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-25 Thread Todd Lewis
I've been staring at this for quite a while. I was able to reproduce 
your issue, and I now understand what's going on.


Rather than trying to duplicate the "single vs. multiple ips on an 
interface", I just ran a playbook against two hosts (tango and 
cloister), and did


- name: Set is_leader for tango and cloister
  ansible.builtin.set_fact:
    is_leader: '{{ true if inventory_hostname == "tango" else false }}'

After that, I could play with ansible.builtin.add_host. And that's where 
the confusion starts. Everything works "as expected" except for these 
add_host tasks. So I went back and read the add_host documentation very 
closely. I had not appreciated this bit about the bypass_host_loop 
property:


   Forces a 'global' task that does not execute per host; this
   bypasses per-host templating and serial, throttle and other loop
   considerations.

   Conditionals will work as if run_once is being used; variables
   used will be from the first available host.

   This action will not work normally outside of lockstep strategies.

So you can't run ansible.builtin.add_host on all your hosts the same way 
you would, well, almost everything else. If you want to loop over all 
your hosts (which is the default behavior for most other modules), 
then you have to do it explicitly, like this:


- name: Display is_leader for all hosts
  ansible.builtin.debug:
    var: is_leader

- name: Show groups BEFORE add_host
  ansible.builtin.debug:
    msg: "{{ groups }}"

- name: Add hosts to either the nodes_prim or nodes_stby group
  ansible.builtin.add_host:
    name: "{{ item }}"
    groups:
      - "{{ 'nodes_prim' if hostvars[item].is_leader else 'nodes_stby' }}"
  loop: "{{ ansible_play_hosts }}"

- name: Show groups AFTER add_host
  ansible.builtin.debug:
    msg: "{{ groups }}"
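For concreteness, with tango as the only leader, the "AFTER" output should
include entries roughly like the following (a hand-written sketch, not
captured output; the real groups dict also contains all, ungrouped, and
any inventory groups):

```yaml
# Sketch of the expected in-memory groups after the looped add_host runs
nodes_prim:
  - tango
nodes_stby:
  - cloister
```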


On 4/23/24 6:28 PM, Todd Lewis wrote:
All that said, I don't see why this one task wouldn't work for your 
purposes:

 - name: Add hosts to groups
   ansible.builtin.add_host:
     name: "{{ inventory_hostname }}"
     groups:
       - nodes
       - "{{ 'nodes_prim' if is_active | d(false) else 'nodes_stby' }}"

On 4/23/24 5:40 PM, Todd Lewis wrote:
Hard to say. The job log you show doesn't match the tasks in the 
playbook you posted. They seem to be the same down to the "Debug 
is_leader value" task, but after that the task names in the log don't 
match those in the playbook.


In the playbook you posted, tasks "Add standby host" and "Add 
primary host" both add the hosts to the same group. That may be a 
source of your problem. But as we don't see the output for those 
tasks …, hmm.

—
Todd

On 4/23/24 4:03 PM, 'deewon' via Ansible Project wrote:

ansible version: 2.11.2
jinja version: 3.0.3
python version: 3.6.8

Hello all

I have a peculiar problem with a relatively simple playbook I have 
written. It basically checks 2 servers to determine which one has 
more than 1 IP and uses that to set a simple fact called "is_leader" 
for each node.


This fact is then used to build an in-memory inventory, using the 
add_host module to iterate through the servers based on the value of 
that fact. See below:


- name: Get Server Recycling Order
  hosts: "{{ my_servers | default ('localhost') }}"
  become: yes
  become_method: sudo
  become_user: root

  tasks:

    - name: Set Patching Sequence
      tags: always
      block:

         - name: Get IP addresses of eth0 using nmcli command
           command: nmcli -g ip4.address conn show "System eth0"
           register: nmcli_output
           changed_when: false


         - name: Get nmcli output
           debug:
             var: nmcli_output


         - name: Set is_leader variable based on number of IPs
           set_fact:
             is_leader: "{{ (nmcli_output.stdout.split('|') | length) > 1 }}"



         - name: Display if server is ACTIVE (1) or PASSIVE (0)
           debug:
             var: is_leader


         - name: Debug is_leader value
           debug:
              msg: "is_leader value for {{ inventory_hostname }} is {{ is_leader }}"



         - name: Add standby host
           add_host:
              name: "{{ inventory_hostname }}"
              groups: nodes
           when: not is_leader

         - name: Add primary host
           add_host:
              name: "{{ inventory_hostname }}"
              groups: nodes
           when: is_leader

#- name: Execute tasks on servers in a specified order
#  hosts: nodes
#  gather_facts: no
#  serial: 1
#  tasks:
#    - name: Run a command in a specified order
#      command: echo "Running on {{ inventory_hostname }}"
#      tags: always

The problem is that when this runs, the "is_leader" fact is ignored 
when the condition is false, and the standby node is not added to the 
inventory.
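As an aside, the is_leader expression simply counts pipe-separated fields
in nmcli's output. A slightly more defensive sketch of the same fact,
trimming whitespace and dropping empty fields (same logic, untested here):

```yaml
- name: Set is_leader variable based on number of IPs (sketch)
  set_fact:
    is_leader: "{{ nmcli_output.stdout.split('|') | map('trim') | select | list | length > 1 }}"
```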

Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-25 Thread 'deewon' via Ansible Project
Thanks Todd. My messages kept getting deleted, so I thought I'd try again.

I probably over-redacted the output of the playbook to show only the 
relevant bits. The full playbook is below:

== playbook ==

- name: Get Server Recycling Order
  hosts: "{{ my_servers | default ('localhost') }}"
  become: yes
  become_method: sudo
  become_user: root

  tasks:

    - name: Set Patching Sequence
      tags: always
      block:

         - name: Get IP addresses of eth0 using nmcli command
           command: nmcli -g ip4.address conn show "System eth0"
           register: nmcli_output
           changed_when: false


         - name: Get nmcli output
           debug:
             var: nmcli_output


         - name: Set is_leader variable based on number of IPs
           set_fact:
             is_leader: "{{ (nmcli_output.stdout.split('|') | length) > 1 }}"


         - name: Display if server is ACTIVE (1) or PASSIVE (0)
           debug:
             var: is_leader


         - name: Debug is_leader value
           debug:
             msg: "is_leader value for {{ inventory_hostname }} is {{ is_leader }}"


         - name: Re-confirm is_leader before adding to group
           debug:
             msg: "Preparing to add {{ inventory_hostname }} to stby group based on is_leader = {{ is_leader }}"
           when: is_leader is false

         - name: Add standby proxysql node
           add_host:
             name: "{{ inventory_hostname }}"
             groups: nodes_stby
           when: is_leader == False

         - name: Re-confirm is_leader before adding to group
           debug:
             msg: "Preparing to add {{ inventory_hostname }} to prim group based on is_leader = {{ is_leader }}"
           when: is_leader == True

         - name: Add primary proxysql node
           add_host:
             name: "{{ inventory_hostname }}"
             groups: nodes_prim
           when: is_leader == True

== output ==

PLAY [Get Server Recycling Order] 
**

TASK [Gathering Facts] 
*

ok: [node-2]
ok: [node-1]

TASK [Get IP addresses of eth0 using nmcli command] 


ok: [node-2] => {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],

}

STDOUT:

x.x.x.x/x
ok: [node-1] => {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],

}

STDOUT:

x.x.x.x/x | y.y.y.y/y

TASK [Get nmcli output] 

ok: [node-1] => {
"nmcli_output": {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],
"stderr": "",
"stderr_lines": [],
"stdout": "x.x.x.x/x | y.y.y.y/y",
"stdout_lines": [
"x.x.x.x/x | y.y.y.y/y"
]
}
}
ok: [node-2] => {
"nmcli_output": {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],
"stderr": "",
"stderr_lines": [],
"stdout": "x.x.x.x/x",
"stdout_lines": [
"x.x.x.x/x"
]
}
}

TASK [Set is_leader variable based on number of IPs] 
**
ok: [node-1] => {
"ansible_facts": {
"is_leader": true
},
"changed": false
}
ok: [node-2] => {
"ansible_facts": {
"is_leader": false
},
"changed": false
}

TASK [Display if server is ACTIVE (1) or PASSIVE (0)] 
**
ok: [node-1] => {
"is_leader": true
}
ok: [node-2] => {
"is_leader": false
}

TASK [Debug is_leader value] 
**
ok: [node-1] => {}

MSG:

is_leader value for node-1 is True
ok: [node-2] => {}

MSG:

is_leader value for node-2 is False

TASK [Re-confirm is_leader before adding to group] 

skipping: 



Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-25 Thread 'deewon' via Ansible Project
Testing if my email is still deleted without any code

On Tuesday, April 23, 2024 at 11:28:59 PM UTC+1 Todd Lewis wrote:

> All that said, I don't see why this one task wouldn't work for your 
> purposes:
>
> - name: Add hosts to groups
>   ansible.builtin.add_host:
>     name: "{{ inventory_hostname }}"
>     groups:
>       - nodes
>       - "{{ 'nodes_prim' if is_active | d(false) else 'nodes_stby' }}"

Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-25 Thread 'deewon' via Ansible Project
Hi Todd. I believe I responded to you, but it seems my response was deleted. 
I can post the full playbook and output if required. A lot of those debug 
steps were just to understand why the condition was being ignored when 
"is_leader" is set to false.

I'll incorporate the suggestion above to reduce the number of tasks 
required to do this. The goal was simply to add the hosts to an in-memory 
group and carry out some serial action in the order in which they were 
added.

Thanks

On Tuesday, April 23, 2024 at 11:28:59 PM UTC+1 Todd Lewis wrote:

All that said, I don't see why this one task wouldn't work for your 
purposes:
- name: Add hosts to groups
  ansible.builtin.add_host:
    name: "{{ inventory_hostname }}"
    groups:
      - nodes
      - "{{ 'nodes_prim' if is_active | d(false) else 'nodes_stby' }}"


Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-25 Thread 'deewon' via Ansible Project

Thanks for taking a look at this, Todd. Apologies for the messy output. I 
over-redacted in an attempt to post only the relevant information.

This is the full playbook followed by the output:

== playbook ==

- name: Get Server Recycling Order
  hosts: "{{ my_servers | default ('localhost') }}"
  become: yes
  become_method: sudo
  become_user: root

  tasks:

    - name: Set Patching Sequence
      tags: always
      block:

         - name: Get IP addresses of eth0 using nmcli command
           command: nmcli -g ip4.address conn show "System eth0"
           register: nmcli_output
           changed_when: false


         - name: Get nmcli output
           debug:
             var: nmcli_output


         - name: Set is_leader variable based on number of IPs
           set_fact:
             is_leader: "{{ (nmcli_output.stdout.split('|') | length) > 1 }}"


         - name: Display if server is ACTIVE (1) or PASSIVE (0)
           debug:
             var: is_leader


         - name: Debug is_leader value
           debug:
             msg: "is_leader value for {{ inventory_hostname }} is {{ is_leader }}"


         - name: Re-confirm is_leader before adding to group
           debug:
             msg: "Preparing to add {{ inventory_hostname }} to stby group based on is_leader = {{ is_leader }}"
           when: is_leader is false

         - name: Add standby proxysql node
           add_host:
             name: "{{ inventory_hostname }}"
             groups: nodes_stby
           when: is_leader == False

         - name: Re-confirm is_leader before adding to group
           debug:
             msg: "Preparing to add {{ inventory_hostname }} to prim group based on is_leader = {{ is_leader }}"
           when: is_leader == True

         - name: Add primary proxysql node
           add_host:
             name: "{{ inventory_hostname }}"
             groups: nodes_prim
           when: is_leader == True

#- name: Execute tasks on servers in a specified order
#  hosts: nodes
#  gather_facts: no
#  serial: 1
#  tasks:
#- name: Run a command in a specified order
#  command: echo "Running on {{ inventory_hostname }}"
#  tags: always


== Output ==

PLAY [Get Server Recycling Order] 
**

TASK [Gathering Facts] 
*
Tuesday 23 April 2024  19:46:14 + (0:00:00.084)   0:00:00.109 
*
ok: [node-2]
ok: [node-1]

TASK [Get IP addresses of eth0 using nmcli command] 

Tuesday 23 April 2024  19:46:29 + (0:00:15.167)   0:00:15.277 
*
ok: [node-2] => {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],
"delta": "0:00:00.026130",
"end": "2024-04-23 15:46:30.828281",
"rc": 0,
"start": "2024-04-23 15:46:30.802151"
}

STDOUT:

192.168.10.11/24
ok: [node-1] => {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],
"delta": "0:00:00.038040",
"end": "2024-04-23 15:46:30.893109",
"rc": 0,
"start": "2024-04-23 15:46:30.855069"
}

STDOUT:

192.168.10.10/24 | 192.168.4.1/32

TASK [Get nmcli output] 

Tuesday 23 April 2024  19:46:31 + (0:00:01.458)   0:00:16.736 
*
ok: [node-1] => {
"nmcli_output": {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],
"delta": "0:00:00.038040",
"end": "2024-04-23 15:46:30.893109",
"failed": false,
"msg": "",
"rc": 0,
"start": "2024-04-23 15:46:30.855069",
"stderr": "",
"stderr_lines": [],
"stdout": "192.168.10.10/24 | 192.168.4.1/32",
"stdout_lines": [
"192.168.10.10/24 | 192.168.4.1/32"
]
}
}
ok: [node-2] => {
"nmcli_output": {
"changed": false,
"cmd": [
"nmcli",
"-g",
"ip4.address",
"conn",
"show",
"System eth0"
],
"delta": "0:00:00.026130",
"end": "2024-04-23 15:46:30.828281",
"failed": false,
"msg": "",
"rc": 0,
"start": "2024-04-23 15:46:30.802151",
"stderr": "",
"stderr_lines": [],
"stdout": "192.168.10.11/24",
"stdout_lines": [

Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-23 Thread Todd Lewis
All that said, I don't see why this one task wouldn't work for your 
purposes:


- name: Add hosts to groups
  ansible.builtin.add_host:
    name: "{{ inventory_hostname }}"
    groups:
      - nodes
      - "{{ 'nodes_prim' if is_active | d(false) else 'nodes_stby' }}"
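If the task above needs to apply to every host in the play, an explicit
loop variant may be needed, since add_host bypasses the per-host loop
(a sketch, reusing the same is_active fact name from the task above):

```yaml
- name: Add hosts to groups (explicit loop)
  ansible.builtin.add_host:
    name: "{{ item }}"
    groups:
      - nodes
      - "{{ 'nodes_prim' if hostvars[item].is_active | d(false) else 'nodes_stby' }}"
  loop: "{{ ansible_play_hosts }}"
```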


On 4/23/24 5:40 PM, Todd Lewis wrote:
Hard to say. The job log you show doesn't match the tasks in the 
playbook you posted. They seem to be the same down to the "Debug 
is_leader value" task, but after that the task names in the log don't 
match those in the playbook.


In the playbook you posted, tasks "Add standby host" and "Add 
primary host" both add the hosts to the same group. That may be a 
source of your problem. But as we don't see the output for those tasks 
…, hmm.

—
Todd

On 4/23/24 4:03 PM, 'deewon' via Ansible Project wrote:

ansible version: 2.11.2
jinja version: 3.0.3
python version: 3.6.8

Hello all

I have a peculiar problem with a relatively simple playbook I have 
written. It basically checks 2 servers to determine which one has 
more than 1 IP and uses that to set a simple fact called "is_leader" 
for each node.


This fact is then used to build an in-memory inventory, using the 
add_host module to iterate through the servers based on the value of 
that fact. See below:


- name: Get Server Recycling Order
  hosts: "{{ my_servers | default ('localhost') }}"
  become: yes
  become_method: sudo
  become_user: root

  tasks:

    - name: Set Patching Sequence
      tags: always
      block:

         - name: Get IP addresses of eth0 using nmcli command
           command: nmcli -g ip4.address conn show "System eth0"
           register: nmcli_output
           changed_when: false


         - name: Get nmcli output
           debug:
             var: nmcli_output


         - name: Set is_leader variable based on number of IPs
           set_fact:
             is_leader: "{{ (nmcli_output.stdout.split('|') | length) > 1 }}"



         - name: Display if server is ACTIVE (1) or PASSIVE (0)
           debug:
             var: is_leader


         - name: Debug is_leader value
           debug:
              msg: "is_leader value for {{ inventory_hostname }} is {{ is_leader }}"



         - name: Add standby host
           add_host:
              name: "{{ inventory_hostname }}"
              groups: nodes
           when: not is_leader

         - name: Add primary host
           add_host:
              name: "{{ inventory_hostname }}"
              groups: nodes
           when: is_leader

#- name: Execute tasks on servers in a specified order
#  hosts: nodes
#  gather_facts: no
#  serial: 1
#  tasks:
#    - name: Run a command in a specified order
#      command: echo "Running on {{ inventory_hostname }}"
#      tags: always

The problem is that when this runs, the "is_leader" fact is ignored 
when the condition is false, and the standby node is not added to 
the inventory. The relevant extract from the execution is below:


 TASK [Set is_leader variable based on number of IPs] 
**

Tuesday 23 April 2024  19:46:31 + (0:00:00.079) 0:00:16.815 *
ok: [node-1] => {
    "ansible_facts": {
        "is_leader": true
    },
    "changed": false
}
ok: [node-2] => {
    "ansible_facts": {
        "is_leader": false
    },
    "changed": false
}

TASK [Display if server is ACTIVE (1) or PASSIVE (0)] 
**

Tuesday 23 April 2024  19:46:31 + (0:00:00.083) 0:00:16.898 *
ok: [node-1] => {
    "is_leader": true
}
ok: [node-2] => {
    "is_leader": false
}

TASK [Debug is_leader value] 
**

Tuesday 23 April 2024  19:46:31 + (0:00:00.072) 0:00:16.971 *
ok: [node-1] => {}

MSG:

is_leader value for node-1 is True
ok: [node-2] => {}

MSG:

is_leader value for node-2 is False

TASK [Re-confirm is_leader before adding to group] 


Tuesday 23 April 2024  19:46:31 + (0:00:00.074) 0:00:17.045 *
skipping: [node-1] => {}
ok: [node-2] => {}

MSG:

Preparing to add node-2 to stby group based on is_leader = False

TASK [Add standby proxysql node] 
***

Tuesday 23 April 2024  19:46:31 + (0:00:00.088) 0:00:17.134 *
skipping: [node-1] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [Re-confirm is_leader before adding to group] 

Re: [ansible-project] Unusual behaviour with set_fact and add_host

2024-04-23 Thread Todd Lewis
Hard to say. The job log you show doesn't match the tasks in the 
playbook you posted. They seem to be the same down to the "Debug 
is_leader value" task, but after that the task names in the log don't 
match those in the playbook.


In the playbook you posted, tasks "Add standby host" and "Add primary 
host" both add the hosts to the same group. That may be a source of your 
problem. But as we don't see the output for those tasks …, hmm.

—
Todd

On 4/23/24 4:03 PM, 'deewon' via Ansible Project wrote:

ansible version: 2.11.2
jinja version: 3.0.3
python version: 3.6.8

Hello all

I have a peculiar problem with a relatively simple playbook I have 
written. It basically checks 2 servers to determine which one has more 
than 1 IP and uses that to set a simple fact called "is_leader" for 
each node.


This fact is then used to build an in-memory inventory, using the 
add_host module to iterate through the servers based on the value of 
that fact. See below:


- name: Get Server Recycling Order
  hosts: "{{ my_servers | default ('localhost') }}"
  become: yes
  become_method: sudo
  become_user: root

  tasks:

    - name: Set Patching Sequence
      tags: always
      block:

         - name: Get IP addresses of eth0 using nmcli command
           command: nmcli -g ip4.address conn show "System eth0"
           register: nmcli_output
           changed_when: false


         - name: Get nmcli output
           debug:
             var: nmcli_output


         - name: Set is_leader variable based on number of IPs
           set_fact:
             is_leader: "{{ (nmcli_output.stdout.split('|') | length) > 1 }}"



         - name: Display if server is ACTIVE (1) or PASSIVE (0)
           debug:
             var: is_leader


         - name: Debug is_leader value
           debug:
              msg: "is_leader value for {{ inventory_hostname }} is {{ is_leader }}"



         - name: Add standby host
           add_host:
              name: "{{ inventory_hostname }}"
              groups: nodes
           when: not is_leader

         - name: Add primary host
           add_host:
              name: "{{ inventory_hostname }}"
              groups: nodes
           when: is_leader

#- name: Execute tasks on servers in a specified order
#  hosts: nodes
#  gather_facts: no
#  serial: 1
#  tasks:
#    - name: Run a command in a specified order
#      command: echo "Running on {{ inventory_hostname }}"
#      tags: always

The problem is that when this runs, the "is_leader" fact is ignored 
when the condition is false, and the standby node is not added to 
the inventory. The relevant extract from the execution is below:


 TASK [Set is_leader variable based on number of IPs] 
**

Tuesday 23 April 2024  19:46:31 + (0:00:00.079) 0:00:16.815 *
ok: [node-1] => {
    "ansible_facts": {
        "is_leader": true
    },
    "changed": false
}
ok: [node-2] => {
    "ansible_facts": {
        "is_leader": false
    },
    "changed": false
}

TASK [Display if server is ACTIVE (1) or PASSIVE (0)] 
**

Tuesday 23 April 2024  19:46:31 + (0:00:00.083) 0:00:16.898 *
ok: [node-1] => {
    "is_leader": true
}
ok: [node-2] => {
    "is_leader": false
}

TASK [Debug is_leader value] 
**

Tuesday 23 April 2024  19:46:31 + (0:00:00.072) 0:00:16.971 *
ok: [node-1] => {}

MSG:

is_leader value for node-1 is True
ok: [node-2] => {}

MSG:

is_leader value for node-2 is False

TASK [Re-confirm is_leader before adding to group] 


Tuesday 23 April 2024  19:46:31 + (0:00:00.074) 0:00:17.045 *
skipping: [node-1] => {}
ok: [node-2] => {}

MSG:

Preparing to add node-2 to stby group based on is_leader = False

TASK [Add standby proxysql node] 
***

Tuesday 23 April 2024  19:46:31 + (0:00:00.088) 0:00:17.134 *
skipping: [node-1] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [Re-confirm is_leader before adding to group] 


Tuesday 23 April 2024  19:46:31 + (0:00:00.036) 0:00:17.170 *
ok: [node-1] => {}

MSG:

Preparing to add node-1 to prim group based on is_leader = True
skipping: [node-2] => {}

TASK [Add primary proxysql node] 
***