Based on https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/main.yml the module used is `package`, but the strange thing is why Ansible doesn't detect python3 and dnf.
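As a sketch of the workaround discussed in this thread (the play below is illustrative, not the wizard's actual generated playbook): pinning `ansible_python_interpreter` to Python 3 makes `package` resolve to the dnf backend, and the failing task can also call the `dnf` module directly.

```yaml
# Hypothetical minimal play -- an assumption for illustration only.
# /usr/bin/python3 is an assumed path; confirm it with `which python3` on the hosts.
- hosts: hc_nodes
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    # With the interpreter pinned, `package` resolves to dnf on EL8 hosts.
    - name: Install lvm tools
      package:
        name: lvm2
        state: present
    # Equivalent explicit form that avoids the Python 2 yum dependency:
    - name: Install lvm tools via dnf
      dnf:
        name: lvm2
        state: present
```

Either approach should sidestep the "Python 2 yum module is needed" failure quoted below.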
As far as I remember, you can edit the play before running it. Maybe this will fix it:

1. Go to the command line and run: which python3
2. Set 'ansible_python_interpreter' to the value from the previous step.

Most probably you need to convert it to:

vars:
  ansible_python_interpreter: /full/path/to/python3 (or just python3)

Note that the variable 'ansible_python_interpreter' must be indented to the right with 2 spaces (no tabs allowed).

Best Regards,
Strahil Nikolov

On 15 July 2020 at 0:19:09 GMT+03:00, [email protected] wrote:
>Thank you Strahil. I think I edited the oVirt Node Cockpit Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed in your post and received the following new failure:
>
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
>
>Any further assistance is most appreciated!!!
>
>Respectfully,
>Charles
>
>---
>Gluster Deployment Ansible Playbook
>
>hc_nodes:
>  hosts:
>    fmov1n1.sn.dtcorp.com:
>      gluster_infra_volume_groups:
>        - vgname: gluster_vg_nvme0n1
>          pvname: /dev/mapper/vdo_nvme0n1
>        - vgname: gluster_vg_nvme2n1
>          pvname: /dev/mapper/vdo_nvme2n1
>        - vgname: gluster_vg_nvme1n1
>          pvname: /dev/mapper/vdo_nvme1n1
>      gluster_infra_mount_devices:
>        - path: /gluster_bricks/engine
>          lvname: gluster_lv_engine
>          vgname: gluster_vg_nvme0n1
>        - path: /gluster_bricks/data
>          lvname: gluster_lv_data
>          vgname: gluster_vg_nvme2n1
>        - path: /gluster_bricks/vmstore
>          lvname: gluster_lv_vmstore
>          vgname: gluster_vg_nvme1n1
>      gluster_infra_vdo:
>        - name: vdo_nvme0n1
>          device: /dev/nvme0n1
>          slabsize: 2G
>          logicalsize: 1000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>        - name: vdo_nvme2n1
>          device: /dev/nvme2n1
>          slabsize: 32G
>          logicalsize: 5000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>        - name: vdo_nvme1n1
>          device: /dev/nvme1n1
>          slabsize: 32G
>          logicalsize: 5000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>      blacklist_mpath_devices:
>        - nvme0n1
>        - nvme2n1
>        - nvme1n1
>      gluster_infra_thick_lvs:
>        - vgname: gluster_vg_nvme0n1
>          lvname: gluster_lv_engine
>          size: '100%PVS'
>      gluster_infra_thinpools:
>        - vgname: gluster_vg_nvme2n1
>          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
>          poolmetadatasize: 3G
>        - vgname: gluster_vg_nvme1n1
>          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
>          poolmetadatasize: 3G
>      gluster_infra_lv_logicalvols:
>        - vgname: gluster_vg_nvme2n1
>          thinpool: gluster_thinpool_gluster_vg_nvme2n1
>          lvname: gluster_lv_data
>          lvsize: '100%PVS'
>        - vgname: gluster_vg_nvme1n1
>          thinpool: gluster_thinpool_gluster_vg_nvme1n1
>          lvname: gluster_lv_vmstore
>          lvsize: '100%PVS'
>    fmov1n2.sn.dtcorp.com:
>      gluster_infra_volume_groups:
>        - vgname: gluster_vg_nvme0n1
>          pvname: /dev/mapper/vdo_nvme0n1
>        - vgname: gluster_vg_nvme2n1
>          pvname: /dev/mapper/vdo_nvme2n1
>        - vgname: gluster_vg_nvme1n1
>          pvname: /dev/mapper/vdo_nvme1n1
>      gluster_infra_mount_devices:
>        - path: /gluster_bricks/engine
>          lvname: gluster_lv_engine
>          vgname: gluster_vg_nvme0n1
>        - path: /gluster_bricks/data
>          lvname: gluster_lv_data
>          vgname: gluster_vg_nvme2n1
>        - path: /gluster_bricks/vmstore
>          lvname: gluster_lv_vmstore
>          vgname: gluster_vg_nvme1n1
>      gluster_infra_vdo:
>        - name: vdo_nvme0n1
>          device: /dev/nvme0n1
>          slabsize: 2G
>          logicalsize: 1000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>        - name: vdo_nvme2n1
>          device: /dev/nvme2n1
>          slabsize: 32G
>          logicalsize: 5000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>        - name: vdo_nvme1n1
>          device: /dev/nvme1n1
>          slabsize: 32G
>          logicalsize: 5000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>      blacklist_mpath_devices:
>        - nvme0n1
>        - nvme2n1
>        - nvme1n1
>      gluster_infra_thick_lvs:
>        - vgname: gluster_vg_nvme0n1
>          lvname: gluster_lv_engine
>          size: '100%PVS'
>      gluster_infra_thinpools:
>        - vgname: gluster_vg_nvme2n1
>          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
>          poolmetadatasize: 3G
>        - vgname: gluster_vg_nvme1n1
>          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
>          poolmetadatasize: 3G
>      gluster_infra_lv_logicalvols:
>        - vgname: gluster_vg_nvme2n1
>          thinpool: gluster_thinpool_gluster_vg_nvme2n1
>          lvname: gluster_lv_data
>          lvsize: '100%PVS'
>        - vgname: gluster_vg_nvme1n1
>          thinpool: gluster_thinpool_gluster_vg_nvme1n1
>          lvname: gluster_lv_vmstore
>          lvsize: '100%PVS'
>    fmov1n3.sn.dtcorp.com:
>      gluster_infra_volume_groups:
>        - vgname: gluster_vg_nvme0n1
>          pvname: /dev/mapper/vdo_nvme0n1
>        - vgname: gluster_vg_nvme2n1
>          pvname: /dev/mapper/vdo_nvme2n1
>        - vgname: gluster_vg_nvme1n1
>          pvname: /dev/mapper/vdo_nvme1n1
>      gluster_infra_mount_devices:
>        - path: /gluster_bricks/engine
>          lvname: gluster_lv_engine
>          vgname: gluster_vg_nvme0n1
>        - path: /gluster_bricks/data
>          lvname: gluster_lv_data
>          vgname: gluster_vg_nvme2n1
>        - path: /gluster_bricks/vmstore
>          lvname: gluster_lv_vmstore
>          vgname: gluster_vg_nvme1n1
>      gluster_infra_vdo:
>        - name: vdo_nvme0n1
>          device: /dev/nvme0n1
>          slabsize: 2G
>          logicalsize: 1000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>        - name: vdo_nvme2n1
>          device: /dev/nvme2n1
>          slabsize: 32G
>          logicalsize: 5000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>        - name: vdo_nvme1n1
>          device: /dev/nvme1n1
>          slabsize: 32G
>          logicalsize: 5000G
>          blockmapcachesize: 128M
>          emulate512: 'off'
>          writepolicy: auto
>          maxDiscardSize: 16M
>      blacklist_mpath_devices:
>        - nvme0n1
>        - nvme2n1
>        - nvme1n1
>      gluster_infra_thick_lvs:
>        - vgname: gluster_vg_nvme0n1
>          lvname: gluster_lv_engine
>          size: '100%PVS'
>      gluster_infra_thinpools:
>        - vgname: gluster_vg_nvme2n1
>          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
>          poolmetadatasize: 3G
>        - vgname: gluster_vg_nvme1n1
>          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
>          poolmetadatasize: 3G
>      gluster_infra_lv_logicalvols:
>        - vgname: gluster_vg_nvme2n1
>          thinpool: gluster_thinpool_gluster_vg_nvme2n1
>          lvname: gluster_lv_data
>          lvsize: '100%PVS'
>        - vgname: gluster_vg_nvme1n1
>          thinpool: gluster_thinpool_gluster_vg_nvme1n1
>          lvname: gluster_lv_vmstore
>          lvsize: '100%PVS'
>  vars:
>    gluster_infra_disktype: JBOD
>    gluster_set_selinux_labels: true
>    gluster_infra_fw_ports:
>      - 2049/tcp
>      - 54321/tcp
>      - 5900/tcp
>      - 5900-6923/tcp
>      - 5666/tcp
>      - 16514/tcp
>    gluster_infra_fw_permanent: true
>    gluster_infra_fw_state: enabled
>    gluster_infra_fw_zone: public
>    gluster_infra_fw_services:
>      - glusterfs
>    gluster_features_force_varlogsizecheck: false
>    cluster_nodes:
>      - fmov1n1.sn.dtcorp.com
>      - fmov1n2.sn.dtcorp.com
>      - fmov1n3.sn.dtcorp.com
>    gluster_features_hci_cluster: '{{ cluster_nodes }}'
>    gluster_features_hci_volumes:
>      - volname: engine
>        brick: /gluster_bricks/engine/engine
>        arbiter: 0
>      - volname: data
>        brick: /gluster_bricks/data/data
>        arbiter: 0
>      - volname: vmstore
>        brick: /gluster_bricks/vmstore/vmstore
>        arbiter: 0
>
>---
>/var/log/cockpit/ovirt-dashboard/gluster-deployment.log
>
>PLAY [Setup backend] ***********************************************************
>
>TASK [Gathering Facts] *********************************************************
>ok: [fmov1n1.sn.dtcorp.com]
>ok: [fmov1n2.sn.dtcorp.com]
>ok: [fmov1n3.sn.dtcorp.com]
>
>TASK [Check if valid hostnames are provided] ***********************************
>changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n1.sn.dtcorp.com)
>changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n2.sn.dtcorp.com)
>changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n3.sn.dtcorp.com)
>
>TASK [Check if provided hostnames are valid] ***********************************
>ok: [fmov1n1.sn.dtcorp.com] => {
>    "changed": false,
>    "msg": "All assertions passed"
>}
>ok: [fmov1n2.sn.dtcorp.com] => {
>    "changed": false,
>    "msg": "All assertions passed"
>}
>ok: [fmov1n3.sn.dtcorp.com] => {
>    "changed": false,
>    "msg": "All assertions passed"
>}
>
>TASK [Check if /var/log has enough disk space] *********************************
>skipping: [fmov1n1.sn.dtcorp.com]
>skipping: [fmov1n2.sn.dtcorp.com]
>skipping: [fmov1n3.sn.dtcorp.com]
>
>TASK [Check if the /var is greater than 15G] ***********************************
>skipping: [fmov1n1.sn.dtcorp.com]
>skipping: [fmov1n2.sn.dtcorp.com]
>skipping: [fmov1n3.sn.dtcorp.com]
>
>TASK [Check if disks have logical block size of 512B] **************************
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'})
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'})
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'})
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'})
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'})
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'})
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})
>
>TASK [Check if logical block size is 512 bytes] ********************************
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
>
>TASK [Get logical block size of VDO devices] ***********************************
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
>
>TASK [Check if logical block size is 512 bytes for VDO devices] ****************
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
>skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
>
>TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
>ok: [fmov1n3.sn.dtcorp.com]
>ok: [fmov1n2.sn.dtcorp.com]
>ok: [fmov1n1.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
>skipping: [fmov1n1.sn.dtcorp.com]
>skipping: [fmov1n2.sn.dtcorp.com]
>skipping: [fmov1n3.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
>changed: [fmov1n3.sn.dtcorp.com] => (item=2049/tcp)
>changed: [fmov1n2.sn.dtcorp.com] => (item=2049/tcp)
>changed: [fmov1n1.sn.dtcorp.com] => (item=2049/tcp)
>changed: [fmov1n3.sn.dtcorp.com] => (item=54321/tcp)
>changed: [fmov1n2.sn.dtcorp.com] => (item=54321/tcp)
>changed: [fmov1n1.sn.dtcorp.com] => (item=54321/tcp)
>changed: [fmov1n3.sn.dtcorp.com] => (item=5900/tcp)
>changed: [fmov1n2.sn.dtcorp.com] => (item=5900/tcp)
>changed: [fmov1n1.sn.dtcorp.com] => (item=5900/tcp)
>changed: [fmov1n3.sn.dtcorp.com] => (item=5900-6923/tcp)
>changed: [fmov1n2.sn.dtcorp.com] => (item=5900-6923/tcp)
>changed: [fmov1n1.sn.dtcorp.com] => (item=5900-6923/tcp)
>changed: [fmov1n3.sn.dtcorp.com] => (item=5666/tcp)
>changed: [fmov1n2.sn.dtcorp.com] => (item=5666/tcp)
>changed: [fmov1n1.sn.dtcorp.com] => (item=5666/tcp)
>changed: [fmov1n3.sn.dtcorp.com] => (item=16514/tcp)
>changed: [fmov1n2.sn.dtcorp.com] => (item=16514/tcp)
>changed: [fmov1n1.sn.dtcorp.com] => (item=16514/tcp)
>
>TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
>ok: [fmov1n3.sn.dtcorp.com] => (item=glusterfs)
>ok: [fmov1n2.sn.dtcorp.com] => (item=glusterfs)
>ok: [fmov1n1.sn.dtcorp.com] => (item=glusterfs)
>
>TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] ***
>ok: [fmov1n3.sn.dtcorp.com]
>ok: [fmov1n2.sn.dtcorp.com]
>ok: [fmov1n1.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] ***
>skipping: [fmov1n3.sn.dtcorp.com]
>changed: [fmov1n1.sn.dtcorp.com]
>changed: [fmov1n2.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] ***
>ok: [fmov1n3.sn.dtcorp.com]
>changed: [fmov1n1.sn.dtcorp.com]
>changed: [fmov1n2.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] ***
>changed: [fmov1n3.sn.dtcorp.com]
>changed: [fmov1n2.sn.dtcorp.com]
>changed: [fmov1n1.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] *********
>changed: [fmov1n3.sn.dtcorp.com] => (item=nvme0n1)
>changed: [fmov1n2.sn.dtcorp.com] => (item=nvme0n1)
>changed: [fmov1n1.sn.dtcorp.com] => (item=nvme0n1)
>changed: [fmov1n3.sn.dtcorp.com] => (item=nvme2n1)
>changed: [fmov1n2.sn.dtcorp.com] => (item=nvme2n1)
>changed: [fmov1n1.sn.dtcorp.com] => (item=nvme2n1)
>changed: [fmov1n3.sn.dtcorp.com] => (item=nvme1n1)
>changed: [fmov1n2.sn.dtcorp.com] => (item=nvme1n1)
>changed: [fmov1n1.sn.dtcorp.com] => (item=nvme1n1)
>
>TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] ***
>ok: [fmov1n3.sn.dtcorp.com]
>ok: [fmov1n2.sn.dtcorp.com]
>ok: [fmov1n1.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/backend_setup : Create blacklist template content] ***
>changed: [fmov1n3.sn.dtcorp.com]
>changed: [fmov1n2.sn.dtcorp.com]
>changed: [fmov1n1.sn.dtcorp.com]
>
>TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] ***
>changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020750025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.612051', 'end': '2020-07-14 21:06:36.623511', 'delta': '0:00:00.011460', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020750025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020530025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.674961', 'end': '2020-07-14 21:06:36.687875', 'delta': '0:00:00.012914', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020530025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020220025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.732721', 'end': '2020-07-14 21:06:36.744468', 'delta': '0:00:00.011747', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020220025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020730025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.411729', 'end': '2020-07-14 21:06:41.423305', 'delta': '0:00:00.011576', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020730025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020190025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.683414', 'end': '2020-07-14 21:06:41.695115', 'delta': '0:00:00.011701', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020190025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7007630025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.832021', 'end': '2020-07-14 21:06:41.844162', 'delta': '0:00:00.012141', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7007630025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020760025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.242072', 'end': '2020-07-14 21:06:46.253191', 'delta': '0:00:00.011119', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020760025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'})
>changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020690025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.697920', 'end': '2020-07-14 21:06:46.708944', 'delta': '0:00:00.011024', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020690025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'})
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/DRENK7NITNTAAVNSKHTEZ2GLVWU4O3PK/

