Thank you, Donald! Your and Strahil's suggested fix of disabling multipath for the NVMe drives was correct (a sketch of the blacklist I used follows the excerpt below). The Gluster deployment then progressed much further, but stalled at
    TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
with
"stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed."

Specifically:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["gluster", "volume", "heal", "engine", "granular-entry-heal", "enable"], "delta": "0:00:10.112451", "end": "2020-12-18 19:50:22.818741", "item": {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:12.706290", "stderr": "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': '/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["gluster", "volume", "heal", "data", "granular-entry-heal", "enable"], "delta": "0:00:10.110165", "end": "2020-12-18 19:50:38.260277", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", "volname": "data"}, "msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:28.150112", "stderr": "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore", "granular-entry-heal", "enable"], "delta": "0:00:10.113203", "end": "2020-12-18 19:50:53.767864", "item": {"arbiter": 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:43.654661", "stderr": "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
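
In case it helps whoever picks up the new thread: the failing step is just running "gluster volume heal <volname> granular-entry-heal enable" for each volume, so the brick state can be checked and the command retried by hand once everything is online. A rough sketch, using the volume names from the log above:

    # check that every brick reports Online: Y and that self-heal daemons are up
    gluster volume status engine
    gluster volume heal engine info

    # once all bricks are up, retry what the role attempted
    gluster volume heal engine granular-entry-heal enable

and likewise for the data and vmstore volumes.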

As this is a different issue, I will post a new thread.

Gratefully yours,
Charles