Now that I've gotten past my issues with the Q35 BIOS, I decided to try the 
hyperconverged install.  Setting up the storage was no problem, and the 
engine setup went fine until the point where it wanted to use that glusterfs 
storage.  Pretty quickly, I might add, I got this error:

2020-06-30 02:17:50,689-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f83f85b4278> kwargs
2020-06-30 02:17:51,428-0400 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : Add glusterfs storage domain'}
2020-06-30 02:17:51,428-0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Add glusterfs storage domain kwargs is_conditional:False
2020-06-30 02:17:51,429-0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Add glusterfs storage domain kwargs
2020-06-30 02:17:53,656-0400 DEBUG var changed: host "localhost" var "otopi_storage_domain_details_gluster" type "<class 'dict'>" value: "{
    "changed": false,
    "exception": "Traceback (most recent call last):
  File \"/tmp/ansible_ovirt_storage_domain_payload_jh42k_ip/ansible_ovirt_storage_domain_payload.zip/ansible/modules/cloud/ovirt/ovirt_storage_domain.py\", line 792, in main
  File \"/tmp/ansible_ovirt_storage_domain_payload_jh42k_ip/ansible_ovirt_storage_domain_payload.zip/ansible/module_utils/ovirt.py\", line 623, in create
    **kwargs
  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py\", line 26097, in add
    return self._internal_add(storage_domain, headers, query, wait)
  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add
    return future.wait() if wait else future
  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 55, in wait
    return self._code(response)
  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 229, in callback
    self._check_fault(response)
  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault
    self._raise_error(response, body)
  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error
    raise error\novirtsdk4.Error: Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400.\n",
    "failed": true,
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."
}"

The glusterfs CLI was working fine by that point, with no issues listing 
volumes, so I went through the node:6900/ovirt-engine forwarder to the 
appliance.  In the engine event log, right at the time of that error, there 
was a message that it was starting to update things, and a few seconds later 
a message that the volumes were available.
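
For what it's worth, the check on the node was nothing fancy, just the 
standard listing as root on the host:

    gluster volume list

and it came back with the volumes and no complaints.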

I am thinking this step needs a retry loop, or that something in the path 
isn't waiting for an internal operation to complete before issuing the 
Operation Failed error.
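
For example, a retry on the add-storage-domain task itself.  A rough sketch 
of what I mean, based on the ovirt_storage_domain module from the traceback 
(the variable names, retry count, and delay are placeholders, not what the 
role actually uses):

    - name: Add glusterfs storage domain
      ovirt_storage_domain:
        auth: "{{ ovirt_auth }}"
        name: "{{ STORAGE_DOMAIN_NAME }}"      # placeholder vars
        host: "{{ HOST_NAME }}"
        data_center: "{{ DATACENTER_NAME }}"
        glusterfs:
          address: "{{ STORAGE_DOMAIN_ADDR }}"
          path: "{{ STORAGE_DOMAIN_PATH }}"
      register: otopi_storage_domain_details_gluster
      # keep retrying while the engine is still refreshing its view of
      # the gluster volumes instead of failing on the first 400
      until: otopi_storage_domain_details_gluster is succeeded
      retries: 10    # placeholder count
      delay: 20      # seconds between attempts, also a placeholder

With until/retries the transient "Failed to fetch Gluster Volume List" would 
just be retried until the engine has finished its refresh, though having the 
engine side wait for that refresh to complete before answering would 
obviously be the cleaner fix.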

Regards,
Glenn Marcy