Hi everybody!

I am having a hard time getting oVirt 4.4 to work. We want to upgrade our 4.3 cluster, so I am trying to set up a fresh 4.4 cluster (and restore the backup onto it later) as a stepping stone to 4.5. It fails at the very end of the engine deployment, when the Gluster storage domain should be added.

I installed oVirt Node 4.4.10 on an old PC and made the following modifications to the engine deployment process:

- altered defaults in /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml:
  - "he_pause_before_engine_setup: true" (during this pause before engine setup, I ssh into the engine and exclude the package postgresql-jdbc from updates, which otherwise breaks the deployment [1])
  - "he_remove_appliance_rpm: false" (to avoid the large appliance download on every attempt, and I made a lot of attempts)
  - "he_force_ip4: true" (to avoid problems with IPv6, see below)

- in /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/fetch_host_ip.yml, I added the following lines after the task "Get host address resolution" (to work around an "invalid" IPv6 address that otherwise breaks the deployment [2]):
    - name: Get host IP addresses
      ansible.builtin.command: hostname -I
      register: hostname_addresses_output
      changed_when: true
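
In summary, the overridden defaults look like this (a sketch of the relevant lines in main.yml; the comments are mine):

```yaml
# /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml
he_pause_before_engine_setup: true   # pause so I can ssh in and exclude postgresql-jdbc from updates [1]
he_remove_appliance_rpm: false       # keep the appliance RPM between attempts
he_force_ip4: true                   # avoid the IPv6 problem described in [2]
```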

Most of the time I started the deployment from the shell, but I tried the node's web interface as well. It fails at the task "Add glusterfs storage domain" with the following message:

"[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400." (See also [3])

When the setup asks for the storage connection path, I tried different answers (gluster.local:/volume, gluster.local:/path/to/brick/volume, 192.168.8.51:/volume, ...), with no mount options.

I added firewall rules for glusterfs on both the node and the engine, and I even tried disabling the firewall entirely. No firewall is running on the Gluster servers. On the node, I also tested setting SELinux to permissive.
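
For reference, this is roughly what I did on the node and the engine (assuming the predefined firewalld "glusterfs" service is available, as it is on recent CentOS-based oVirt nodes):

```shell
# Open the GlusterFS ports (glusterd on 24007/tcp plus the brick ports)
# via the predefined firewalld service:
firewall-cmd --permanent --add-service=glusterfs
firewall-cmd --reload

# For one test run I also disabled the firewall entirely:
systemctl stop firewalld
```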

I recorded the traffic on different interfaces ("ovirtmgmt" and "virbr0" on the node, "eth0" on the engine) and can see the node and the Gluster server talking: the node fetches the volume info with its options (which are, by the way, compliant with the docs: "storage.owner-uid: 36", "storage.owner-gid: 36", etc.), but that's it; there are no further packets to mount the volume.

I noticed some ARP packets as well: the node asks for the engine's IP (the configured static IP, which is not yet active). And the engine sends a DNS request for the Gluster server to the node (via the interface virbr0) but never connects to the Gluster server. At least, that's what I can see; most of the traffic is TLS, which I couldn't decrypt yet. I'd appreciate any hint on where to find the right keys.

Anyway, I can ssh from the engine to the Gluster server and mount the Gluster volume manually on the node (mount -t glusterfs gluster.local:/volume /local/path), so there seem to be no connectivity issues.
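
Judging from the cli.log excerpt below ("Error outputting to xml"), VDSM seems to query the volume list through the Gluster CLI's XML output rather than by mounting. The exact invocation below is my guess, not taken from VDSM's source, but a check along these lines can be run by hand on the node:

```shell
# Plain volume info against the remote glusterd (this part seems to work,
# since the node does receive the volume options):
gluster --remote-host=gluster1.martinwi.local volume info gv3

# The XML variant, which is apparently what fails with "Error outputting
# to xml" in cli.log; a version mismatch between the gluster CLI on the
# node (8.6, per cli.log) and glusterd on the server could break this:
gluster --remote-host=gluster1.martinwi.local volume info gv3 --xml
```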

Since the engine deployment log is around 30 MB, I attached a log summary with the findings I considered relevant. I'll provide more logs if needed.

I really want to put an end to this huge timesink. Can anyone help me or point me in the right direction?

Many thanks in advance :)

Regards,
Niko


[1] This was the error message I got:
"[ ERROR ] fatal: [localhost -> 192.168.222.195]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content": "Error500 - Internal Server Error", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8", "date": "Wed, 17 May 2023 22:42:27 GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1 mod_wsgi/4.6.4 Python/3.6", "status": 500, "url": "http://localhost/ovirt-engine/services/health"}";

[2] This was the error message I got:
"VDSM ovirt.martinwi.local command HostSetupNetworksVDS failed: Internal JSON-RPC error: {'reason': "Invalid IP address: 'fe80::ea3f:67ff:fe7f:a029%ovirtmgmt' does not appear to be an IPv4 or IPv6 address"}"

[3] /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230718140628-rryscj.log: 2023-07-18 16:32:35,877+0200 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:106 {'msg': 'Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.', 'exception': 'Traceback (most recent call last):\n File "/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 804, in main\n File "/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py", line 674, in create\n **kwargs\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 26258, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n return future.wait() if wait else future\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n return self._code(response)\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback\n self._check_fault(response)\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n self._raise_error(response, body)\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n raise error\novirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". 
HTTP response code is 400.\n', 'invocation': {'module_args': {'state': 'unattached', 'name': 'hosted_storage', 'host': 'ovirt.martinwi.local', 'data_center': 'Default', 'wait': True, 'glusterfs': {'address': 'gluster1.martinwi.local', 'path': '/gv3', 'mount_options': ''}, 'timeout': 180, 'poll_interval': 3, 'fetch_nested': False, 'nested_attributes': [], 'domain_function': 'data', 'id': None, 'description': None, 'comment': None, 'localfs': None, 'nfs': None, 'iscsi': None, 'managed_block_storage': None, 'posixfs': None, 'fcp': None, 'wipe_after_delete': None, 'backup': None, 'critical_space_action_blocker': None, 'warning_low_space': None, 'destroy': None, 'format': None, 'discard_after_delete': None}}, '_ansible_no_log': False, 'changed': False}
logs/messages:4084:

Jul 18 16:32:35 ovirt platform-python[61382]: ansible-ovirt_storage_domain 
Invoked with state=unattached name=hosted_storage host=ovirt.martinwi.local 
data_center=Default wait=True glusterfs={'address': 'gluster1.martinwi.local', 
'path': '/gv3', 'mount_options': ''} auth={'token': 
'jEcuy5sqns7AdJSsODnXZYiFG0tRHKjD-Mq3R5Vuwzp_bQK-s83a-rG7Ot3p2KdlfhBNb2EwXje7vHPldIX3Og',
 'url': 'https://ovirt-engine.martinwi.local/ovirt-engine/api', 'ca_file': 
None, 'insecure': True, 'timeout': 0, 'compress': True, 'kerberos': False, 
'headers': None, 'hostname': None, 'username': None, 'password': None} 
timeout=180 poll_interval=3 fetch_nested=False nested_attributes=[] 
domain_function=data id=None description=None comment=None localfs=None 
nfs=None iscsi=None managed_block_storage=None posixfs=None fcp=None 
wipe_after_delete=None backup=None critical_space_action_blocker=None 
warning_low_space=None destroy=None format=None discard_after_delete=None
Jul 18 16:32:35 ovirt dnsmasq[14219]: nameserver 192.168.8.55 refused to do a 
recursive query

----------------------------------------------------------------------------------------

logs/glusterfs/cli.log:1:

[2023-07-18 14:32:35.546006] I [cli.c:846:main] 0-cli: Started running 
/usr/sbin/gluster with version 8.6
[2023-07-18 14:32:35.546068] I [cli.c:722:cli_rpc_init] 0-cli: Connecting to 
remote glusterd at gluster1.martinwi.local
[2023-07-18 14:32:35.628879] I [MSGID: 101190] 
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index [{index=0}] 
[2023-07-18 14:32:35.628955] I [MSGID: 101190] 
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index [{index=1}] 
[2023-07-18 14:32:35.630925] I [cli-rpc-ops.c:756:gf_cli_get_volume_cbk] 0-cli: 
Received resp to get vol: 0
[2023-07-18 14:32:35.631026] E [cli-rpc-ops.c:825:gf_cli_get_volume_cbk] 0-cli: 
Error outputting to xml
[2023-07-18 14:32:35.631083] I [input.c:31:cli_batch] 0-: Exiting with: -2

----------------------------------------------------------------------------------------

logs/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_storage_domain-20230718163226-puca3q.log:2488:

2023-07-18 16:32:35,009+0200 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Add glusterfs storage 
domain'}
2023-07-18 16:32:35,009+0200 DEBUG ansible on_any args TASK: 
ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain  kwargs 
is_conditional:False 
2023-07-18 16:32:35,009+0200 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain  kwargs 
2023-07-18 16:32:35,866+0200 DEBUG var changed: host "localhost" var 
"ansible_play_hosts" type "<class 'list'>" value: "[]"
2023-07-18 16:32:35,867+0200 DEBUG var changed: host "localhost" var 
"ansible_play_batch" type "<class 'list'>" value: "[]"
2023-07-18 16:32:35,867+0200 DEBUG var changed: host "localhost" var 
"play_hosts" type "<class 'list'>" value: "[]"
2023-07-18 16:32:35,867+0200 ERROR ansible failed {
    "ansible_host": "localhost",
    "ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
    "ansible_result": {
        "_ansible_no_log": false,
        "changed": false,
        "exception": "Traceback (most recent call last):\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\",
 line 804, in main\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py\",
 line 674, in create\n    **kwargs\n  File 
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py\", line 26258, in 
add\n    return self._internal_add(storage_domain, headers, query, wait)\n  
File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 232, in 
_internal_add\n    return future.wait() if wait else future\n  File 
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 55, in wait\n 
   return self._code(response)\n  File 
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 229, in 
callback\n    self._check_fault(response)\n  File 
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 132, in 
_check_fault\n    self._raise_error(response, body)\n  File 
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 118, in 
_raise_error\n    raise error\novirtsdk4.Error: Fault reason is \"Operation 
Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP 
response code is 400.\n",
        "invocation": {
            "module_args": {
                "backup": null,
                "comment": null,
                "critical_space_action_blocker": null,
                "data_center": "Default",
                "description": null,
                "destroy": null,
                "discard_after_delete": null,
                "domain_function": "data",
                "fcp": null,
                "fetch_nested": false,
                "format": null,
                "glusterfs": {
                    "address": "gluster1.martinwi.local",
                    "mount_options": "",
                    "path": "/gv3"
                },
                "host": "ovirt.martinwi.local",
                "id": null,
                "iscsi": null,
                "localfs": null,
                "managed_block_storage": null,
                "name": "hosted_storage",
                "nested_attributes": [],
                "nfs": null,
                "poll_interval": 3,
                "posixfs": null,
                "state": "unattached",
                "timeout": 180,
                "wait": true,
                "warning_low_space": null,
                "wipe_after_delete": null
            }
        },
        "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed 
to fetch Gluster Volume List]\". HTTP response code is 400."
    },
    "ansible_task": "Add glusterfs storage domain",
    "ansible_type": "task",
    "status": "FAILED",
    "task_duration": 0
}
2023-07-18 16:32:35,867+0200 DEBUG ansible on_any args 
<ansible.executor.task_result.TaskResult object at 0x7fa120d8ffd0>  kwargs 
ignore_errors:None 
2023-07-18 16:32:35,869+0200 INFO ansible stats {
    "ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
    "ansible_playbook_duration": "00:08 Minutes",
    "ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 11, 
'failures': 1, 'unreachable': 0, 'changed': 0, 'skipped': 3, 'rescued': 0, 
'ignored': 0}}",
    "ansible_type": "finish",
    "status": "FAILED"
}

----------------------------------------------------------------------------------------

logs/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230718140628-rryscj.log:161421:

2023-07-18 16:08:30,233+0200 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:159 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2023-07-18 16:08:30,233+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND                 Please specify the storage 
you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: 
2023-07-18 16:32:10,230+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:RECEIVE    glusterfs
2023-07-18 16:32:10,230+0200 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:159 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
2023-07-18 16:32:10,230+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND                 Please specify the full 
shared storage connection path to use (example: host:/path): 
2023-07-18 16:32:25,037+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:RECEIVE    gluster1.martinwi.local:/gv3
2023-07-18 16:32:25,038+0200 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:159 query OVEHOSTED_STORAGE_DOMAIN_MNT_OPTIONS
2023-07-18 16:32:25,038+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND                 If needed, specify 
additional mount options for the connection to the hosted-engine storagedomain 
(example: rsize=32768,wsize=32768) []: 
2023-07-18 16:32:26,143+0200 INFO 
otopi.plugins.gr_he_ansiblesetup.core.storage_domain 
storage_domain._closeup:793 Creating Storage Domain
2023-07-18 16:32:26,144+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:195 
ansible-playbook: cmd: ['/bin/ansible-playbook', 
'--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', 
'--inventory=localhost,', '--extra-vars=@/tmp/tmpx3638_rw', 
'--tags=create_storage_domain', '--skip-tags=always', 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml']
2023-07-18 16:32:26,144+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:196 
ansible-playbook: out_path: /tmp/tmp5rnu3hv2
2023-07-18 16:32:26,144+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:197 
ansible-playbook: vars_path: /tmp/tmpx3638_rw
2023-07-18 16:32:26,145+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:198 
ansible-playbook: env: {'LS_COLORS': 
'rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=01;05;37;41:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;40:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.zst=38;5;9:*.tzst=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.wim=38;5;9:*.swm=38;5;9:*.dwm=38;5;9:*.esd=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.mjpg=38;5;13:*.mjpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.m4a=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.oga=38;5;45:*.opus=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:',
 'SSH_CONNECTION': '192.168.8.131 41474 192.168.8.50 22', 'LANG': 
'de_DE.UTF-8', 'HISTCONTROL': 'ignoredups', 'GUESTFISH_RESTORE': '\\e[0m', 
'HOSTNAME': 'ovirt.martinwi.local', 'GUESTFISH_INIT': '\\e[1;34m', 'S_COLORS': 
'auto', 'which_declare': 'declare -f', 'XDG_SESSION_ID': '1', 'USER': 'root', 
'GUESTFISH_PS1': '\\[\\e[1;32m\\]><fs>\\[\\e[0;31m\\] ', 
'SELINUX_ROLE_REQUESTED': '', 'PWD': '/root', 'HOME': '/root', 'SSH_CLIENT': 
'192.168.8.131 41474 22', 'SELINUX_LEVEL_REQUESTED': '', 'SSH_TTY': 
'/dev/pts/0', 'MAIL': '/var/spool/mail/root', 'SHELL': '/bin/bash', 'TERM': 
'xterm-256color', 'SELINUX_USE_CURRENT_RANGE': '', 'SHLVL': '1', 'PYTHONPATH': 
'/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'LOGNAME': 'root', 
'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/0/bus', 'XDG_RUNTIME_DIR': 
'/run/user/0', 'PATH': 
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin', 
'GUESTFISH_OUTPUT': '\\e[0m', 'HISTSIZE': '1000', 'LESSOPEN': 
'||/usr/bin/lesspipe.sh %s', 'BASH_FUNC_which%%': '() {  ( alias;\n eval 
${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions 
--show-tilde --show-dot $@\n}', 'LC_CTYPE': 'C.UTF-8', 'OTOPI_EXECDIR': 
'/root', 'OTOPI_LOGFILE': 
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230718140628-rryscj.log',
 'LC_ALL': 'C.UTF-8', 'OTOPI_CALLBACK_OF': '/tmp/tmp5rnu3hv2', 
'ANSIBLE_CALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger', 
'ANSIBLE_STDOUT_CALLBACK': '1_otopi_json', 'HE_ANSIBLE_LOG_PATH': 
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_storage_domain-20230718163226-puca3q.log'}
2023-07-18 16:32:27,151+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:106 
PLAY [Trigger ovirt.hosted_engine_setup role]
2023-07-18 16:32:27,351+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : 
Execute just a specific set of steps]
2023-07-18 16:32:27,452+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:27,853+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Force 
facts gathering]
2023-07-18 16:32:28,955+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:29,056+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Wait 
for the storage interface to be up]
2023-07-18 16:32:29,256+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 skipping: [localhost]
2023-07-18 16:32:29,357+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Check 
local VM dir stat]
2023-07-18 16:32:29,858+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:29,959+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : 
Enforce local VM dir existence]
2023-07-18 16:32:30,159+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 skipping: [localhost]
2023-07-18 16:32:30,260+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : 
include_tasks]
2023-07-18 16:32:30,461+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:30,661+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : 
Obtain SSO token using username/**FILTERED** credentials]
2023-07-18 16:32:31,263+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:31,463+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Fetch 
host facts]
2023-07-18 16:32:32,065+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:32,165+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Fetch 
cluster ID]
2023-07-18 16:32:32,366+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:32,567+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Fetch 
cluster facts]
2023-07-18 16:32:33,168+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:33,269+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Fetch 
Datacenter facts]
2023-07-18 16:32:33,870+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:34,071+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Fetch 
Datacenter ID]
2023-07-18 16:32:34,171+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:34,372+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Fetch 
Datacenter name]
2023-07-18 16:32:34,572+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 ok: [localhost]
2023-07-18 16:32:34,673+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Add 
NFS storage domain]
2023-07-18 16:32:34,874+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 skipping: [localhost]
2023-07-18 16:32:34,974+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:112 TASK [ovirt.ovirt.hosted_engine_setup : Add 
glusterfs storage domain]
2023-07-18 16:32:35,877+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:106 
{'msg': 'Fault reason is "Operation Failed". Fault detail is "[Failed to fetch 
Gluster Volume List]". HTTP response code is 400.', 'exception': 'Traceback 
(most recent call last):\n  File 
"/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py",
 line 804, in main\n  File 
"/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py",
 line 674, in create\n    **kwargs\n  File 
"/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 26258, in 
add\n    return self._internal_add(storage_domain, headers, query, wait)\n  
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in 
_internal_add\n    return future.wait() if wait else future\n  File 
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n   
 return self._code(response)\n  File 
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in 
callback\n    self._check_fault(response)\n  File 
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in 
_check_fault\n    self._raise_error(response, body)\n  File 
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in 
_raise_error\n    raise error\novirtsdk4.Error: Fault reason is "Operation 
Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response 
code is 400.\n', 'invocation': {'module_args': {'state': 'unattached', 'name': 
'hosted_storage', 'host': 'ovirt.martinwi.local', 'data_center': 'Default', 
'wait': True, 'glusterfs': {'address': 'gluster1.martinwi.local', 'path': 
'/gv3', 'mount_options': ''}, 'timeout': 180, 'poll_interval': 3, 
'fetch_nested': False, 'nested_attributes': [], 'domain_function': 'data', 
'id': None, 'description': None, 'comment': None, 'localfs': None, 'nfs': None, 
'iscsi': None, 'managed_block_storage': None, 'posixfs': None, 'fcp': None, 
'wipe_after_delete': None, 'backup': None, 'critical_space_action_blocker': 
None, 'warning_low_space': None, 'destroy': None, 'format': None, 
'discard_after_delete': None}}, '_ansible_no_log': False, 'changed': False}
2023-07-18 16:32:35,977+0200 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:110 
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Failed 
to fetch Gluster Volume List]". HTTP response code is 400.
2023-07-18 16:32:36,078+0200 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:110 
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is 
\"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume 
List]\". HTTP response code is 400."}
2023-07-18 16:32:36,178+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:106 
PLAY RECAP [localhost] : ok: 11 changed: 0 unreachable: 0 skipped: 3 failed: 1
2023-07-18 16:32:36,279+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:223 
ansible-playbook rc: 2
2023-07-18 16:32:36,279+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:230 
ansible-playbook stdout:
2023-07-18 16:32:36,279+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:233 
ansible-playbook stderr:

----------------------------------------------------------------------------------------

logs/ovirt-hosted-engine-setup/engine-logs-2023-07-18T14:33:50Z/log/ovirt-engine/engine.log:2014:

2023-07-18 16:32:35,493+02 INFO  
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
 (default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] Lock Acquired to 
object 
'EngineLock:{exclusiveLocks='[gluster1.martinwi.local:/gv3=STORAGE_CONNECTION]',
 sharedLocks=''}'
2023-07-18 16:32:35,509+02 INFO  
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
 (default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] Running command: 
AddStorageServerConnectionCommand internal: false. Entities affected :  ID: 
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group 
CREATE_STORAGE_DOMAIN with role type ADMIN
2023-07-18 16:32:35,512+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] START, 
ConnectStorageServerVDSCommand(HostName = ovirt.martinwi.local, 
StorageServerConnectionManagementVDSParameters:{hostId='4163f25c-60a5-45df-a954-6f8956103c23',
 storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', 
connectionList='[StorageServerConnections:{id='null', 
connection='gluster1.martinwi.local:/gv3', iqn='null', vfsType='glusterfs', 
mountOptions='', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', 
iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log 
id: 7bfc54aa
2023-07-18 16:32:35,629+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] FINISH, 
ConnectStorageServerVDSCommand, return: 
{00000000-0000-0000-0000-000000000000=4149}, log id: 7bfc54aa
2023-07-18 16:32:35,635+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] EVENT_ID: 
STORAGE_DOMAIN_ERROR(996), The error message for connection 
gluster1.martinwi.local:/gv3 returned by VDSM was: Failed to fetch Gluster 
Volume List
2023-07-18 16:32:35,635+02 ERROR 
[org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] (default 
task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] The connection with details 
'gluster1.martinwi.local:/gv3' failed because of error code '4149' and error 
message is: failed to fetch gluster volume list
2023-07-18 16:32:35,636+02 ERROR 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
 (default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] Command 
'org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand'
 failed: EngineException: GlusterVolumeListFailed (Failed with error 
GlusterVolumeListFailed and code 4149)
2023-07-18 16:32:35,651+02 ERROR 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
 (default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] Transaction 
rolled-back for command 
'org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand'.
2023-07-18 16:32:35,656+02 INFO  
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
 (default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] Lock freed to object 
'EngineLock:{exclusiveLocks='[gluster1.martinwi.local:/gv3=STORAGE_CONNECTION]',
 sharedLocks=''}'
2023-07-18 16:32:35,660+02 ERROR 
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default 
task-1) [] Operation Failed: [Failed to fetch Gluster Volume List]

----------------------------------------------------------------------------------------

logs/vdsm/vdsm.log:2536:

2023-07-18 16:32:35,348+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList() from=internal, 
task_id=836164a8-882e-47a4-8f22-689f22425a6f (api:48)
2023-07-18 16:32:35,348+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=836164a8-882e-47a4-8f22-689f22425a6f (api:54)
2023-07-18 16:32:35,348+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:735)
2023-07-18 16:32:35,531+0200 INFO  (jsonrpc/6) [vdsm.api] START 
connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', 
conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'iqn': 
'', 'connection': 'gluster1.martinwi.local:/gv3', 'ipv6_enabled': 'false', 
'id': '00000000-0000-0000-0000-000000000000', 'user': '', 'tpgt': '1'}]) 
from=::ffff:192.168.222.56,47396, flow_id=85910c72-2f78-4f1c-a2f4-cea3e57d2b49, 
task_id=3c24b1e6-fe6a-4987-882a-4392fc920b7e (api:48)
2023-07-18 16:32:35,632+0200 ERROR (jsonrpc/6) [storage.HSM] Could not connect 
to storageServer (hsm:2374)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2371, in 
connectStorageServer
    conObj.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
166, in connect
    self.validate()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
298, in validate
    if not self.volinfo:
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
285, in volinfo
    self._volinfo = self._get_gluster_volinfo()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
338, in _get_gluster_volinfo
    self._volfileserver)
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in 
__call__
    return callMethod()
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in 
<lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in 
_callmethod
    raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterVolumesListFailedException: Volume list failed: 
rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} 
err={self.err!r}']
2023-07-18 16:32:35,632+0200 INFO  (jsonrpc/6) [storage.StorageDomainCache] 
Invalidating storage domain cache (sdc:74)
2023-07-18 16:32:35,632+0200 INFO  (jsonrpc/6) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'id': 
'00000000-0000-0000-0000-000000000000', 'status': 4149}]} 
from=::ffff:192.168.222.56,47396, flow_id=85910c72-2f78-4f1c-a2f4-cea3e57d2b49, 
task_id=3c24b1e6-fe6a-4987-882a-4392fc920b7e (api:54)
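Before the engine ever sees the HTTP 400, vdsm's connectStorageServer validates the connection by asking glusterd on the remote host for the volume info (status 4149 = GlusterVolumeListFailed). A quick way to narrow this down, assuming shell access on the oVirt node (hostname and volume taken from the log above), is to re-run that query by hand:

```shell
# Connection details vdsm passed (from the vdsm.log excerpt above);
# note ipv6_enabled=false, matching he_force_ip4=true from the setup:
server="gluster1.martinwi.local"
volume="gv3"

# First verify glusterd's management port (24007/tcp by default) is
# reachable from the node at all:
nc -z -v -w 3 "$server" 24007 || echo "glusterd port not reachable (rc=$?)"

# Then issue the same volume-info query vdsm makes (command verbatim
# from the supervdsm log); a non-zero rc here reproduces error 4149:
gluster --mode=script volume info --remote-host="$server" "$volume" --xml \
    || echo "volume info failed (rc=$?)"
```

If the port is open but the query still fails with rc=2 and empty output, it is worth comparing the gluster client version on the node with the server side, since a `--remote-host` query can be rejected when the versions are too far apart.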

----------------------------------------------------------------------------------------

logs/vdsm/supervdsm.log:1067:

MainProcess|jsonrpc/6::DEBUG::2023-07-18 
16:32:35,532::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper) call 
volumeInfo with ('gv3', 'gluster1.martinwi.local') {}
MainProcess|jsonrpc/6::DEBUG::2023-07-18 
16:32:35,533::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-3 /usr/sbin/gluster --mode=script volume info 
--remote-host=gluster1.martinwi.local gv3 --xml (cwd None)
MainProcess|jsonrpc/6::DEBUG::2023-07-18 
16:32:35,632::commands::98::common.commands::(run) FAILED: <err> = b'\n'; <rc> 
= 2
MainProcess|jsonrpc/6::ERROR::2023-07-18 
16:32:35,632::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) Error 
in volumeInfo
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 111, in 
_execGluster
    return commands.run(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line 101, in 
run
    raise cmdutils.Error(args, p.returncode, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/sbin/gluster', '--mode=script', 
'volume', 'info', '--remote-host=gluster1.martinwi.local', 'gv3', '--xml'] 
failed with rc=2 out=b'' err=b'\n'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 540, in 
volumeInfo
    xmltree = _execGlusterXml(command)
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 134, in 
_execGlusterXml
    return _getTree(_execGluster(cmd))
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 113, in 
_execGluster
    raise ge.GlusterCmdFailedException(rc=e.rc, err=[e.msg])
vdsm.gluster.exception.GlusterCmdFailedException: Command failed: rc=2 out=() 
err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} 
err={self.err!r}']

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 97, in 
wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 542, in 
volumeInfo
    raise ge.GlusterVolumesListFailedException(rc=e.rc, err=e.err)
vdsm.gluster.exception.GlusterVolumesListFailedException: Volume list failed: 
rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} 
err={self.err!r}']
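The traceback chain above boils down to one pattern: vdsm's `_execGluster` wraps `commands.run`, which raises on any non-zero return code, and each layer re-wraps the error (which is also why the message shows the unformatted `{self.cmd}` placeholders). A minimal, self-contained sketch of that pattern, with names shortened and `sh -c 'exit 2'` standing in for the gluster CLI call (the real command needs a reachable glusterd):

```python
import subprocess

class GlusterCmdFailedException(Exception):
    """Mirrors vdsm.gluster.exception: carries rc and stderr of the CLI call."""
    def __init__(self, rc, err):
        super().__init__(f"Command failed: rc={rc} err={err!r}")
        self.rc, self.err = rc, err

def exec_gluster(cmd):
    # Like vdsm's _execGluster: run the CLI, raise if rc != 0.
    p = subprocess.run(cmd, capture_output=True, text=True)
    if p.returncode != 0:
        raise GlusterCmdFailedException(rc=p.returncode, err=[p.stderr])
    return p.stdout

# Stand-in for the real call, which in the log is:
#   gluster --mode=script volume info --remote-host=gluster1.martinwi.local gv3 --xml
try:
    exec_gluster(["sh", "-c", "exit 2"])
except GlusterCmdFailedException as e:
    print(e.rc)  # 2, matching the rc=2 in the supervdsm log
```

The takeaway is that rc=2 originates from the gluster CLI itself, not from vdsm or the engine, so the CLI call is the place to debug first.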

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDGBR5GJ2LPNRAUT73WOTWKHQ2TLEXEC/