[ovirt-users] Re: oVirt 4.4.0 HE deployment on GlusterFS fails during health check

2020-06-13 Thread Oliver Leinfelder
Hi,

> Your gluster mount option is not correct.
> You need 'backup-volfile-servers=storagehost2:storagehost3' (without the
> volume name, as they all have that volume).


Yes, of course. I'm sorry, but the appended volume name was a mistake I made
in the email, not during deployment, where I only specified the FQDNs
without the volume name.

As mentioned, the mount generally seems to work, as data is written during
deployment. It fails later during the health check :-(

Best regards
Oliver
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SP7ZNIIXGHQ73452J4D6FYWQT2DQIKHR/


[ovirt-users] Ansible ovirt.hosts has no port setting

2020-06-13 Thread Paul-Erik Törrönen

Running oVirt 4.4.

While setting up oVirt with Ansible, I noticed that there is no 
parameter for setting the SSH port of the hosts (which is present if one 
adds a host in the GUI).


I tried adding the port as part of the address, but that field is 
apparently evaluated as an FQDN only.


I circumvented this by adding FQDN-labeled sections with a Port 
definition to ~/.ssh/config for the ovirt user on the engine machine (not 
hosted), but this only got me a step further, since the configuration in 
the engine is still without a port definition and falls back to 22 as a 
result.
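
Roughly, the sections I added look like this sketch (the host name and
port here are placeholders, not the real values from my setup):

  # ~/.ssh/config of the ovirt user on the engine machine
  Host myHost.myDomain
      Port 2222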


Running the Ansible playbook with -vvv, I can see the following:

Using module file /usr/lib/python3.6/site-packages/ansible/modules/cloud/ovirt/ovirt_host.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-3041af74sxb1/tmpjlyq4ic8 TO /root/.ansible/tmp/ansible-tmp-1592049378.2474232-3179-247712281303744/AnsiballZ_ovirt_host.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592049378.2474232-3179-247712281303744/ /root/.ansible/tmp/ansible-tmp-1592049378.2474232-3179-247712281303744/AnsiballZ_ovirt_host.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.6 /root/.ansible/tmp/ansible-tmp-1592049378.2474232-3179-247712281303744/AnsiballZ_ovirt_host.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592049378.2474232-3179-247712281303744/ > /dev/null 2>&1 && sleep 0'

The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt_host_payload_zeyzwhlj/ansible_ovirt_host_payload.zip/ansible/modules/cloud/ovirt/ovirt_host.py", line 639, in main
  File "/tmp/ansible_ovirt_host_payload_zeyzwhlj/ansible_ovirt_host_payload.zip/ansible/module_utils/ovirt.py", line 785, in action
    poll_interval=self._module.params['poll_interval'],
  File "/tmp/ansible_ovirt_host_payload_zeyzwhlj/ansible_ovirt_host_payload.zip/ansible/module_utils/ovirt.py", line 359, in wait
    raise Exception("Error while waiting on result state of the entity.")
Exception: Error while waiting on result state of the entity.
failed: [localhost] (item=myHost) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"activate": true,
"address": null,
"check_upgrade": true,
"cluster": null,
"comment": null,
"fetch_nested": false,
"force": false,
"hosted_engine": null,
"id": null,
"iscsi": null,
"kdump_integration": null,
"kernel_params": null,
"name": "myHost",
"nested_attributes": [],
"override_display": null,
"override_iptables": null,
"password": null,
"poll_interval": 3,
"power_management_enabled": null,
"public_key": true,
"reboot_after_upgrade": true,
"spm_priority": null,
"state": "reinstalled",
"timeout": 600,
"vgpu_placement": null,
"wait": true
}
},
"item": {
"address": "myHost.myDomain",
"affinity_labels": [],
"auto_numa_status": "unknown",
"certificate": {
"organization": "myDomain",
"subject": "O=myDomain,CN=myHost.myDomain"
},
"cluster": {
"href": "/ovirt-engine/api/clusters/3642e08b-1311-4f8a-803b-897321513b0b",
"id": "3642e08b-1311-4f8a-803b-897321513b0b"
},
"comment": "",
"cpu": {
"speed": 0.0,
"topology": {}
},
"device_passthrough": {
"enabled": false
},
"devices": [],
"external_network_provider_configurations": [],
"external_status": "ok",
"hardware_information": {
"supported_rng_sources": []
},
"hooks": [],
"href": "/ovirt-engine/api/hosts/22623d77-069e-4feb-86db-03b08bc67c1a",
"id": "22623d77-069e-4feb-86db-03b08bc67c1a",
"katello_errata": [],
"kdump_status": "unknown",
"ksm": {
"enabled": false
},
"max_scheduling_memory": 0,
"memory": 0,
"name": "myHost",
"network_attachments": [],
"nics": [],
"numa_nodes": [],
"numa_supported": false,
"os": {
"custom_kernel_cmdline": ""
},
"permissions": [],
"port": 54321,
"power_management": {
"automatic_pm_enabled": true,
"enabled": false,
"kdump_detection": true,
"pm_proxies": []
},
"protocol": "stomp",
"se_linux": {},
"spm": {
"priority": 5,
"status": "none"
},
"ssh": {
"fingerprint": "SHA256:abc...",
"port": 22
},
"statistics": [],
"status": "install_failed",
"storag

[ovirt-users] Re: Skype for Business

2020-06-13 Thread Strahil Nikolov via Users
You can check 
https://lists.ovirt.org/archives/search?q=spice+youtube&page=1&sort=date-desc 
for 'spice options hooks'. Maybe what is discussed there could help.

Best Regards,
Strahil  Nikolov

On 11 June 2020 at 12:35:30 GMT+03:00, ozme...@hotmail.com wrote:
>Hi,
>While using "Skype for Business" on a guest machine, we've been getting
>sound and video problems.
>I've found a solution on the
>https://docs.microsoft.com/en-us/skypeforbusiness/deploy/deploy-clients/deploy-the-lync-vdi-plug-in
>web page.
>But this is only for the Windows client and not for SPICE connections.
>
>Does anyone have a solution for this problem?
>
>Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U3RRJXNLFU3L67Y64MLW7Q3XATLFCFIC/


[ovirt-users] Re: oVirt 4.4.0 HE deployment on GlusterFS fails during health check

2020-06-13 Thread Strahil Nikolov via Users
Your gluster mount option is not correct.
You need 'backup-volfile-servers=storagehost2:storagehost3' (without the volume 
name, as they all have that volume).
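
As a quick sketch (placeholder host names as above; the volume is assumed
to be called 'engine', and /mnt/gluster-test is just an arbitrary test
mount point), the option can be verified with a manual mount on the host:

  mount -t glusterfs \
    -o backup-volfile-servers=storagehost2:storagehost3 \
    storagehost1:/engine /mnt/gluster-test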

Best Regards,
Strahil  Nikolov

On 13 June 2020 at 10:47:28 GMT+03:00, Oliver Leinfelder wrote:
>Hi,
>
>I have the following two components:
>
>1.) A freshly installed VM host (oVirt Node 4.4.0 release ISO)
>2.) 3 storage hosts, also freshly installed from oVirt Node 4.4.0 
>release ISO
>
>The storage hosts have been successfully installed with Gluster
>(through 
>Cockpit). They have two volumes, both of which I can mount and 
>read/write from a client.
>
>On the VM host, I ran "hosted-engine --deploy" (no backups imported).
>
>When prompted for storage, I answered "glusterfs" and specified 
>"storagehost1:/engine" as storage for the HE deployment. For mount 
>options, I specified 
>"backup-volfile-servers=storagehost2:/engine:storagehost3:/engine"
>
>(Not the real hostnames, but all of them are resolvable via internal
>DNS)
>
>Everything seems to work fine; I also see the "engine" volume become 
>populated with data. At some point I could ping and SSH login to the
>HE.
>
>When the setup proceeded to the health check, it failed and the whole process
>
>was aborted :-(
>
>"hosted-engine --vm-status" reported "failed liveliness check" when it 
>was reachable via SSH. At some point the engine went down and, to my 
>surprise, showed a grub prompt after the restart when doing a 
>"hosted-engine --console".
>
>[ INFO  ] TASK [ovirt.hosted_engine_setup : Check engine VM health]
>[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 180, "changed": 
>true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": 
>"0:00:00.160595", "end": "2020-06-12 17:50:05.675774", "rc": 0,
>"start": 
>"2020-06-12 17:50:05.515179", "stderr": "", "stderr_lines": [],
>"stdout": "{\"1\": {\"host-id\": 1, \"host-ts\": 11528, \"score\":
>3400, 
>\"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": 
>\"Powering down\", \"reason\": \"failed liveliness check\"}, 
>\"hostname\": \"vmhost\", \"maintenance\": false, \"stopped\": false, 
>\"crc32\": \"2c447835\", \"conf_on_shared_storage\": true, 
>\"local_conf_timestamp\": 11528, \"extra\": 
>\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=11528
>
>(Fri Jun 12 17:49:57 
>2020)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=11528 (Fri Jun 12
>
>17:49:57 
>2020)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Thu
>
>Jan  1 04:12:48 1970\\n\", \"live-data\": true},
>\"global_maintenance\": 
>false}", "stdout_lines": ["{\"1\": {\"host-id\": 1, \"host-ts\": 11528,
>
>\"score\": 3400, \"engine-status\": {\"vm\": \"up\", \"health\": 
>\"bad\", \"detail\": \"Powering down\", \"reason\": \"failed liveliness
>
>check\"}, \"hostname\": \"vmhost\", \"maintenance\": false,
>\"stopped\": 
>false, \"crc32\": \"2c447835\", \"conf_on_shared_storage\": true, 
>\"local_conf_timestamp\": 11528, \"extra\": 
>\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=11528
>
>(Fri Jun 12 17:49:57 
>2020)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=11528 (Fri Jun 12
>
>17:49:57 
>2020)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Thu
>
>Jan  1 04:12:48 1970\\n\", \"live-data\": true},
>\"global_maintenance\": 
>false}"]}
>
>A second attempt failed at exactly the same stage.
>
>I can see the following in the setup log:
>
>ovirt-hosted-engine-setup-20200612151212-j9zwd2.log:
>2020-06-12 17:33:18,314+0200 DEBUG 
>otopi.ovirt_hosted_engine_setup.ansible_utils 
>ansible_utils._process_output:103 {'msg': 'non-zero return code',
>'cmd': 
>['hosted-engine', '--reinitialize-lockspace', '--force'], 'stdout': '',
>
>'stderr': 'Traceback (most recent call last):\n
>   File "/usr/lib64/python3.6/runpy.py", line 193, in 
>_run_module_as_main\n    "__main__", mod_spec)\n  File 
>"/usr/lib64/python3.6/runpy.py", line 85, in _run_code\n exec(code, 
>run_globals)\n  File 
>"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_
>lockspace.py", line 30, in \n ha_cli.reset_lockspace(force)\n  
>File 
>"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py",
>
>line 286, in reset_lockspace\n    stats = 
>broker.get_stats_from_storage()\n  File
>"/usr/lib/python3.6/site-packages/ov
>irt_hosted_engine_ha/lib/brokerlink.py", line 148, in 
>get_stats_from_storage\n    result = self._proxy.get_stats()\n  File 
>"/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__\n
>return 
>self.__send(self.__name, args)\n  File
>"/usr/lib64/python3.6/xmlrpc/client
>.py", line 1452, in __request\n    verbose=self.__verbose\n  File 
>"/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request\n return
>
>self.single_request(host, handler, request_body, verbose)\n File 
>"/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_requ
>est\n    http_conn = self.send_request(host, handler, request_bo

[ovirt-users] oVirt 4.4.0 HE deployment on GlusterFS fails during health check

2020-06-13 Thread Oliver Leinfelder

Hi,

I have the following two components:

1.) A freshly installed VM host (oVirt Node 4.4.0 release ISO)
2.) 3 storage hosts, also freshly installed from oVirt Node 4.4.0 
release ISO


The storage hosts have been successfully installed with Gluster (through 
Cockpit). They have two volumes, both of which I can mount and 
read/write from a client.


On the VM host, I ran "hosted-engine --deploy" (no backups imported).

When prompted for storage, I answered "glusterfs" and specified 
"storagehost1:/engine" as storage for the HE deployment. For mount 
options, I specified 
"backup-volfile-servers=storagehost2:/engine:storagehost3:/engine"


(Not the real hostnames, but all of them are resolvable via internal DNS)

Everything seems to work fine; I also see the "engine" volume become 
populated with data. At some point I could ping and SSH into the HE.


When the setup proceeded to the health check, it failed and the whole process 
was aborted :-(


"hosted-engine --vm-status" reported "failed liveliness check" when it 
was reachable via SSH. At some point the engine went down and, to my 
surprise, showed a grub prompt after the restart when doing a 
"hosted-engine --console".

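For reference, a rough sketch of how the state was inspected on the VM host;
the last command is an extra, hypothetical step that is not part of the
report above:

  hosted-engine --vm-status                        # reported "failed liveliness check"
  hosted-engine --console                          # ended up at a grub prompt
  systemctl status ovirt-ha-agent ovirt-ha-broker  # HA services behind --vm-status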

[ INFO  ] TASK [ovirt.hosted_engine_setup : Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 180, "changed": 
true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": 
"0:00:00.160595", "end": "2020-06-12 17:50:05.675774", "rc": 0, "start": 
"2020-06-12 17:50:05.515179", "stderr": "", "stderr_lines": [],
"stdout": "{\"1\": {\"host-id\": 1, \"host-ts\": 11528, \"score\": 3400, 
\"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": 
\"Powering down\", \"reason\": \"failed liveliness check\"}, 
\"hostname\": \"vmhost\", \"maintenance\": false, \"stopped\": false, 
\"crc32\": \"2c447835\", \"conf_on_shared_storage\": true, 
\"local_conf_timestamp\": 11528, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=11528 
(Fri Jun 12 17:49:57 
2020)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=11528 (Fri Jun 12 
17:49:57 
2020)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Thu 
Jan  1 04:12:48 1970\\n\", \"live-data\": true}, \"global_maintenance\": 
false}", "stdout_lines": ["{\"1\": {\"host-id\": 1, \"host-ts\": 11528, 
\"score\": 3400, \"engine-status\": {\"vm\": \"up\", \"health\": 
\"bad\", \"detail\": \"Powering down\", \"reason\": \"failed liveliness 
check\"}, \"hostname\": \"vmhost\", \"maintenance\": false, \"stopped\": 
false, \"crc32\": \"2c447835\", \"conf_on_shared_storage\": true, 
\"local_conf_timestamp\": 11528, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=11528 
(Fri Jun 12 17:49:57 
2020)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=11528 (Fri Jun 12 
17:49:57 
2020)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Thu 
Jan  1 04:12:48 1970\\n\", \"live-data\": true}, \"global_maintenance\": 
false}"]}


A second attempt failed at exactly the same stage.

I can see the following in the setup log:

ovirt-hosted-engine-setup-20200612151212-j9zwd2.log:
2020-06-12 17:33:18,314+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:103 {'msg': 'non-zero return code', 'cmd': 
['hosted-engine', '--reinitialize-lockspace', '--force'], 'stdout': '', 
'stderr': 'Traceback (most recent call last):\n
  File "/usr/lib64/python3.6/runpy.py", line 193, in 
_run_module_as_main\n    "__main__", mod_spec)\n  File 
"/usr/lib64/python3.6/runpy.py", line 85, in _run_code\n exec(code, 
run_globals)\n  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_
lockspace.py", line 30, in \n ha_cli.reset_lockspace(force)\n  
File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 286, in reset_lockspace\n    stats = 
broker.get_stats_from_storage()\n  File "/usr/lib/python3.6/site-packages/ov
irt_hosted_engine_ha/lib/brokerlink.py", line 148, in 
get_stats_from_storage\n    result = self._proxy.get_stats()\n  File 
"/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__\n return 
self.__send(self.__name, args)\n  File "/usr/lib64/python3.6/xmlrpc/client
.py", line 1452, in __request\n    verbose=self.__verbose\n  File 
"/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request\n return 
self.single_request(host, handler, request_body, verbose)\n File 
"/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_requ
est\n    http_conn = self.send_request(host, handler, request_body, 
verbose)\n  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in 
send_request\n    self.send_content(connection, request_body)\n File 
"/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_con
tent\n    connection.endheaders(request_body)\n  File 
"/usr/lib64/python3.6/http/client.py", line 1249, in endheaders\n 
self._send_output(message_body, enco