[ovirt-users] Hosted Engine deployment failure ovirt 4.3.10

2024-04-25 Thread john . roche
Hi, I'm getting this issue when deploying the hosted engine onto iSCSI storage 
using oVirt 4.3.10.

The disk volume is new, no issues. The hardware is a ThinkSystem SR650 
running CentOS Linux release 7.8.2003 (Core).

I used both the GUI and the Ansible script and both fail. This is the farthest I 
can get and it's driving me mad, please help.
The error logs are below.

[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Initialize lockspace volume]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 5, "changed": true, 
"cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"],

more errors

"delta": "0:00:00.668430", "end": "2024-04-22 16:52:56.859768", "msg": 
"non-zero return code", "rc": 1, "start": "2024-04-22 16:52:56.191338", 
"stderr": "Traceback (most recent call last):\n File 
\"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n 
\"__main__\", fname, loader, pkg_name)\n File 
\"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n exec code in 
run_globals\n File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\",
 line 30, in \n ha_cli.reset_lockspace(force)\n File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py\", 
line 286, in reset_lockspace\n stats = broker.get_stats_from_storage()\n File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", 
line 146, in get_stats_from_storage\n result = self._proxy.get_stats()\n File 
\"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__\n return 
self.__send(self.__name, args)\n File \"/usr/lib64/python2.7/
 xmlrpclib.py\", line 1591, in __request\n verbose=self.__verbose\n File 
\"/usr/lib64/python2.7/xmlrpclib.py\", line 1273, in request\n return 
self.single_request(host, handler, request_body, verbose)\n File 
\"/usr/lib64/python2.7/xmlrpclib.py\", line 1301, in single_request\n 
self.send_content(h, request_body)\n File 
\"/usr/lib64/python2.7/xmlrpclib.py\", line 1448, in send_content\n 
connection.endheaders(request_body)\n File \"/usr/lib64/python2.7/httplib.py\", 
line 1052, in endheaders\n self._send_output(message_body)\n File 
\"/usr/lib64/python2.7/httplib.py\", line 890, in _send_output\n 
self.send(msg)\n File \"/usr/lib64/python2.7/httplib.py\", line 852, in send\n 
self.connect()\n File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py\", 
line 60, in connect\n self.sock.connect(base64.b16decode(self.host))\n File 
\"/usr/lib64/python2.7/socket.py\", line 224, in meth\n return 
getattr(self._sock,name)(*args)\nsocket.error: [Errno 2] No such file or 
directory"
 , "stderr_lines": ["Traceback (most recent call last):", " File 
\"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main", " 
\"__main__\", fname, loader, pkg_name)", " File 
\"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code", " exec code in 
run_globals", " File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\",
 line 30, in ", " ha_cli.reset_lockspace(force)", " File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py\", 
line 286, in reset_lockspace", " stats = broker.get_stats_from_storage()", " 
File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", 
line 146, in get_stats_from_storage", " result = self._proxy.get_stats()", " 
File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__", " return 
self.__send(self.__name, args)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", 
line 1591, in __request", " verbose=self.__verbose", " File 
\"/usr/lib64/python2.7/xmlrpcli
 b.py\", line 1273, in request", " return self.single_request(host, handler, 
request_body, verbose)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 
1301, in single_request", " self.send_content(h, request_body)", " File 
\"/usr/lib64/python2.7/xmlrpclib.py\", line 1448, in send_content", " 
connection.endheaders(request_body)", " File 
\"/usr/lib64/python2.7/httplib.py\", line 1052, in endheaders", " 
self._send_output(message_body)", " File \"/usr/lib64/python2.7/httplib.py\", 
line 890, in _send_output", " self.send(msg)", " File 
\"/usr/lib64/python2.7/httplib.py\", line 852, in send", " self.connect()", " 
File 
\"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py\", 
line 60, in connect", " self.sock.connect(base64.b16decode(self.host))", " File 
\"/usr/lib64/python2.7/socket.py\", line 224, in meth", " return 
getattr(self._sock,name)(*args)", "socket.error: [Errno 2] No such file or 
directory"], "stdout": "", "stdout_lines": []}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Hosted engine installation oVirt 4.5 initialize lockspace volume

2024-03-27 Thread lbayerlein
Hello,

Hope you are doing well. I am trying to install a new hosted engine in my 
environment in order to switch the storage this engine uses. I use the following 
command to deploy the instance:

  hosted-engine --deploy --4 --ansible-extra-vars=he_offline_deployment=true

Everything works fine except the initialization of the lockspace volume. Here I 
get:

```
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in 
/etc/hosts for the local VM]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Set the name for add_host]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch the value of 
HOST_KEY_CHECKING]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get the username running the 
deploy]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Register the engine FQDN as a 
host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Start ovirt-ha-broker service 
on the host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Initialize lockspace volume]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Workaround for 
ovirt-ha-broker start failures]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Initialize lockspace volume]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 5, "changed": true, 
"cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"], 
"delta": "0:00:00.269920", "end": "2024-03-20 14:43:00.366557", 
"msg": "non-zero return code", "rc": 1, "start": "2024-03-20 14:43:00.096637", 
"stderr":

Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py", line 30, in <module>
    ha_cli.reset_lockspace(force)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 286, in reset_lockspace
    stats = broker.get_stats_from_storage()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 148, in get_stats_from_storage
    result = self._proxy.get_stats()
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1268, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1044, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 982, in send
    self.connect()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 76, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory

"stderr_lines": [... the same traceback repeated line by line, truncated ...]
```

[ovirt-users] Hosted Engine out of sync

2024-03-12 Thread cytavision . es
Hi,

Last January I lost access to my oVirt Manager dashboard due to a certificate 
error.
I managed to fix this problem, but since then I have lost all the usage 
information displayed on the dashboard.
After that, two of my hosts lost communication with the hosted engine and no 
matter what I've tried, they can't connect to my cluster and always have an 
"Unassigned" or "NonOperational" status.
Recently I have noticed that the web interface of the manager is very slow due 
to high CPU usage on the hosted engine. The high CPU usage comes from the 
postmaster process.
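
Since the load is attributed to postmaster (the engine's PostgreSQL backend), a 
first step could be to see what the database is actually doing. A minimal 
sketch, assuming the default engine database name 'engine' and local access as 
the postgres user on the engine VM:

  # on the engine VM: open a psql session against the engine database
  su - postgres -c 'psql engine'
  -- then inside psql, list the longest-running non-idle queries:
  SELECT pid, state, now() - query_start AS runtime, query
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY runtime DESC;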
I have been running this cluster for many years but this is the first time I 
have experienced so many problems.

Can you please help me?

BR
Odysseas Odysseos
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EZZZAZB7KOO7DHWRFMRXKEKJPJO5KZP5/


[ovirt-users] hosted engine deployment fails with certificate error

2024-02-27 Thread jerome lardeux
Hello
I'm trying to deploy a hosted engine in my lab, but I keep getting the same 
error.
Every time, while trying to install oVirt packages, I get this:
"ERROR otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:113 fatal: [localhost -> 192.168.222.91]: FAILED! 
=> {"changed": false, "msg": "Failed to download metadata for repo 
'ovirt-45-upstream': Cannot prepare internal mirrorlist: Curl error (60): Peer 
certificate cannot be authenticated with given CA certificates for 
https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.5-el8 [SSL certificate problem: 
self signed certificate in certificate chain]", "rc": 1, "results": []}"

It is an oVirt Node 4.5 host.
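
The "self signed certificate in certificate chain" message usually means an 
SSL-inspecting proxy is re-signing the traffic to mirrorlist.ovirt.org. One 
option is to make the proxy's CA trusted on whichever system performs the 
download (the failed task is delegated to the temporary engine VM at 
192.168.222.91 in the log above). A sketch only, assuming you can obtain the 
proxy's CA certificate as proxy-ca.crt:

  # add the proxy CA to the system trust store and rebuild it
  cp proxy-ca.crt /etc/pki/ca-trust/source/anchors/
  update-ca-trust extract
  # quick check that the mirrorlist is now reachable through the proxy
  curl -I https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.5-el8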

Thank you for your help.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GH4SLZJ2B3ALJNABZFNCEFN6NVXM7YEC/


[ovirt-users] hosted-engine --deploy fails, apparently due to not finding storage pool

2024-02-09 Thread iucounu
Hi,

I'm trying to deploy a self-hosted engine, and it gets to 'Wait for the host to 
be up' and then fails about 20 minutes later. Checking the hosted engine setup 
log, the failure seems to be related to a storage pool not being found (see the 
error below). Is this error referring to the storage pool that the hosted engine 
will use? The setup script does not ask me what kind of storage I want to use, 
or the path for it; does it auto-detect this? The host I am running the engine 
setup on is part of a gluster cluster, which I have provisioned for this 
purpose. Do I need to edit a file somewhere to specify this, or is there 
something else I need to do (permissions etc.)? The hosted engine VM itself 
comes up fine, and I can get a console on it via virsh. 

Thanks for any help,

kind regards,

Cam

2024-02-08 16:39:04,438+ DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'changed': True, 'stdout': '', 'stderr': "error: failed to get pool 
'localvm0bod0wzw'\nerror: Storage pool not found: no storage pool with matching 
name 'localvm0bod0wzw'", 'rc': 1, 'cmd': ['virsh', '-c', 
'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf', 
'pool-destroy', 'localvm0bod0wzw'], 'start': '2024-02-08 16:39:04.246510', 
'end': '2024-02-08 16:39:04.301790', 'delta': '0:00:00.055280', 'msg': 
'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf pool-destroy 
localvm0bod0wzw', '_uses_shell': False, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 
'stderr_lines': ["error: failed to get pool 'localvm0bod0wzw'", "error: Storage 
pool not fou
 nd: no storage pool with matching name 'localvm0bod0wzw'"], '_ansible_no_log': 
None}
2024-02-08 16:39:04,539+ DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"changed": true, "cmd": ["virsh", "-c", 
"qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf", 
"pool-destroy", "localvm0bod0wzw"], "delta": "0:00:00.055280", "end": 
"2024-02-08 16:39:04.301790", "msg": "non-zero return code", "rc": 1, "start": 
"2024-02-08 16:39:04.246510", "stderr": "error: failed to get pool 
'localvm0bod0wzw'\nerror: Storage pool not found: no storage pool with matching 
name 'localvm0bod0wzw'", "stderr_lines": ["error: failed to get pool 
'localvm0bod0wzw'", "error: Storage pool not found: no storage pool with 
matching name 'localvm0bod0wzw'"], "stdout": "", "stdout_lines": []}
2024-02-08 16:39:04,839+ DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'changed': True, 'stdout': '', 'stderr': "error: failed to get pool 
'localvm0bod0wzw'\nerror: Storage pool not found: no storage pool with matching 
name 'localvm0bod0wzw'", 'rc': 1, 'cmd': ['virsh', '-c', 
'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf', 
'pool-undefine', 'localvm0bod0wzw'], 'start': '2024-02-08 16:39:04.638774', 
'end': '2024-02-08 16:39:04.691953', 'delta': '0:00:00.053179', 'msg': 
'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf pool-undefine 
localvm0bod0wzw', '_uses_shell': False, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 
'stderr_lines': ["error: failed to get pool 'localvm0bod0wzw'", "error: Storage 
pool not f
 ound: no storage pool with matching name 'localvm0bod0wzw'"], 
'_ansible_no_log': None}
2024-02-08 16:39:04,940+ DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"changed": true, "cmd": ["virsh", "-c", 
"qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf", 
"pool-undefine", "localvm0bod0wzw"], "delta": "0:00:00.053179", "end": 
"2024-02-08 16:39:04.691953", "msg": "non-zero return code", "rc": 1, "start": 
"2024-02-08 16:39:04.638774", "stderr": "error: failed to get pool 
'localvm0bod0wzw'\nerror: Storage pool not found: no storage pool with matching 
name 'localvm0bod0wzw'", "stderr_lines": ["error: failed to get pool 
'localvm0bod0wzw'", "error: Storage pool not found: no storage pool with 
matching name 'localvm0bod0wzw'"], "stdout": "", "stdout_lines": []}
2024-02-08 16:39:05,141+ DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'msg': 'The task includes an option with an undefined variable. The error was: 
\'local_vm_disk_path\' is undefined. \'local_vm_disk_path\' is undefined\n\nThe 
error appears to be in 
\'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml\':
 line 16, 
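
For what it's worth, the two pool-destroy/pool-undefine failures above are 
'ignored:' cleanup tasks against the temporary libvirt storage pool used for the 
local bootstrap VM, so the first non-ignored failure earlier in the log is 
usually the one to chase. A small sketch, reusing the same connection URI the 
tasks use, to see what local pools and VMs libvirt still has:

  virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' pool-list --all
  virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' list --all
  # find the first real (non-ignored) failure in the setup log
  grep -n "FAILED!" /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log | grep -v ignored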

[ovirt-users] hosted-engine --deploy: No module named 'he_ansible'

2024-01-19 Thread iucounu
Hi,

I'm trying to install the hosted engine on a server (4.5.5-1.el9) and it reports 
'No module named 'he_ansible''. I have what could be an he_ansible module under 
/usr/share/ovirt-hosted-engine-setup:

/usr/share/ovirt-hosted-engine-setup/he_ansible
/usr/share/ovirt-hosted-engine-setup/he_ansible/__pycache__
/usr/share/ovirt-hosted-engine-setup/he_ansible/ansible.cfg
/usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins
/usr/share/ovirt-hosted-engine-setup/he_ansible/constants.py
/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml
/usr/share/ovirt-hosted-engine-setup/he_ansible/__pycache__/constants.cpython-39.opt-1.pyc
/usr/share/ovirt-hosted-engine-setup/he_ansible/__pycache__/constants.cpython-39.pyc
/usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/1_otopi_json.py
/usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/2_ovirt_logger.py

Is there a missing include/path somewhere?

Thanks for any help.

Full error:

bash# hosted-engine --deploy
:1: DeprecationWarning: distro.linux_distribution() is deprecated. It 
should only be used as a compatibility shim with Python's 
platform.linux_distribution(). Please use distro.id(), distro.version() and 
distro.name() instead.
:1: DeprecationWarning: distro.linux_distribution() is deprecated. It 
should only be used as a compatibility shim with Python's 
platform.linux_distribution(). Please use distro.id(), distro.version() and 
distro.name() instead.
***L:ERROR Internal error: No module named 'he_ansible'
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/otopi/main.py", line 141, in execute
self.context.loadPlugins()
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 803, in 
loadPlugins
self._loadPluginGroups(plugindir, needgroups, loadedgroups)
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 112, in 
_loadPluginGroups
self._loadPlugins(path, path, groupname)
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 69, in 
_loadPlugins
self._loadPlugins(base, d, groupname)
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 95, in 
_loadPlugins
util.loadModule(
  File "/usr/lib/python3.9/site-packages/otopi/util.py", line 110, in loadModule
spec.loader.exec_module(module)
  File "", line 850, in exec_module
  File "", line 228, in _call_with_frames_removed
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/__init__.py",
 line 25, in 
from . import misc
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
 line 34, in 
from ovirt_hosted_engine_setup import ansible_utils
  File 
"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", 
line 33, in 
from he_ansible.constants import AnsibleCallback
ModuleNotFoundError: No module named 'he_ansible'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/otopi/__main__.py", line 88, in main
installer.execute()
  File "/usr/lib/python3.9/site-packages/otopi/main.py", line 143, in execute
util.raiseExceptionInformation(
  File "/usr/lib/python3.9/site-packages/otopi/util.py", line 85, in 
raiseExceptionInformation
raise info[1].with_traceback(info[2])
  File "/usr/lib/python3.9/site-packages/otopi/main.py", line 141, in execute
self.context.loadPlugins()
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 803, in 
loadPlugins
self._loadPluginGroups(plugindir, needgroups, loadedgroups)
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 112, in 
_loadPluginGroups
self._loadPlugins(path, path, groupname)
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 69, in 
_loadPlugins
self._loadPlugins(base, d, groupname)
  File "/usr/lib/python3.9/site-packages/otopi/context.py", line 95, in 
_loadPlugins
util.loadModule(
  File "/usr/lib/python3.9/site-packages/otopi/util.py", line 110, in loadModule
spec.loader.exec_module(module)
  File "", line 850, in exec_module
  File "", line 228, in _call_with_frames_removed
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/__init__.py",
 line 25, in 
from . import misc
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
 line 34, in 
from ovirt_hosted_engine_setup import ansible_utils
  File 
"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", 
line 33, in 
from he_ansible.constants import AnsibleCallback
otopi.main.PluginLoadException: No module named 'he_ansible'
root@lonovirt1 /u/cmcl bash#
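
A couple of low-risk checks (a sketch, not a known fix): confirm that the 
he_ansible files listed above really belong to the installed 
ovirt-hosted-engine-setup package and that the package is intact, and reinstall 
it if anything looks off:

  rpm -qf /usr/share/ovirt-hosted-engine-setup/he_ansible/constants.py
  rpm -V ovirt-hosted-engine-setup      # no output means no modified/missing files
  dnf reinstall ovirt-hosted-engine-setup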
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 

[ovirt-users] hosted-engine deploy skip storage configuration

2024-01-13 Thread laetitia . gilet
Hello,
I'm trying to install oVirt from the command line on an oVirt 4.5.5 el9 oVirt 
Node.
I prepared my LUN and multipath configuration and then ran
hosted-engine --deploy --4
The storage configuration is skipped and I am not prompted for which storage 
domain type I want to use.

The log shows the few questions I've answered:

QUESTION/1/CI_APPLY_OPENSCAP_PROFILE=str:no
QUESTION/1/CI_DNS=str:172.20.11.100
QUESTION/1/CI_ENABLE_FIPS=str:no
QUESTION/1/CI_INSTANCE_DOMAINNAME=str:example.fr
QUESTION/1/CI_INSTANCE_HOSTNAME=str:ovirt-prod.example.fr
QUESTION/1/CI_ROOT_PASSWORD=str:**FILTERED**
QUESTION/1/CI_ROOT_SSH_ACCESS=str:yes
QUESTION/1/CI_ROOT_SSH_PUBKEY=str:
QUESTION/1/CI_VM_ETC_HOST=str:yes
QUESTION/1/CI_VM_STATIC_NETWORKING=str:static
QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:172.20.82.2
QUESTION/1/DEPLOY_PROCEED=str:yes
QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:ad...@example.fr
QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:25
QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:smtp.example.fr
QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:noreply-ov...@example.fr
QUESTION/1/ENGINE_ADMIN_PASSWORD=str:**FILTERED**
QUESTION/1/OVEHOSTED_GATEWAY=str:172.20.82.1
QUESTION/1/OVEHOSTED_NETWORK_TEST=str:dns
QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str:
QUESTION/1/OVESETUP_NETWORK_FQDN_first_HE=str:kvm.example.fr
QUESTION/1/ovehosted_bridge_if=str:bond1
QUESTION/1/ovehosted_cluster_name=str:PC_Crise
QUESTION/1/ovehosted_datacenter_name=str:Ovirt-prod
QUESTION/1/ovehosted_enable_keycloak=str:no
QUESTION/1/ovehosted_vmenv_cpu=str:4
QUESTION/1/ovehosted_vmenv_mac=str:00:16:3e:71:7e:ed
QUESTION/1/ovehosted_vmenv_mem=str:16384
QUESTION/2/CI_ROOT_PASSWORD=str:**FILTERED**
QUESTION/2/ENGINE_ADMIN_PASSWORD=str:**FILTERED**
...
otopi.dialog.human dialog.__logString:204 DIALOG:SEND
2024-01-10 15:26:40,556+0100 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND --== STORAGE CONFIGURATION 
==--
2024-01-10 15:26:40,556+0100 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND
2024-01-10 15:26:40,557+0100 DEBUG otopi.context context._executeMethod:124 
Stage customization METHOD otopi.plugins.otopi.dialog.cli.Plugin._customize
2024-01-10 15:26:40,557+0100 DEBUG otopi.context context._executeMethod:134 
otopi.plugins.otopi.dialog.cli.Plugin._customize condition False
2024-01-10 15:26:40,558+0100 DEBUG otopi.context context._executeMethod:124 
Stage customization METHOD 
otopi.plugins.gr_he_common.core.titles.Plugin._storage_end
2024-01-10 15:26:40,559+0100 DEBUG otopi.context context._executeMethod:124 
Stage customization METHOD 
otopi.plugins.gr_he_common.core.titles.Plugin._network_start
2024-01-10 15:26:40,559+0100 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND
2024-01-10 15:26:40,55

My host sees the LUN and the multipath -ll output is OK.

Can you help me configure the engine VM storage to use FC, please?
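
One thing that may help narrow this down (a sketch, not a verified fix): each 
deploy run writes an answers file under /var/lib/ovirt-hosted-engine-setup/answers/, 
so you can check which storage-related answers were already recorded for the run 
where the prompts were skipped:

  # newest answers file from the last run
  ls -t /var/lib/ovirt-hosted-engine-setup/answers/answers-*.conf | head -1
  # storage keys that were pre-filled would explain skipped prompts
  grep -i -E 'storage|domain|lun|fc|iscsi' \
      "$(ls -t /var/lib/ovirt-hosted-engine-setup/answers/answers-*.conf | head -1)"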
Laetitia
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NSDYXW5JQZM2ZHQJ2TYISFFMV4ILO3QC/


[ovirt-users] hosted-engine --deploy fails with "The task includes an option with an undefined variable. The error was: 'local_vm_ip' is undefined"

2023-11-20 Thread drew.tonnesen--- via Users
As basic an install/deploy as can be. I used ovirt-node-ng-installer-latest-el8.iso, 
followed the steps exactly, and took all the defaults possible. After step 23 
(Specify the hostname of the deployment host:) the deployment starts. It goes for 
quite a long time until it fails here, never getting to step 25 and asking me for 
storage:

 2023-11-19 22:19:45,568+ DEBUG var changed: host "localhost" var 
"module_setup" type "" value: "true"
   1888 2023-11-19 22:19:45,568+ DEBUG var changed: host "localhost" var 
"discovered_interpreter_python" type "" value: 
""/usr/libexec/platform-python""
   1889 2023-11-19 22:19:45,568+ INFO ansible ok {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml', 
'ansible_host': 'localhost', 'ansible_task': 'Force facts gathering', 
'task_duration': 1}
   1890 2023-11-19 22:19:45,568+ DEBUG ansible on_any args 
  kwargs
   1891 2023-11-19 22:19:45,664+ INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Set the name for 
add_host'}
   1892 2023-11-19 22:19:45,664+ DEBUG ansible on_any args TASK: 
ovirt.ovirt.hosted_engine_setup : Set the name for add_host  kwargs 
is_conditional:False
   1893 2023-11-19 22:19:45,664+ DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.hosted_engine_setup : Set the name for add_host  kwargs
   1894 2023-11-19 22:19:45,759+ ERROR ansible failed {
   1895 "ansible_host": "localhost",
   1896 "ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
   1897 "ansible_result": {
   1898 "_ansible_no_log": false,
   1899 "msg": "The task includes an option with an undefined variable. 
The error was: 'local_vm_ip' is undefined\n\nThe error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_en 
   gine_setup/tasks/sync_on_engine_machine.yml': line 2, column 3, but 
may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe 
offending line appears to be:\n\n---\n- name: Set the name for add_host\n  ^ he 
   re\n"
   1900 },
   1901 "ansible_task": "Set the name for add_host",
   1902 "ansible_type": "task",
   1903 "status": "FAILED",
   1904 "task_duration": 0
   1905 }

Summary is:


   2197 2023-11-19 22:20:02,141+ INFO SUMMARY:
   2198 Duration    Task Name
   2199 
   2200 [ < 1 sec ] Execute just a specific set of steps
   2201 [  00:01  ] Force facts gathering
   2202 [ FAILED  ] Set the name for add_host
   2203 [ < 1 sec ] Fetch the value of HOST_KEY_CHECKING
   2204 [ < 1 sec ] Get the username running the deploy
   2205 [ FAILED  ] Register the engine FQDN as a host
   2206 [ FAILED  ] Sync on engine machine
   2207 [ < 1 sec ] Fetch logs from the engine VM
   2208 [ < 1 sec ] Set destination directory path
   2209 [ < 1 sec ] Create destination directory
   2210 [ < 1 sec ] Find the local appliance image
   2211 [  00:10  ] Give the vm time to flush dirty buffers
   2212 [ < 1 sec ] Change ownership of copied engine logs
   2213 [ < 1 sec ] Remove local vm dir
   2214 [ < 1 sec ] Remove temporary entry in /etc/hosts for the local VM
   2215 [ < 1 sec ] Clean local storage pools
   2216 [ FAILED  ] Destroy local storage-pool localvmk0rv3u2g
   2217 [ FAILED  ] Undefine local storage-pool localvmk0rv3u2g
   2218 [ FAILED  ] Destroy local storage-pool {{ 
local_vm_disk_path.split('/')[5] }}
   2219 [ FAILED  ] Undefine local storage-pool {{ 
local_vm_disk_path.split('/')[5] }}

I am using a viable FQDN and IP for my engine which is on the same network as 
the host. I've been at this for more than a week with different options/OSs. 
Figured this was the most basic possible configuration and yet it still doesn't 
work. Incredibly frustrating product.
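
'local_vm_ip' is normally derived from the DHCP lease the temporary bootstrap VM 
gets on the natted libvirt network, so when it comes up undefined the lease is a 
reasonable first thing to check. A small sketch, run on the host while the 
deploy (or its local VM) is still around:

  # leases handed out by the libvirt network used for the bootstrap VM
  virsh -r net-dhcp-leases default
  # confirm the temporary VM is actually running
  virsh -r list --all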
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YYHWOHN52ROGUI2I5N36X37VKOIDXXJY/


[ovirt-users] Hosted engine VM not coming up after Storage rebuild.

2023-10-12 Thread Sumit Basu
Hi,

We are running oVirt 4.3 on IBM x3650 and x3550 servers with our SAN on an IBM 
midrange DS5300 storage system. All the storage domains are separate LUNs in the 
storage, with a dedicated LUN for the HostedEngine VM. The storage had a failure 
due to a major power issue. One of the storage arrays that holds the storage 
domains had to be reconstructed using IBM's StorageManager tool. After booting 
the hosts, I can see all the storage domains as LUNs with multipath -ll 
from all the hosts, but on starting the hosted engine with hosted-engine 
--vm-start and checking with --vm-status I get  

"Engine status: {"reason": "failed liveliness check", "health": "bad", 
"vm": "up", "detail": "Up"}'.

Global maintenance mode is enabled and I tried "hosted-engine --console"; 
after logging in I checked the boot.log and I find:

[FAILED] Failed to mount /var/log.
See 'systemctl status var-log.mount' for details.
[DEPEND] Dependency failed for Update UTMP about System Boot/Shutdown.
[DEPEND] Dependency failed for Update UTMP about System Runlevel Changes.
[DEPEND] Dependency failed for Flush Journal to Persistent Storage.
[DEPEND] Dependency failed for /var/log/audit.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.
[DEPEND] Dependency failed for Migrate local... structure to the new structure.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.

the "systemctl status var-log.mount"  shows
● var-log.mount - /var/log
   Loaded: loaded (/etc/fstab; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2023-10-06 14:38:58 IST; 10min 
ago
Where: /var/log
 What: /dev/mapper/ovirt-log
 Docs: man:fstab(5)
   man:systemd-fstab-generator(8)
  Process: 691 ExecMount=/bin/mount /dev/mapper/ovirt-log /var/log -t xfs -o 
nodev (code=exited, status=32)

Oct 06 14:38:57 ovman systemd[1]: Mounting /var/log...
Oct 06 14:38:58 ovman mount[691]: mount: mount /dev/mapper/ovirt-...g
Oct 06 14:38:58 ovman systemd[1]: var-log.mount mount process exi...2
Oct 06 14:38:58 ovman systemd[1]: Failed to mount /var/log.
Oct 06 14:38:58 ovman systemd[1]: Unit var-log.mount entered fail
Hint: Some lines were ellipsized, use -l to show in full.
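
The boot failure is 'only' the /var/log XFS filesystem inside the engine VM 
failing to mount, so the VM may be recoverable from its emergency shell. A 
repair sketch, assuming /dev/mapper/ovirt-log (the device shown in the unit) is 
intact and not mounted:

  # from the engine VM's emergency/rescue shell
  umount /var/log 2>/dev/null        # make sure it is not partially mounted
  xfs_repair /dev/mapper/ovirt-log   # it will tell you if a dirty log needs -L
  mount /var/log
  systemctl default                  # continue booting once the mount succeeds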

I need to recover from this quickly.

Sumit Basu
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XXPX7J3X3PYBYMQOHSWESGVHY5KOHGQW/


[ovirt-users] Hosted-engine restore failing

2023-10-11 Thread Devin A. Bougie
Hi, All.  We are attempting to migrate to a new storage domain for our oVirt 
4.5.4 self-hosted engine setup, and are failing with "cannot import name 
'Callable' from 'collections'"

Please see below for the errors on the console.

Many thanks,
Devin

--
hosted-engine --deploy --restore-from-file=backup.bck --4
...
[ INFO  ] Checking available network interfaces:
[ ERROR ] b'[WARNING]: Skipping plugin (/usr/share/ovirt-hosted-engine-\n'
[ ERROR ] b'setup/he_ansible/callback_plugins/2_ovirt_logger.py), cannot load: 
cannot\n'
[ ERROR ] b"import name 'Callable' from 'collections'\n"
[ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n'
[ ERROR ] b"ERROR! Unexpected Exception, this is probably a bug: cannot import 
name 'Callable' from 'collections' 
(/usr/lib64/python3.11/collections/__init__.py)\n"
[ ERROR ] Failed to execute stage 'Environment customization': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ ERROR ] b'[WARNING]: Skipping plugin (/usr/share/ovirt-hosted-engine-\n'
[ ERROR ] b'setup/he_ansible/callback_plugins/2_ovirt_logger.py), cannot load: 
cannot\n'
[ ERROR ] b"import name 'Callable' from 'collections'\n"
[ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n'
[ ERROR ] b"ERROR! Unexpected Exception, this is probably a bug: cannot import 
name 'Callable' from 'collections' 
(/usr/lib64/python3.11/collections/__init__.py)\n"
[ ERROR ] Failed to execute stage 'Clean up': Failed executing ansible-playbook
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-2023100358.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed
  Log file is located at
  
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-2023100352-raupj9.log
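
The underlying issue is that Python 3.10+ removed the old aliases such as 
collections.Callable (they now live only in collections.abc), and here 
ansible-playbook is running under Python 3.11, which trips over the 
2_ovirt_logger.py callback plugin. A quick way to confirm the interpreter 
mismatch (a sketch, not a fix):

  # the import removed in 3.10+ vs. its replacement
  python3.11 -c "from collections import Callable"      # ImportError
  python3.11 -c "from collections.abc import Callable"  # works
  # which interpreter ansible-playbook is actually using
  head -1 "$(command -v ansible-playbook)"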
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7DPYZ4DG5EXB2YTYDERHVN5A2ZVKUPR/


[ovirt-users] Hosted engine setup Failed

2023-04-24 Thread Fedele Stabile
Good morning,
I have a freshly installed oVirt v4.5 host node and I would like to install the 
engine from the terminal, using the command hosted-engine --deploy.
The host node has an IP on 160.97.xx and I want the engine on the same network 
(160.97.xx).
The installation seems to go well, but at the end it exits, leaving the hosted 
engine running on 192.168.222.x.

The error seems to be here:

2023-04-25 06:28:18,953+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an 
undefined variable. The error was: 'local_vm_ip' is undefined\n\nThe error 
appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
 line 2, column 3, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Set the 
name for add_host\n  ^ here\n"}


2023-04-25 06:28:19,757+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"censored": "the output has been hidden due 
to the fact that 'no_log: true' was specified for this result"}
2023-04-25 06:28:19,857+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Sync 
on engine machine]
2023-04-25 06:28:19,958+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'msg': "The field 'delegate_to' has an invalid value, which includes an 
undefined variable. The error was: 'dict object' has no attribute 
'engine'\n\nThe error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
 line 7, column 3, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  import_tasks: 
add_engine_as_ansible_host.yml\n- name: Sync on engine machine\n  ^ here\n", 
'_ansible_no_log': None}
2023-04-25 06:28:20,058+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"msg": "The field 'delegate_to' has an 
invalid value, which includes an undefined variable. The error was: 'dict 
object' has no attribute 'engine'\n\nThe error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
 line 7, column 3, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  import_tasks: 
add_engine_as_ansible_host.yml\n- name: Sync on engine machine\n  ^ here\n"}
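
The engine being left on 192.168.222.x suggests the deploy never got past the 
bootstrap phase, where the engine VM still lives on the temporary natted 
network; the 'local_vm_ip' / 'engine' undefined errors are follow-on failures. 
One cheap thing to verify before re-running (a sketch; the FQDN and address are 
placeholders for the values you gave the installer) is that the engine name 
resolves to the final 160.97.xx address from the host:

  getent hosts engine.example.org
  dig +short engine.example.org
  dig +short -x 160.97.xx.yy     # reverse lookup should return the same FQDN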

Help me, please
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FVM45UDO67FI3CQEATOLGCCMAWII7I7V/


[ovirt-users] hosted-engine to standalone

2023-04-20 Thread carl langlois
Hi,

I plan to transform my hosted_engine setup to a standalone engine.
Currently I have 3 hosts that can run the engine. The engine domain is
located on a glusterfs. I want to simplify this setup by taking 1 of the 3
hosts and setting it as a standalone engine and re-installing the other
host as a standard hypervisor. Also I want to remove the glusterfs. I am
on 4.3 for now but the plan is to upgrade after this simplification. The
steps I plan to do are:

   1. global maintenance
   2. stop engine
   3. backup engine (see the engine-backup sketch below)
   4. shutdown engine
   5. install fresh standalone engine and restore from the backup
   6. boot the standalone engine.
   7. after that, I'm not sure what the steps are to clean up the old engine domain..
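
For steps 3 and 5, a minimal sketch with engine-backup (flag names from memory; 
double-check them against `engine-backup --help` on your 4.3 engine):

  # on the old (hosted) engine, while in global maintenance
  engine-backup --mode=backup --scope=all --file=engine.bck --log=backup.log
  # on the new standalone engine host, after installing the ovirt-engine packages
  engine-backup --mode=restore --file=engine.bck --log=restore.log \
                --provision-db --restore-permissions
  engine-setup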

Any suggestion?

Regards,
Carl
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AUZAMMMVITN37EUXNRRM6ELWHOPAT6QV/


[ovirt-users] Hosted-Engine Down VSDM Cert Expired

2023-01-21 Thread jarredm
Hey all,

I'm looking to get a bit of guidance here. As the subject suggests, we have a 
hosted-engine oVirt cluster. I ran into an issue trying to log in to the web 
interface. I was seeing errors about certificate expiration, although I didn't 
know what cert it was referring to at the time. I SSH'd to the hosted engine 
and restarted it. However, once it shut down, it was unable to start again.

What I've discovered so far is that the hosted engine is currently residing on 
node 33 (storage is on a gluster volume) and the VDSM certificate for that node 
has expired. There are three nodes in total, and two of them have expired 
certs. However, one of them still has a valid cert. I'm able to run vdsm-client 
commands on that node, although I haven't done anything with that yet other 
than verify that I'm able to run some of the Host get* commands successfully. 
I'm wondering if it is possible to "pull" the hosted engine onto this host and 
fire it back up there.
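
To confirm which nodes' certificates are expired (and when), a quick check on 
each node; the path below is the usual VDSM certificate location and is an 
assumption, adjust if your layout differs:

  openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -enddate -subject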

Thanks in advance for your help!

I'm gathering log info etc as described and it will be available here: 
https://drive.google.com/drive/folders/1cBPrN8SuIR-dgnpRKe1eKXRZZTPPshyJ?usp=sharing

Version info:
Installed Packages
centos-release-gluster8.noarch         1.0-1.el8         @extras
centos-release-storage-common.noarch   2-2.el8           @extras
glusterfs.x86_64                       8.6-2.el8         @centos-gluster8
ovirt-release44.noarch                 4.4.8.3-1.el8     @@commandline
vdsm.x86_64                            4.40.80.6-1.el8   @ovirt-4.4
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIDB44H4L4NUBIUMF47PD62WJDP7C46I/


[ovirt-users] Hosted Engine No Longer Has IP Address - Help!

2023-01-05 Thread Matthew J Black
Hi Guys,

I've gone and shot myself in the foot - and I'm looking for some first-aid.

I've managed to remove the IP address of the oVirt self-hosted engine and so 
have lost contact with it (don't ask how - let's just say I f*cked up). I 
*think* it's still running, I've got it set to DHCP, and I've got access to the 
host it's running on, so my questions are:

- (The preferred method) How can I re-establish (console?) contact - I'm 
thinking via the host server and some KVM commands, so I can issue a `dhclient` 
command (see the sketch below)
- (The most drastic) How can I get it to reboot, i.e. is there a command / 
command sequence to do this
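
For the preferred option, the hosted-engine tool itself can usually give you a 
serial console on the engine VM from the host it is running on, and it can also 
stop and start the VM; a sketch (run on that host):

  hosted-engine --vm-status     # confirm which host the engine VM is on
  hosted-engine --console       # serial console; log in and run dhclient there
  # the more drastic option
  hosted-engine --vm-shutdown
  hosted-engine --vm-start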

Any help would be appreciated.

Cheers

Dulux-Oz
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PY3WVEH22P7SGQ3N7XK2PXRILYTTV6PW/


[ovirt-users] Hosted-engine update error

2022-12-20 Thread Jason Beard
Greetings! I was updating from oVirt 4.5.3 to oVirt 4.5.4 and my hosted engine 
has some package errors, but the update seems to complete. The errors still 
show up with dnf. I feel like this is something that needs to be resolved, but 
I'm not sure. 

When I query the rpmdb I do see ovirt-*-4.5.4.x packages are installed.  When I 
ran the engine-setup it completed with no errors.

Errors:
# dnf update
Last metadata expiration check: 4:11:16 ago on Tue 20 Dec 2022 04:59:46 AM CST.
Error: 
 Problem: package gluster-ansible-cluster-1.0.1-2.el8.noarch requires ansible 
>= 2.6, but none of the providers can be installed
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.20-1.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.17-1.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.18-2.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.20-2.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.21-2.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.23-2.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.24-2.el8.noarch
  - package ansible-core-2.13.5-1.el8.x86_64 conflicts with ansible < 2.10.0 
provided by ansible-2.9.27-2.el8.noarch
  - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core > 2.11.0 
provided by ansible-core-2.13.5-1.el8.x86_64
  - cannot install the best update candidate for package 
gluster-ansible-cluster-1.0-5.el8.noarch
  - cannot install the best update candidate for package 
ansible-core-2.13.5-1.el8.x86_64
  - package ansible-2.9.20-1.el8.noarch is filtered out by exclude filtering
  - package ansible-1:2.9.27-5.el8.noarch is filtered out by exclude filtering
  - package ansible-6.3.0-1.el8.noarch is filtered out by exclude filtering
  - package ansible-2.9.16-1.el8.noarch is filtered out by exclude filtering
  - package ansible-2.9.19-1.el8.noarch is filtered out by exclude filtering
  - package ansible-2.9.23-1.el8.noarch is filtered out by exclude filtering
  - package ansible-1:2.9.27-4.el8.noarch is filtered out by exclude filtering
(try to add '--allowerasing' to command line to replace conflicting packages or 
'--skip-broken' to skip uninstallable packages or '--nobest' to use not only 
best candidate packages)
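
Before reaching for --allowerasing, it may be worth seeing what still depends on 
the classic 'ansible' package that gluster-ansible-cluster is pinning, and 
previewing what dnf would do; a sketch:

  dnf repoquery --installed --whatrequires ansible
  dnf repoquery --installed --whatrequires gluster-ansible-cluster
  # preview the resolution without committing to it
  dnf update --nobest --assumeno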
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IFQZ3ZALFKSZV72TQIRSFDXDHZWKGOZ7/


[ovirt-users] Hosted Engine 4.5 Deployment Issues

2022-12-15 Thread nathan.english--- via Users
Hi There,

Apologies in advance, I'm sure this has been asked before, but I struggled to 
find anything in the documentation.

I'm currently in the process of deploying my first oVirt instance, and I'm 
having an issue when the hosted engine deployment is trying to install the 
hosted engine package. The error from the logs is attached.

My Environment is behind a proxy, which blocks access to any domain containing 
the word mirror, so I understand why the error would be thrown. Is there any 
way to update the repos so I can point them to alternative sources? I have a 
working local mirror in my environment which does contain all the packages 
required for oVirt.
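
If the oVirt repo definitions on the node use mirrorlist= URLs, they can usually 
be pointed at a local mirror by replacing mirrorlist with baseurl; a sketch (the 
repo file names and the mirror URL below are placeholders for your environment):

  # which repo definitions reference the blocked mirrorlist host?
  grep -rl mirrorlist.ovirt.org /etc/yum.repos.d/
  # in the matching .repo file(s): comment out the "mirrorlist=" line and add e.g.
  #   baseurl=https://mirror.example.internal/ovirt-4.5-el8/
  dnf clean all && dnf makecache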

Kind Regards,

Nathan



hosted-engine-setup.log
Description: hosted-engine-setup.log
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5HAMBTHDGUUL5FHDXMD2SROS7ZI2ZHMK/


[ovirt-users] Hosted engine 4.5.3.2 - still with bug

2022-11-01 Thread vanBulion
hi

I've tried to install the new hosted-engine version, but the deploy process 
finishes with an error after the VM starts on shared storage (FC disks from a 
SAN storage array).
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check engine VM health]

This is because the JBoss web page is not working:
"Service Unavailable
The server is temporarily unable to service your request due to maintenance 
downtime or capacity problems. Please try again later."


I opened a console to the engine VM and...
disks dm-6 and dm-3 were corrupted - LOL - I had to fix them with xfs_repair.
But after a restart, the ovirt-engine service won't start.

● ovirt-engine.service - oVirt Engine
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; 
vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2022-10-31 15:35:30 CET; 4min 
11s ago
  Process: 1469 
ExecStart=/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py 
--redirect-output --systemd=notify $EXTRA_ARGS start (code=exited, 
status=1/FAILURE)
 Main PID: 1469 (code=exited, status=1/FAILURE)

Oct 31 15:35:23 ovirt systemd[1]: Starting oVirt Engine...
Oct 31 15:35:23 ovirt systemd[1]: Started oVirt Engine.
Oct 31 15:35:26 ovirt ovirt-engine.py[1469]: 2022-10-31 15:35:26,606+0100 
ovirt-engine: INFO _detectJBossVersion:186 Detecting JBoss version. Running: 
/usr/lib/jvm/jre-11-openjdk-11.0.>
Oct 31 15:35:30 ovirt ovirt-engine.py[1469]: 2022-10-31 15:35:30,369+0100 
ovirt-engine: INFO _detectJBossVersion:206 Return code: -6,  | stdout: '['#', 
'# A fatal error has been detect>
Oct 31 15:35:30 ovirt ovirt-engine.py[1469]: 2022-10-31 15:35:30,370+0100 
ovirt-engine: ERROR run:543 Error: Cannot detect JBoss version
Oct 31 15:35:30 ovirt.pogodno systemd[1]: ovirt-engine.service: Main process 
exited, code=exited, status=1/FAILURE
Oct 31 15:35:30 ovirt.pogodno systemd[1]: ovirt-engine.service: Failed with 
result 'exit-code'.
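
The '# A fatal error has been detect...' fragment in the version probe's stdout 
looks like the header of a JVM crash report (return code -6 is SIGABRT), so a 
hs_err file may say more than the engine log does. A sketch:

  # JVM crash reports are usually written to the working directory or /tmp
  find / -xdev -name 'hs_err_pid*.log' 2>/dev/null
  ls -l /var/log/ovirt-engine/   # server.log / console.log may also show the abort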

I even tried to downgrade postgresql-jdbc to 42.2.3, but it had no effect.

If I stop the deploy process while the VM is still running from local storage 
and don't continue it, the engine VM starts with no error, but it starts without 
the HA IP configured - not useful.
So I am still waiting for a properly working ovirt-engine-appliance 4.5.

ovirt-node-ng-installer-4.5.3.2-2022102813.el8.iso
ovirt-engine-appliance-4.5-20221026100732.1.el8.x86_64.rpm

Regards
Damian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJG2O6OFJ52YP7ZQNNQ5QKLYGCNCSBRS/


[ovirt-users] hosted-engine-setup --deploy fail on Centos Stream 8

2022-10-10 Thread andrea.crisanti--- via Users
Hi,

I am trying to install oVirt 4.5 on a 4-host cluster running CentOS Stream 8, 
but the engine does not start and the whole process fails.
Here is my procedure

dnf install centos-release-ovirt45
dnf module reset virt
dnf module enable virt:rhel
dnf install ovirt-engine-appliance
dnf install  ovirt-hosted-engine-setup

The latest version of ansible [ansible-core 2.13] uses python3.9 and the 
installation fails because some python3.9 modules are missing 
[python39-netaddr, python39-jmespath] and cannot be installed [conflict 
python3-jmespath]. So I downgraded ansible to ansible-core 2.12

dnf downgrade ansible-core

Now

hosted-engine-setup --deploy --4

proceeds further but stops because it cannot start the engine:

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]   
 
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a 
failure]   
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not 
up, please check logs, perhaps also on the engine machine"}

I looked into the log file
 
/var/log//ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20221007132728-yp7cd1.log
and I found the following error:

2022-10-07 13:28:30,881+0200 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"cmd": [
"virsh",
"net-undefine",
"default"
],
"delta": "0:00:00.039258",
"end": "2022-10-07 13:28:30.710401",
"invocation": {
"module_args": {
"_raw_params": "virsh net-undefine default",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2022-10-07 13:28:30.671143",
"stderr": "error: failed to get network 'default'\nerror: Network not 
found: no network with matching name 'default'",
"stderr_lines": [
"error: failed to get network 'default'",
"error: Network not found: no network with matching name 'default'"
],
"stdout": "",
"stdout_lines": []
},
"ansible_task": "Update libvirt default network configuration, undefine",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 0
}

Needless to say, firewalld and libvirtd are both up, and virsh net-list gives:

 Name          State    Autostart   Persistent
 --------------------------------------------
 ;vdsmdummy;   active   no          no
 default       active   no          yes
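
Since the failing task talks to libvirt over the system URI, it may help to see 
exactly how libvirt views 'default' on that connection at the time of the 
failure; a sketch:

  virsh -c qemu:///system net-list --all
  virsh -c qemu:///system net-info default
  virsh -c qemu:///system net-dumpxml default > /tmp/default-net.xml   # keep a copy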

I googled around without success. 

Has anyone had similar problems?

End of past July I installed Ovirt on another cluster running Centos Stream 8 
following the procedure I just described with no problem.

If needed I can post all log files.

Thanks for the help.

Best
Andrea
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JI72US3JIOXBWTMTVVGDLVAZV7UJXBYF/


[ovirt-users] Hosted-Engine VM wont start after physical CPU change

2022-09-24 Thread douglasddr8
I currently have a self-hosted engine with two Cascade Lake Xeon Silver 4208 
CPUs with 8 cores each.
I changed both to two Skylake Xeon Platinum 8160 CPUs with 24 cores each.

After initialization the hosted-engine VM won't start; I found in the logs that 
the CPU is not compatible because it does not contain the avx512vnni flag.
Is it possible to work around the problem, or can these processors not be used?
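
You can confirm what the new CPUs expose: VNNI is a Cascade Lake addition that 
Skylake-SP parts such as the Platinum 8160 do not have, so the flag check below 
should come back empty on them (the flag is spelled avx512_vnni in 
/proc/cpuinfo; libvirt and the engine use the avx512vnni spelling):

  grep -o 'avx512_vnni' /proc/cpuinfo | sort -u   # expected to be empty on Skylake-SP
  lscpu | grep -i 'model name'
  virsh -r capabilities | grep -i model           # what libvirt reports for the host CPU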
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZBDL7A4SH7DQQ2DOAG2U6NOKRDUASVX/


[ovirt-users] hosted-engine -vm-status show a ghost node that is not anymore in the cluster: how to remove?

2022-08-31 Thread Diego Ercolani
engine 4.5.2.4
The issue is that in my cluster, when I run:
[root@ovirt-node3 ~]# hosted-engine --vm-status


--== Host ovirt-node3.ovirt (id: 1) status ==--

Host ID: 1
Host timestamp : 1633143
Score  : 3400
Engine status  : {"vm": "down", "health": "bad", "detail": 
"unknown", "reason": "vm not running on this host"}
Hostname   : ovirt-node3.ovirt
Local maintenance  : False
stopped: False
crc32  : 1cbfcd19
conf_on_shared_storage : True
local_conf_timestamp   : 1633143
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1633143 (Wed Aug 31 14:37:53 2022)
host-id=1
score=3400
vm_conf_refresh_time=1633143 (Wed Aug 31 14:37:53 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host ovirt-node1.ovirt (id: 2) status ==--

Host ID: 2
Host timestamp : 373629
Score  : 0
Engine status  : unknown stale-data
Hostname   : ovirt-node1.ovirt
Local maintenance  : True
stopped: False
crc32  : 12a6eb81
conf_on_shared_storage : True
local_conf_timestamp   : 373630
Status up-to-date  : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=373629 (Tue Jun 14 16:48:50 2022)
host-id=2
score=0
vm_conf_refresh_time=373630 (Tue Jun 14 16:48:50 2022)
conf_on_shared_storage=True
maintenance=True
state=LocalMaintenance
stopped=False


--== Host ovirt-node2.ovirt (id: 3) status ==--

Host ID: 3
Host timestamp : 434247
Score  : 3400
Engine status  : {"vm": "down", "health": "bad", "detail": 
"unknown", "reason": "vm not running on this host"}
Hostname   : ovirt-node2.ovirt
Local maintenance  : False
stopped: False
crc32  : badb3751
conf_on_shared_storage : True
local_conf_timestamp   : 434247
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=434247 (Wed Aug 31 14:37:45 2022)
host-id=3
score=3400
vm_conf_refresh_time=434247 (Wed Aug 31 14:37:45 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host ovirt-node4.ovirt (id: 4) status ==--

Host ID: 4
Host timestamp : 1646655
Score  : 3400
Engine status  : {"vm": "up", "health": "good", "detail": 
"Up"}
Hostname   : ovirt-node4.ovirt
Local maintenance  : False
stopped: False
crc32  : 1a16027e
conf_on_shared_storage : True
local_conf_timestamp   : 1646655
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1646655 (Wed Aug 31 14:37:43 2022)
host-id=4
score=3400
vm_conf_refresh_time=1646655 (Wed Aug 31 14:37:43 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False

The problem is that ovirt-node1.ovirt is no longer in the cluster; in the 
host list presented by the UI there is correctly no ovirt-node1, and 
ovirt-node1 appears only on the command line.

I did a full-text search in the engine DB, but node1 doesn't appear anywhere, 
and even in the filesystem a grep doesn't find anything.
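
That is expected: the stale entry most likely lives in the hosted-engine 
metadata area on the shared storage rather than in the engine DB, which is why 
the greps find nothing. If host id 2 really is gone, hosted-engine has a 
metadata cleanup option; a sketch (run from one of the remaining hosts, ideally 
in global maintenance, and double-check the exact flags with `hosted-engine 
--help` first, as they are quoted from memory):

  # wipe the metadata slot of the removed host (id 2 in the output above)
  hosted-engine --clean-metadata --host-id=2 --force-clean
  hosted-engine --vm-status   # the ghost entry should be gone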
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S544RUVUXTQFWMEUIGO3UOHQVCPXT3QX/


[ovirt-users] Hosted engine restarting

2022-08-22 Thread markeczzz
Hi! 
In the last few days I have been having a problem with the Hosted Engine: it keeps 
restarting, sometimes after a few minutes, sometimes after a few hours.
I haven't made any changes to oVirt or the network in that time.
The version is 4.4.10.7-1.el8 (this was also the installation version).

Here are the logs:
Agent.log--
MainThread::INFO::2022-08-21 
09:48:36,200::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUp (score: 2440)
MainThread::INFO::2022-08-21 
09:48:36,200::hosted_engine::525::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Best remote host node3.ovirt.example.org (id: 3, score: 2440)
MainThread::ERROR::2022-08-21 
09:48:46,212::states::398::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Host node3.ovirt.example.org (id 3) score is significantly better than local 
score, shutting down VM on this host
MainThread::INFO::2022-08-21 
09:48:46,641::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineUp-EngineStop) sent? 
ignored
MainThread::INFO::2022-08-21 
09:48:46,706::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStop (score: 3400)
MainThread::INFO::2022-08-21 
09:48:46,706::hosted_engine::525::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Best remote host node3.ovirt.example.org (id: 3, score: 3400)
MainThread::INFO::2022-08-21 
09:48:56,714::hosted_engine::934::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
 Shutting down vm using `/usr/sbin/hosted-engine --vm-shutdown`
MainThread::INFO::2022-08-21 
09:48:56,871::hosted_engine::941::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
 stdout: 
MainThread::INFO::2022-08-21 
09:48:56,871::hosted_engine::942::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
 stderr: 
MainThread::ERROR::2022-08-21 
09:48:56,871::hosted_engine::950::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
 Engine VM stopped on localhost
MainThread::INFO::2022-08-21 
09:48:56,880::state_decorators::102::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
 Timeout set to Sun Aug 21 09:53:56 2022 while transitioning  -> 
MainThread::INFO::2022-08-21 
09:48:56,959::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStop (score: 3400)
MainThread::INFO::2022-08-21 
09:49:06,977::states::537::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine vm not running on local host
MainThread::INFO::2022-08-21 
09:49:06,983::state_decorators::95::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
 Timeout cleared while transitioning  -> 
MainThread::INFO::2022-08-21 
09:49:07,173::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineStop-EngineDown) sent? 
ignored
MainThread::INFO::2022-08-21 
09:49:07,795::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineDown (score: 3400)
MainThread::INFO::2022-08-21 
09:49:16,811::states::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2022-08-21 
09:49:16,998::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineDown-EngineStart) sent? 
ignored
MainThread::INFO::2022-08-21 
09:49:17,179::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStart (score: 3400)
MainThread::INFO::2022-08-21 
09:49:17,195::hosted_engine::895::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Ensuring VDSM state is clear for engine VM
MainThread::INFO::2022-08-21 
09:49:17,200::hosted_engine::915::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Cleaning state for non-running VM
MainThread::INFO::2022-08-21 
09:49:18,211::hosted_engine::907::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Vdsm state for VM clean
MainThread::INFO::2022-08-21 
09:49:18,212::hosted_engine::853::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2022-08-21 
09:49:18,814::hosted_engine::862::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 stdout: VM in WaitForLaunch

MainThread::INFO::2022-08-21 
09:49:18,814::hosted_engine::863::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 stderr: Command VM.getStats with args {'vmID': 
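
The log excerpt is truncated here. The interesting question is why the local score 
dropped to 2440 while node3 stayed at 3400; the ha-agent logs the reason whenever it 
applies a penalty. Assuming the default ovirt-hosted-engine-ha log location, something 
like this should show it:

# look for penalty messages (gateway down, cpu load, network, ...) around the restarts
grep -iE 'penaliz|score' /var/log/ovirt-hosted-engine-ha/agent.log | tail -n 50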

[ovirt-users] hosted-engine --deploy --restore-from-file fails on oVirt node 4.5.1.3

2022-08-01 Thread P F
I'm running three oVirt 4.5.1.3 nodes with a hosted engine v4.5.1.3.
I've installed ovirt node using the iso downloaded from;
https://resources.ovirt.org/pub/ovirt-4.5/iso/ovirt-node-ng-installer/4.5.1-2022062306/el8/ovirt-node-ng-installer-4.5.1-2022062306.el8.iso

I recently attempted to reinstall the environment using a freshly taken engine 
backup.

After installing a fresh oVirt node, and copying the backup to the node, I ran 
the command;
hosted-engine --deploy --restore-from-file=backup-20220723-2 

The problem appears to occur after the engine_setup Ansible task starts, where the 
error can be seen:
[ ERROR ] Failed to execute stage 'Misc configuration': 
'OVESETUP_OVN/ovirtProviderOvnSecret'

Since I was unable to get the restore-from-file method to work, I was forced to 
recreate the environment from scratch.

However, I'm concerned that this may continue to be an unaddressed problem 
going forward.  It is quite tedious to recreate all of the cluster resources 
from scratch.

Has anyone attempted a --restore-from-file on oVirt node 4.5.1.3?
Have you seen this problem?  Were you able to overcome the error?  If so, how?
Or, did it work for you?


Below are the complete log messages from the attempt.
--- snip ---
[root@ovirt-node03 ~]# hosted-engine --deploy 
--restore-from-file=backup-20220723-2 
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  During customization use CTRL-D to abort.
  Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
  The provided engine backup file will be restored there,
  it's strongly recommended to run this tool on an host that wasn't 
part of the environment going to be restored.
  If a reference to this host is already contained in the backup file, 
it will be filtered out at restore time.
  The locally running engine will be used to configure a new storage 
domain and create a VM there.
  At the end the disk of the local VM will be moved to the shared 
storage.
  The old hosted-engine storage domain will be renamed, after checking 
that everything is correctly working you can manually remove it.
  Other hosted-engine hosts have to be reinstalled from the engine to 
update their hosted-engine configuration.
  Are you sure you want to continue? (Yes, No)[Yes]: 
  It has been detected that this program is executed through an SSH 
connection without using tmux.
  Continuing with the installation may lead to broken installation if 
the network connection fails.
  It is highly recommended to abort the installation and run it inside 
a tmux session using command "tmux".
  Do you want to continue anyway? (Yes, No)[No]: Yes
  Configuration files: 
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20220723215602-h3k4pg.log
  Version: otopi-1.10.0 (otopi-1.10.0-1.el8)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup (late)
[ INFO  ] Stage: Environment customization
 
  --== STORAGE CONFIGURATION ==--
 
 
  --== HOST NETWORK CONFIGURATION ==--
 
[ INFO  ] Bridge ovirtmgmt already created
  Please indicate the gateway IP address [192.168.1.1]: 
[ INFO  ] Checking available network interfaces:
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
of steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Detecting interface on 
existing management bridge]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Set variable for supported 
bond modes]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get all active network 
interfaces]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Filter bonds with bad naming]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Generate output list]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Collect interface types]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check for Team devices]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get list of Team devices]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Collect Team devices]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Filter team devices]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if only team devices are 
available]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Search VLAN devices]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check for base interface of 
VLAN devices]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get base 

[ovirt-users] hosted engine ovirt-engine-appliance-4.5-20220419162115.1.el8.x86_64 doesn't deploy

2022-04-30 Thread diego . ercolani
I have a full installation of ovirt hosted-engine but it always stop telling:
[ ERROR ] fatal: [localhost -> 192.168.222.15]: FAILED! => {"attempts": 30, 
"changed": false, "connection": "close", "content_encoding": "identity", 
"content_length": "86", "content_type": "text/html; charset=UTF-8", "date": 
"Sat, 30 Apr 2022 06:56:58 GMT", "elapsed": 0, "msg": "Status code was 500 and 
not [200]: HTTP Error 500: Internal Server Error", "redirected": false, 
"server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1", 
"status": 500, "url": "http://localhost/ovirt-engine/services/health"}

and it's true: I can access the engine via the temporary address (192.168.1.69 
or 192.168.222.69), and issuing a curl:
[root@ovirt-engine ~]# curl http://localhost/ovirt-engine/services/health
Error 500 - Internal Server Error
[root@ovirt-engine ~]# 

it gives back an Internal Server Error.

I cannot finish the deployment; how can this be fixed?

Currently the engine is running.. should I try something from inside the engine?
Thank you
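
Since the appliance VM is reachable over SSH on the temporary address, a reasonable 
next step (assuming the standard engine log locations inside the appliance) is to look 
at why the health servlet returns 500 from inside the engine VM:

# inside the engine VM (e.g. ssh root@192.168.222.69)
systemctl status ovirt-engine
journalctl -u ovirt-engine --no-pager | tail -n 100
tail -n 200 /var/log/ovirt-engine/server.log /var/log/ovirt-engine/engine.log

A 500 from /ovirt-engine/services/health usually means the engine service failed to 
start or is still failing during startup; engine.log normally shows the underlying 
exception.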
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7OVXEYY33B7RTDMMOJHDJR4RI44AN7F3/


[ovirt-users] Hosted Engine Deployment timeout waiting for VM

2022-04-18 Thread Jonas Liechti

Hello users,

I am currently trying to deploy the self-hosted engine via the web 
interface but it seems stuck at the task "Wait for the local VM" 
(https://github.com/oVirt/ovirt-ansible-collection/blob/master/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml). 
I am unsure what to look at for getting more info as I haven't worked a 
lot with Ansible before. Do you have any idea how to debug?



The temporary IP is added to /etc/hosts and I can also login to the VM 
via SSH:


[root@server-005 ~]# cat /etc/hosts
192.168.1.97 ovirt-engine-test.admin.int.rabe.ch # temporary entry added 
by hosted-engine-setup for the bootstrap VM
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6

10.128.16.5 server-005.admin.int.rabe.ch
10.128.16.6 server-006.admin.int.rabe.ch
10.128.16.7 server-007.admin.int.rabe.ch
#10.128.32.2 ovirt-engine-test.admin.int.rabe.ch
10.132.16.5 server-005.storage.int.rabe.ch
10.132.16.6 server-006.storage.int.rabe.ch
10.132.16.7 server-007.storage.int.rabe.ch
[root@server-005 ~]# ssh ovirt-engine-test.admin.int.rabe.ch
r...@ovirt-engine-test.admin.int.rabe.ch's password:
Web console: https://ovirt-engine-test.admin.int.rabe.ch:9090/ or 
https://192.168.1.97:9090/


Last login: Mon Apr 18 11:33:53 2022 from 192.168.1.1
[root@ovirt-engine-test ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc fq_codel state 
UP group default qlen 1000

    link/ether 00:16:3e:58:7a:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.97/24 brd 192.168.1.255 scope global dynamic 
noprefixroute eth0

   valid_lft 2313sec preferred_lft 2313sec
    inet6 fe80::216:3eff:fe58:7aa3/64 scope link
   valid_lft forever preferred_lft forever


Thank you for any tips for debugging.
Jonas
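
A typical way to get more detail at this stage, assuming the default log locations, is 
to watch the setup log on the host while the task is waiting, and to check cloud-init 
inside the bootstrap VM (the "Wait for the local VM" task is essentially waiting for 
the freshly started VM to become reachable and finish its first-boot setup):

# on the host
tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log

# inside the bootstrap VM (ssh ovirt-engine-test.admin.int.rabe.ch, as above)
cloud-init status --long
tail -n 100 /var/log/cloud-init.log /var/log/cloud-init-output.log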
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWZJN4AZFS3IMKNBQ4BTB5JUOUL4BWAT/


[ovirt-users] hosted-engine failed to deploy

2022-03-10 Thread giovanni . ferioli
Hello,

I am trying to deploy an engine VM.

This host already runs a VM that provides various services
(DHCP, etc.).
I try to deploy with the command:
hosted-engine --deploy
and I get blocked with this error message:


[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Restore initial abrt config 
files]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Restart abrtd service]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to 
restart service abrtd: Failed to restart abrtd.service: Transaction contains 
conflicting jobs 'stop' and 'restart' for abrt-ccpp.service. Probably 
contradicting requirement dependencies configured.\nSee system logs and 
'systemctl status abrtd.service' for details.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up

The host is running:

oVirt Node 4.4.10

Can you suggest some troubleshooting checks?
Thanks and regards
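
One hedged workaround, assuming the conflict really is between abrtd.service and 
abrt-ccpp.service as the error suggests: stop (or mask) the abrt services before 
re-running the deployment so the playbook's restart does not hit the conflicting 
dependency, then re-enable them afterwards:

# before re-running hosted-engine --deploy
systemctl stop abrt-ccpp.service abrtd.service
# optionally keep abrt-ccpp from starting during the deploy
systemctl mask abrt-ccpp.service
# ... run the deployment, then ...
systemctl unmask abrt-ccpp.service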
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DMQPFKJAIAL2CNT6H5FVQSMPWJURKEO4/


[ovirt-users] hosted engine deployment (v4.4.10) - TASK Check engine VM health - fatal FAILED

2022-02-08 Thread Charles Stellen

Dear oVirt hackers,

sorry: accidentally sent to de...@ovitr.org first.

We are dealing with a hosted engine deployment issue on fresh AMD EPYC 
servers,


and we are ready to donate hardware to the oVirt community after we get past 
this issue ( :-) )


0/ base infra:

- 3 identical physical servers (produced in 2021-4Q)
- fresh, clean and recent version of centos 8 stream installed 
(@^minimal-environment)
- servers are interconnected with a Cisco switch and can see each other on the 
network,

all with internet access (NAT)


1/ storage:

- all 3 servers/nodes host nice and clean glusterfs (v9.5) and volume 
"vol-images01" is ready for VM images

- ovirt hosted engine deployment procedure:
- easily accept mentioned glusterfs storage domain
- mount it during "hosted-engine --deploy" with no issue
- all permissions are set correctly on all glusterfs nodes ("chown 
vdsm.kvm vol-images01")

- no issue with storage domain at all


2/ ovirt - hosted engine deployment:

- all 3 servers successfully deployed a recent ovirt version with the standard 
procedure

(on top of minimal install of centos 8 stream):

dnf -y install ovirt-host
virt-host-validate: PASS ALL

- on the first server we continue with:

dnf -y install ovirt-engine-appliance
hosted-engine --deploy (pure commandline - so no cockpit is used)

DEPLOYMENT ISSUE:

- during "hosted-engine --deploy" procedure - hosted engine becomes 
temporairly accessible at:https://server01:6900/ovirt-engine/

- with request to manualy set "ovirtmgmt" virtual nic
- Hosts > server01 > Network Interfaces > [SETUP HOST NETWORKS]
"ovirtmgmt" dropped to eno1 - [OK]
- than All pass fine - and host "server01" becomes Active
- back to commandline to Continue with deployment "Pause execution until 
/tmp/ansible.jksf4_n2_he_setup_lock is removed"

by removing the lock file

- deployment than pass all steps_until_ "[ INFO ] TASK 
[ovirt.ovirt.hosted_engine_setup : Check engine VM health]"


ISSUE DETAILS: the new VM is not accessible in the final stage, where it 
should be reachable at its final IP:


[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is 
different from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": 
"Engine VM IP address is while the engine's he_fqdn 
ovirt-engine.mgmt.pss.local resolves to 10.210.1.101. If you are using 
DHCP, check your DHCP reservation configuration"}


- the problem is that the same happens whether we go with a "Static" IP 
(provided during the answering procedure) or with DHCP (with a properly set 
DHCP and DNS server responding with the correct IP for both)

WE ARE STUCK THERE

WE TRIED:
- connecting to the terminal/VNC of the running "HostedEngine" VM to figure 
out the internal network issue, without success


Any suggestion on how to "connect" into the newly deployed, up and running 
HostedEngine VM, to figure out and eventually manually fix the internal 
network issue?
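
For the console question specifically, a sketch of the usual options, assuming the VM 
is running as "HostedEngine" under the host's libvirt (both commands rely on the 
credentials installed by the hosted-engine packages):

# serial console via the hosted-engine tool
hosted-engine --console
# or directly through libvirt, using the auth file installed by hosted-engine setup
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf console HostedEngine
# list a graphical display, if one is defined
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf domdisplay HostedEngine

From the console it is then possible to check whether the VM actually received an 
address on ovirtmgmt (ip a, journalctl inside the VM), which is what the "Engine VM IP 
address is ... while the engine's he_fqdn resolves to ..." check is complaining about.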



Thank You all for your help
Charles Stellen


PS: we are experienced with oVirt deployments (since version 4.0) and with 
GNU/Linux KVM based virtualisation for 10+ years,

so for any suggestions or any details requested - WE ARE READY to provide 
online debugging, and direct access to the servers is not a problem

PPS: after we pass this deployment - and after the decommissioning procedure 
- we are ready to provide older HW to the oVirt community





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKOLWUCOFAHCXSSIBVHQ2NIJWPHOV7JE/


[ovirt-users] Hosted Engine down and not visible on any host

2022-02-03 Thread milan . mithbaokar
Hi,
We have a 4-node oVirt cluster running 4.4.1.5 on iSCSI shared storage. The 
hosted engine is running on its own separate iSCSI shared storage.
  The Hosted Engine went down all of a sudden and is nowhere to be found on any of 
the nodes when I do virsh list --all.
  All other VMs show up in the list, but I cannot see the hosted engine.

When we try to bring the hosted engine up using "hosted-engine --vm-start" it 
errors out with the message "vm does not exist". The shared iSCSI storage IP on 
which the HE VM was running is pingable, but when we perform "hosted-engine 
--connect-storage" we get request timeouts after 60 seconds.

Any ideas on how to restore the hosted engine and the cluster would be highly 
appreciated.
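
Given that --connect-storage times out, a first round of hedged checks on one of the 
hosts (assuming the standard ovirt-hosted-engine-ha services and log locations) would 
be to confirm the HA daemons and the iSCSI session behind the hosted-engine storage 
domain are actually healthy:

systemctl status ovirt-ha-agent ovirt-ha-broker
iscsiadm -m session -P 1
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log /var/log/ovirt-hosted-engine-ha/broker.log
sanlock client status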



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3OPG7JI2WO5CSIEWOZXLFUQBD25WFXGQ/


[ovirt-users] hosted engine vm not present

2021-11-28 Thread Nathanaël Blanchet

Hello

I successfully migrated from HE 4.3.10 to HE 4.4.9, but I think I 
committed a mistake:


The HostedEngine VM was running on the host (haboob) where I deployed 
the upgrade path.


Everything was OK except that I deployed it on the wrong host (haboob). 
So I live-migrated the HostedEngine to the pre-existing CentOS host 
(kilimanjaro) and erased haboob. Then I reinstalled a new host (fuego) to 
replace haboob. The HostedEngine is able to migrate between kilimanjaro 
and fuego, but now the VM seems not to be seen by any host when doing:


[root@fuego ~]# hosted-engine --vm-status


--== Host fuego (id: 1) status ==--

Host ID    : 1
Host timestamp : 3252
Score  : 3350
Engine status  : {"vm": "down", "health": "bad", 
"detail": "unknown", "reason": "vm not running on this host"}

Hostname   : fuego
Local maintenance  : False
stopped    : False
crc32  : 14527b72
conf_on_shared_storage : True
local_conf_timestamp   : 3257
Status up-to-date  : True
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=3252 (Sun Nov 28 18:27:29 2021)
    host-id=1
    score=3350
    vm_conf_refresh_time=3257 (Sun Nov 28 18:27:34 2021)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False



--== Host kilimanjaro.v100.abes.fr (id: 3) status ==--

Host ID    : 3
Host timestamp : 65261186
Score  : 0
Engine status  : {"reason": "vm not running on this 
host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}

Hostname   : kilimanjaro.v100.abes.fr
Local maintenance  : True
stopped    : False
crc32  : c381cf1e
conf_on_shared_storage : True
local_conf_timestamp   : 65261189
Status up-to-date  : True
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=65261186 (Sun Nov 28 19:27:23 2021)
    host-id=3
    score=0
    vm_conf_refresh_time=65261189 (Sun Nov 28 19:27:26 2021)
    conf_on_shared_storage=True
    maintenance=True
    state=LocalMaintenance
    stopped=False


When doing hosted-engine --console, it returns:

[root@fuego ~]# hosted-engine --console
Command VM.getStats with args {'vmID': 
'74d2966c-2efa-41f0-a5c3-dd383f690a92'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'74d2966c-2efa-41f0-a5c3-dd383f690a92'})

The engine VM is not on this host

It is as if the vmID were that of the old 4.3.10 HostedEngine, which doesn't exist 
anymore.


How can I make the new HostedEngine vmID be the correct one known by HA and 
the hosts?


*I'm afraid to lose the HostedEngine vm when stopping it!*

Thank you for your precious help.
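
As a hedged first check, assuming the standard hosted-engine configuration layout on 
the hosts: the vmid used by the HA services comes from 
/etc/ovirt-hosted-engine/hosted-engine.conf (and from the vm.conf the agent extracts 
from the OVF_STORE on the shared storage), so comparing it with what libvirt reports 
shows quickly whether the hosts are still carrying the old 4.3.10 id:

# on each hosted-engine host
grep -E '^(vmid|vm_disk_id)=' /etc/ovirt-hosted-engine/hosted-engine.conf
virsh -r domuuid HostedEngine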

--
Nathanaël Blanchet

Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EHUUM3MGCNTCVRZUXJE3OADK7ENNTEUN/


[ovirt-users] Hosted Engine Deploy Failure

2021-11-27 Thread Gary Pedretty
I keep getting the following error on a new clean install of the Hosted Engine on 
CentOS Stream. This is the latest updated CentOS Stream (version 8, not 9) and the 
latest version of oVirt.

https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

Attempted via the Cockpit Web Interface Deployment Wizard.


Any ideas?  Some really old references to this issue a few years ago, but 
nothing recent.

Gary



Deployment Log Snippet.

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using 
username/password credentials]
[ ERROR ] ovirtsdk4.AuthError: Error during SSO authentication access_denied : 
Cannot authenticate user 'admin@internal': Unable to log in because the user 
account is disabled or locked. Contact the system administrator..
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false, 
"msg": "Error during SSO authentication access_denied : Cannot authenticate 
user 'admin@internal': Unable to log in because the user account is disabled or 
locked. Contact the system administrator.."}
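
A hedged suggestion, assuming the local bootstrap engine VM is still up and reachable 
over SSH when this task fails (it normally is at this stage): the admin@internal 
account can be inspected and unlocked from inside the engine VM with the aaa-jdbc 
tool, after which the setup's SSO retries may succeed:

# inside the engine VM
ovirt-aaa-jdbc-tool user show admin
ovirt-aaa-jdbc-tool user unlock admin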
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HMLCEG2LPSWFOVUBDPU5JLKD7DWUGVE3/


[ovirt-users] hosted-engine deploy - Network error during communication with the Host. (NFS)

2021-10-29 Thread Matyi Szabolcs
Hi,

I get the following error when running hosted-engine --deploy:

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]

[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]

[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]

[ INFO ] ok: [localhost]

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]

[ INFO ] ok: [localhost]

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]

[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
"Network error during communication with the Host.". HTTP response code is 400.

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Fault reason is \"Operation Failed\". Fault detail is \"Network
error
during communication with the Host.\". HTTP response code is 400."}

Please specify the storage you would like to use (glusterfs, iscsi, fc,
nfs)[nfs]:






The NFS share itself is fine; I can mount it from a VM and also from localhost.
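
A hedged next step, assuming the default vdsm log locations: the mount is attempted by 
vdsm (as the vdsm user), not by root, so a manual root mount succeeding does not prove 
much. The vdsm and supervdsm logs on the host usually show the real mount error, and 
the export must be writable by vdsm:kvm (uid/gid 36):

# on the host, while retrying the "Add NFS storage domain" step
tail -f /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log
# on the NFS server; the export path here is only an example
chown 36:36 /exports/he-storage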

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6XJ5PX5YYDXORNCGREE32X5QYHGQ2RDP/


[ovirt-users] Hosted Engine Deployment failure

2021-10-19 Thread Raj P
Hi, I am trying to deploy the hosted engine and it keeps failing with the following 
errors.

I am new to oVirt and unable to figure out what the issue is;
any help would be very much appreciated.

[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.222.214]: FAILED! => {"changed": false, 
"msg": "Failed to download metadata for repo 'ovirt-4.4-centos-gluster8': 
repomd.xml parser error: Parse error at line: 68 (xmlParseStartTag: invalid 
element name\n)", "rc": 1, "results": []}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.214]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush 
dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a 
failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a 
failure deploying the engine on the local engine VM. The system may not be 
provisioned according to the playbook results: please check the logs for the 
issue, fix accordingly or re-deploy from scratch.\n"}
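
The first fatal error is the interesting one: the repo metadata for 
'ovirt-4.4-centos-gluster8' could not be parsed, which usually means the mirror (or an 
intercepting proxy) returned an HTML error page instead of repomd.xml. A hedged check 
from inside the engine VM (192.168.222.214 here) would be:

# inside the engine VM
dnf clean all
dnf --disablerepo='*' --enablerepo=ovirt-4.4-centos-gluster8 makecache
# repoinfo shows the baseurl/mirrorlist in use, which can then be fetched with curl
dnf repoinfo ovirt-4.4-centos-gluster8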
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ASVA6SI25UIBFL6HXBDPEPUG4RHUZITA/


[ovirt-users] Hosted Engine ansible runner artifacts cleanup

2021-10-03 Thread Erez Zarum
Hey,
I can't get the artifacts to clean up; I have configured the ansible-runner 
config.yml but it seems not to be running.
Should I restart the ovirt-engine service?

What is the best way to manually clean up the artifacts (I can't work out the correct 
ansible-runner command for it), as they are starting to fill up disk space?
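
A hedged manual approach, assuming the artifacts directory is the one configured in 
that config.yml (the path below is only a placeholder): each run is a self-contained 
directory, so old ones can simply be removed by age:

# ARTIFACTS_DIR is a placeholder for the configured artifacts directory
ARTIFACTS_DIR=/path/to/ansible-runner/artifacts
find "$ARTIFACTS_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +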

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QAEPEFAQQUPRJRS34O37XYNOGBG3BPV3/


[ovirt-users] Hosted engine on HCI cluster is not running

2021-08-13 Thread David White via Users
Hello,
It appears that my Manager / hosted-engine isn't working, and I'm unable to get 
it to start.

I have a 3-node HCI cluster, but right now, Gluster is only running on 1 host 
(so no replication).
I was hoping to upgrade / replace the storage on my 2nd host today, but aborted 
that maintenance when I found that I couldn't even get into the Manager.

The storage is mounted, but here's what I see:

> [root@cha2-storage dwhite]# hosted-engine --vm-statusThe hosted engine 
> configuration has not been retrieved from shared storage. Please ensure that 
> ovirt-ha-agent is running and the storage server is reachable.
> 

> [root@cha2-storage dwhite]# systemctl status ovirt-ha-agent● 
> ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
> Agent
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
> vendor preset: disabled)
>    Active: active (running) since Fri 2021-08-13 11:10:51 EDT; 2h 44min ago
> Main PID: 3591872 (ovirt-ha-agent)
>     Tasks: 1 (limit: 409676)
>    Memory: 21.5M
>    CGroup: /system.slice/ovirt-ha-agent.service
>    └─3591872 /usr/libexec/platform-python 
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
> 

> Aug 13 11:10:51 cha2-storage.mgt.barredowlweb.com systemd[1]: Started oVirt 
> Hosted Engine High Availability Monitoring Agent.

Any time I try to do anything like connect the engine storage, disconnect the 
engine storage, or connect to the console, it just sits there and doesn't do 
anything, and I eventually have to Ctrl-C out of it.
Maybe I have to be patient? When I Ctrl-C, I get a traceback:

> [root@cha2-storage dwhite]# hosted-engine --console^CTraceback (most recent 
> call last):
>   File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
> 

>     "__main__", mod_spec)
>   File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
>     exec(code, run_globals)
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/vdsm_helper.py", 
> line 214, in 
> [root@cha2-storage dwhite]# args.command(args)
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/vdsm_helper.py", 
> line 42, in func
>     f(*args, **kwargs)
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/vdsm_helper.py", 
> line 91, in checkVmStatus
>     cli = ohautil.connect_vdsm_json_rpc()
>   File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", 
> line 472, in connect_vdsm_json_rpc
>     __vdsm_json_rpc_connect(logger, timeout)
>   File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", 
> line 395, in __vdsm_json_rpc_connect
>     timeout=timeout)
>   File "/usr/lib/python3.6/site-packages/vdsm/client.py", line 154, in connect
>     outgoing_heartbeat=outgoing_heartbeat, nr_retries=nr_retries)
>   File "/usr/lib/python3.6/site-packages/yajsonrpc/stompclient.py", line 426, 
> in SimpleClient
>     nr_retries, reconnect_interval)
>   File "/usr/lib/python3.6/site-packages/yajsonrpc/stompclient.py", line 448, 
> in StandAloneRpcClient
>     client = StompClient(utils.create_connected_socket(host, port, sslctx),
>   File "/usr/lib/python3.6/site-packages/vdsm/utils.py", line 379, in 
> create_connected_socket
>     sock.connect((host, port))
>   File "/usr/lib64/python3.6/ssl.py", line 1068, in connect
>     self._real_connect(addr, False)
>   File "/usr/lib64/python3.6/ssl.py", line 1059, in _real_connect
>     self.do_handshake()
>   File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
>     self._sslobj.do_handshake()
>   File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
>     self._sslobj.do_handshake()

This is what I see in /var/log/ovirt-hosted-engine-ha/broker.log:

> MainThread::WARNING::2021-08-11 
> 10:24:41,596::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
>  Can't connect vdsm storage: Connection to storage server failed
> MainThread::ERROR::2021-08-11 
> 10:24:41,596::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run) 
> Failed initializing the broker: Connection to storage server failed
> MainThread::ERROR::2021-08-11 
> 10:24:41,598::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run) 
> Traceback (most recent call last):
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", 
> line 64, in run
>     self._storage_broker_instance = self._get_storage_broker()
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", 
> line 143, in _get_storage_broker
>     return storage_broker.StorageBroker()
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>  line 97, in __init__
>     self._backend.connect()
>   File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
>  line 375, in connect
>     sserver.connect_storage_server()
>   File 
> 
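
(The message is truncated here.) The recurring "Connection to storage server failed" 
from the broker suggests the hosted-engine Gluster volume is not mounted or reachable 
from this host. A hedged set of checks, assuming the engine storage domain lives on a 
Gluster volume named "engine":

systemctl status glusterd
gluster volume status engine
grep -i engine /proc/mounts
journalctl -u ovirt-ha-broker --no-pager | tail -n 50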

[ovirt-users] Hosted Engine Deployment

2021-07-07 Thread Harry O
Why does the HE deployment not create a hosted engine data center and cluster whose 
version fits the host?
My Hosted Engine deployment now fails again because of "Host hej1.5ervers.lan 
is compatible with versions (4.2,4.3,4.4,4.5) and cannot join Cluster Default 
which is set to version 4.6."
I think the deployment should make a DC and cluster that fit the host that is 
used to deploy; otherwise it's doomed to fail.
Is there a process for fixing this? I can't change the version from the HE UI as 
I'm instructed to; there are no options on the data center other than 4.6:
[ INFO ] You can now connect to https://hej1.5ervers.lan:6900/ovirt-engine/ and 
check the status of this host and eventually remediate it, please continue only 
when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until 
/tmp/ansible.z_g6jh7h_he_setup_lock is removed, delete it once ready to proceed]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTMATAQJBGAEQKCL5E74OZVQ4SHJMD7T/


[ovirt-users] hosted-engine can't communicate with VM

2021-06-21 Thread Harry O
The VM is up and the IDs match, but it fails when I try anything.
hosted-engine --vm-shutdown
Command VM.shutdown with args {'vmID': '350a168a-beb9-4417-9fbd-5a8121863a57', 
'delay': '120', 'message': 'VM is shutting down!'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'350a168a-beb9-4417-9fbd-5a8121863a57'})


virsh list
 Id   Name   State
--
 2HostedEngine   running


virsh domuuid HostedEngine
350a168a-beb9-4417-9fbd-5a8121863a57







MainThread::ERROR::2021-06-17 
09:58:54,537::hosted_engine::953::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
 Failed to stop engine VM: Command VM.destroy with args {'vmID': 
'350a168a-beb9-4417-9fbd-5a8121863a57'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'350a168a-beb9-4417-9fbd-5a8121863a57'})

MainThread::INFO::2021-06-17 
09:58:54,563::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition 
(EngineForceStop-ReinitializeFSM) sent? ignored
MainThread::INFO::2021-06-17 
09:58:54,569::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state ReinitializeFSM (score: 0)
MainThread::INFO::2021-06-17 
09:59:04,654::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (ReinitializeFSM-EngineDown) 
sent? ignored
MainThread::INFO::2021-06-17 
09:59:04,738::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineDown (score: 3400)
MainThread::INFO::2021-06-17 
09:59:13,759::states::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2021-06-17 
09:59:13,796::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineDown-EngineStart) sent? 
ignored
MainThread::INFO::2021-06-17 
09:59:13,888::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStart (score: 3400)
MainThread::INFO::2021-06-17 
09:59:13,903::hosted_engine::895::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Ensuring VDSM state is clear for engine VM
MainThread::INFO::2021-06-17 
09:59:13,909::hosted_engine::907::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Vdsm state for VM clean
MainThread::INFO::2021-06-17 
09:59:13,909::hosted_engine::853::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2021-06-17 
09:59:14,444::hosted_engine::862::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 stdout: VM in WaitForLaunch

MainThread::INFO::2021-06-17 
09:59:14,444::hosted_engine::863::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 stderr: Command VM.getStats with args {'vmID': 
'350a168a-beb9-4417-9fbd-5a8121863a57'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'350a168a-beb9-4417-9fbd-5a8121863a57'})

MainThread::INFO::2021-06-17 
09:59:14,444::hosted_engine::875::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 Engine VM started on localhost
MainThread::INFO::2021-06-17 
09:59:14,472::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineStart-EngineStarting) 
sent? ignored
MainThread::INFO::2021-06-17 
09:59:14,479::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStarting (score: 3400)
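
One hedged way to narrow this down, assuming vdsm-client is installed on the host: 
virsh talks to libvirt directly, but hosted-engine goes through vdsm, so it is worth 
checking whether vdsm itself knows about the VM. If the list below is empty while 
virsh shows HostedEngine running, the VM is effectively running outside of vdsm's 
control:

vdsm-client Host getVMList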
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/733MY2I2IB3MBMPMP2AKJ2TM72C77DS4/


[ovirt-users] hosted-engine --vm-start not working

2021-06-21 Thread Harry O
Hi,
When i run: hosted-engine --vm-start I get this:
VM exists and is Down, cleaning up and restarting
VM in WaitForLaunch

But the VM never starts:
virsh list --all
 Id   Name   State
---
 -HostedEngine   shut off


systemctl status -l ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; disabled; 
vendor preset: disabled)
   Active: active (running) since Wed 2021-06-16 13:27:27 CEST; 3min 26s ago
 Main PID: 79702 (ovirt-ha-agent)
Tasks: 2 (limit: 198090)
   Memory: 28.3M
   CGroup: /system.slice/ovirt-ha-agent.service
   └─79702 /usr/libexec/platform-python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: ovirt-ha-agent.service: Succeeded.
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: Stopped oVirt Hosted Engine High 
Availability Monitoring Agent.
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: Started oVirt Hosted Engine High 
Availability Monitoring Agent.
Jun 16 13:29:42 hej1.5ervers.lan ovirt-ha-agent[79702]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped 
on localhost





hosted-engine --vm-status


--== Host hej1.5ervers.lan (id: 1) status ==--

Host ID: 1
Host timestamp : 3547
Score  : 3400
Engine status  : {"vm": "down", "health": "bad", "detail": 
"Down", "reason": "bad vm status"}
Hostname   : hej1.5ervers.lan
Local maintenance  : False
stopped: False
crc32  : f35899f8
conf_on_shared_storage : True
local_conf_timestamp   : 3547
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3547 (Wed Jun 16 13:32:12 2021)
host-id=1
score=3400
vm_conf_refresh_time=3547 (Wed Jun 16 13:32:12 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host hej2.5ervers.lan (id: 2) status ==--

Host ID: 2
Host timestamp : 94681
Score  : 0
Engine status  : {"vm": "down_unexpected", "health": "bad", 
"detail": "Down", "reason": "bad vm status"}
Hostname   : hej2.5ervers.lan
Local maintenance  : False
stopped: False
crc32  : 40a3f809
conf_on_shared_storage : True
local_conf_timestamp   : 94681
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=94681 (Wed Jun 16 13:32:05 2021)
host-id=2
score=0
vm_conf_refresh_time=94681 (Wed Jun 16 13:32:05 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan  2 03:23:40 1970


--== Host hej3.5ervers.lan (id: 3) status ==--

Host ID: 3
Host timestamp : 94666
Score  : 0
Engine status  : {"vm": "down_unexpected", "health": "bad", 
"detail": "Down", "reason": "bad vm status"}
Hostname   : hej3.5ervers.lan
Local maintenance  : False
stopped: False
crc32  : a50c2b3e
conf_on_shared_storage : True
local_conf_timestamp   : 94666
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=94666 (Wed Jun 16 13:32:09 2021)
host-id=3
score=0
vm_conf_refresh_time=94666 (Wed Jun 16 13:32:09 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan  2 03:23:16 1970
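
Since the agent reports "Engine VM stopped on localhost" right after the start 
attempt, a hedged place to look for the actual launch failure (assuming default log 
locations) is the qemu log for the HostedEngine domain and vdsm's log around the same 
timestamp:

tail -n 100 /var/log/libvirt/qemu/HostedEngine.log
grep -i 'HostedEngine\|WaitForLaunch' /var/log/vdsm/vdsm.log | tail -n 100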
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33W3L6MTJ45U257JGU3NNF4WP3GTCAFM/


[ovirt-users] Hosted-Engine import

2021-06-06 Thread Harry O
Hi,

Is it possible to import hosted engine vm from vm files on gluster only?
If yes, how?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QQIIJLUADCYHDG66ZRTU2QFMNQ7POEH5/


[ovirt-users] Hosted-engine fail and host reboot

2021-05-26 Thread Dominique D
I tried with 1 bare-metal host, and with 3 bare-metal or virtual hosts, and I still 
have the same problem installing the hosted engine. The hyperconverged setup installs 
fine.

I have tried multiple versions of the oVirt ISO file: 4.4.1, 4.4.4 and 4.4.6.


When I run hosted-engine --deploy, or use the cockpit, it creates a temporary 
VM in the 192.168.222.x subnet and I am able to connect over SSH to this 
temporary IP. When the script displays "TASK [ovirt.ovirt.hosted_engine_setup]: 
Wait for the host to be up" the server reboots and I have nothing left.

Could this be a problem with the creation of the ovirtmgmt bridge?

Here are all my logs:
https://drive.google.com/drive/folders/1kFFSlIqbjVwSN8t88aQZZR45DYEHPUvt?usp=sharing

021-05-26 10:15:09,135-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Always revoke the SSO token]
2021-05-26 10:15:10,439-0400 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': "You must specify either 'url' or 'hostname'.", 'invocation': 
{'module_args': {'state': 'absent', 'ovirt_auth': {'changed': False, 
'ansible_facts': {'ovirt_auth': {'token': 
'Mz2onwB7qWX2x8HnJVgetQIQ9U4eVziRt8TEabfoizI2B98d0PDp-yxTU92a9lbun2vcr_i5yOXRsJKJKhqkVw',
 'url': 'https://oe.telecom.lan/ovirt-engine/api', 'ca_file': None, 'insecure': 
True, 'timeout': 0, 'compress': True, 'kerberos': False, 'headers': None}}, 
'failed': False, 'attempts': 1}, 'timeout': 0, 'compress': True, 'kerberos': 
False, 'url': None, 'hostname': None, 'username': None, 'password': None, 
'ca_file': None, 'insecure': None, 'headers': None, 'token': None}}, 
'_ansible_no_log': False, 'changed': False}
2021-05-26 10:15:10,540-0400 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"changed": false, "msg": "You must specify 
either 'url' or 'hostname'."}
2021-05-26 10:15:11,643-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
include_tasks]
2021-05-26 10:15:12,647-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2021-05-26 10:15:13,851-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Obtain SSO token using username/password credentials]
2021-05-26 10:15:15,261-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2021-05-26 10:15:17,275-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Wait 
for the host to be up]
2021-05-26 10:22:54,758-0400 ERROR otopi.context context._executeMethod:154 
Failed to execute stage 'Closing up': SIG1
2021-05-26 10:22:54,762-0400 DEBUG otopi.context context.dumpEnvironment:765 
ENVIRONMENT DUMP - BEGIN
2021-05-26 10:22:54,763-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV BASE/error=bool:'True'
2021-05-26 10:22:54,763-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV BASE/exceptionInfo=list:'[(, RuntimeError('SIG1',), 
)]'

thank you
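
Since the host reboots on its own during "Wait for the host to be up", a hedged check 
after the next attempt is to look at the previous boot's journal for what triggered 
the reboot (watchdog, kernel panic, power event, ...):

# previous boot's messages, warnings and above
journalctl -b -1 -p warning --no-pager | tail -n 200
last -x shutdown reboot | head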
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6UIOZFLJPS7XYPTY453WP3QKUZ2MR2LA/


[ovirt-users] hosted-engine /var full

2021-05-07 Thread Pascal D
I have an issue with my 4.3 hosted engine: /var is full. The directory 
taking 20 GB is /var/opt/rh/rh-postgresql10/lib/pgsql/data/base/data/16398

Some of those files are over 1 GB in size. Can they be safely removed?
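
Those are live PostgreSQL relation files (the engine database of a 4.3 hosted engine 
runs on the SCL rh-postgresql10 instance), so removing them directly would corrupt the 
engine DB. A hedged first step is to see which tables are actually consuming the 
space, then shrink them from inside PostgreSQL or from the engine side:

# as root on the engine VM; shows the ten largest relations in the engine DB
su - postgres -c "scl enable rh-postgresql10 -- psql -d engine -c \"SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) FROM pg_class ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;\""

In many installations the bulk turns out to be audit/event history, which is better 
trimmed from the engine side than by deleting files on disk.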

TIA
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7JRYG3L5BYXWFMPQHVRAXCGLYOYOHYLE/


[ovirt-users] hosted-engine volume removed 3 bricks (replica 3) out of 12 bricks, now I cant start hosted-engine vm

2021-04-14 Thread adrianquintero
Hi,
I tried removing a replica-3 brick set from a distributed replicated volume which 
holds the oVirt hosted-engine VM.
As soon as I hit commit the VM went into pause. I tried to recover the volume 
ID "daa292aa-be5c-426e-b124-64263bf8a3ee" from the removed bricks and now I am 
able to do a "hosted-engine --vm-status".

Error I see in the logs:
-
MainThread::WARNING::2021-04-14 
17:26:12,348::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
 Can't connect vdsm storage: Command Image.prepare with args {'imageID': 
'9feffcfa-6af2-4de3-b7d8-e57b84d56003', 'storagepoolID': 
'----', 'volumeID': 
'daa292aa-be5c-426e-b124-64263bf8a3ee', 'storagedomainID': 
'8db17b28-ecbb-4853-8a90-6ed2f69301eb'} failed:
(code=201, message=Volume does not exist: 
(u'daa292aa-be5c-426e-b124-64263bf8a3ee',)) 
-


on the following mount I see the volumeID twice:

[root@vmm10 images]# find /rhev/data-center/mnt/glusterSD/192.168.0.4\:_engine/ 
-name "daa292aa-be5c-426e-b124-64263bf8a3ee"

/rhev/data-center/mnt/glusterSD/192.168.0.4:_engine/8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee

/rhev/data-center/mnt/glusterSD/192.168.0.4:_engine/8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee


[root@vmm10 9feffcfa-6af2-4de3-b7d8-e57b84d56003]# ls -lh
total 131M
-rw-rw. 1 vdsm kvm  64M Apr 14 19:40 daa292aa-be5c-426e-b124-64263bf8a3ee
-rw-rw. 1 vdsm kvm  64M Apr 14 19:40 daa292aa-be5c-426e-b124-64263bf8a3ee
-rw-rw. 1 vdsm kvm 1.0M Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.lease
-rw-rw. 1 vdsm kvm 1.0M Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.lease
-rw-r--r--. 1 vdsm kvm  329 Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.meta
-rw-r--r--. 1 vdsm kvm  329 Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.meta

Any ideas on how to recover?
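
A hedged starting point, assuming the affected volume is the "engine" volume mounted 
above: check whether the remaining replica sets are healthy and whether the duplicate 
directory entries are a pending-heal artifact, before touching any files by hand:

gluster volume info engine
gluster volume status engine
gluster volume heal engine info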
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4DA4Z6I4OPDPK5FGUCVGER5EEW7URIUI/


[ovirt-users] hosted engine VM doesn't boot after minor update

2021-04-04 Thread a . cor
Hello!
I have RHV 4.4.2 with NFS storage for the HE storage domain.
I updated to 4.4.4 following the guide 
(https://www.ovirt.org/documentation/upgrade_guide/#Updating_a_self-hosted_engine_minor_updates)
 and rebooted the Hosted Engine VM. After this the VM does not boot.

Connected to the console, I see the following error:
BdsDxe: failed to load Boot0001 "UEFI Misc Device"...
After "Press any key" I get the BIOS screen; going to Device Manager -> Drive Health 
Manager, I see nothing.

I suspect that something happened to the VM config.
I checked /etc/ovirt-hosted-engine/hosted-engine.conf:
everything looks correct (I checked the UUID of the storage domain, etc.).
I checked /var/run/ovirt-hosted-engine-ha/vm.conf.
After decoding the base64 I found something like this:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7BDT3PNUAVK4H2G5JWPDVI3CNLE3ASDH/


[ovirt-users] Hosted engine deployment fail because host version does not match cluster version

2021-04-04 Thread emanuele . breuza
Hello,
I'm Emanuele. I'm interested in setting up a 3 node HA oVirt cluster on RHEL 
8.3. Following the oVirt Gluster Hyperconvergence guide, I managed to setup a 
GlusterFS storage and then I started the procedure to configure a self-hosted 
engine via cockpit. Unfortunately, I got stuck at the "Prepare VM" stage.
During this procedure, the automated process paused to let you verify inside 
the ovirt-engine that your host is OK. Since this is a brand new installation 
of oVirt, I have only one host configured and it is marked as not-good. The 
reason reported in the web page is that the host is compatible for v. 4.2, 4.3 
and 4.4 but the Cluster is v. 4.5. I tried to execute the upgrade procedure on 
the host but it was unable to go into maintenance mode.

I tried both ovirt-release44 and ovirt-release-master RPM to load the latest 
repository but without success. Do you have any suggestion about how to fix the 
self-hosted engine procedure?

BTW, if I install my server with the oVirt node ISO the procedure works 
correctly and my host report also 4.5 into the compatibility version list.

Thank you very much for your help!!!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DKAMD3DF4AP5SRYIBXTATPGJYSD5YZZJ/


[ovirt-users] Hosted-Engine vs Standalone Engine

2021-03-24 Thread Ian Easter
Hello Folks,

I have had to install a Hosted-Engine a few times in my environment.  There
have been some hardware issues and power issues that left the HE
unrecoverable.

In this situation, would the Standalone Engine install be more viable and
less prone to become inoperable due to the previous issues?

My assumption would be to have a head baremetal server run the Engine to
control and maintain my blades.

*Thank you,*
*Ian*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFXFE7M7WMXEBQBU3CJE7JFNLADVXRLO/


[ovirt-users] hosted-engine --vm-status returns "vm not running on this host" on all hosts

2021-02-11 Thread paul . chapman
I have tried to upgrade from 4.3 to 4.4 in a self-hosted environment and I now 
have multiple problems, one of which is that neither of my two hosts thinks it 
is running the hosted engine. What could cause this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVLLIFBGAXE3ECFPJFDD7GB6PWKCGASM/


[ovirt-users] Hosted Engine stuck in bios

2021-01-20 Thread Joseph Gelinas
Hi,

I recently did some updates of oVirt from 4.4.1 or 4.4.3 to 4.4.4, also setting 
the default datacenter from 4.4 to 4.5 and making the default BIOS Q35+UEFI; 
unfortunately quite a few things at once. Now, however, the hosted engine doesn't 
boot up anymore and `hosted-engine --console` just shows the below BIOS/firmware 
output:

 RHEL   
 RHEL-8.1.0 PC (Q35 + ICH9, 2009)2.00 GHz   
 0.0.0   16384 MB RAM   



   Select Language This is the option 
 one adjusts to change  
 > Device Managerthe language for the   
 > Boot Manager  current system 
 > Boot Maintenance Manager 

   Continue 
   Reset







  ^v=Move Highlight   =Select Entry  


When in this state `hosted-engine --vm-status` says it is up but failed 
liveliness check

hosted-engine --vm-status | grep -i engine\ status
Engine status  : {"vm": "down", "health": "bad", "detail": 
"unknown", "reason": "vm not running on this host"}
Engine status  : {"vm": "up", "health": "bad", "detail": 
"Up", "reason": "failed liveliness check"}
Engine status  : {"vm": "down", "health": "bad", "detail": 
"Down", "reason": "bad vm status"}

I assume I am running into https://access.redhat.com/solutions/5341561 (RHV: 
Hosted-Engine VM fails to start after changing the cluster to Q35/UEFI), however 
how to fix that isn't really described. I have tried starting the hosted engine 
paused (`hosted-engine --vm-start-paused`) and editing the config (`virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf edit 
HostedEngine`) to use pc-i440fx instead, removing a bunch of pcie lines etc. 
until it will accept the config, and then resuming the hosted engine (`virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume 
HostedEngine`), but haven't come up with a configuration that is able to start.

Anyone know how to resolve this? Am I even chasing the right path?
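
What I'm considering trying next, unless someone warns me off - an untested sketch, 
and the paths and key names are from memory so they may differ per version:

  cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf
  # in the copy, set emulatedMachine to a pc-i440fx type and drop the
  # UEFI/pflash-specific lines
  hosted-engine --vm-start --vm-conf=/root/vm.conf

and then, once the engine is up, point the cluster/HE VM back at a BIOS type it can 
actually boot so the change gets persisted.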


/var/log/libvirt/qemu/HostedEngine.log 

2021-01-20 15:31:56.500+: starting up libvirt version: 6.6.0, package: 
7.1.el8 (CBS , 2020-12-10-14:05:40, ), qemu version: 
5.1.0qemu-kvm-5.1.0-14.el8.1, kernel: 4.18.0-240.1.1.el8_3.x86_64, hostname: 
ovirt-3
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.config \
QEMU_AUDIO_DRV=spice \
/usr/libexec/qemu-kvm \
-name guest=HostedEngine,debug-threads=on \
-S \
-object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-25-HostedEngine/master-key.aes
 \
-blockdev 
'{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}'
 \
-blockdev 
'{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}'
 \
-blockdev 
'{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}'
 \
-blockdev 
'{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}'
 \
-machine 

[ovirt-users] hosted engine wrong bios

2020-12-14 Thread Michael Rohweder
Hi,

I am running oVirt Node 4.4.2 and hit an old mistake of mine.
I changed the cluster default to UEFI weeks ago.
Today the node had to be restarted, and now I cannot work:
the manager VM tries to boot with UEFI, and all other VMs are down because I cannot
start any of them from the CLI.

How can I change that setting (some config, file or something else) in this
VM back to normal BIOS?

Greetings

Michael
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KWSCMGZ6E5M66R26S5SFHXVR5QMQRU6A/


[ovirt-users] Hosted engine deployment w/ two networks (one migration, one management).

2020-12-13 Thread Gilboa Davara
Hello all,

I'm slowly building a new ovirt over glusterfs cluster with 3 fairly beefy
servers.
Each of the nodes has the following network configuration:
3x1GbE: ILO, ovirtmgmt and SSH.
4x10GbE: Private and external VM network(s).
2x40GBE: GlusterFS and VM migration.

Now, for some odd reasons, I'd rather keep the two 40GbE networks
disconnected from my normal management network.
My question is simple: I remember that I can somehow configure oVirt to use
two different networks for management / migration, but as far as I can
see, I cannot configure the cluster to use a different network for
migration purposes.

1. Am I missing something?
2. Can I somehow configure the hosted engine to have an IP in more than one
network (management and migration)?
3. More of a gluster question: as the 40GbE NICs and the 1GbE NIC sit on
different switches, can I somehow configure gluster to fall back to the 1GbE
NIC if the main 40GbE link fails? AFAIR bonding doesn't support an asymmetrical
network device configuration. (And rightly so, in this case.)

Thanks,
Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QBCMH2J7HIKCQQXEWLHDGLPG55RPMSNM/


[ovirt-users] Hosted Engine can't migrate

2020-11-17 Thread McNamara, Bradley
I'm new to oVirt, but not new to virtualization technologies.  I've run into a 
wall, but the solution has gotta be simple.  I've seen other threads with the 
same issue, but a solution is never posted.

I've followed the guide perfectly.  I get  the self-hosted engine up and 
running with no issues.  I add hosts to the cluster, but it won't let me 
migrate the HE because of Hosted Engine HA score of "N/A" on every host except 
the one it was initially deployed on.  The physical servers are exactly the 
same:  same networks, storage, OS, patches, etc.  The additional hosts are 
added thru the HE GUI and are added without issue.  Is there a secret button I 
gotta push?

What am I missing?  This is with 4.3 on CentOS 7.9.
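
In case it matters, this is what I plan to check on the additional hosts (my 
assumption being that these services have to be healthy there for a score to 
show up at all):

  systemctl status ovirt-ha-agent ovirt-ha-broker
  hosted-engine --vm-status

If those services are missing on the extra hosts, my guess is they were added 
without the hosted-engine deployment action and would need a reinstall with 
"Hosted Engine: Deploy" selected - but that's only my assumption.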

HELP!  Thanks.  :-)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TKNPG3TRZGZIUCVWHL6SPFOMZPFDXGYP/


[ovirt-users] Hosted Engine install via cockpit - proxy issue

2020-10-29 Thread simon
I am installing oVirt in a closed environment where internet access is 
controlled by proxies.

This works until the hosted engine install via cockpit where it fails to 
complete as it appears to require internet access to the repository.

The only workaround I have found is to ssh onto the engine ‘mid install’ and 
add the proxy address to /etc/dnf/dnf.conf. After doing this the install is 
successful.

Am I missing something or does this type of install require unfettered internet 
access?
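
For reference, the line I add mid-install is just the standard dnf proxy setting 
(the proxy URL below is made up):

  # appended to /etc/dnf/dnf.conf on the engine VM
  proxy=http://proxy.example.com:3128

Ideally I'd like a supported way to pass this (or a full repo configuration) to the 
deployment up front.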
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5FFX2VLMBCLOAGHQPUX2VBA472C5NDG/


[ovirt-users] Hosted engine + OVN

2020-09-02 Thread Sverker Abrahamsson via Users
Was it ever solved to install hosted engine with ovn? I tried a few 
years ago, got it almost to work but then gave up.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KWV2MKV6E3ABQL6DGKVJ7FZTGNG3JPK/


[ovirt-users] Hosted engine install failure: ipv6.gateway: gateway cannot be set if there are no addresses configured

2020-09-02 Thread Sverker Abrahamsson via Users

Hi,
I'm attempting to install hosted engine but getting this failure:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The 
host has been set in non_operational status, deployment errors:   code 
505: Host h1-mgmt.limetransit.com installation failed. Failed to 
configure management network on the host.,    code 1120: Failed to 
configure management network on host h1-mgmt.limetransit.com due to 
setup networks failure., code 9000: Failed to verify Power Management 
configuration for Host h1-mgmt.limetransit.com.,    code 10802: VDSM 
h1-mgmt.limetransit.com command HostSetupNetworksVDS failed: Internal 
JSON-RPC error: {'reason': 'Unexpected failure of libnm when running the 
mainloop: run execution'},   fix accordingly and re-deploy."}


Looking in vdsm.log I find this which I believe is the root cause:

MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,897::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) 
call setupNetworks with ({'ovirtmgmt': {'vlan': '4000', 'netmask': 
'255.255.255.0', 'ipv6autoconf': False,
'nic': 'enp4s0', 'bridged': 'true', 'ipaddr': '172.27.1.1', 
'defaultRoute': True, 'dhcpv6': False, 'STP': 'no', 'mtu': 1500, 
'switch': 'legacy'}}, {}, {'connectivityTimeout': 120, 
'commitOnSuccess': True, 'connectivityCheck': 'true'}) {}
MainProcess|jsonrpc/0::INFO::2020-09-02 
16:38:25,897::api::220::root::(setupNetworks) Setting up network 
according to configuration: networks:{'ovirtmgmt': {'vlan': '4000', 
'netmask': '255.255.255.0', 'ipv6autoconf': False, 'nic': 'enp4s0', 'bridged': 'true', 'ipaddr': '172.27.1.1', 
'defaultRoute': True, 'dhcpv6': False, 'STP': 'no', 'mtu': 1500, 
'switch': 'legacy'}}, bondings:{}, options:{'connectivityTimeout': 120, 
'commitOnSuccess': True, 'connectivityCheck': 'true'}
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,902::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,906::cmdutils::138::root::(exec_cmd) SUCCESS:  = b'';  = 0
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,945::vsctl::74::root::(commit) Executing commands: 
/usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- 
list Port -- list Interface
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,945::cmdutils::130::root::(exec_cmd) /usr/bin/ovs-vsctl 
--timeout=5 --oneline --format=json -- list Bridge -- list Port -- list 
Interface (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,952::cmdutils::138::root::(exec_cmd) SUCCESS:  = b'';  = 0
MainProcess|jsonrpc/0::INFO::2020-09-02 
16:38:25,957::netconfpersistence::58::root::(setNetwork) Adding network 
ovirtmgmt({'vlan': 4000, 'netmask': '255.255.255.0', 'ipv6autoconf': 
False, 'nic': 'enp4s0', 'bridged': True
, 'ipaddr': '172.27.1.1', 'defaultRoute': True, 'dhcpv6': False, 'mtu': 
1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 
'nameservers': ['213.133.98.98']})
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:25,958::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-7 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd 
None)
MainProcess|jsonrpc/0::INFO::2020-09-02 
16:38:26,154::hooks::122::root::(_runHooksDir) 
/usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=b''
MainProcess|jsonrpc/0::INFO::2020-09-02 
16:38:26,155::configurator::195::root::(_setup_nmstate) Processing setup 
through nmstate
MainProcess|jsonrpc/0::INFO::2020-09-02 
16:38:26,175::configurator::197::root::(_setup_nmstate) Desired state: 
{'interfaces': [{'name': 'enp4s0', 'state': 'up', 'mtu': 1500}, {'vlan': 
{'id': 4000, 'base-iface': 'enp4s0'}
, 'name': 'enp4s0.4000', 'type': 'vlan', 'state': 'up', 'mtu': 1500, 
'ipv4': {'enabled': False}, 'ipv6': {'enabled': False}}, {'name': 
'ovirtmgmt', 'type': 'linux-bridge', 'state': 'up', 'mtu': 1500, 
'bridge': {'port': [
{'name': 'enp4s0.4000'}], 'options': {'stp': {'enabled': False}}}, 
'ipv4': {'enabled': True, 'address': [{'ip': '172.27.1.1', 
'prefix-length': 24}], 'dhcp': False}, 'ipv6': {'enabled': False}}], 
'dns-resolver': {'config': {'server': ['213.133.98.98']}}}
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:26,217::checkpoint::121::root::(create) Checkpoint 
/org/freedesktop/NetworkManager/Checkpoint/1 created for all devices: 60
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:26,218::netapplier::239::root::(_add_interfaces) Adding new 
interfaces: ['ovirtmgmt']
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:26,221::netapplier::251::root::(_edit_interfaces) Editing 
interfaces: ['enp4s0.4000', 'enp4s0']
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:26,224::nmclient::136::root::(execute_next_action) Executing NM 
action: func=add_connection_async
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:26,231::connection::329::root::(_add_connection_callback) 
Connection adding succeeded: dev=ovirtmgmt
MainProcess|jsonrpc/0::DEBUG::2020-09-02 
16:38:26,232::nmclient::136::root::(execute_next_action) Executing NM 
action: 

[ovirt-users] hosted engine migration

2020-08-25 Thread 董青龙
Hi all,
I have an oVirt 4.3.10.4 environment with 2 hosts. Normal VMs in this 
environment can be migrated, but the hosted engine VM cannot be migrated. 
Can anyone help? Thanks a lot!
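
My understanding - which may well be wrong - is that the hosted engine VM will only 
migrate to a host with a positive HA score, and that putting the current host into 
local maintenance should force the move. Is this the right approach?

  hosted-engine --vm-status                      # check the HA score on both hosts
  hosted-engine --set-maintenance --mode=local   # on the host running the engine VM
  hosted-engine --set-maintenance --mode=none    # once the VM has moved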


[Screenshots attached: hosts status, normal VM migration, hosted engine VM migration.]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZXHE2AJX42HNHOMYHTDCUUIU3VQTQTLF/


[ovirt-users] Hosted Engine stuck in Firmware

2020-08-22 Thread Vinícius Ferrão via Users
Hello, I have a strange issue with oVirt 4.4.1.

The hosted engine is stuck in the UEFI firmware and never actually boots.

I think this happened when I changed the default VM mode for the cluster inside 
the datacenter.

Is there a way to fix this without redeploying the engine?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBFY2F4FNVZ25TLCR6IZ5YP32PUQLDLI/


[ovirt-users] Hosted Engine Deployment stuck at 3. Prepare VM

2020-08-11 Thread hkexdong--- via Users
oVirt Node version is 4.4.1. This version deployed successfully before.
But after I compiled the RAID driver it is now stuck at Prepare VM.
The last message is "[ INFO ] TASK [ovirt.hosted_engine_setup : Install 
ovirt-engine-appliance rpm]"

According to the instructions from the RAID controller manufacturer, I installed the 
"kernel-headers-xxx.rpm" and "kernel-devel-xxx.rpm" extracted from the official 
CentOS 8.2.2004 ISO, along with a bunch of packages needed to compile the 
driver (e.g. gcc, make, zlib-devel, etc.)

I think those packages ruined the deployment, but I still want to know which part it 
is actually stuck at and why. Is there any way to check the 
deployment log?
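
(I guess the setup log under /var/log/ovirt-hosted-engine-setup/ is the place to start, 
e.g.:

  ls -lt /var/log/ovirt-hosted-engine-setup/
  tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log

but I'm not sure whether the appliance rpm installation step logs there or somewhere 
else.)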
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZRQEVTZYUXWVHATC4IZPNFKBF37SGGW6/


[ovirt-users] hosted-engine upgrade from 4.3 to 4.4 fails with "Cannot edit VM."

2020-08-10 Thread d
Hi!
hosted-engine --deploy --restore-from-file=ovirt_engine_full.arch
fails with 
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is 
"[Cannot edit VM. A VM running the engine ("hosted engine") cannot be set to 
highly available as it has its own HA mechanism.]". HTTP response code is 409.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Cannot edit VM. A VM running 
the engine (\"hosted engine\") cannot be set to highly available as it has its 
own HA mechanism.]\". HTTP response code is 409."}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
Can anybody help with this error? Maybe someone has access to this page: 
https://access.redhat.com/solutions/5303571?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5KWSZMBBQKCSUM6LH7VMLA3JVXURAGVC/


[ovirt-users] Hosted Engine 4.4.1

2020-07-22 Thread Vijay Sachdeva via Users
Hello Everyone,

 

The "Wait for the host to be up" task has been stuck for hours, and when I checked the 
engine log I found this below:

2020-07-22 16:50:35,717+02 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] 
(default task-1) [] OAuthException access_denied: Cannot authenticate user 
'None@N/A': No valid profile found in credentials..

Has anyone faced such an issue? Please help me out!

 

Thanks

Vijay Sachdeva

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QV27EGUU5F6EWZTHN6YE7W6BZ77RIC42/


[ovirt-users] Hosted Engine | Ovirt 4.4.1

2020-07-22 Thread Vijay Sachdeva via Users
Hi Everyone

 

Getting the below error when deploying the hosted engine:

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, 
"msg": "The Python 2 yum module is needed for this module. If you require 
Python 3 support use the `dnf` Ansible module instead."}

Any idea how to fix it?

 

Thanks

Vijay Sachdeva

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDJZ24TUW3V6LFHGR65QVEZ6K4ONUEHG/


[ovirt-users] Hosted Engine Deployement

2020-07-20 Thread Vijay Sachdeva via Users
Hi All,

 

I tried deploying the hosted engine using the CLI and got this error every time:

[The error output itself is missing from the archived message.]

Can anyone please let me know what is missing here? It would be a great 
help.

 

Thanks

Vijay Sachdeva

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XRLXD5EUCS57KACBSTGLSZIXYAOZHDM4/


[ovirt-users] Hosted Engine Ovirt 4.3.8

2020-07-20 Thread Vijay Sachdeva via Users
Hi Everyone,

 

Does anyone have any idea why the hosted engine setup is stuck at “Wait for host to be 
up”?

The deployment has been running for 4 hours and is stuck. Any help please!

 

 

Thanks

Vijay Sachdeva

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJOXTAN434KIIBPIDBHLHZQRAKCOATKE/


[ovirt-users] Hosted engine can't start after changing cluster settings

2020-07-07 Thread Patrick Lomakin
I changed the BIOS Type settings in the cluster settings section to UEFI BIOS, 
and hosted-engine does not start after rebooting.  Although, before I made the 
changes, I looked in the engine at the /boot partition, which has the /efi 
directory.  Is there any way to change the engine settings manually, and how 
can I connect to it (or to its partitions) to perform actions?  I can't imagine 
how you can reinstall the engine without unplugging storage to import all 
virtual machines later.  Losing data in virtual machines is a disaster for me.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HYBHKETECQVAFGWNBM563P4NHR7IGVZW/


[ovirt-users] Hosted engine deployment doesn't add the host(s) to the /etc/hosts engine, even if hostname doesn't get resolved by DNS server

2020-06-21 Thread Gilboa Davara
Hello,

Following the previous email, I think I'm hitting an odd problem, not
sure if it's my mistake or an actual bug.
1. Newly deployed 4.4 self-hosted engine on localhost NFS storage on a
single node.
2. Installation failed during the final phase with a non-descriptive
error message [1].
3. Log attached.
4. Even though the installation seemed to have failed, I managed to
connect to the ovirt console, and noticed it failed to connect to the
host.
5. SSH into the hosted engine, and noticed it cannot resolve the host hostname.
6. Added the missing /etc/hosts entry, restarted the ovirt-engine
service, and all is green.
7. Looking at the deployment log, I'm seeing the following message:
"[WARNING] Failed to resolve gilboa-wx-ovirt.localdomain using DNS, it
can be resolved only locally", which means ansible was aware that
my DNS server doesn't resolve the host hostname, but didn't add the
missing /etc/hosts entry and/or error out.

A. Is it a bug, or is it PBKAC?
B. What are the chances that I have a working ovirt (test) setup?

- Gilboa

[1] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
{"ovirt_vms": [{"affinity_labels": [], "applications": [], "bios":
{"boot_menu": {"enabled": false}, "type": "cluster_default"},
"cdroms": [], "cluster": {"href":
"/ovirt-engine/api/clusters/1ac7525a-b3d1-11ea-9c7a-00163e57d088",
"id": "1ac7525a-b3d1-11ea-9c7a-00163e57d088"}, "comment": "", "cpu":
{"architecture": "x86_64", "topology": {"cores": 1, "sockets": 4,
"threads": 1}}, "cpu_profile": {"href":
"/ovirt-engine/api/cpuprofiles/58ca604e-01a7-003f-01de-0250",
"id": "58ca604e-01a7-003f-01de-0250"}, "cpu_shares": 0,
"creation_time": "2020-06-21 11:15:08.207000-04:00",
"delete_protected": false, "description": "", "disk_attachments": [],
"display": {"address": "127.0.0.1", "allow_override": false,
"certificate": {"content": "-BEGIN
CERTIFICATE-\nMIID3jCCAsagAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwUTELMAkGA1UEBhMCVVMxFDASBgNVBAoM\nC2xvY2FsZG9tYWluMSwwKgYDVQQDDCNnaWxib2Etd3gtdm1vdmlydC5sb2NhbGRvbWFpbi40MTE5\nMTAeFw0yMDA2MjAxNTA3MTFaFw0zMDA2MTkxNTA3MTFaMFExCzAJBgNVBAYTAlVTMRQwEgYDVQQK\nDAtsb2NhbGRvbWFpbjEsMCoGA1UEAwwjZ2lsYm9hLXd4LXZtb3ZpcnQubG9jYWxkb21haW4uNDEx\nOTEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUNgcCn28BMlMcadFZPR9JAWjOWyh0\nWMQffOSKUlr7H+6K02IdjCR5K9bR9moAlMA4dNzF/NJa12BlCmDkwOSsgZl+NK/Ut3kqfPp4CqMl\nU3jkJzqRnh0rqOFnQ4Q1tsejziH1MSiH5/eb4A3g2s0awXF6K+JRMp2MB9wYQx//tZrvhTLprK+Y\n9jXdQFZby8j+/9pqIdN7uoYbuqESRNcfIJ0WigJ10/IOAwloT0MASwyVtCRTCCXNE4PRN+Lexlcc\nxXq2QZ0zG8u3leLT6/J87PCP/OEj976fZ19q83stWjygu4+UiWS+QStlrzc1U+aGVxa+sO+9mv3f\n6CwT0clvAgMBAAGjgb8wgbwwHQYDVR0OBBYEFOiEmL8+rz3I4j5rmL+ws47Jv5KiMHoGA1UdIwRz\nMHGAFOiEmL8+rz3I4j5rmL+ws47Jv5KioVWkUzBRMQswCQYDVQQGEwJVUzEUMBIGA1UECgwLbG9j\nYWxkb21haW4xLDAqBgNVBAMMI2dpbGJvYS13eC12bW92aXJ0LmxvY2FsZG9tYWluLjQxMTkxggIQ\nADAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAStVI\nhHRrw5aa3YUNcwYh+kQfS47Es12nNRFeVVzbXj9CLS/TloYjyXEyZvFmYyyjNvuj4/3WcQDfeaG6\nTUGoFJ1sleOMT04WYWNJGyvsOfokT+I7yrBsVMg/7vip8UQV0ttmVoY/kMhZufwAUNlsZyh6F2o2\nNpAAcdLoguHo3UCGyaL8pF4G0NOAR/eV1rpl4VikqehUsXZ1sYzYZfK98xXrmepI42Lt3B2L6f9t\ngzYJ99jsrOGFhgvgV0H+PclviIdz79Jj3ZpPhezHkNQyrp0GOM0rqW+9xy50tlCQJ4rjdrRxnr21\nGpD3ZaQ2KSwGU79pnnRT6m7MSQ8irci3/A==\n-END
CERTIFICATE-\n", "organization": "localdomain", "subject":
"O=localdomain,CN=gilboa-wx-ovirt.localdomain"}, "copy_paste_enabled":
true, "disconnect_action": "LOCK_SCREEN", "file_transfer_enabled":
true, "monitors": 1, "port": 5900, "single_qxl_pci": false,
"smartcard_enabled": false, "type": "vnc"}, "fqdn":
"gilboa-wx-vmovirt.localdomain", "graphics_consoles": [],
"guest_operating_system": {"architecture": "x86_64", "codename": "",
"distribution": "CentOS Linux", "family": "Linux", "kernel":
{"version": {"build": 0, "full_version":
"4.18.0-147.8.1.el8_1.x86_64", "major": 4, "minor": 18, "revision":
147}}, "version": {"full_version": "8", "major": 8}},
"guest_time_zone": {"name": "EDT", "utc_offset": "-04:00"},
"high_availability": {"enabled": false, "priority": 0}, "host":
{"href": "/ovirt-engine/api/hosts/5ca55132-6d20-4a7f-81a8-717095ba8f78",
"id": "5ca55132-6d20-4a7f-81a8-717095ba8f78"}, "host_devices": [],
"href": "/ovirt-engine/api/vms/60ba9f1a-cdb1-406e-810d-187dbdd7775c",
"id": "60ba9f1a-cdb1-406e-810d-187dbdd7775c", "io": {"threads": 1},
"katello_errata": [], "large_icon": {"href":
"/ovirt-engine/api/icons/a753f77a-89a4-4b57-9c23-d23bd61ebdaf", "id":
"a753f77a-89a4-4b57-9c23-d23bd61ebdaf"}, "memory": 8589934592,
"memory_policy": {"guaranteed": 8589934592, "max": 8589934592},
"migration": {"auto_converge": "inherit", "compressed": "inherit",
"encrypted": "inherit"}, "migration_downtime": -1,
"multi_queues_enabled": true, "name": "external-HostedEngineLocal",
"next_run_configuration_exists": false, "nics": [], "numa_nodes": [],
"numa_tune_mode": "interleave", "origin": "external",
"original_template": {"href":
"/ovirt-engine/api/templates/----",
"id": 

[ovirt-users] Hosted engine deployment fails consistently when trying to download files.

2020-06-12 Thread Gilboa Davara
Hello,

I'm trying to deploy a hosted engine on one of my test setups.
No matter how I try to deploy the hosted engine, either via the command line
or via the "Hosted Engine" deployment from the cockpit web console, it always
fails with the same error message. [1]
Manually downloading RPMs via dnf from the host works just fine.
Firewall log files are clean.

Any idea what's going on?
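
One thing I have not ruled out yet: if the metadata download happens inside the 
bootstrap engine VM rather than on the host, the VM itself may be the one without 
working DNS/routing. I assume I can test that roughly like this (the address is 
whatever temporary IP the setup adds to /etc/hosts):

  grep 'temporary entry' /etc/hosts
  ssh root@<bootstrap-vm-ip>
  dnf makecache

Does that sound like the right direction?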

[1]  2020-06-12 06:09:38,609-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {'msg': "Failed to download metadata for
repo 'AppStream'", 'results': [], 'rc': 1, 'invocation': {'module_args':
{'name': ['ovirt-engine'], 'state': 'present', 'allow_downgrade': False,
'autoremove': False, 'bugfix': False, 'disable_gpg_check': False,
'disable_plugin': [], 'disablerepo': [], 'download_only': False,
'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'installroot': '/',
'install_repoquery': True, 'install_weak_deps': True, 'security': False,
'skip_broken': False, 'update_cache': False, 'update_only': False,
'validate_certs': True, 'lock_timeout': 30, 'conf_file': None,
'disable_excludes': None, 'download_dir': None, 'list': None, 'releasever':
None}}, '_ansible_no_log': False, 'changed': False,
'_ansible_delegated_vars': {'ansible_host': 'test-vmengine.localdomain'}}
  2020-06-12 06:09:38,709-0400 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost ->
gilboa-wx-vmovirt.localdomain]: FAILED! => {"changed": false, "msg":
"Failed to download metadata for repo 'AppStream'", "rc": 1, "results": []}
  2020-06-12 06:09:39,711-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 183 changed:
57 unreachable: 0 skipped: 77 failed: 1
  2020-06-12 06:09:39,812-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215
ansible-playbook rc: 2
  2020-06-12 06:09:39,812-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222
ansible-playbook stdout:
  2020-06-12 06:09:39,812-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225
ansible-playbook stderr:
  2020-06-12 06:09:39,812-0400 DEBUG otopi.context
context._executeMethod:145 method exception
  Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in
_executeMethod
  method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
line 403, in _closeup
  r = ah.run()
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
line 229, in run
  raise RuntimeError(_('Failed executing ansible-playbook'))

- Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MALLKNVVGH4XRF5QJ7MARBET42CVBPJX/


[ovirt-users] Hosted Engine stalled and unable to restart

2020-03-31 Thread Mark Steele
Hello,

We are on an older version (3.x - cannot be specific as I cannot get my
ovirt hosted engine up).

We experienced a storage failure earlier this evening - the hosted engine
was originally installed with this storage domain although we have moved
all VM's and disks off of it.

The storage was restored and all the VM's are now running, but the ovirt
engine is not pinging and is unreachable.

I have attempted to locate it on my HVs using 'virsh list --all' but only
one of them accepts my credentials - all the others fail to authenticate.

Is there a way to locate what the credentials are on each HV since the
default is not working? Additionally, is there any other way to locate the
hosted engine and restart it directly from a HV?
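
For what it's worth, this is what I've been trying so far (not sure how much of it 
applies to a 3.x host, so corrections welcome):

  virsh -r list --all          # read-only, should not need credentials
  hosted-engine --vm-status    # shows which host, if any, owns the engine VM
  hosted-engine --vm-start     # on the host that should run it

I've also seen a libvirt auth file referenced at 
/etc/ovirt-hosted-engine/virsh_auth.conf on hosted-engine hosts, but I don't know 
whether it exists on a release this old.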

Thank you for your time and consideration.


***
*Mark Steele*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XTA35SOC737FLINJ7VJILEEOM4A6OZPL/


[ovirt-users] hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"

2020-02-15 Thread djagoo
Hi there,

For two weeks we have been trying to move our hosted engine to a glusterfs storage, 
which keeps resulting in the error "Domain format is different from master storage 
domain format".

The newly created storage domain has version 5 (default since compatibility 
level 4.3 according to documentation).

Hosted engine version is 4.3.8.2-1.el7
All hosts are updated to the latest versions and are rebooted.

Cluster and  DataCenter compatibility version is 4.3.

Master data domain and all other domains are Format V4 and there is no V5 
available in dropdown menus. Even if I try to create a new storage domain from 
manager there is only V4 available.

The system and all hosts were installed in March 2019, so it was oVirt release 
4.3.2 or 4.3.1 which created the existing domains.

Is there a way to update the master storage domain to V5? It seems I cannot 
downgrade the datacenter to compat 4.2 and then raise it again.

After two weeks I'm out of ideas.

Can anyone help please?

Regards,
Marcel

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CICLMWQYMT4TC3EJIGQK5ETSBYL6JEO/


[ovirt-users] hosted engine storage does not heal

2020-02-13 Thread g . vasilopoulos
Hello 
We have a problem with hosted engine storage after updating one host which 
serves as a gluster server for the engine (the setup is gluster replica 3 with 
local disks from 3 hypervisors).
The volume heal command shows:
[root@o5-car0118 engine]# gluster volume heal engine info
Brick o5-car0118.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
 
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
 
Status: Connected
Number of entries: 2

Brick o2-car0121.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
 
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
 
Status: Connected
Number of entries: 2

Brick o9-car0114.gfs-int.uoc.gr:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

On all the gluster servers 
I notice that the affected directories have a date in 1970.
[root@o5-car0118 images]# ls -al
total 24
drwxr-xr-x. 23 vdsm kvm 8192 Sep 24 12:07 .
drwxr-xr-x.  6 vdsm kvm   64 Sep 19  2018 ..
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 2bac658f-70ce-4adb-ab68-a0f0c205c70c
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 3034a69c-b5b5-46fa-a393-59ea46635142
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 5538ae6b-ccc6-4861-b71b-6b2c7af2e0ab
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 66dbce25-8863-42b5-904a-484f8e9c225a
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 6c049108-28f7-47d9-8d54-4ac2697dcba8
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 72702607-1896-420d-931a-42c9f01d37f1
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 7c617da4-ab6b-4791-80be-541f5be60dd8
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 902a16d3-6494-4840-a528-b49972f9c332
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 96fd6116-7983-4385-bca6-e6ca8edc94ca
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 abd875cd-96b6-47a6-b6a3-ae35300a21cc
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 add7bc92-1a40-474d-9255-53ac861b75ed
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 b7b06df7-465f-4fc7-a214-033b7dca6bc7
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 c0ecacac-26c6-40d9-87da-af17d9de8d21
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 c4d2d5da-2a15-4735-8919-324ae8372064
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 c7e0c784-bb8e-4024-95df-b6f4267b0208
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 d1f1ff5a-387d-442c-9240-1c58e4d6f8a7
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 d3e172cb-b6dd-4867-a9cd-f4fa006648bc
drwxr-xr-x.  2 vdsm kvm 8192 Jan  1  1970 e3a3ef50-56b6-48b0-a9f8-2d6382e2286e  <-
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 e477ec02-11ab-4d92-b5fd-44e91fbde7f9
drwxr-xr-x.  2 vdsm kvm  149 Aug  2  2019 e839485b-b0be-47f6-9847-b691e02ce9a4
drwxr-xr-x.  2 vdsm kvm 8192 Jan  1  1970 f5e576d4-eea7-431b-a0f0-f8a557006471 <-

I think this has something to do with a gluster bug. 
Is there a way to correct this and heal the volume?
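
What I was planning to try, unless someone tells me it's a bad idea (a rough sketch 
only, and the file path below is just an example):

  gluster volume heal engine full
  gluster volume heal engine info
  # if the two .meta entries stay pending, stat them through the FUSE mount
  # to trigger a heal, e.g.:
  stat /rhev/data-center/mnt/glusterSD/<server>:_engine/<image-path>/<file>.meta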
Thank you!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EHC5TPN6DU5ZV7UY6ZYB4YMEVE6JO3SY/


[ovirt-users] hosted-engine --deploy fails after "Wait for the host to be up" task

2020-01-26 Thread Fredy Sanchez
*Hi all,*

*[root@bric-ovirt-1 ~]# cat /etc/*release**
CentOS Linux release 7.7.1908 (Core)
*[root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance*
Installed Packages
Name: ovirt-engine-appliance
Arch: x86_64
Version : 4.3
Release : 20191121.1.el7
Size: 1.0 G
Repo: installed
>From repo   : ovirt-4.3

*Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The error 
message almost everywhere is some red herring message about ansible:*
[ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
[]}, "attempts": 120, "changed": false, "deprecations": [{"msg": "The
'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the
renamed one no longer returns ansible_facts", "version": "2.13"}]}
[ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.
  Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200126170315-req4qb.log

*But the "real" problem seems to be SSH related, as you can see below*
*[root@bric-ovirt-1 ovirt-engine]# pwd*
/var/log/ovirt-hosted-engine-setup/engine-logs-2020-01-26T17:19:28Z/ovirt-engine
*[root@bric-ovirt-1 ovirt-engine]# grep -i error engine.log*
2020-01-26 17:26:50,178Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-1)
[2341fd23-f0c7-4f1c-ad48-88af20c2d04b] Failed to establish session with
host 'bric-ovirt-1.corp.modmed.com': SSH session closed during connection '
r...@bric-ovirt-1.corp.modmed.com'
2020-01-26 17:26:50,205Z ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-1) [] Operation Failed: [Cannot add Host. Connecting to host via SSH
has failed, verify that the host is reachable (IP address, routable address
etc.) You may refer to the engine.log file for further details.]

*The funny thing is that the engine can indeed ssh to bric-ovirt-1
(physical host). See below*

*[root@bric-ovirt-1 ovirt-hosted-engine-setup]# cat /etc/hosts*
192.168.1.52 bric-ovirt-engine.corp.modmed.com # temporary entry added by
hosted-engine-setup for the bootstrap VM
127.0.0.1   localhost localhost.localdomain localhost4
localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
10.130.0.50 bric-ovirt-engine bric-ovirt-engine.corp.modmed.com
10.130.0.51 bric-ovirt-1 bric-ovirt-1.corp.modmed.com
10.130.0.52 bric-ovirt-2 bric-ovirt-2.corp.modmed.com
10.130.0.53 bric-ovirt-3 bric-ovirt-3.corp.modmed.com
192.168.0.1 bric-ovirt-1gluster bric-ovirt-1gluster.corp.modmed.com
192.168.0.2 bric-ovirt-2gluster bric-ovirt-2gluster.corp.modmed.com
192.168.0.3 bric-ovirt-3gluster bric-ovirt-3gluster.corp.modmed.com
[root@bric-ovirt-1 ovirt-hosted-engine-setup]#

*[root@bric-ovirt-1 ~]# ssh 192.168.1.52*
Last login: Sun Jan 26 17:55:20 2020 from 192.168.1.1
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
*[root@bric-ovirt-engine ~]# ssh bric-ovirt-1*
Password:
Password:
Last failed login: Sun Jan 26 18:17:16 UTC 2020 from 192.168.1.52 on
ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Sun Jan 26 18:16:46 2020
###
# UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED#
# #
# This system is the property of Modernizing Medicine, Inc.   #
# It is for authorized Company business purposes only.#
# All connections are monitored and recorded. #
# Disconnect IMMEDIATELY if you are not an authorized user!   #
###
[root@bric-ovirt-1 ~]#
[root@bric-ovirt-1 ~]#
[root@bric-ovirt-1 ~]# exit
logout
Connection to bric-ovirt-1 closed.
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
*[root@bric-ovirt-engine ~]# ssh bric-ovirt-1.corp.modmed.com*
Password:
Last login: Sun Jan 26 18:17:22 2020 from 192.168.1.52
###
# UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED#
# #
# This system is the property of Modernizing Medicine, Inc.   #
# It is for authorized Company business purposes only.#
# All connections are monitored and recorded.

[ovirt-users] Hosted Engine Failover Timing

2019-12-12 Thread Robert Webb
So in doing some testing, I pulled the plug on my node where the hosted engine 
was running. Rough timing was about 3.5 minutes before the portal was available 
again.

I searched around first, but could not find whether there was any way to speed up 
the detection time in order to reboot the hosted engine quicker.

Right now I am only testing this and will add in VM's later, which I understand 
should reboot a lot quicker.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DH6ZHSISOYRMVTS7BVUH3UPV5E272MOG/


[ovirt-users] Hosted-Engine wizard disappeared after cockpit idle session timeout

2019-11-14 Thread wodel youchi
Hi,

I got this behavior once and I didn't test it again.

I started the hosted-engine deployment using the cockpit web UI, the process
went smoothly and the local HostedEngine VM was created; at that point I left
the console for some time.
When I got back the cockpit console had been closed due to the idle session timeout.
I reconnected, but the hosted-engine wizard had disappeared and I didn't find
a way to get it back; the only choice was to start the process again, even
though the console was showing that the host was registered with a hosted
engine manager, so I couldn't continue the deployment (the storage phase).
I had to stop the local HostedEngine VM, delete its temporary disk, then
restart the deployment again, and this time I stuck with the cockpit web UI to
get it done.
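
(In hindsight, instead of deleting the temporary disk by hand, I suppose the cleaner 
path would have been to wipe the partial deployment and re-run it from the CLI, 
something like the following - assuming the cleanup tool is shipped on the node image, 
which I have not verified:

  ovirt-hosted-engine-cleanup
  hosted-engine --deploy
)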

Regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLVYA5WBAYB7EPDV5V7PHEE6AUZ575FE/


[ovirt-users] hosted engine setup, iSCSI no LUNs shown

2019-08-20 Thread billyburly
I'm trying to set up the hosted engine on top of iSCSI storage. It successfully 
logs in and gets the target; however, the process errors out claiming there are 
no LUNs. But if you look on the host, the disks were added to the system.

[ INFO  ] TASK [ovirt.hosted_engine_setup : iSCSI discover with REST API]
[ INFO  ] ok: [localhost]
  The following targets have been found:
[1] iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata
TPGT: 1, portals:
192.168.47.10:3260

  Please select a target (1) [1]: 1
[ INFO  ] Getting iSCSI LUNs list
...
[ INFO  ] TASK [ovirt.hosted_engine_setup : Get iSCSI LUNs]
[ INFO  ] ok: [localhost]
[ ERROR ] Cannot find any LUN on the selected target
[ ERROR ] Unable to get target list

Here's what the config in targetcli looks like
[root@vm1 ~]# targetcli ls
o- / . [...]
  o- backstores .. [...]
  | o- block .. [Storage Objects: 2]
  | | o- p_iscsi_lun1 .. [/dev/drbd0 (62.0GiB) write-thru activated]
  | | | o- alua ... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ... [ALUA state: Active/optimized]
  | | o- p_iscsi_lun2 . [/dev/drbd1 (310.6GiB) write-thru activated]
  | |   o- alua ... [ALUA Groups: 1]
  | | o- default_tg_pt_gp ... [ALUA state: Active/optimized]
  | o- fileio . [Storage Objects: 0]
  | o- pscsi .. [Storage Objects: 0]
  | o- ramdisk  [Storage Objects: 0]
  o- iscsi  [Targets: 1]
  | o- iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata  [TPGs: 1]
  |   o- tpg1 .. [gen-acls, no-auth]
  | o- acls .. [ACLs: 0]
  | o- luns .. [LUNs: 2]
  | | o- lun0 . [block/p_iscsi_lun1 (/dev/drbd0) (default_tg_pt_gp)]
  | | o- lun1 . [block/p_iscsi_lun2 (/dev/drbd1) (default_tg_pt_gp)]
  | o- portals  [Portals: 1]
  |   o- 192.168.47.10:3260 ... [OK]
  o- loopback . [Targets: 0]
  o- srpt . [Targets: 0]

The two LUNs show up on the host after the hosted engine setup tries to 
enumerate the LUNs for the target
[root@vm1 ~]# lsscsi
[0:0:0:0]storage HP   P420i8.32  -
[0:1:0:0]diskHP   LOGICAL VOLUME   8.32  /dev/sda
[0:1:0:1]diskHP   LOGICAL VOLUME   8.32  /dev/sdb
[0:1:0:2]diskHP   LOGICAL VOLUME   8.32  /dev/sdc
[11:0:0:0]   diskLIO-ORG  p_iscsi_lun1 4.0   /dev/sdd
[11:0:0:1]   diskLIO-ORG  p_iscsi_lun2 4.0   /dev/sde
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MGPCIAT7QTH7A7EHIC2RBDTZTH6HB4IH/


[ovirt-users] Hosted Engine on seperate L2 network from nodes?

2019-08-16 Thread Dan Poltawski
For some security requirements, I’ve been asked if it’s possible to segregate 
the hosted engine from the physical nodes, with specific firewalling for access 
to do node/storage operations (I’m using managed block storage).

Is this an approach others use, or is it better practice to just ensure the 
nodes and engine are all sharing the same network?

Thanks,

dan


The Networking People (TNP) Limited. Registered office: Network House, Caton 
Rd, Lancaster, LA1 3PE. Registered in England & Wales with company number: 
07667393

This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the system manager. This 
message contains confidential information and is intended only for the 
individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail. Please notify the sender 
immediately by e-mail if you have received this e-mail by mistake and delete 
this e-mail from your system. If you are not the intended recipient you are 
notified that disclosing, copying, distributing or taking any action in 
reliance on the contents of this information is strictly prohibited.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4SAO7NAQGEA326YL4FRQQJHRO2NMFTK/


[ovirt-users] hosted engine installation / multipath / iscsi

2019-07-31 Thread Michael Frank
Hi,

For several days I have been trying to install the hosted engine onto an iSCSI 
multipath device, without success.
Some information on the environment:
- Version 4.3.3
- using two 10gbe interfaces as single bond for the ovirtmgmt interface 
- using two 10gbe storage interfaces on each hypervisor for iscsi storage
-- each storage interface is configured without any bonding, etc
-- each storage interface lives in a separate VLAN where the iSCSI 
portals/target are also available; the iSCSI portals have 4x10GbE interfaces each (2 
in VLAN xx and 2 interfaces in VLAN yy)
-- so each storage interface is connected to two iSCSI portals via 4 
interfaces

The documentation here is for me unclear:
https://ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine.html
>Note: To specify more than one iSCSI target, you must enable multipathing 
>before deploying the self-hosted engine. There is also a Multipath Helper 
>tool that generates a script to install and configure multipath with different 
>options.

This indicates to me that it should be possible to install the HE directly on 
the /dev/mapper/mpath device which is available when I have prepared the host 
accordingly before installing the HE (log in to multiple iSCSI targets, create a 
proper multipath.conf, etc.) - right?

I log in to the two iSCSI portals and get 8 paths in total, 4 from each interface 
and iSCSI target.
Basically I then have the mpath device available on the hypervisor, and I can 
mount the mpath device and put data on it.
In the cockpit interface the mount can also be activated and is recognized 
correctly.
multipath -ll and lsblk look good. Everything seems to be fine.
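
For completeness, the preparation described above is roughly the following (the portal 
IPs are placeholders):

  iscsiadm -m discovery -t sendtargets -p 10.10.1.10:3260
  iscsiadm -m discovery -t sendtargets -p 10.10.2.10:3260
  iscsiadm -m node -L all
  multipath -ll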

But when I run "hosted-engine --deploy", the last option while running the 
assistant is to enter the iSCSI data.
So basically I just want to define my mpath device - when entering the data 
(IP, port) for the iSCSI portal I can see the 4 paths of the single hosted 
engine target,
and when I choose the path where the "lun" is finally available it fails. I 
think in general this option is not what I want to have 
here for using the multipath device.

I'm lost - what is the usual way to install the HE on a multipath device?

Do I have to change the configuration of the storage interfaces or the iSCSI 
network design?
Did I miss something obvious?
Can I put my multipath data into the answer file to get rid of the last step 
of the assistant?
Can I use the related ansible role to specify the mpath device which is 
available when activating the multipath service?

Is it not possible in general?? :
https://bugzilla.redhat.com/show_bug.cgi?id=1193961

Sorry in advance for the long mail! ^^

br,
michael

Sent from a mobile device ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRL2FYUD66C5J2RKC4UJZP4OQJWXWSB5/


[ovirt-users] Hosted Engine VM - CPU and Memory Sizing

2019-07-22 Thread paul . christian . suba
Hi,

Is there a recommended CPU and memory size for the Hosted Engine VM? We have 
what started as a 4-node physical cluster lab with 4 VMs that has now grown to 
44 VMs. The dashboard is slow to load information and the HE VM is consistently 
seen at 99% CPU with the breakdown below.

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
12076 postgres  20   0  506516 276172 143772 R  99.3  1.7 206:17.15 postmaster
10337 postgres  20   0  484756 254156 144000 R  99.0  1.6 179:52.35 postmaster
38603 postgres  20   0  500068 267836 143992 S  70.1  1.6 528:04.13 postmaster
49217 postgres  20   0  468736 235484 143624 S  19.3  1.4  41:57.55 postmaster
 5569 ovirt 20   0 6430912   2.3g   6368 S   1.3 14.5 894:20.75 java


We used the default 4 CPU and 16 GB RAM.
This is oVirt 4.3.1.
I am also curious to find out whether it is normal for the postgres processes to 
be using 99% CPU.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQRGN5NY5YESK4PYW2UV3WBYCN3I6IRF/


[ovirt-users] hosted engine installation on iscsi multipath/mpath device fails

2019-07-14 Thread Michael Frank
Dear all,

For several days I have been trying to install the hosted engine onto an iSCSI 
multipath device, without success.
Some information on the environment:
- Ovirt Version 4.3.3
- using two 10gbe interfaces as single lacp - bond for the ovirtmgmt interface 
- using two 10gbe storage interfaces on each hypervisor for iscsi storage
-- each storage interface is configured without any lacp bonding or 802.1q 
tagging, etc (on the switches the vlan is configured; port based vlan)
-- each storage interface lives in a separate VLAN where the iSCSI target 
is also available; the iSCSI target has 4x10GbE interfaces
-- so each storage interface is connected to the iSCSI target through a 
different VLAN

The documentation here is for me unclear:
https://ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine.html
 

>Note: To specify more than one iSCSI target, you must enable multipathing 
>before deploying the self-hosted engine. There is also a Multipath Helper 
>tool that generates a script to install and configure multipath with different 
>options.

This indicates to me that it should be possible to install the HE directly on 
the /dev/mapper/mpath device which is available when I have prepared the host 
accordingly before installing the HE (log in to multiple iSCSI targets, create a 
proper multipath.conf, etc.) - right?

I log in to the two iSCSI targets and get 8 paths in total, 4 from each interface 
and iSCSI target.
I then have the mpath device available on the hypervisor, and I can mount the 
mpath device and put (test) data on it.
In the cockpit interface the mount can also be activated and is recognized 
correctly.
multipath -ll and lsblk look good. Everything seems to be fine.

But when I run "hosted-engine --deploy", the last option while running the 
assistant is to enter the iSCSI data.
So basically I just want to define my mpath device - when entering the data 
for one of my iSCSI targets I can see the 4 paths of the single target,
and when I choose the path where the "lun" is finally available it fails. I 
think in general this option is not what I want to have 
here for using the multipath device.

I'm lost - what is the usual way to install the HE on a multipath device?

Do I have to change the configuration of the storage interfaces or the iSCSI 
network design?
- Are bond interfaces mandatory for these iSCSI connections?
Did I miss something obvious?
Can I put my multipath data into the answer file somehow to get rid of the 
last step of the assistant?
Is it not possible in general?? :
https://bugzilla.redhat.com/show_bug.cgi?id=1193961 


Sorry in advance for the long mail! ^^

br,
michael___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDI42MOR2G2LJAO7VNFL3ZYI4U2ICKY4/


[ovirt-users] hosted engine, Hyperconverged recovery

2019-07-10 Thread William Kwan
Hi,
I need some direction to make sure we won't make more mistakes in recovering a 
3-node self-hosted engine with Gluster.
Someone very carelessly wiped out the first few MB of the LVM PV on one node. 
We changed the quorum and brought up Gluster and oVirt. We ran 
vgcfgrestore on the 'bad node' and we can run pvscan, vgscan and lvscan to see 
the LVs.

What should I do next to prevent more damage or corrupting the glusterfs on the 
other two nodes?
Will this work: mkfs the LVs, bring up gluster, run a gluster heal/sync?
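
The rough sequence I had in mind, if that is even the right direction (hostnames and 
brick paths below are placeholders, and I'd want confirmation before touching the good 
bricks):

  # after recreating the filesystem on the wiped node
  gluster volume reset-brick engine badnode:/gluster_bricks/engine/engine start
  gluster volume reset-brick engine badnode:/gluster_bricks/engine/engine badnode:/gluster_bricks/engine/engine commit force
  gluster volume heal engine full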

Thanks,
Will
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6ADINQ2UTUQTLAOGRFPX64B4BELNPNU/


[ovirt-users] Hosted Engine Deploy - Error at the end... [NFS ?]

2019-07-03 Thread csi-laser
Hi,
While Trying to Deploy Hosted Engine, I'm stuck on Stage 4. Here is the error :

[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]
[ ERROR ] Verify permission settings on the specified storage path.]". HTTP 
response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Permission settings on the 
specified path do not allow access to the storage.\nVerify permission settings 
on the specified storage path.]\". HTTP response code is 400."}

But it's very strange because when I try to mount this NFS share 
manually, it works.
So what am I doing wrong here?

Here is the "Storage Connection" - 8.45.119.16:/ovirt_engine
Mount option is blanked
NFS Version is "v3"
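
In case it's relevant, my understanding (possibly wrong) is that oVirt expects the 
export to be owned by vdsm:kvm (36:36), i.e. something like this on the NFS server 
(paths are illustrative):

  chown 36:36 /ovirt_engine
  chmod 0755 /ovirt_engine
  # /etc/exports
  /ovirt_engine  *(rw,sync,anonuid=36,anongid=36)
  exportfs -ra

Is that actually required here, or is the 400 error about something else?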

Thanks in advance !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4DPEX6M5PH7Y35EVCDKBOOFQTJYFUTKK/


[ovirt-users] hosted engine not getting up

2019-06-28 Thread Crazy Ayansh
Hi Team,

Today I rebooted my hosted engine and found it was not coming up. After
connecting through remote viewer I found the error "error: cannot allocate
kernel buffer" and I am not able to start the hosted engine. Any suggestions?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTDRNK2DZVW4UNTRRQDLFKNX3ZLDGTPI/


[ovirt-users] hosted-engine restore bug?

2019-06-20 Thread ada per
Hello everyone, 

I'm running oVirt 4.3.5.
For some strange reason the httpd service couldn't start on my hosted engine, so 
I could not access the web interface. After various unsuccessful attempts to fix 
it, I decided to restore the hosted engine.

Following these steps:
1. engine-cleanup
2. the db had to be deleted manually using  rm -rf /var/lib/pgsql/data
3. engine-backup --mode=restore --file=backup_file --log=log_file_name 
--provision-db --no-restore-permissions
4. engine-setup 

On step 3 it keeps failing with the error message "FATAL: Can't connect to database 
'ovirt_engine_history'. Please see '/usr/bin/engine-backup --help'."
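'ovirt_engine_history' is the DWH database, which engine-backup provisions
separately from the engine database. A hedged sketch only (the option names
should be double-checked against engine-backup --help on your version) of a
restore that also provisions the DWH database:

  engine-backup --mode=restore --file=backup_file --log=log_file_name \
      --provision-db --provision-dwh-db --no-restore-permissions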

I need to get this hosted-engine up and running, so please advise. 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7AYLQFYRFVDNF7E34FUG6IVCYNATIFE/


[ovirt-users] Hosted Engine Abruptly Stopped Responding - Unexpected Shutdown

2019-06-06 Thread souvaliotimaria
Hello, 

I came upon a problem the previous month that I figured would be good to 
discuss here. I'm sorry I didn't post earlier, but time got away from me. 

I have set up a glustered, hyperconverged oVirt environment for experimental 
use, as a means to see its behaviour and get used to its management and 
performance before setting it up as a production environment for use in our 
organization. The environment has been up and running since October 2018. The three 
nodes are HP ProLiant DL380 G7 and have the following characteristics:

Mem: 22GB
CPU: 2x Hexa Core - Intel Xeon Hexa Core E56xx
HDD: 5x 300GB
Network: BCM5709C with dual-port Gigabit
OS: Linux RedHat 7.5.1804(Core 3.10.0-862.3.2.el7.x86_64 x86_64) - Ovirt Node 
4.2.3.1

As I was working on the environment, the engine stopped working.
Not long before the time the HE stopped, I was in the web interface managing my 
VMs, when the browser froze and the HE was also not responding to ICMP 
requests. 

The first thing I did was to connect via ssh to all nodes and run the command
#hosted-engine --vm-status 
which showed that the HE was down in nodes 1 and 2 and up on the 3rd node. 

After executing
#virsh -r list
the VM list that was shown contained two of the VMs I had previously created 
and were up; the HE was nowhere.

I tried to restart the HE with the
#hosted-engine --vm-start
but it didn't work.

I then put all nodes in maintenance mode with the command
#hosted-engine --set-maintenance --mode=global
(I guess I should have done that earlier) and re-ran
#hosted-engine --vm-start
which had the same result as before. 

After checking the mails the system sent to the root user, I saw there were 
several mails on the 3rd node (where the HE had been), informing of the HE's 
state. The messages were changing between EngineDown-EngineStart, 
EngineStart-EngineStarting, EngineStarting-EngineMaybeAway, 
EngineMaybeAway-EngineUnexpectedlyDown, EngineUnexpectedlyDown-EngineDown, 
EngineDown-EngineStart and so forth.

I continued by searching the following logs in all nodes :
/var/log/libvirt/qemu/HostedEngine.log
/var/log/libvirt/qemu/win10.log
/var/log/libvirt/qemu/DNStest.log
/var/log/vdsm/vdsm.log
/var/log/ovirt-hosted-engine-ha/agent.log

After that I spotted an error that had started appearing almost a month earlier on 
node #2:
ERROR Internal server error Traceback (most recent call last): File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in 
_handle_request res = method(**params) File 
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in 
_dynamicMethod result = fn(*methodArgs) File 
"/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 85, in 
logicalVolumeList return self._gluster.logicalVolumeList() File 
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper rv 
= func(*args, **kwargs) File 
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 808, in 
logicalVolumeList status = self.svdsmProxy.glusterLogicalVolumeList() File 
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in 
__call__ return callMethod() File 
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 52, in 
 getattr(self._supervdsmProxy._svdsm, self._funcName)(*args, 
AttributeError: 'AutoProxy[instance]' object has no attribute 
'glusterLogicalVolumeList'


The outputs of the following commands were also checked as a way to see if 
there was a mandatory process missing/killed, a memory problem or even disk 
space shortage that led to the sudden death of a process
#ps -A
#top
#free -h
#df -hT

Finally, after some time delving into the logs, the output of 
#journalctl --dmesg
showed the following message:
"Out of memory: Kill process 5422 (qemu-kvm) score 514 or sacrifice child.
Killed process 5422 (qemu-kvm) total-vm:17526548kB, anon-rss:9310396kB,
file-rss:2336kB, shmem-rss:12kB"
after which ovirtmgmt stopped responding.

I tried to restart vhostmd by executing
#/etc/rc.d/init.d/vhostmd start
but it didn't work. 

Finally, I decided to run the HE restart command on the other nodes as well 
(I'd figured that since the HE was last running on node #3, that's where I 
should try to restart it). So, I ran 
#hosted-engine --vm-start
and the output was 
"Command VM.getStats with args {'vmID':'...<the HE's ID>'} failed:
(code=1,message=Virtual machine does not exist: {'vmID':'...<the HE's
ID>'})"
And then I ran the command again and the output was
"VM exists and its status is Powering Up."

After that I executed 
#virsh -r list
and the output was the following:
Id Name   State

2  HostedEngine  running

After the HE's restart two mails came that stated: 
ReinitializeFSM-EngineStarting and EngineStarting-EngineUp

After that, and after checking that we had access to the web interface again, we 
executed
hosted-engine --set-maintenance --mode=none
to get out of global maintenance mode.

[ovirt-users] hosted-engine and GlusterFS on Vlan help

2019-05-14 Thread Hanson

Hi Guys,

I've converted my lab from using 802.3ad with bonding>bridged vlans to 
one link with two vlan bridges, and am now seeing traffic jump to the 
gateway when moving VMs/ISOs/etc.


802.3ad = node1>switch1>node2
802.1q = node1>switch1>gateway>switch1>node2

I assume I've set up the same vlan style, though this time I used the GUI 
on the initial host install... setting up the vlans with their parent 
being eth0.


Hosted-engine on deploy then creates ovirtmgmt on top of eth0.11 ...

Switch is tagged for vlans 10 & 11. Including a PVID of 11 for good 
measure. (Gluster is vlan 11)


I'd expect the traffic from node to node to go from port to port 
like it did with 802.3ad. What have I done wrong, or is it because I used the GUI 
initially?


This is how the current setup looks:

/var/lib/vdsm/Persistent/netconf/nets/ovirtmgmt:
{
"ipv6autoconf": false,
"nameservers": [],
"nic": "eth0",
"vlan": 11,
"ipaddr": "10.0.3.11",
"switch": "legacy",
"mtu": 1500,
"netmask": "255.255.255.0",
"dhcpv6": false,
"stp": false,
"bridged": true,
"gateway": "10.0.3.1",
"defaultRoute": true
}

/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
# Generated by VDSM version 4.18.13-1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.3.11
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
VLAN_ID=11
MTU=1500
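A quick, hedged way to see which path node-to-node traffic actually takes
(10.0.3.11 is this node's address from the config above; the peer address
10.0.3.12 is a placeholder):

  # direct on the ovirtmgmt bridge, or via the 10.0.3.1 gateway?
  ip route get 10.0.3.12
  # watch the tagged interface while copying something between the nodes
  tcpdump -ni eth0.11 host 10.0.3.12

If 'ip route get' reports 'via 10.0.3.1', routing (not the switch) is sending
the traffic through the gateway.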

Thanks!!

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IV23EDGWLQD33AMM5Y3H2PFO2CCNE7X6/


[ovirt-users] Hosted Engine Restore Issues

2019-05-10 Thread Alan G
Hi,



Trying to re-deploy Hosted Engine into a new storage domain. "hosted-engine 
--deploy  --noansible" has completed and the engine is up, but I cannot remove 
the existing hosted_storage domain to allow the new one to be imported.



I cannot remove the domain until the old HostedEngine VM is removed, but I 
cannot remove that as it has Delete Protection enabled. Any attempt to remove 
delete protection errors with "There was an attempt to change Hosted Engine VM 
values that are locked". How is this process supposed to work?



Thanks,



Alan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V3FEPMBWGIR42NHPLEH6QEMFB5QCWIU3/


[ovirt-users] Hosted engine restore went very wrong

2019-05-07 Thread Andreas Elvers
I really feel like an idiot. I tried to move our hosted engine from our Default 
datacenter to our Ceph datacenter.
I ran into problems, which were correctly addressed. 
see: 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/ZFCLFWRN6XR6KMHMC63O7J37D5GNPVKZ/

It was in fact a race condition. I was able to bring the engine back to our 
Default cluster, and then I tried to 
do the move to our Ceph datacenter again.

I got the error "The target Data Center does not contain the Virtual Disk" 
twice yesterday. Because it was late,
I decided to do it the next morning.

I did a new backup of the engine, copied it over to the new node of the Ceph 
datacenter and started 
hosted-engine --deploy. But I FORGOT TO SHUT DOWN the other engine! Oh man.

The deploy script errored out with:

[ ERROR ] fatal: [localhost]: FAILED! => {"censored": "the output has been 
hidden due to the fact that 'no_log: true' was specified for this result", 
"changed": false}

[ ERROR ] fatal: [localhost -> engine.infra.solutions.work]: FAILED! => 
{"changed": false, "msg": "There was a failure deploying the engine on the 
local engine VM. The system may not be provisioned accord

Then I realised something was different this time.
I shut down and undefined the local engine. The node is now in a degraded state. 
Is it possible to start the deployment again on a degraded node?
I started the old engine again, but I'm not able to reach the login page.

Any Idea what to do next?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3DMBPWWXL7XWNXO7REZTJJPUE54WBMJJ/


[ovirt-users] hosted engine does not start

2019-04-15 Thread Stefan Wolf
Hello all,

 

After a power loss the hosted engine won't start up anymore.

I've got the current oVirt installed.

Storage is glusterfs and it is up and running.

It is trying to start up the hosted engine but it does not work, and I can't see
where the problem is.

 

[root@kvm320 ~]# hosted-engine --vm-status

 

 

--== Host 1 status ==--

 

conf_on_shared_storage : True

Status up-to-date  : True

Hostname   : kvm380.durchhalten.intern

Host ID: 1

Engine status  : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "Down"}

Score  : 1800

stopped: False

Local maintenance  : False

crc32  : 3ad6d0bd

local_conf_timestamp   : 14594

Host timestamp : 14594

Extra metadata (valid at timestamp):

   metadata_parse_version=1

metadata_feature_version=1

timestamp=14594 (Mon Apr 15 21:25:12 2019)

host-id=1

score=1800

vm_conf_refresh_time=14594 (Mon Apr 15 21:25:12 2019)

conf_on_shared_storage=True

maintenance=False

state=GlobalMaintenance

stopped=False

 

 

--== Host 2 status ==--

 

conf_on_shared_storage : True

Status up-to-date  : True

Hostname   : kvm320.durchhalten.intern

Host ID: 2

Engine status  : {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}

Score  : 0

stopped: False

Local maintenance  : False

crc32  : e7d4840d

local_conf_timestamp   : 21500

Host timestamp : 21500

Extra metadata (valid at timestamp):

metadata_parse_version=1

metadata_feature_version=1

timestamp=21500 (Mon Apr 15 21:25:22 2019)

host-id=2

score=0

vm_conf_refresh_time=21500 (Mon Apr 15 21:25:22 2019)

conf_on_shared_storage=True

maintenance=False

state=ReinitializeFSM

stopped=False

 

 

--== Host 3 status ==--

 

conf_on_shared_storage : True

Status up-to-date  : True

Hostname   : kvm360.durchhalten.intern

Host ID: 3

Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}

Score  : 1800

stopped: False

Local maintenance  : False

crc32  : cf9221cb

local_conf_timestamp   : 22121

Host timestamp : 22120

Extra metadata (valid at timestamp):

metadata_parse_version=1

metadata_feature_version=1

timestamp=22120 (Mon Apr 15 21:25:18 2019)

host-id=3

score=1800

vm_conf_refresh_time=22121 (Mon Apr 15 21:25:18 2019)

conf_on_shared_storage=True

maintenance=False

state=GlobalMaintenance

stopped=False
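Worth noting from the status above: hosts 1 and 3 report state=GlobalMaintenance,
and while global maintenance is set the HA agents will not try to start or
recover the engine VM on their own. For reference only (the same command is used
elsewhere in these threads), leaving global maintenance and re-checking:

  hosted-engine --set-maintenance --mode=none
  hosted-engine --vm-status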

 

[root@kvm320 ~]# virsh -r list

IdName   Status



6 HostedEngine   laufend

 

[root@kvm320 ~]# hosted-engine --console

The engine VM is running on this host

Verbunden mit der Domain: HostedEngine

Escape-Zeichen ist ^]

Fehler: Interner Fehler: Zeichengerät  kann nicht gefunden warden

 

In English it should be this:

 

[root@mgmt~]# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]
error: internal error: cannot find character device

 

This is in the log

 

[root@kvm320 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log

MainThread::INFO::2019-04-15
21:28:33,032::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine
.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 1800)

MainThread::INFO::2019-04-15
21:28:43,050::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted
Engine::(consume) VM is powering up..

MainThread::INFO::2019-04-15
21:28:43,165::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine
.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 1800)

MainThread::INFO::2019-04-15
21:28:53,183::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted
Engine::(consume) VM is powering up..

MainThread::INFO::2019-04-15
21:28:53,300::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine
.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 1800)

MainThread::INFO::2019-04-15
21:29:03,317::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted
Engine::(consume) VM is powering up..


[ovirt-users] Hosted-engine disconnected

2019-04-11 Thread Tau Makgaile
Hi,

I have been experiencing a problem with my hosted-engine after increasing the /
partition. The increase went well for a few minutes and then the engine
disconnected. It started by showing a bad health state and it has been
unreachable / un-ping-able since. It was confusing because the VMs were still
running. I decided to redeploy the engine in the hope that it would pull the
same VMs back into the dashboard. That did not give any positive results until
I opted for a redeployment. Things went well until I realized it would need
new gluster mounts in order to go through, which also meant it risked losing
information around my VMs, which I have since stopped in order to allow the
redeployment to carry on.

I now need help to export all of the information around my VMs, most
importantly the disk images. I have been trying to get some disk images
from /gluster_bricks/data/data/ with no luck in booting them up after
conversion. I am thinking there might be information missing in these images
since they boot into rescue mode.

Please share more insight on how one can locate the entire disk information
or the database where the VMs were last running.

I did a check of the log files, and tried matching the images with VM names,
but I have been unsuccessful.

Thanks in advance for you reply,

Kind regards,
Tau
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y7MV6MQH4JB7UBK5OT6OORLUX6POGF2C/


[ovirt-users] Hosted-engine inaccessible

2019-04-10 Thread Tau Makgaile
Hi,

I am currently experiencing a problem with my Hosted-engine. The Hosted-engine
disconnected after increasing the / partition. The increase went well, but after
some time the hosted-engine VM disconnected and has since been giving
alerts such as ReinitializeFSM.

Though the VMs underneath are running, hosted-engine --vm-status reports:


"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail"

There is no backup to restore at the moment. I am looking for a way to
bring it up without redeploying the hosted engine.

Thanks in advance for your help.

Kind regards,
Tau
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/32U2DORMVIRF7FWHQ3HGAG67AXBWU2TI/


[ovirt-users] Hosted-Engine constantly dies

2019-03-31 Thread Strahil Nikolov
Hi Guys,
As I'm still quite new to oVirt, I have some problems finding the cause of 
this one. My Hosted Engine (4.3.2) is constantly dying (even when Global 
Maintenance is enabled). My interpretation of the logs indicates some lease 
problem, but I don't get the whole picture yet.
I'm attaching the output of 'journalctl -f | grep -Ev "Started Session|session 
opened|session closed"' after I have tried to power on the hosted engine 
(hosted-engine --vm-start).
The nodes are fully updated and I don't see anything in the gluster v5.5 logs, 
but I can double check.
Any hints are appreciated and thanks in advance.
Best Regards,
Strahil Nikolov

hosted-engine-crash
Description: Binary data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TRQL5EOCRLELX46GSLJI4V5KT2QCME7U/


[ovirt-users] Hosted -engine is down and cannot be restarted

2019-03-19 Thread ada per
Hello everyone, 

For some strange reason the hosted engine went down and I cannot restart it. I 
tried manually restarting it without any success. Can you please advise?

For all the nodes the engine status is the same as the one below. 
--== Host nodex. (id: 6) status ==--
conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : nodex
Host ID: 6
Engine status  : {"reason": "bad vm status", "health": 
"bad", "vm": "down_unexpected", "detail": "Down"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 323a9f45
local_conf_timestamp   : 2648874
Host timestamp : 2648874
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2648874 (Tue Mar 19 12:25:44 2019)
host-id=6
score=3400
vm_conf_refresh_time=2648874 (Tue Mar 19 12:25:44 2019)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False

When I try the commands
root@node5# hosted-engine --vm-shutdown
I get the response:
root@node5# Command VM.shutdown with args {'delay': '120', 'message': 'VM is 
shutting down!', 'vmID': 'a492d2eb-1dfd-470d-a141-3e55d2189275'} 
failed:(code=1, message=Virtual machine does not exist) 

But when I run  : hosted-engine --vm-start 
I get the response: VM exists and is down, cleaning up and restarting



Below you can see the # journalctl -u ovirt-ha-agent logs

Mar 14 12:04:42 node7. ovirt-ha-agent[4134]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Unhandled 
monitoring loop exception
  Traceback (most 
recent call last):
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 430, in start_monitoring
  
self._monitoring_loop()
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 449, in _monitoring_loop
  for old_state, 
state, delay in self.fsm:
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/fsm/machine.py", 
line 127, in next
  new_data = 
self.refresh(self._state.data)
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/state_machine.py",
 line 81, in refresh
  
stats.update(self.hosted_engine.collect_stats())
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 737, in collect_stats
  all_stats = 
self._broker.get_stats_from_storage()
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 143, in get_stats_from_storage
  result = 
self._proxy.get_stats()
File 
"/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
  return 
self.__send(self.__name, args)
File 
"/usr/lib64/python2.7/xmlrpclib.py", line 1591, in __request
  
verbose=self.__verbose
File 
"/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
  return 
self.single_request(host, handler, request_body, verbose)
File 
"/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
  
self.send_content(h, request_body)
File 
"/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
  
connection.endheaders(request_body)
File 
"/usr/lib64/python2.7/httplib.py", line 1037, in endheaders
  
self._send_output(message_body)
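The traceback above is the HA agent failing while talking to ovirt-ha-broker
(the same get_stats_from_storage path seen in other threads here). A hedged
first step, assuming the underlying storage is healthy, is simply to restart
the HA services on the affected host and re-check:

  systemctl restart ovirt-ha-broker ovirt-ha-agent
  hosted-engine --vm-status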
   

[ovirt-users] Hosted Engine I/O scheduler

2019-03-17 Thread Strahil
Dear All,

I have just noticed that my Hosted Engine has  a strange I/O scheduler:

Last login: Sun Mar 17 18:14:26 2019 from 192.168.1.43
[root@engine ~]# cat /sys/block/vda/queue/scheduler
[mq-deadline] kyber none
[root@engine ~]#

Based on my experience, anything other than noop/none is useless and 
performance-degrading for a VM.


Is there any reason why we have this scheduler?
It is quite pointless to process (and delay) the I/O in the VM and then 
process (and delay it again) at the host level.

If there is no reason to keep the deadline, I will open a bug about it.
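For reference, a hedged sketch of switching the engine VM's virtio disk to
'none': the runtime change takes effect immediately, and the udev rule is one
common way to persist it (the rule file name is a placeholder):

  # runtime, inside the engine VM
  echo none > /sys/block/vda/queue/scheduler
  cat /sys/block/vda/queue/scheduler

  # persist it, e.g. in /etc/udev/rules.d/60-io-scheduler.rules
  ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"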

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHZIJZVX2KGBNJD4ZFNUA5NUME676I2I/


[ovirt-users] hosted-engine --deploy fails on iSCSI while trying to connect to retrieved ipv6 address, even while forcing ipv4 with --4

2019-03-15 Thread Guillaume Pavese
I am trying to deploy hosted-engine 4.3.2-rc2 on iSCSI.
I put in an IPv4 portal address and targets get discovered. However, they
are returned by the Synology host with both IPv4 and IPv6 addresses.
LUN discovery then fails while attempting to connect to the IPv6 address.
I tried hosted-engine --deploy --4 to force IPv4, but that fails too.
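As a hedged manual check that the IPv4 portal alone is usable outside the
deploy flow (standard iscsiadm calls; the IQN is copied from the discovery
output further below):

  iscsiadm -m discovery -t sendtargets -p 10.199.9.16:3260
  iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a \
      -p 10.199.9.16:3260 --login
  iscsiadm -m session

If this logs in cleanly, the failure is specific to the fe80:: link-local
portal that the Synology also advertises.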


  Please specify the storage you would like: iscsi

  Please specify the iSCSI portal IP address: 10.199.9.16
  Please specify the iSCSI portal port [3260]:
  Please specify the iSCSI discover user:
  Please specify the iSCSI discover password:
  Please specify the iSCSI portal login user:
  Please specify the iSCSI portal login password:
[ INFO  ] Discovering iSCSI targets
[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of
steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using
username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Prepare iSCSI parameters]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch host facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : iSCSI discover with REST API]
[ INFO  ] ok: [localhost]
  The following targets have been found:
  [1] iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a
  TPGT: 1, portals:
  10.199.9.16:3260
  fe80::211:32ff:fe6d:6ddb:3260

  [2] iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a
  TPGT: 1, portals:
  10.199.9.16:3260
  fe80::211:32ff:fe6d:6ddb:3260


in Host's logs :

 mars 15 09:32:25 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]:conn 0
login rejected: initiator error (02/00)
mars 15 09:32:25 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]:
Connection1:0 to [target:
iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, portal:
10.199.9.16,3260] through [iface: default] is shutdown.
mars 15 09:32:27 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]: cannot
make a connection to fe80::211:32ff:fe6d:6ddb:3260 (-1,22)
...
mars 15 09:33:21 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]: cannot
make a connection to fe80::211:32ff:fe6d:6ddb:3260 (-1,22)
mars 15 09:33:24 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]: cannot
make a connection to fe80::211:32ff:fe6d:6ddb:3260 (-1,22)
mars 15 09:33:27 vs-inf-int-kvm-fr-304-210.hostics.fr vdsm[26174]: WARN
Worker blocked:  timeout=60,
duration=60.00 at 0x7fcb904ef410> task#=10 at 0x7fcb905977d0>, traceback:
   File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap

 self.__bootstrap_inner()
   File:
"/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner

 self.run()
   File:
"/usr/lib64/python2.7/threading.py", line 765, in run

 self.__target(*self.__args, **self.__kwargs)
   File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run
 ret =
func(*args, **kwargs)
   File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run

 self._execute_task()
   File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in
_execute_task
 task()
   File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__

 self._callable()
   File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__

 self._handler(self._ctx, self._req)
   File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest

 response = self._handle_request(req, ctx)
   File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in
_handle_request
 res =
method(**params)
   File:
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in
_dynamicMethod
 result
= fn(*methodArgs)

[ovirt-users] Hosted engine Migration

2019-03-11 Thread pollard
Am I reading these right in that manual migration is not possible? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7NP3HP3Q6DCDDOEZZK7LS4P2C6TUMZEB/


[ovirt-users] Hosted engine not starting after 4.3 Upgrade - cannot find OVF_STORE

2019-03-06 Thread Shawn Southern
After running 'hosted-engine --vm-start', the status of the hosted engine VM is:

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirtnode-01
Host ID: 3
Engine status  : {"reason": "bad vm status", "health": 
"bad", "vm": "down_unexpected", "detail": "Down"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 7e3db850
local_conf_timestamp   : 3509
Host timestamp : 3508
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3508 (Tue Mar  5 16:03:30 2019)
host-id=3
score=0
vm_conf_refresh_time=3509 (Tue Mar  5 16:03:31 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Wed Dec 31 20:05:37 1969


The /var/log/libvirt/qemu/HostedEngine.log has no entries since the hosted 
engine VM was rebooted.

/var/log/ovirt-hosted-engine-ha/agent.log:
MainThread::ERROR::2019-03-05 
16:07:31,916::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:31,916::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:31,971::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::ERROR::2019-03-05 
16:07:42,304::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:42,305::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:42,365::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::ERROR::2019-03-05 
16:07:51,791::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:51,792::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:51,850::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2019-03-05 
16:08:01,868::states::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine down, local host does not have best score
MainThread::ERROR::2019-03-05 
16:08:02,196::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 

[ovirt-users] Hosted engine installation

2019-02-11 Thread Fabrice SOLER

Hello,

I have installed a hosted engine, and I have seen that this 
installation needs access to a repository on the internet.


Do you know if it is possible to download an ISO to install a hosted 
engine and install it from this ISO?


Sincerely,

--
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJOQABTQS5JPCN6UN5TCK3MDVTJ3TKJV/


[ovirt-users] Hosted Engine VM fails to start after upgrade from 4.2 to 4.3

2019-02-09 Thread dbrandon . johnson
I just upgraded an oVirt cluster (hosts and engine) from oVirt 4.2 to 4.3.  
After I finished upgrading the hosts from 4.2 to 4.3, the engine stopped working and 
now it will not start.  Looking at VDSM, it is looking for a path that isn't 
getting mounted by the hosted engine when I try to connect storage and start 
the VM.
VDSM log:
https://paste.fedoraproject.org/paste/aOi4aDMMNmDqpD72SdcI0A

the path that it is mounting at is: 
/rhev/data-center/mnt/glusterSD/hyper2-gluster.storage.open-tec.net:_engine and 
I think it is expecting /rhev/data-center/----

Any help on this would be appreciated.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6JFTGJ37KDZQ5KMLU32LNB5ZZTFQIRFG/


[ovirt-users] Hosted Engine VM and Storage not showing up

2019-01-06 Thread ferrao
Hello,

I have a new oVirt installation using oVirt 4.2.7.1 Node, and after deploying the 
hosted engine it does not show up in the interface, even after adding the first 
storage.

The Datacenter is up, but the engine VM and the engine storage do not appear.

I have the following message repeated constantly on /var/log/messages:

Jan  4 20:17:30 ovirt1 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable 
to identify the OVF_STORE volume, falling back to initial vm.conf. Please 
ensure you already added your first data domain for regular VMs

What’s wrong? Am I doing something different?


Additional infos:

[root@ovirt1 ~]# vdsm-tool list-nets
ovirtmgmt (default route)
storage

[root@ovirt1 ~]# ip a | grep "inet "
   inet 127.0.0.1/8 scope host lo
   inet 10.20.0.101/24 brd 10.20.0.255 scope global dynamic ovirtmgmt
   inet 192.168.10.1/29 brd 192.168.10.7 scope global storage

[root@ovirt1 ~]# mount | grep -i nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
10.20.0.200:/mnt/pool0/ovirt/he on 
/rhev/data-center/mnt/10.20.0.200:_mnt_pool0_ovirt_he type nfs4 
(rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.20.0.101,local_lock=none,addr=10.20.0.200)

[root@ovirt1 ~]# hosted-engine --check-deployed
Returns nothing!

[root@ovirt1 ~]# hosted-engine --check-liveliness
Hosted Engine is up!

[root@ovirt1 ~]# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt1.local.versatushpc.com.br
Host ID: 1
Engine status  : {"health": "good", "vm": "up", "detail": 
"Up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 1736a87d
local_conf_timestamp   : 7836
Host timestamp : 7836
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7836 (Fri Jan  4 20:18:10 2019)
host-id=1
score=3400
vm_conf_refresh_time=7836 (Fri Jan  4 20:18:10 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False


Thanks in advance,

PS: Log files are available here: 
http://www.if.ufrj.br/~ferrao/ovirt/issues/he-not-showing/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQHM6YQ7HVBHLFQYBCRV2ODTELTWLLWC/


[ovirt-users] Hosted Engine Storage migration

2019-01-05 Thread joel+ovirt
Hello listmembers!

I'm in search of some steps to migrate my Hosted-Engine from one NFS 
storage domain to a different NFS storage domain. One of the possible 
ways to accomplish this task may already be a feature as of oVirt 4.1; I'm using 
4.2.

https://lists.ovirt.org/pipermail/users/2017-January/078739.html (New feature 
in 4.1?)

Will someone elaborate on the steps to use this new feature and how I could use it 
to accomplish this task? The bug link in the email linked above has a comment 
(https://bugzilla.redhat.com/show_bug.cgi?id=1409112#c12) - does this mean I 
need to redeploy the hosted-engine? I'm not able to follow, as I'm new to oVirt 
and still learning the ins and outs and design; if I'm on the wrong path here 
please correct me.
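For reference, the flow usually described for 4.2 is an engine backup followed
by a fresh hosted-engine deployment that restores that backup onto the new
storage domain. A hedged sketch only; file names are placeholders and the exact
options should be checked against the man pages for your version:

  # on the engine VM
  engine-backup --mode=backup --file=engine.backup --log=engine-backup.log

  # on a host, pointing the deploy at the new NFS storage domain
  hosted-engine --deploy --restore-from-file=engine.backup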

Thank you,

-Joel

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PIKR5P7E2JUXHAZKME6ZIUZD5EY5CKIM/


[ovirt-users] Hosted -engine -deploy problems

2018-12-16 Thread xilazz
Hi, when I try to deploy with hosted-engine --deploy, the installation stays 
stuck at [INFO] TASK [Wait for OVF_STORE disk content]. Checking the 
/var/log/messages log gives the hints below about the hosted-engine deploy. 
I hope I can get your help, thank you very much.

Dec 17 11:22:17 node1 systemd: mom-vdsm.service holdoff time over, scheduling 
restart.
Dec 17 11:22:17 node1 systemd: Cannot add dependency job for unit 
lvm2-lvmetad.socket, ignoring: Unit is masked.
Dec 17 11:22:17 node1 systemd: Started MOM instance configured for VDSM 
purposes.
Dec 17 11:22:17 node1 systemd: Starting MOM instance configured for VDSM 
purposes...
Dec 17 11:23:06 node1 vdsm[25047]: WARN MOM not available.
Dec 17 11:23:06 node1 vdsm[25047]: WARN MOM not available, KSM stats will be 
missing.
Dec 17 11:23:07 node1 python: ansible-command Invoked with warn=True 
executable=None _uses_shell=True _raw_params=vdsm-client Image prepare 
storagepoolID=54cf4d5c-01a5-11e9-9851-00163e4b86e6 
storagedomainID=a47f82ed-6a89-4049-948a-f5710e6f12a8 
imageID=7ef1542b-410b-4ca1-9d03-76eb51c96dfe 
volumeID=61493051-ea0d-43aa-91d1-7f086ad8d6ac | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
c68ee5b1-a6c8-4c0b-9226-a7c789cd328f.ovf removes=None argv=None creates=None 
chdir=None stdin=None
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XS7AGQT5T2L6CNQLYDOC6GLROWSQK4CX/


[ovirt-users] Hosted Engine goes down while putting gluster node into maintenance mode.

2018-12-03 Thread Abhishek Sahni
Hello Team,

We are running a 3-way replica HC Gluster setup, configured during
the initial deployment from the cockpit console using ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Hosted engine was running on node2.

- While moving NODE1 to maintenance mode, along with stopping the
gluster service as prompted, the hosted engine instantly went down.

- I started the gluster service back on node1 and started the hosted engine
again, and found that the hosted engine started properly but kept crashing
again and again within seconds of a successful start, because the HE
itself was stopping glusterd on node1 (not sure, but cross-verified by
checking the glusterd status).

Is it possible to clear pending tasks, or to not let the HE stop
glusterd on node1?

Or can we start the HE using another gluster node?

https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg
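One thing worth checking (a hedged suggestion, not a confirmed diagnosis):
whether the hosted-engine storage mount lists the other Gluster nodes as backup
volfile servers, so that taking node1 down does not take the engine storage
mount down with it. The host names below are placeholders:

  grep mnt_options /etc/ovirt-hosted-engine/hosted-engine.conf
  # expected to look something like:
  # mnt_options=backup-volfile-servers=node2.example.com:node3.example.com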


-- 

ABHISHEK SAHNI


IISER Bhopal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7ETASYIKXRAGYZRBZIS6G743UHPKGCNA/


[ovirt-users] hosted-engine --deploy fails on Ovirt-Node-NG 4.2.7

2018-12-03 Thread Ralf Schenk
Hello,

I am trying to deploy the hosted engine to an NFS share accessible by (currently)
two hosts. The host is running the latest ovirt-node-ng 4.2.7.

hosted-engine --deploy fails consistently at a late stage when trying to run the
engine from NFS. It already ran as "HostedEngineLocal" and I think it is
then migrated to NFS storage.

Engine seems to be deployed to NFS already:

[root@epycdphv02 ~]# ls -al
/rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
total 23
drwxrwxrwx 3 vdsm kvm    4 Dec  3 13:01 .
drwxr-xr-x 3 vdsm kvm 4096 Dec  1 17:11 ..
drwxr-xr-x 6 vdsm kvm    6 Dec  3 13:09 1dacf1ea-0934-4840-bed4-e9d023572f59
-rwxr-xr-x 1 vdsm kvm    0 Dec  3 13:42 __DIRECT_IO_TEST__

NFS Mount:

storage01.office.databay.de:/ovirt/engine on
/rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine type
nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.121,local_lock=none,addr=192.168.1.3)

Libvirt/qemu reports an error:

Could not open
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
Permission denied

Even the permissions of the mentioned file seem to be ok. SELinux is disabled
since I had a lot of problems with earlier versions trying to deploy the
hosted engine.

[root@epycdphv02 ~]# ls -al
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08'
-rw-rw 1 vdsm kvm 53687091200 Dec  3 13:09
/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
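Since ownership on the host looks fine, a hedged next check is whether the file
is actually readable as the vdsm user through the NFS export (the same kind of
sudo -u vdsm dd read appears elsewhere in these threads); root-squash or
all-squash options on the server can still deny it even when root can read it:

  sudo -u vdsm dd if=/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08 of=/dev/null bs=1M count=4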

hosted-engine --deploy ends with error. Logfile is attached.

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
"0:00:00.218320", "end": "2018-12-03 13:20:19.139919", "rc": 0, "start":
"2018-12-03 13:20:18.921599", "stderr": "", "stderr_lines": [],
"stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=156443
(Mon Dec  3 13:20:16
2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=156443 (Mon Dec  3
13:20:16
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Fri
Jan  2 20:29:01 1970\\n\", \"hostname\":
\"epycdphv02.office.databay.de\", \"host-id\": 1, \"engine-status\":
{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\":
\"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"d3355c40\",
\"local_conf_timestamp\": 156443, \"host-ts\": 156443},
\"global_maintenance\": false}", "stdout_lines": ["{\"1\":
{\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=156443
(Mon Dec  3 13:20:16
2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=156443 (Mon Dec  3
13:20:16
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Fri
Jan  2 20:29:01 1970\\n\", \"hostname\":
\"epycdphv02.office.databay.de\", \"host-id\": 1, \"engine-status\":
{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\":
\"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"d3355c40\",
\"local_conf_timestamp\": 156443, \"host-ts\": 156443},
\"global_maintenance\": false}"]}
[ INFO  ] TASK [Check VM status at virt level]
[ INFO  ] TASK [Fail if engine VM is not running]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Engine VM is not running, please check vdsm logs"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fetch logs from the engine VM]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set destination directory path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Create destination directory]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Find the local appliance image]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set local_vm_disk_path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Give the vm time to flush dirty buffers]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Copy engine logs]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Remove temporary entry in /etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20181203132110.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix 
