[ovirt-users] Re: oVirt + TrueNAS: Unable to create iSCSI domain - I am missing something obvious

2023-01-19 Thread None via Users
FYI, the fix is to check the "Disable Physical Block Size Reporting" box in the 
extent window. Note that in testing this I had to delete the extent and create a 
new one; toggling the switch on and then restarting the iSCSI service didn't seem 
to do it, so maybe the client cached something.
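
If anyone wants to verify the change from the oVirt host side, a quick check 
(just a sketch; replace sdX with whatever device maps to the iSCSI LUN) is to 
compare the sector sizes the initiator now sees:

  lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX
  cat /sys/block/sdX/queue/physical_block_size

With reporting disabled, the physical sector size should presumably drop back to 
512 (matching the logical size) instead of the larger zvol block size that was 
tripping up oVirt.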
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W5XSBQD6Z6X4HOHIQC2EVBBL6AJRXNDW/


[ovirt-users] Migrating to keycloak

2022-12-08 Thread None via Users
I have an existing oVirt cluster and I'm trying to migrate it from the internal 
SSO and LDAP over to Keycloak, but I'm kind of at a loss.

I followed the activation procedure at 
https://github.com/oVirt/ovirt-engine-keycloak/blob/master/keycloak_usage.md#Internal-Keycloak-activation-procedure
and am able to log in to the Keycloak console fine, but when I try to access 
the ovirt-engine admin panel I just get an internal server error.

httpd log contains "oidc_util_json_string_print: oidc_util_check_json_error: 
response contained an "error" entry with value: ""Realm does not exist"""

Doesn't engine-setup configure the Keycloak instance it creates with the proper 
configuration for oVirt? The Apache config seems to have some password and other 
settings for OIDC, so that end got configured, but not the Keycloak side. There 
are no ovirt-engine or other oVirt-related clients inside the newly created 
Keycloak.
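
If it helps with debugging, one way I know to see what the bundled Keycloak 
actually contains is Keycloak's own admin CLI, kcadm.sh (only a sketch - the 
/ovirt-engine-auth server path, the admin user name and the realm name below are 
assumptions on my part, adjust them to whatever engine-setup actually created):

  # authenticate against the master realm with the Keycloak admin credentials
  kcadm.sh config credentials --server https://ENGINE_FQDN/ovirt-engine-auth --realm master --user admin
  # list the realms - there should be an oVirt-specific one
  kcadm.sh get realms --fields realm
  # list the clients in that realm - an ovirt-engine client should show up
  kcadm.sh get clients -r REALM_NAME --fields clientId

If the realm or the clients really are missing, that would line up with the 
"Realm does not exist" error from mod_auth_openidc.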
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GCGN6VVSDX3JC5JY7R65I4RJCK2VJRKP/


[ovirt-users] No valid network interface has been found

2022-07-20 Thread None via Users
Hey, I'm kind of new to oVirt. I'm getting this error when I try to create a 
hosted engine; I hope this is where I can get help.
Full message:
No valid network interface has been found
If you are using Bonds or VLANs Use the following naming conventions:
- VLAN interfaces: physical_device.VLAN_ID (for example, eth0.23, eth1.128, 
enp3s0.50)
- Bond interfaces: bond*number* (for example, bond0, bond1)
- VLANs on bond interfaces: bond*number*.VLAN_ID (for example, bond0.50, 
bond1.128)
* Supported bond modes: active-backup, balance-xor, broadcast, 802.3ad
* Networking teaming is not supported and will cause errors
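
In case it points you in the right direction, here is a minimal sketch of 
creating interfaces that match those naming rules with nmcli (the NIC names 
eno1/eno2, the bond mode and the VLAN ID 50 are just placeholders for your setup):

  # an active-backup bond named bond0
  nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=active-backup"
  # enslave two physical NICs (placeholder names)
  nmcli con add type ethernet ifname eno1 con-name bond0-port1 master bond0
  nmcli con add type ethernet ifname eno2 con-name bond0-port2 master bond0
  # optionally a VLAN on top of the bond, named bond0.50 as the installer expects
  nmcli con add type vlan ifname bond0.50 con-name bond0.50 dev bond0 id 50

The deployment should then be able to pick bond0 (or bond0.50) as a valid 
interface for the management bridge.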
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LMA72BKLVOEN5PF2RDBA2FANECHY7XYH/


[ovirt-users] Re: Upgrade from 4.3 to 4.4 fails with db or user ovirt_engine_history already exists

2020-07-21 Thread None via Users
Hi Didi,

I don't know. We have been running the same oVirt instance since 2017 and have 
updated it a lot of times, including solving some bugs/problems over the years. 
The user was set on the 'vm_device_history' table and the 
'disk_vm_device_history_seq' sequence. Maybe we did it ourselves, but I 
couldn't find anything about it in our logs.
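
In case anyone wants to reproduce the check, the grants on those objects can be 
inspected with psql (a sketch, assuming access as the postgres user on the DWH 
database):

  su - postgres -c 'psql -d ovirt_engine_history'

and then, inside psql:

  \dp vm_device_history
  \dp disk_vm_device_history_seq

which lists the access privileges on that table and sequence.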

Since we are new here: after this problem we ran into some other problems. Do 
you want me to post an article about them, including the steps we have taken to 
solve them?

We got stuck on the latest problem and have stopped our upgrade to 4.4 for now, 
but it could be related to the other thread about storage domains. When we run 
the upgrade and access the 4.4 GUI before continuing (you can set this option at 
the beginning), we are unable to update any OVF disks (automatically or forced), 
and the active SPM host keeps resetting (also after changing it to the new 4.4 
host). It does not matter whether we try to update an existing SD or the old 
hosted engine domain, or add a new one (NFS or Gluster). When we ignore the 
problem, the installation fails when it checks the health of the new hosted 
engine domain. After restarting the old HE, the problem is (luckily) gone and 
not permanent, so I guess it is an HE 4.4 specific problem. I analyzed the logs 
but couldn't find anything that made sense (on the host, the HE VM, or another 
4.3 host running SPM). The error(s):

- Failed to update VMs/Templates OVF data for storage domain XX in Data Center 
Default
- Failed to update OVF disks xGUIDx, xGUIDx, OVF data isn't updated on those 
OVF stores (Data Center Default, Storage domain XX)

The error disappeared on most SDs (except the old HE domain and any newly added 
one) when we forced an OVF update via the 4.3 GUI (before starting the upgrade). 
The old HE VM was turned off (and the platform set to global maintenance) 
before we started the upgrade.
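
For reference, that last step was done with the usual hosted-engine commands on 
one of the HE hosts, roughly:

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown
  hosted-engine --vm-status

(global maintenance first, then shutting down the HE VM, and --vm-status to 
confirm).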
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5EQQTY6QTDRP4RHGFSEM5LS46ZQ4UGO4/


[ovirt-users] Upgrade from 4.3 to 4.4 fails with db or user ovirt_engine_history already exists

2020-07-19 Thread None via Users
Currently, our upgrade to 4.4 fails with the error:
FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' 
found and temporary ones created

We have upgraded the running 4.3 installation to the latest version and are also 
using the latest packages for the upgrade on the new CentOS 8.2 installation. The 
backup is made following the Hosted Engine upgrade steps in the manual, using: 
`engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log`

The upgrade is performed after copying the backup.bck file to the new server 
and running `hosted-engine --deploy --restore-from-file=backup.bck`.

After the Engine VM is created, the installation process hangs when the backup 
is restored. We tried it several times, using both a complete and a partial 
backup.
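
Between attempts, this is the kind of cleanup we assume the "Please clean up 
everything and try again" message is asking for on the new engine VM (a sketch 
only, we have not verified it is safe in every setup):

  su - postgres -c 'psql -c "DROP DATABASE IF EXISTS ovirt_engine_history;"'
  su - postgres -c 'psql -c "DROP ROLE IF EXISTS ovirt_engine_history;"'

i.e. dropping the leftover DWH database and role before retrying the restore.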

Old/current oVirt version: 4.3.10.4-1.el7
New version: 4.4.1.8
ovirt-ansible-hosted-engine-setup: 1.1.6

Has anyone gotten the same error while upgrading an existing installation?
Thanks!

Error log Ansible on Host:

2020-07-15 12:34:09,361+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Run 
engine-backup]
2020-07-15 12:35:28,778+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 
{'msg': 'non-zero return code', 'cmd': 'engine-backup --mode=restore 
--log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log 
--file=/root/engine_backup --provision-all-databases --restore-permissions', 
'stdout': "Start of engine-backup with mode 'restore'\nscope: all\narchive 
file: /root/engine_backup\nlog file: 
/var/log/ovirt-engine/setup/restore-backup-20200715103410.log\nPreparing to 
restore:\n- Unpacking file '/root/engine_backup'\nRestoring:\n- 
Files\n--\nPlease note:\n\nOperating system is different from the one used during 
backup.\nCurrent operating system: centos8\nOperating system at backup: 
centos7\n\nApache httpd configuration will not be restored.\nYou will be asked 
about it on the next engine-setup 
run.\n--
 \nProvisioning PostgreSQL users/databases:\n- user 
'engine', database 'engine'\n- extra user 'ovirt_engine_history' having grants 
on database engine, created with a random password\n- user 
'ovirt_engine_history', database 'ovirt_engine_history'", 'stderr': "FATAL: 
Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found 
and temporary ones created - Please clean up everything and try again", 'rc': 
1, 'start': '2020-07-15 12:34:10.824630', 'end': '2020-07-15 12:35:28.488261', 
'delta': '0:01:17.663631', 'changed': True, 'invocation': {'module_args': 
{'_raw_params': 'engine-backup --mode=restore 
--log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log 
--file=/root/engine_backup --provision-all-databases --restore-permissions', 
'_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["Start of engine-backup with mode 'restore'", 'scope: all', 'archive file: 
/root/engine_backup', 'log file: 
/var/log/ovirt-engine/setup/restore-backup-20200715103410.log', 'Preparing to 
restore:', "- Unpacking file '/root/engine_backup'", 'Restoring:', '- Files', 
'--',
 'Please note:', '', 'Operating system is different from the one used during 
backup.', 'Current operating system: centos8', 'Operating system at backup: 
centos7', '', 'Apache httpd configuration will not be restored.', 'You will be 
asked about it on the next engine-setup run.', 
'--',
 'Provisioning PostgreSQL users/databases:', "- user 'engine', database 
'engine'", "- extra user 'ovirt_engine_history' having grants on database 
engine, created with a random password", "- user 'ovirt_engine_history', 
database 'ovirt_engine_history'"], 'stderr_lines': ["FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and 
temporary ones created - Please clean up everything and try again"], 
'_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': 
'ovirt-management.dc1.triplon', 'ansible_port': None, 'ansible_user': 'root'}}
2020-07-15 12:35:28,879+0200 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 
fatal: [localhost -> ovirt-management.dc1.triplon]: FAILED! => {"changed": 
true, "cmd": "engine-backup --mode=restore 
--log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log 
--file=/root/engine_backup --provision-all-databases --restore-permissions", 
"delta": "0:01:17.663631", "end": "2020-07-15 12:35:28.488261", "msg": 
"non-zero return code", "rc": 1,