[ovirt-users] Re: Setup oVirt self hosted engine on Rocky Linux 8 using cockpit - stuck in deadlock

2023-03-28 Thread Fran Garcia
The hosted engine is a regular Virtual machine.

You need to have a separate FQDN and IP, different from those assigned
to the hypervisors.

In your example, you need to assign a fqdn/IP from the vlan 4000 (or
wherever the mgmt vlan is), and use it whenever queried about the
Hosted Engine VM details.

HTH

Fran

On Tue, 21 Mar 2023 at 13:07,  wrote:
>
> Hi all,
> I have set up 3 servers in 3 data centers, each having one physical interface 
> and a vlan interface parented by it.
> The connection between the 3 servers over the vlan interfaces (using private 
> ip addresses) works (using icmp ping as the test).
>
> Now I want to turn them into an ovirt cluster creating the self hosted engine 
> on the first server. I have
> - made sure the engine fqdn is in dns forward and reverse and in /etc/hosts
> - made sure that both interfaces have unique dns entries which can be 
> resolved forward and reverse
> - made sure that both interfaces' fqdns are in /etc/hosts
> - made sure only the primary hostname (not fqdn) is in /etc/hostname,
> - made sure ipv6 is available on the physical interface,
> - made sure ipv6 method is "disabled" on the vlan interface,
> - set 
> /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml:he_force_ip4:
>  true to make sure no ipv6 attempts to interfere.
>
> Now when I use cockpit's hosted engine wizard (not hyperconverged), I run 
> into two opposing problems.
> If I set the FQDN in the "Advanced" sub pane to the FQDN of the vlan 
> interface, the wizard gets stuck at "preparing VM" with "The resolved 
> address doesn't resolve on the selected interface\n".
> If I set the FQDN in the "Advanced" sub pane to the FQDN of the physical 
> interface, I get the same result.
>
> If I add the physical interface's FQDN to the vlan ip address in /etc/hosts, I 
> get "hostname 'x.y.z' doesn't uniquely match the interface 'enp5s0.4000' 
> selected for the management bridge; it matches also interface with IP 
> ['physical']. Please make sure that the hostname got from the interface for 
> the management network resolves only there." So clearly separating the two 
> interfaces by name is mandatory.
>
> I tried to follow the ansible workflow step by step to see what it does. It 
> seems the hostname validation is triggered twice, the second time on filling 
> in the FQDN in the "Advanced" sub pane - it succeeds with both hostnames 
> (physical interface and vlan ip), but as far as I can see that does not 
> prevent the "prepare VM" workflow from doing the same verification and 
> failing. This is where it happens:
> 2023-03-20 14:31:48,354+0100 DEBUG ansible on_any args TASK: 
> ovirt.ovirt.hosted_engine_setup : Check the resolved address resolves on the 
> selected interface  kwargs is_conditional:False
> 2023-03-20 14:31:48,355+0100 DEBUG ansible on_any args localhost TASK: 
> ovirt.ovirt.hosted_engine_setup : Check the resolved address resolves on the 
> selected interface  kwargs
> 2023-03-20 14:31:48,481+0100 DEBUG var changed: host "localhost" var 
> "ansible_play_hosts" type "" value: "[]"
> 2023-03-20 14:31:48,481+0100 DEBUG var changed: host "localhost" var 
> "ansible_play_batch" type "" value: "[]"
> 2023-03-20 14:31:48,481+0100 DEBUG var changed: host "localhost" var 
> "play_hosts" type "" value: "[]"
> 2023-03-20 14:31:48,481+0100 ERROR ansible failed {
> "ansible_host": "localhost",
> "ansible_playbook": 
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
> "ansible_result": {
> "_ansible_no_log": false,
> "changed": false,
> "msg": "The resolved address doesn't resolve on the selected 
> interface\n"
> },
> "ansible_task": "Check the resolved address resolves on the selected 
> interface",
> "ansible_type": "task",
> "status": "FAILED",
> "task_duration": 0
> }
>
>
> So I am really stuck there. I do not have any idea how and where to go on. I 
> can try changing bits in the playbooks and parameters (like using "hostname 
> -A" instead of "hostname -f" for the failing test), but that can't really be 
> the idea - I am too new to this to have run into a bug or similar, so I 
> suspect I am overlooking something.
>
> Any hint or help is appreciated.
>
> Cheers,
>
> Dirk
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2DVC2Q22SIRA3JIGK7SXOXROCFY2TQC/

[ovirt-users] Re: Setup oVirt self hosted engine on Rocky Linux 8 using cockpit - stuck in deadlock

2023-03-28 Thread Dirk H. Schulz

Hi Fran,

thanks for helping me.

I have defined a separate fqdn and ip address for the engine - sorry for 
not mentioning that. It is also in /etc/hosts.



The problem seems to be the following:

The error message is thrown by ansible if

    he_host_ip not in target_address_v4.stdout_lines and
    he_host_ip not in target_address_v6.stdout_lines

and grepping he_host_ip from 
ovirt-hosted-engine-setup-ansible-initial_clean-XYZ.log shows that it is 
the ip address of the physical interface, while target_address_v4 is 
explicitly built from "ip addr show" of the vlan interface.
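The failing task simply checks whether the IP that the host's FQDN resolves to is present on the chosen interface. A rough local reproduction of that logic, with placeholder addresses (on a real host you would derive them from `dig`/`ip` as shown in the comments):

```shell
#!/bin/sh
# Placeholder values for illustration; on a real host derive them with e.g.:
#   HE_HOST_IP=$(dig +short "$(hostname -f)")
#   IFACE_IPS=$(ip -4 -o addr show enp5s0.4000 | awk '{print $4}' | cut -d/ -f1)
HE_HOST_IP="192.0.2.10"
IFACE_IPS="198.51.100.5
192.0.2.10"

# The installer fails when the resolved IP appears in neither the v4 nor
# the v6 address list of the selected interface.
if printf '%s\n' "$IFACE_IPS" | grep -qx "$HE_HOST_IP"; then
  echo "resolves on interface"
else
  echo "does not resolve on interface"
fi
```

If the two commands in the comments disagree (the FQDN resolves to an address that is not on the interface you picked for the management bridge), this check will fail exactly as in the log above.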


Since I defined the vlan interface as the bridge interface in the VM 
settings for the engine, this seems strange to me.


What is more strange: if I put in the physical host interface as the bridge 
interface, the error does not occur, but then the engine VM is bound to 
the default libvirt bridge (which I did not want) and the setup 
process fails with "There was a failure deploying the engine on the 
local engine VM.", which I now have to analyze.


Is my idea of binding the bridge to the vlan interface to have a 
management network there completely wrong?


I did not manage to find any docs on what the cockpit module expects 
there, and the ovirt setup docs are also very sparse - can you point me 
to some in-depth examples of the requirements the self hosted engine 
setup has?


Cheers,

Dirk



[ovirt-users] oVirt hosted-engine deployment times out while "Wait for the host to be up"

2023-03-28 Thread brwsergmslst
Hi all,

I am currently trying to deploy the hosted-engine, without success.
Unfortunately I cannot see what I am missing here. It'd be nice if you could 
have a look and help me out. Below is an excerpt of the generated logfile.

```
2023-03-22 20:34:10,417+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Get 
active list of active firewalld zones]
2023-03-22 20:34:12,222+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:13,527+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
Configure libvirt firewalld zone]
2023-03-22 20:34:20,246+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:21,550+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
Reload firewall-cmd]
2023-03-22 20:34:23,957+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:25,462+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Add 
host]
2023-03-22 20:34:27,469+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:28,672+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
Include after_add_host tasks files]
2023-03-22 20:34:30,678+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
TASK [ovirt.ovirt.hosted_engine_setup : Let the user connect to the bootstrap 
engine VM to manually fix host configuration]
2023-03-22 20:34:31,882+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:32,987+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
include_tasks]
2023-03-22 20:34:33,890+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:34,893+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
include_tasks]
2023-03-22 20:34:36,096+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:37,100+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
Always revoke the SSO token]
2023-03-22 20:34:38,705+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:40,111+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
include_tasks]
2023-03-22 20:34:41,015+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:42,020+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : 
Obtain SSO token using username/password credentials]
2023-03-22 20:34:44,028+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:45,032+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Wait 
for the host to be up]
2023-03-22 20:56:33,746+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'changed': False, 'ovirt_hosts': [{'href': 
'/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6', 'comment': '', 
'id': '17ad8088-c9e9-433f-90b9-ce8023a625e6', 'name': 'vhost-tmp01.example.com', 
'address': 'vhost-tmp01.example.com', 'affinity_labels': [], 
'auto_numa_status': 'unknown', 'certificate': {'organization': 'example.com', 
'subject': 'O=example.com,CN=vhost-tmp01.example.com'}, 'cluster': {'href': 
'/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d', 'id': 
'4effacb7-e5dd-4e52-86c9-90ebd2aafa0d'}, 'cpu': {'speed': 0.0, 'topology': {}}, 
'cpu_units': [], 'device_passthrough': {'enabled': False}, 'devices': [], 
'external_network_provider_configurations': [], 'external_status': 'ok', 
'hardware_information': {'supported_rng_sources': []}, 'hooks': [], 
'katello_errata': [], 'kdump_status': 'unknown', 'ksm': {'enabled': False}, 
'max_scheduling_memory': 0, 'memory': 0, 'network_attachments': [], 
'nics': [], 'numa_nodes': [], 'numa_supported': False, 'os': 
{'custom_kernel_cmdline': ''}, 

[ovirt-users] TypeError: Cannot read properties of undefined (reading 'toString')

2023-03-28 Thread charnet1019
Env: 
oVirt version: ovirt-node-ng-installer-4.5.4-2022120615.el8.iso
oVirt engine appliance: 
ovirt-engine-appliance-4.5-20221206133948.1.el8.x86_64.rpm

I have three nodes; each node has two disks, one for installing the oVirt node 
and one for Gluster (gfs).
After installing Gluster via the web wizard and trying to install the hosted 
engine from the web console, the installation wizard page does not pop up, and 
the following error is reported in the browser console:

PackageKit went away from D-Bus
Failed to read file 
/usr/share/cockpit/ovirt-dashboard/gdeploy-templates/he-common.conf. Check that 
the file exists and is not empty HostedEngineSetup.js:260 
Failed to read certificate   HostedEngineSetup.js:230 
General system data retrieval started.   DefaultValueProvider.js:95 
No gdeploy answer files found.   HeSetupWizardContainer.js:117 
Ansible output file directory created successfully.PlaybookUtil.js:156 
Host FQDN: node210.com   Validation.js:138 

Execution of /usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml 
with tags get_network_interfaces started   PlaybookUtil.js:30 
Ansible output file directory created successfully.   PlaybookUtil.js:156 
Execution of /usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml 
with tags validate_hostnames started  PlaybookUtil.js:30 
General system data retrieved successfully.  DefaultValueProvider.js:98 
Execution of /usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml 
with tags validate_hostnames completed successfully   PlaybookUtil.js:67 
Validation of host FQDN, node210.com, succeeded   Validation.js:176 
Execution of /usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml 
with tags get_network_interfaces completed successfully   PlaybookUtil.js:67 
Network interfaces retrieved successfully
TypeError: Cannot read properties of undefined (reading 'toString')
react-dom.production.min.js:188 
at Ds.getDisplayValue (ReviewGenerator.js:24:31)
at Ds. (ReviewGenerator.js:45:49)
at Array.forEach ()
at Ds. (ReviewGenerator.js:39:53)
at Array.forEach ()
at Ds.generateReviewSections (ReviewGenerator.js:36:48)
at Ds.getReviewSections (ReviewGenerator.js:77:31)
at Ps.render (AnsiblePhasePreviewContainer.js:138:30)
at Io (react-dom.production.min.js:167:226)
at Lo (react-dom.production.min.js:180:75)
DefaultValueProvider.js:73 Promise.all failed
TypeError: Cannot read properties of undefined (reading 'toString')   
DefaultValueProvider.js:74 
at Ds.getDisplayValue (ReviewGenerator.js:24:31)
at Ds. (ReviewGenerator.js:45:49)
at Array.forEach ()
at Ds. (ReviewGenerator.js:39:53)
at Array.forEach ()
at Ds.generateReviewSections (ReviewGenerator.js:36:48)
at Ds.getReviewSections (ReviewGenerator.js:77:31)
at Ps.render (AnsiblePhasePreviewContainer.js:138:30)
at Io (react-dom.production.min.js:167:226)
at Lo (react-dom.production.min.js:180:75)
Error: dig exited with code 9   HostedEngineSetupUtil.js:1411 


[ovirt-users] HostedEngine: Unable to add virtual disk

2023-03-28 Thread ziyi Liu
I want to add a disk to HostedEngine, and the following error occurs when 
adding the disk in the web UI:
HostedEngine:
Unable to add virtual disk. The engine is not managing this virtual machine.
Does HostedEngine need vdsm to add disks?
Is it the following procedure?
1. Set managed engine maintenance mode to global
2. Close HostedEngine
3. Extend the disk vdsm-client Volume extendSize
4. start vm
5. Use fdisk to partition and expand
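A sketch of those steps on the command line. Every UUID and the extendSize parameter names below are placeholders for illustration only; check `vdsm-client Volume extendSize -h` on your host for the actual argument names:

```shell
# 1. Enter global maintenance so the HA agents do not restart
#    the engine VM while it is being worked on.
hosted-engine --set-maintenance --mode=global

# 2. Shut down the engine VM.
hosted-engine --vm-shutdown

# 3. Extend the volume (all IDs below are placeholders; look them up
#    on your storage domain, and verify the parameter names with -h).
vdsm-client Volume extendSize \
    storagepoolID=POOL_UUID storagedomainID=DOMAIN_UUID \
    imageID=IMAGE_UUID volumeID=VOLUME_UUID newSize=NEW_SIZE_BYTES

# 4. Start the engine VM again and leave maintenance.
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none

# 5. Inside the VM, grow the partition and filesystem
#    (e.g. with fdisk/growpart plus the filesystem's grow tool).
```

This is an untested outline of the five steps in the question, not a verified procedure; the `hosted-engine` maintenance and VM commands are standard, but the extendSize invocation must be adapted to your environment.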


[ovirt-users] Encrypted VNC request using SASL not maintained after VM migration

2023-03-28 Thread Jon Sattelberger
I recently followed the instructions for enabling VNC encryption for FIPS 
enabled hosts [1]. The VNC console seems to be fine on the host where the VM is 
initially started (excluding noVNC in the browser). The qemu-kvm arguments are 
not maintained properly upon VM migration, which adds "password=on" to the -vnc 
argument. Subsequent VNC console requests then result in an authentication 
failure. SPICE seems to be fine. All hosts and the engine are FIPS enabled, 
running oVirt-4.5.4-1.el8.

Is there a way to maintain the absence of "password=on" after VM migration? 
Perhaps with a hook in the interim.

Initial VM start:

-object 
{"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/vdsm/libvirt-vnc","endpoint":"server","verify-peer":false}
 -vnc 192.168.100.67:0,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1 -k 
en-us 

Debug output from remote-viewer:

(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Possible 
VeNCrypt sub-auth 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Emit main 
context 12
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Requested 
auth subtype 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Waiting 
for VeNCrypt auth subtype
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Choose 
auth 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Checking 
if credentials are needed
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c No 
credentials required
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Read 
error Resource temporarily unavailable
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.841: vncconnection.c Do TLS 
handshake
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.944: vncconnection.c Checking 
if credentials are needed
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.944: vncconnection.c Want a 
TLS clientname
... snip ...

Migrated VM:

-object 
{"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/vdsm/libvirt-vnc","endpoint":"server","verify-peer":false}
 -vnc 
192.168.100.68:0,password=on,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1 
-k en-us

Debug output from remote-viewer:

(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.487: vncconnection.c Possible 
VeNCrypt sub-auth 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.487: vncconnection.c Emit main 
context 12
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Requested 
auth subtype 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Waiting 
for VeNCrypt auth subtype
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Choose 
auth 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Checking 
if credentials are needed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c No 
credentials required
... snip ...
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.780: vncconnection.c Checking 
auth result
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Fail 
Authentication failed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Error: 
Authentication failed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Emit main 
context 16

(remote-viewer:1495270): virt-viewer-WARNING **: 12:50:29.808: vnc-session: got 
vnc error Authentication failed

Thank you,

Jon

[1] 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/enabling-encrypted-vnc-consoles-for-fips


[ovirt-users] Re: Expand the disk space of the hosting engine

2023-03-28 Thread BJ一哥
Hello
I got an error when adding a hard disk to HostedEngine in the web UI:
   An error occurred while performing the operation: HostedEngine:
Unable to add virtual disk. The engine is not managing this virtual machine.
  Should I use the command line to add disks?
  I tried vdsm-client VM diskSizeExtend and vdsm-client Volume extendSize, but 
have not gotten either command to work.

matthew.st...@fujitsu.com wrote on Tue, 3 Jan 2023 at 13:27:

> You allocated a 100G storage domain.
>
> Within that storage domain, you allocated a 50G disk to hold the disk
> image.
>
> Just like any other storage domain, you can create additional disks.  I
> have seen warnings, for going over 80% allocated within this storage domain.
>
> When building the Self-Hosted-Engine, I specify 75GB when asked about the
> size of the disk to create, and then use LVM to add the unused storage of
> the disk to the root and swap logical volumes.
>
> You should be able to add a second disk of about 25G to the SHE, and use
> LVM to add it to the existing volume groups and expand the existing logical
> volumes.
>
> Of course, I have not tested this.
>
>
>
> -Original Message-
> From: ziyi Liu 
> Sent: Thursday, December 29, 2022 9:21 PM
> To: users@ovirt.org
> Subject: [ovirt-users] Re: Expand the disk space of the hosting engine
>
> Thank you very much, I know the operation steps, but there is still one
> point that I don't quite understand: fdisk -l only shows 50G while the actual
> disk allocation is 100G - how can I make the remaining 50G visible?
> The second question is: if the allocated 100G is also full, how should
> I expand it? I can't expand it using the web UI.
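Matthew's second-disk approach can be sketched with standard LVM commands. The device name and volume group/logical volume names below are hypothetical; check `vgs` and `lvs` inside the engine VM for the real ones:

```shell
# Inside the engine VM, after attaching the new ~25G disk (here /dev/vdb):
pvcreate /dev/vdb                        # initialize the disk as an LVM physical volume
vgextend ovirt_vg /dev/vdb               # add it to the existing volume group
lvextend -r -L +20G /dev/ovirt_vg/root   # grow the root LV; -r also grows the filesystem
```

The `-r`/`--resizefs` flag makes lvextend resize the filesystem in the same step, so no separate `xfs_growfs`/`resize2fs` call is needed. This is an untested sketch, as Matthew notes about the approach itself.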


[ovirt-users] Re: VDI management on top of Ovirt

2023-03-28 Thread Vinícius Stocker
Hi Samuel, how are you?

I'm trying OpenUDS (https://github.com/dkmstr/openuds). It suffers from a 
lack of documentation, but so far it appears to be very good.
Best regards,

Vinicius.


[ovirt-users] Uncertain what to do with "Master storage domain" residing on obsolete storage domain.

2023-03-28 Thread goestin
Hello All,

I want to phase out a storage domain, however it is marked as "master". I have 
the following questions:
1. What does it mean when a storage domain is the "master"?
2. What is the correct way to remove a storage domain that has the 
"master" status?

Any insight on the matter would be highly appreciated.

Kind regards,
Justin


[ovirt-users] Re: Setup oVirt self hosted engine on Rocky Linux 8 using cockpit - stuck in deadlock

2023-03-28 Thread Dirk H. Schulz

Hi Fran,

thanks for your detailed elaboration.

I will go the CLI installer way, since that is what is supported.

Thank you very much!

Dirk

Am 24.03.23 um 23:32 schrieb Fran Garcia:

Hi Dirk,

I'm having some trouble following the configuration.

I have successfully deployed HostedEngine over a vlan-tagged interface
in the past.

The configuration in the host should be as follows:


---
r...@rhel86-host1.p1.lab ~ # hostname
rhel86-host1.p1.lab

r...@rhel86-host1.p1.lab ~ # cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.11 rhel86-host1.p1.lab
172.16.1.20 rhvm.p1.lab

r...@rhel86-host1.p1.lab ~ # ip -4 --brief address show
lo   UNKNOWN127.0.0.1/8
eth0 UP 192.168.123.139/24
eth1.2001@eth1   UP 172.16.1.11/24
---

As you can see, there is just the hostname definition and a regular vlan
interface with the IP that the hypervisor has.  The installer will then take
care of creating the 'ovirtmgmt' network, and connect both the hypervisor and
the HostedEngine VM to it, preserving the vlan configuration as required.

Two comments around this:

a) I found that using vlan tagging for ovirtmgmt usually makes things more
difficult in the future, should you ever need to renumber/change the vlan.

b) I can't say the current status, but Red Hat dropped support of deploying
RHV-M (Hosted Engine) with cockpit a few releases ago.  Not sure if it is
still in shape.

After checking both the download and documentation urls, it seems it was
also dropped from oVirt, no mention of a cockpit installation whatsoever:

- https://ovirt.org/download/
- 
https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/

I've been able to find a blog that covers the whole cockpit-ovirt
deployment process, in case it helps:

- https://anthonyspiteri.net/ovirt-kvm-homelab-1/
- https://anthonyspiteri.net/ovirt-kvm-homelab-2/


My suggestion would be to check the 'hosted-engine --deploy' CLI installer,
which is known to work well.
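For reference, the CLI deployment Fran suggests is started as follows; the `--config-append` form is useful for re-running with saved answers, and the answer-file path shown is only an example:

```shell
# Interactive CLI deployment of the hosted engine:
hosted-engine --deploy

# Re-run with answers saved from a previous attempt (example path;
# the installer prints the actual answer-file location when it finishes):
hosted-engine --deploy --config-append=/path/to/answers.conf
```

The installer asks for the engine VM's FQDN, the management network interface, and the storage backend, covering the same inputs the cockpit wizard collected.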

Hope this helps,


Fran



[ovirt-users] How to enable the storage pool correctly

2023-03-28 Thread ziyi Liu
The /var folder is full, so I can't reach the web UI to fix it; I can only
use the command line.
vdsm-client StorageDomain activate
vdsm-client StorageDomain attach
vdsm-client StoragePool connect
vdsm-client StoragePool connectStorageServer
I have tried these commands and they all fail with "message=Unknown pool id,
pool not connected".
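Before fighting vdsm-client, it is usually easier to first free space under /var so the engine UI comes back; a generic sketch for finding what fills it (nothing here is oVirt-specific):

```shell
# Show how full /var is, then list its largest entries first.
df -h /var
du -xh --max-depth=2 /var 2>/dev/null | sort -rh | head -n 10
# A common reclaim step on EL hosts (run deliberately, it deletes old journals):
#   journalctl --vacuum-size=200M
```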
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQJJW7DXOWS7R5HO73W5L66OALAFRGWJ/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Diego Ercolani
New event:

Mar 28 14:37:32 ovirt-node3.ovirt vdsm[4288]: WARN executor state: count=5 
workers={, , , ,  at 0x7fcdc0010898> timeout=7.5, duration=7.50 at 
0x7fcdc0010208> discarded task#=189 at 0x7fcdc0010390>}
Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829 
[7438]: s4 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829 
[7438]: s4 renewal error -202 delta_length 10 last_success 798
Mar 28 14:37:33 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:33 830 
[7660]: s6 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/ovirt-nfsha.ovirt:_dati_drbd0/2527ed0f-e91a-4748-995c-e644362e8408/dom_md/ids
Mar 28 14:37:33 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:33 830 
[7660]: s6 renewal error -202 delta_length 10 last_success 799
Mar 28 14:37:36 ovirt-node3.ovirt pacemaker-controld[3145]:  notice: High CPU 
load detected: 32.59
Mar 28 14:37:36 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
We did not send a P_BARRIER for 14436ms > ko-count (7) * timeout (10 * 0.1s); 
drbd kernel thread blocked?
Mar 28 14:37:41 ovirt-node3.ovirt libvirtd[2735]: Domain id=1 
name='SSIS-microos' uuid=e41f8148-79ab-4a88-879f-894d5750e5fb is tainted: 
custom-ga-command
Mar 28 14:37:49 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
We did not send a P_BARRIER for 7313ms > ko-count (7) * timeout (10 * 0.1s); 
drbd kernel thread blocked?
Mar 28 14:37:56 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
We did not send a P_BARRIER for 14481ms > ko-count (7) * timeout (10 * 0.1s); 
drbd kernel thread blocked?
Mar 28 14:38:06 ovirt-node3.ovirt pacemaker-controld[3145]:  notice: High CPU 
load detected: 33.50
Mar 28 14:38:09 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
Remote failed to finish a request within 7010ms > ko-count (7) * timeout (10 * 
0.1s)

2023-03-28 14:37:32,601Z INFO  
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-73) [] 
There is no host with more than 10 running guests, no balancing is needed
2023-03-28 14:37:50,662Z INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-41) [] VM 
'ccb06298-33a3-4b6f-bff3-d0bcd494b18d'(TpayX2GO) moved from 'Up' --> 
'NotResponding'
2023-03-28 14:37:50,666Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-41) [] 
EVENT_ID: VM_NOT_RESPONDING(126), VM TpayX2GO is not responding.
2023-03-28 14:38:01,087Z WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6602) [] domain 
'4745320f-bfc3-46c4-8849-b4fe8f1b2de6:gv0' in problem 'PROBLEMATIC'. vds: 
'ovirt-node2.ovirt'
2023-03-28 14:38:05,676Z INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] VM 
'ccb06298-33a3-4b6f-bff3-d0bcd494b18d'(TpayX2GO) moved from 'NotResponding' --> 
'Up'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6609) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' recovered from problem. vds: 
'ovirt-node2.ovirt'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6609) [] Domain 
'4745320f-bfc3-46c4-8849-b4fe8f1b2de6:gv0' recovered from problem. vds: 
'ovirt-node2.ovirt'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6610) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' recovered from problem. vds: 
'ovirt-node4.ovirt'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6610) [] Domain 
'4745320f-bfc3-46c4-8849-b4fe8f1b2de6:gv0' recovered from problem. vds: 
'ovirt-node4.ovirt'
2023-03-28 14:38:16,327Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6612) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' recovered from problem. vds: 
'ovirt-node3.ovirt'
2023-03-28 14:38:16,327Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6612) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' has recovered from problem. No 
active host in the DC is reporting it as problematic, so clearing the domain 
recovery timer.
2023-03-28 14:38:16,327Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6612) [] Domain 

[ovirt-users] Re: Failing "change Master storage domain" from gluster to iscsi

2023-03-28 Thread Diego Ercolani
Worked

I halted a node of the gluster cluster (that seemed to be problematic from the 
gluster point of view) and the change of the master storage domain worked
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRVFDW5LELCZEH2G34IWR33H5EAN76CH/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Diego Ercolani
It's difficult to answer, as the engine normally "freezes" or is taken down
during these events... I will try to get the logs next time.
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NCKTSP6AR3CHJDYBPEYDFXTWEDS3IQGZ/


[ovirt-users] clock skew in hosted engine and VMs due to slow IO storage

2023-03-28 Thread Diego Ercolani
I don't know why (but I suppose it is related to storage speed), but the
virtual machines tend to show a clock skew ranging from a few days up to a
century forward (to 2177).
I see in the journal of the engine:
Mar 28 13:19:40 ovirt-engine.ovirt NetworkManager[1158]: <info>  
[1680009580.2045] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:24:40 ovirt-engine.ovirt NetworkManager[1158]: <info>  
[1680009880.2042] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:29:40 ovirt-engine.ovirt NetworkManager[1158]: <info>  
[1680010180.2039] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Forward time jump detected!
Apr 01 08:15:42 ovirt-engine.ovirt NetworkManager[1158]: <info>  
[1680336942.4396] dhcp4 (eth0): activation: beginning transaction (timeout in 
45 seconds)
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Can't synchronise: no 
selectable sources

When this happens in the hosted engine, typically:
1. the DWH becomes inconsistent, as I described here: 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/KPW5FFKG3AI6EINW4G74IKTYB2E4A5DT/#RDMSESARKHEGCV4PTIDVBTLCTEK3VPTA
 or 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/WUNZUSZ2ARRLGN5AMUSVDXFQ2VWEXK6H/#OMXYSEDVCCHQSPMVXA5KM57ZWR3XHVJI
2. the skew causes the engine to kick off the nodes, which appear "down" in 
the "connecting" state

This compromises all tasks in the pending state and triggers countermeasures
from the ovirt-engine manager and also the vdsm daemon.


For now I have put "hwclock --hctosys" in the engine's crontab every 5
minutes, since the hardware clock does not seem to skew.
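An alternative to the cron workaround, assuming chronyd is what disciplines the engine's clock, is to let it step a badly skewed clock at any time rather than only at startup (an /etc/chrony.conf sketch, not a verified fix for the underlying storage stall):

```
# /etc/chrony.conf fragment: step the clock whenever the offset exceeds
# 1 second, on any update (-1) instead of only during the first updates.
makestep 1 -1
```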
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/SGCAT6RPBJ42BM3TQ3AI6FS2HHYVXGIQ/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Gianluca Cecchi
On Tue, Mar 28, 2023 at 3:30 PM Diego Ercolani 
wrote:

> No, now seem "stable" awaiting for next event
>
>
I mean the logs around the time the problems arise... If the engine has not
shut down, it will contain the logs generated during the problematic
timeframe...
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZSEHUGX5JYKGZGHPGP3UZQWUIAAB5BPW/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Diego Ercolani
No, it now seems "stable"; awaiting the next event.
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/SSBFQIANLH5RXP2CY3UU7CDJE3ZWLI2A/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Gianluca Cecchi
On Tue, Mar 28, 2023 at 12:34 PM Diego Ercolani 
wrote:

> I record entry like this in the journal of everynode:
> Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58
> 1191247 [4105511]: s9 delta_renew read timeout 10 sec offset 0
> /rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
> Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58
> 1191247 [4105511]: s9 renewal error -202 delta_length 10 last_success
> 1191216
> Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58
> 1191247 [2750073]: s11 delta_renew read timeout 10 sec offset 0
> /rhev/data-center/mnt/ovirt-nfsha.ovirt:_dati_drbd0/2527ed0f-e91a-4748-995c-e644362e8408/dom_md/ids
> Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58
> 1191247 [2750073]: s11 renewal error -202 delta_length 10 last_success
> 1191217
>
> as You see its complaining about a gluster volume (hosting vms and mapped
> on three node with the terrible SATA SSD: Samsung_SSD_870_EVO_4TB
>
>
And what do you see inside the engine.log file of the engine when it becomes
reachable again?
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/STTDIP67ESL3ALJK4EW7Q2NGEBLH4RF4/


[ovirt-users] Failing "change Master storage domain" from gluster to iscsi

2023-03-28 Thread Diego Ercolani
In the current release of oVirt (4.5.4) I'm experiencing a failure when
changing the master storage domain from a gluster volume to any other domain.

The GUI reports only a "general" error. Watching the engine log:

2023-03-28 11:51:16,601Z WARN  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] 
Unexpected return value: TaskStatus [code=331, message=value=Tar command 
failed: ({'reader': {'cmd': ['/usr/bin/tar', 'cf', '-', 
'--exclude=./lost+found', '-C', 
'/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master',
 '.'], 'rc': 1, 'err': '/usr/bin/tar: 
./tasks/20a9aa7f-80f5-403b-b296-ea95d9fd3f97: file changed as we read 
it\n/usr/bin/tar: 
./tasks/87783efa-42ac-4cd9-bda5-ad68c59bb881/87783efa-42ac-4cd9-bda5-ad68c59bb881.task:
 file changed as we read it\n'}},) abortedcode=331]
2023-03-28 11:51:16,601Z ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] 
Failed in 'HSMGetAllTasksStatusesVDS' method

It seems that something is changing files under the directory, but:
[vdsm@ovirt-node2 4745320f-bfc3-46c4-8849-b4fe8f1b2de6]$ /usr/bin/tar -cf - 
--exclude=./lost+found -C 
'/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master'
 '.' > /tmp/tar.tar
/usr/bin/tar: ./tasks/20a9aa7f-80f5-403b-b296-ea95d9fd3f97: file changed as we 
read it
/usr/bin/tar: ./tasks: file changed as we read it

[vdsm@ovirt-node2 master]$ find 
'/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master'
 -mtime -1
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master/tasks
[vdsm@ovirt-node2 master]$ ls -l 
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master/
total 0
drwxr-xr-x. 6 vdsm kvm 182 Mar 28 11:51 tasks
drwxr-xr-x. 2 vdsm kvm   6 Mar 26 20:36 vms

[vdsm@ovirt-node2 master]$ date; stat tasks
Tue Mar 28 12:04:06 UTC 2023
  File: tasks
  Size: 182 Blocks: 0  IO Block: 131072 directory
Device: 31h/49d Inode: 12434008067414313592  Links: 6
Access: (0755/drwxr-xr-x)  Uid: (   36/vdsm)   Gid: (   36/ kvm)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-03-28 11:55:17.771046746 +0000
Modify: 2023-03-28 11:51:16.641145314 +0000
Change: 2023-03-28 11:51:16.641145314 +0000
 Birth: -

It seems the tasks directory hasn't been touched since
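The failure above ultimately hinges on GNU tar's exit status: 0 means success, 1 means "some files differ" (which includes "file changed as we read it"), 2 is fatal, and vdsm's task code treats any non-zero status as a failed task. A minimal sketch of observing that status on a quiet directory:

```shell
# GNU tar exit codes: 0 = OK, 1 = some files differ (e.g. "file changed as
# we read it"), 2 = fatal. A directory nobody is writing to archives cleanly.
d=$(mktemp -d)
echo data > "$d/f"
tar -cf /dev/null --exclude=./lost+found -C "$d" .
st=$?
echo "tar exit: $st"
rm -rf "$d"
```

In the thread's case the status is 1 only because something (gluster or a running SPM task) keeps rewriting files under tasks/ mid-archive, so quiescing that activity is what makes the master-domain switch succeed.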
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/CVZQS2KUZWM5RHQKWLQTFKATVUT7JVPR/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Diego Ercolani
I record entry like this in the journal of everynode:
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[4105511]: s9 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[4105511]: s9 renewal error -202 delta_length 10 last_success 1191216
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[2750073]: s11 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/ovirt-nfsha.ovirt:_dati_drbd0/2527ed0f-e91a-4748-995c-e644362e8408/dom_md/ids
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[2750073]: s11 renewal error -202 delta_length 10 last_success 1191217

As you see, it is complaining about a gluster volume (hosting VMs and mapped
on three nodes with the terrible SATA SSD: Samsung_SSD_870_EVO_4TB).
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OKLA3DSCPPMTUEXQSKTZ6PYQ5MIDYODT/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Diego Ercolani
The scheduling policy was "Suspend Workload if needed", with parallel
migration disabled.
The problem is that the Engine (mapped on an external NFS domain implemented
by a Linux box with no other VM on it) simply disappears. I have a single
10 Gbps Intel ethernet link that carries the storage, management and
"production" networks, but I have not recorded any bandwidth limit issue.
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/O22Z6VBG5YVYO5Y55OBCV3EEPQ3Q7P6Y/


[ovirt-users] Re: Disable "balancing" and authomatic migration

2023-03-28 Thread Gianluca Cecchi
On Tue, Mar 28, 2023 at 11:50 AM Diego Ercolani 
wrote:

> Hello,
> in my installation I have to use poor storage... the oVirt installation
> doesn't manage such a case and begin to "balance" and move VMs around...
> taking too many snapshots stressing a poor performance all the cluster mess
> up
> Why the vms don't go in "Pause" state but the cluster prefer to migrate
> things around messing up everything?
> This is a reference I found and for notice I'm disabling the
> auto-migration on every VM, hoping this help
>
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/24KQZFP2PCW462UZXKNAAJKDL44WU5OV/#24KQZFP2PCW462UZXKNAAJKDL44WU5OV
>
>
What is your current scheduling policy for the related oVirt cluster?
What event/error do you see in engine.log on the engine and in vdsm.log on
the host that was previously running the VM when it happens?

Gianluca
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/4A35FL5J4QHRAKM24NBYJ3HFUIDIBLTZ/


[ovirt-users] Disable "balancing" and authomatic migration

2023-03-28 Thread Diego Ercolani
Hello,
in my installation I have to use poor storage... oVirt doesn't handle such a
case and begins to "balance" and move VMs around... taking too many snapshots
and stressing the already poor performance until the whole cluster messes up.
Why don't the VMs go into the "Paused" state instead of the cluster migrating
things around and messing up everything?
This is a reference I found; for the record, I'm disabling auto-migration on
every VM, hoping this helps:

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/24KQZFP2PCW462UZXKNAAJKDL44WU5OV/#24KQZFP2PCW462UZXKNAAJKDL44WU5OV
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KCIVLZ42JU6GZ6SS4C5LPCTFOLK4GLE/


[ovirt-users] Re: Failing vm backup

2023-03-28 Thread Berat Aksoy
Good to hear you have solved the problem.

I would also suggest trying Vinchin Backup & Recovery for oVirt backup. It
is fully compatible with oVirt.
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/4FBYSAL3QDJ7UBEQO7ZHAQCJ6KBM2UR3/