[ovirt-users] Re: Upgrading self-Hosted engine from 4.3 to oVirt 4.4

2020-09-16 Thread Adam Xu


On 2020/9/17 12:58, Adam Xu wrote:


On 2020/9/16 15:53, Yedidyah Bar David wrote:

On Wed, Sep 16, 2020 at 10:46 AM Adam Xu  wrote:

On 2020/9/16 15:12, Yedidyah Bar David wrote:
On Wed, Sep 16, 2020 at 6:10 AM Adam Xu  
wrote:

Hi ovirt

I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I followed the steps in the document:


https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3 
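(For reference, the documented flow boils down to roughly the following; a hedged sketch, where the backup file name is a placeholder:)

~~~
# on the old 4.3 engine VM: take a full backup
engine-backup --mode=backup --scope=all --file=engine-43-backup.tar.gz --log=backup.log

# on the freshly installed 4.4 host: restore-deploy from that backup;
# the script is expected to ask for the *new* storage domain later in the run
hosted-engine --deploy --restore-from-file=engine-43-backup.tar.gz
~~~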



The old 4.3 env has an FC storage domain as the engine storage domain, and I have created a new FC storage vv for the new storage domain to be used in the next steps.


I backed up the old 4.3 env and prepared a totally new host to restore the env onto.


In chapter 4.4, step 8, it says:

"During the deployment you need to provide a new storage domain. 
The deployment script renames the 4.3 storage domain and retains 
its data."


It does rename the old storage domain, but it didn't let me choose a new storage domain during the deployment. So the new engine was just deployed on the new host's local storage and cannot be moved to the FC storage domain.


Can anyone tell me what the problem is?

What do you mean in "deployed in the new host's local storage"?

Did deploy finish successfully?

I think it was not finished yet.

You did 'hosted-engine --deploy --restore-from-file=something', right?

Did this finish?

not finished yet.


What are the last few lines of the output?


[ INFO  ] You can now connect to 
https://ovirt6.ntbaobei.com:6900/ovirt-engine/ and check the status of 
this host and eventually remediate it, please continue only when the 
host is listed as 'up'


[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]

[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO  ] changed: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Pause execution until 
/tmp/ansible.g2opa_y6_he_setup_lock is removed, delete it once ready 
to proceed]


But the status of the new host that runs the self-hosted engine is "NonOperational" and will never become "up".




Please also check/share logs from /var/log/ovirt-hosted-engine-setup/*
(including subdirs).
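(For example, a quick way to scan those logs for failures, assuming the default location mentioned above:)

~~~
# newest setup log first, then pull out anything that looks like a failure
ls -lrt /var/log/ovirt-hosted-engine-setup/
grep -riE 'ERROR|fatal:' /var/log/ovirt-hosted-engine-setup/ | tail -n 50
~~~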
No more errors there, just a lot of DEBUG messages.


Sorry, I have found some error messages:

2020-09-17 10:02:33,438+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 db_update_host_vms: {'cmd': ['psql', '-d', 'engine', '-c', "UPDATE vm_dynamic SET run_on_vds = NULL, status=0 /* Down */ WHERE run_on_vds IN (SELECT vds_id FROM vds WHERE upper(vds_unique_id)=upper('4c4c4544-0032-4b10-804e-b3c04f4c5232'))"], 'stdout': 'UPDATE 0', 'stderr': '', 'rc': 0, 'start': '2020-09-17 10:02:32.241970', 'end': '2020-09-17 10:02:32.284024', 'delta': '0:00:00.042054', 'changed': True, 'stdout_lines': ['UPDATE 0'], 'stderr_lines': [], 'failed': False}
2020-09-17 10:02:33,840+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Update dynamic data for VMs migrating to the host used to redeploy]
2020-09-17 10:02:35,345+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 changed: [localhost -> engine.ntbaobei.com]
2020-09-17 10:02:35,747+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 TASK [ovirt.hosted_engine_setup : debug]
2020-09-17 10:02:36,049+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 db_update_host_migrating_vms: {'cmd': ['psql', '-d', 'engine', '-c', "UPDATE vm_dynamic SET migrating_to_vds = NULL, status=0 /* Down */ WHERE migrating_to_vds IN (SELECT vds_id FROM vds WHERE upper(vds_unique_id)=upper('4c4c4544-0032-4b10-804e-b3c04f4c5232'))"], 'stdout': 'UPDATE 0', 'stderr': '', 'rc': 0, 'start': '2020-09-17 10:02:34.864051', 'end': '2020-09-17 10:02:34.899514', 'delta': '0:00:00.035463', 'changed': True, 'stdout_lines': ['UPDATE 0'], 'stderr_lines': [], 'failed': False}


It shows that it can't migrate the engine VM to the original hosts. I don't know why.



It didn't tell me to choose a new storage domain and just gave me the new host's FQDN as the engine's URL, like host6.example.com:6900.
Yes, that's temporary, to let you access the engine VM (on the local network).



I can log in using host6.example.com:6900 and I saw the engine VM running in host6's /tmp dir.


HE deploy (since 4.3) first creates a VM for the engine on local
storage, then prompts you to provide the storage you want to use, and
then moves the VM disk image there.
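(So once the freshly added host is shown as 'Up' in that temporary engine, the paused playbook is resumed by deleting the lock file it printed; a hedged sketch, the exact file name comes from the deploy output above:)

~~~
# resume the paused hosted-engine deploy, but only once the host is listed as 'Up'
# (the file name is printed by the setup, e.g. /tmp/ansible.g2opa_y6_he_setup_lock)
rm -f /tmp/ansible.*_he_setup_lock
~~~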

Best regards,


Thanks

--
Adam Xu

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHDGJB2ZAFS7AJZYS4F5BAMC2ZVKCYY4/



--
Adam 

[ovirt-users] Re: Upgrading self-Hosted engine from 4.3 to oVirt 4.4

2020-09-16 Thread Adam Xu


On 2020/9/16 15:53, Yedidyah Bar David wrote:

On Wed, Sep 16, 2020 at 10:46 AM Adam Xu  wrote:

On 2020/9/16 15:12, Yedidyah Bar David wrote:

On Wed, Sep 16, 2020 at 6:10 AM Adam Xu  wrote:

Hi ovirt

I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I followed the steps in the document:

https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3

The old 4.3 env has an FC storage domain as the engine storage domain, and I have created a new FC storage vv for the new storage domain to be used in the next steps.

I backed up the old 4.3 env and prepared a totally new host to restore the env onto.

In chapter 4.4, step 8, it says:

"During the deployment you need to provide a new storage domain. The deployment 
script renames the 4.3 storage domain and retains its data."

It does rename the old storage domain, but it didn't let me choose a new storage domain during the deployment. So the new engine was just deployed on the new host's local storage and cannot be moved to the FC storage domain.

Can anyone tell me what the problem is?

What do you mean in "deployed in the new host's local storage"?

Did deploy finish successfully?

I think it was not finished yet.

You did 'hosted-engine --deploy --restore-from-file=something', right?

Did this finish?

not finished yet.


What are the last few lines of the output?


[ INFO  ] You can now connect to 
https://ovirt6.ntbaobei.com:6900/ovirt-engine/ and check the status of 
this host and eventually remediate it, please continue only when the 
host is listed as 'up'


[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]

[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO  ] changed: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Pause execution until 
/tmp/ansible.g2opa_y6_he_setup_lock is removed, delete it once ready to 
proceed]


But the status of the new host that runs the self-hosted engine is "NonOperational" and will never become "up".




Please also check/share logs from /var/log/ovirt-hosted-engine-setup/*
(including subdirs).
No more errors there, just a lot of DEBUG messages.

It didn't tell me to choose a new storage domain and just gave me the new host's FQDN as the engine's URL, like host6.example.com:6900.

Yes, that's temporary, to let you access the engine VM (on the local network).


I can log in using host6.example.com:6900 and I saw the engine VM running in host6's /tmp dir.


HE deploy (since 4.3) first creates a VM for the engine on local
storage, then prompts you to provide the storage you want to use, and
then moves the VM disk image there.

Best regards,


Thanks

--
Adam Xu

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHDGJB2ZAFS7AJZYS4F5BAMC2ZVKCYY4/



--
Adam Xu
Phone: 86-512-8777-3585
Adagene (Suzhou) Limited
C14, No. 218, Xinghu Street, Suzhou Industrial Park

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RLOBPKLW7OBZR5K4AUQWG5MZPYNYUDMI/




--
Adam Xu

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTVZW7W6XHZTZZLJZLNIH2JWMF67EOCA/


[ovirt-users] Re: hosted engine migration

2020-09-16 Thread Strahil Nikolov via Users
It would be easier if you posted the whole xml.

What about the sections (in the HE xml) starting with:
<feature policy=

Also the hosts have a <cpu> section which contains:
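(A hedged way to pull those sections out for comparison, reusing the virsh alias quoted further down:)

~~~
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
# on the host running the HE: the guest CPU model and required features
virsh dumpxml HostedEngine | grep -E '<model|<feature policy'
# on every host: what the physical CPU actually offers
virsh capabilities | grep -E '<model>|<feature '
~~~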








On Thursday, September 17, 2020, 05:54:12 GMT+3, ddqlo wrote:





HostedEngine:
..
Haswell-noTSX
..

both of the hosts:
..
Westmere
..

others vms which can be migrated:
..
Haswell-noTSX
..



On 2020-09-17 03:03:24, "Strahil Nikolov" wrote:
>Can you verify the HostedEngine's CPU ?
>
>1. ssh to the host hosting the HE
>2. alias virsh='virsh -c 
>qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
>3. virsh dumpxml HostedEngine
>
>
>Then set the alias for virsh on all Hosts and 'virsh capabilities' should show 
>the Hosts' <cpu> section.
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>
>On Wednesday, September 16, 2020, 10:16:08 GMT+3, ddqlo wrote:
>
>
>
>
>
>My gateway was not pingable. I have fixed this problem and now both nodes have 
>a score(3400).
>Yet, hosted engine could not be migrated. Same log in engine.log:
>host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'
>
>
>On 2020-09-16 02:11:09, "Strahil Nikolov" wrote:
>>Both nodes have a lower than the usual score (should be 3400 ).
>>Based on the score you are probably suffering from gateway-score-penalty 
>>[1][2].
>>Check if your gateway is pingable.
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
>>2 - /etc/ovirt-hosted-engine-ha/agent.conf 
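(A hedged way to double-check the gateway penalty on each host; the config paths are assumptions based on the references above:)

~~~
grep -i gateway /etc/ovirt-hosted-engine/hosted-engine.conf    # which address the agent pings
ping -c 3 <gateway-ip-from-the-line-above>                     # must answer, or the score is penalized
grep -i gateway /etc/ovirt-hosted-engine-ha/agent.conf         # any local override of the penalty
hosted-engine --vm-status | grep -E 'Hostname|Score'           # 3400 means no penalties applied
~~~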
>>
>>
>>
>>
>>
>>
>>On Tuesday, September 15, 2020, 04:49:48 GMT+3, ddqlo wrote:
>>
>>
>>
>>
>>
>>--== Host node28 (id: 1) status ==--
>>
>>conf_on_shared_storage             : True
>>Status up-to-date                  : True
>>Hostname                           : node28
>>Host ID                            : 1
>>Engine status                      : {"reason": "vm not running on this 
>>host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
>>Score                              : 1800
>>stopped                            : False
>>Local maintenance                  : False
>>crc32                              : 4ac6105b
>>local_conf_timestamp               : 1794597
>>Host timestamp                     : 1794597
>>Extra metadata (valid at timestamp):
>>        metadata_parse_version=1
>>        metadata_feature_version=1
>>        timestamp=1794597 (Tue Sep 15 09:47:17 2020)
>>        host-id=1
>>        score=1800
>>        vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
>>        conf_on_shared_storage=True
>>        maintenance=False
>>        state=EngineDown
>>        stopped=False
>>
>>
>>--== Host node22 (id: 2) status ==--
>>
>>conf_on_shared_storage             : True
>>Status up-to-date                  : True
>>Hostname                           : node22
>>Host ID                            : 2
>>Engine status                      : {"health": "good", "vm": "up", "detail": 
>>"Up"}
>>Score                              : 1800
>>stopped                            : False
>>Local maintenance                  : False
>>crc32                              : ffc41893
>>local_conf_timestamp               : 1877876
>>Host timestamp                     : 1877876
>>Extra metadata (valid at timestamp):
>>        metadata_parse_version=1
>>        metadata_feature_version=1
>>        timestamp=1877876 (Tue Sep 15 09:47:13 2020)
>>        host-id=2
>>        score=1800
>>        vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
>>        conf_on_shared_storage=True
>>        maintenance=False
>>        state=EngineUp
>>        stopped=False
>>
>>
>>
>>
>>
>>
>>
>>On 2020-09-09 01:32:55, "Strahil Nikolov" wrote:
>>>What is the output of 'hosted-engine --vm-status' on the node where the 
>>>HostedEngine is running ?
>>>
>>>
>>>Best Regards,
>>>Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>>On Monday, September 7, 2020, 03:53:13 GMT+3, ddqlo wrote:
>>>
>>>
>>>
>>>
>>>
>>>I could not find any logs because the migration button is disabled in the 
>>>web UI. It seems that the engine migration operation is prevented at first. 
>>>Any other ideas? Thanks!
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>On 2020-09-01 00:06:19, "Strahil Nikolov" wrote:
I'm running oVirt 4.3.10 and I can migrate my Engine from node to node.
I had one similar issue , but powering off and on the HE has fixed it.

You have to check the vdsm log on the source and on destination in order to 
figure out what is going on.
Also you might consider checking the libvirt logs on the destination.

Best Regards,
Strahil Nikolov






On Monday, August 31, 2020, 10:47:22 GMT+3, ddqlo wrote:





Thanks! The scores of all nodes are not '0'. I find that someone has 
already asked a question like this. It seems that  this feature has been 
disabled in 4.3. I am not sure if it is enabled in 4.4.


On 2020-08-29 02:27:03, "Strahil Nikolov" wrote:

[ovirt-users] Re: New oVirt Install - Host Engine Deployment Fails

2020-09-16 Thread Strahil Nikolov via Users
It seems that this one fails :

- name: Parse server CPU list
  set_fact:
    server_cpu_dict: "{{ server_cpu_dict |
      combine({item.split(':')[1]: item.split(':')[3]}) }}"

In cases like that I usually define a new variable.

Can you put another task before that like:
- name: Debug server_cpu_dict
  debug:
    var: server_cpu_dict
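(As a hedged shortcut, the list that this dict is built from can also be read straight from the engine; `ServerCPUList` is the option the role queries, and the compatibility-version value below is an assumption:)

~~~
# run inside the (temporary) engine VM; lists every CPU family the engine knows
# for one compatibility version -- a family missing here cannot be resolved by the role
engine-config -g ServerCPUList --cver=4.4
~~~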


Best Regards,
Strahil Nikolov


On Thursday, September 17, 2020, 00:30:57 GMT+3, Michael Blanton wrote:





In my previous reply:

>> Ansible task reports them as Xeon 5130.
>> According to Intel Ark these fall in the Woodcrest family, which is
>> older than Nehalem.

Xeon 5130 "Woodcrest"
Do you need something more specific or different?

I also found one a reply from you on an older thread and added it:

~~~
100   - name: Debug why parsing fails
101     debug:
102       msg:
103       - "Loop is done over {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split(';')|list|difference(['']) }}"
104       - "Actual value of server_cpu_dict before the set_fact is {{ server_cpu_dict }}"
105   - name: Parse server CPU list
106     set_fact:
107       server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}"
108     with_items: >-
109       {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split('; ')|list|difference(['']) }}
110   - debug: var=server_cpu_dict
111   - name: Convert CPU model name
112     set_fact:
113       cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
114   - debug: var=cluster_cpu_model
~~~

  [ INFO ] ["Loop is done over ['1:Intel Nehalem 
Family:vmx,nx,model_Nehalem:Nehalem:x86_64', ' 2:Secure Intel Nehalem 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64',
 
' 3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64', ' 
4:Secure Intel Westmere 
Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64',
 
' 5:Intel SandyBridge 
Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64', ' 6:Secure Intel 
SandyBridge 
Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64',
 
' 7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64', ' 
8:Secure Intel IvyBridge 
Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64',
 
' 9:Intel Haswell 
Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64', ' 10:Secure 
Intel Haswell 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64',
 
' 11:Intel Broadwell 
Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64', ' 12:Secure 
Intel Broadwell 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64',
 
' 13:Intel Skylake Client 
Family:vmx,nx,model_Skylake-Client:Skylake-Client,-hle,-rtm:x86_64', ' 
14:Secure Intel Skylake Client 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64',
 
' 15:Intel Skylake Server 
Family:vmx,nx,model_Skylake-Server:Skylake-Server,-hle,-rtm:x86_64', ' 
16:Secure Intel Skylake Server 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64',
 
' 17:Intel Cascadelake Server 
Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64',
 
' 18:Secure Intel Cascadelake Server 
Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64',
 
' 1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64', ' 2:AMD 
Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64', ' 3:AMD 
EPYC:svm,nx,model_EPYC:EPYC:x86_64', ' 4:Secure AMD 
EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64', ' 1:IBM 
POWER8:powernv,model_POWER8:POWER8:ppc64', ' 2:IBM 
POWER9:powernv,model_POWER9:POWER9:ppc64', ' 1:IBM z114, 
z196:sie,model_z196-base:z196-base:s390x', ' 2:IBM zBC12, 
zEC12:sie,model_zEC12-base:zEC12-base:s390x', ' 3:IBM z13s, 
z13:sie,model_z13-base:z13-base:s390x', ' 4:IBM 
z14:sie,model_z14-base:z14-base:s390x']", 'Actual value of 
server_cpu_dict before the set_fact is {}']

[ INFO ] TASK [ovirt.hosted_engine_setup : Parse server CPU list]

[ INFO ] ok: [localhost]

[ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]

[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an 
option with an undefined variable. The error was: 'dict object' has no 
attribute ''\n\nThe error appears to be in 
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
 
line 110, column 15, but may\nbe elsewhere in the file depending on the 
exact syntax problem.\n\nThe offending line appears to be:\n\n - debug: 
var=server_cpu_dict\n ^ here\n\nThere 

[ovirt-users] Re: hosted engine migration

2020-09-16 Thread ddqlo
HostedEngine:
..
Haswell-noTSX
..


both of the hosts:
..
Westmere
..


others vms which can be migrated:
..
Haswell-noTSX
..





On 2020-09-17 03:03:24, "Strahil Nikolov" wrote:
>Can you verify the HostedEngine's CPU ?
>
>1. ssh to the host hosting the HE
>2. alias virsh='virsh -c 
>qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
>3. virsh dumpxml HostedEngine
>
>
>Then set the alias for virsh on all Hosts and 'virsh capabilities' should show 
>the Hosts' <cpu> section.
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>
>On Wednesday, September 16, 2020, 10:16:08 GMT+3, ddqlo wrote:
>
>
>
>
>
>My gateway was not pingable. I have fixed this problem and now both nodes have 
>a score(3400).
>Yet, hosted engine could not be migrated. Same log in engine.log:
>host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'
>
>
>On 2020-09-16 02:11:09, "Strahil Nikolov" wrote:
>>Both nodes have a lower than the usual score (should be 3400 ).
>>Based on the score you are probably suffering from gateway-score-penalty 
>>[1][2].
>>Check if your gateway is pingable.
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
>>2 - /etc/ovirt-hosted-engine-ha/agent.conf 
>>
>>
>>
>>
>>
>>
>>On Tuesday, September 15, 2020, 04:49:48 GMT+3, ddqlo wrote:
>>
>>
>>
>>
>>
>>--== Host node28 (id: 1) status ==--
>>
>>conf_on_shared_storage : True
>>Status up-to-date  : True
>>Hostname   : node28
>>Host ID: 1
>>Engine status  : {"reason": "vm not running on this 
>>host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
>>Score  : 1800
>>stopped: False
>>Local maintenance  : False
>>crc32  : 4ac6105b
>>local_conf_timestamp   : 1794597
>>Host timestamp : 1794597
>>Extra metadata (valid at timestamp):
>>metadata_parse_version=1
>>metadata_feature_version=1
>>timestamp=1794597 (Tue Sep 15 09:47:17 2020)
>>host-id=1
>>score=1800
>>vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
>>conf_on_shared_storage=True
>>maintenance=False
>>state=EngineDown
>>stopped=False
>>
>>
>>--== Host node22 (id: 2) status ==--
>>
>>conf_on_shared_storage : True
>>Status up-to-date  : True
>>Hostname   : node22
>>Host ID: 2
>>Engine status  : {"health": "good", "vm": "up", "detail": 
>>"Up"}
>>Score  : 1800
>>stopped: False
>>Local maintenance  : False
>>crc32  : ffc41893
>>local_conf_timestamp   : 1877876
>>Host timestamp : 1877876
>>Extra metadata (valid at timestamp):
>>metadata_parse_version=1
>>metadata_feature_version=1
>>timestamp=1877876 (Tue Sep 15 09:47:13 2020)
>>host-id=2
>>score=1800
>>vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
>>conf_on_shared_storage=True
>>maintenance=False
>>state=EngineUp
>>stopped=False
>>
>>
>>
>>
>>
>>
>>
>>On 2020-09-09 01:32:55, "Strahil Nikolov" wrote:
>>>What is the output of 'hosted-engine --vm-status' on the node where the 
>>>HostedEngine is running ?
>>>
>>>
>>>Best Regards,
>>>Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>>On Monday, September 7, 2020, 03:53:13 GMT+3, ddqlo wrote:
>>>
>>>
>>>
>>>
>>>
>>>I could not find any logs because the migration button is disabled in the 
>>>web UI. It seems that the engine migration operation is prevented at first. 
>>>Any other ideas? Thanks!
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>On 2020-09-01 00:06:19, "Strahil Nikolov" wrote:
I'm running oVirt 4.3.10 and I can migrate my Engine from node to node.
I had one similar issue , but powering off and on the HE has fixed it.

You have to check the vdsm log on the source and on destination in order to 
figure out what is going on.
Also you might consider checking the libvirt logs on the destination.

Best Regards,
Strahil Nikolov






On Monday, August 31, 2020, 10:47:22 GMT+3, ddqlo wrote:





Thanks! The scores of all nodes are not '0'. I find that someone has 
already asked a question like this. It seems that  this feature has been 
disabled in 4.3. I am not sure if it is enabled in 4.4.


On 2020-08-29 02:27:03, "Strahil Nikolov" wrote:
>Have you checked under a shell the output of 'hosted-engine --vm-status' . 
>Check the Score of the hosts. Maybe there is a node with score of '0' ?
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>

[ovirt-users] Re: New oVirt Install - Host Engine Deployment Fails

2020-09-16 Thread Michael Blanton

In my previous reply:

>> Ansible task reports them as Xeon 5130.
>> According to Intel Ark these fall in the Woodcrest family, which is
>> older than Nehalem.

Xeon 5130 "Woodcrest"
Do you need something more specific or different?

I also found a reply from you on an older thread and added it:

~~~
100   - name: Debug why parsing fails
101     debug:
102       msg:
103       - "Loop is done over {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split(';')|list|difference(['']) }}"
104       - "Actual value of server_cpu_dict before the set_fact is {{ server_cpu_dict }}"
105   - name: Parse server CPU list
106     set_fact:
107       server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}"
108     with_items: >-
109       {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split('; ')|list|difference(['']) }}
110   - debug: var=server_cpu_dict
111   - name: Convert CPU model name
112     set_fact:
113       cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
114   - debug: var=cluster_cpu_model
~~~

 [ INFO ] ["Loop is done over ['1:Intel Nehalem 
Family:vmx,nx,model_Nehalem:Nehalem:x86_64', ' 2:Secure Intel Nehalem 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64', 
' 3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64', ' 
4:Secure Intel Westmere 
Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64', 
' 5:Intel SandyBridge 
Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64', ' 6:Secure Intel 
SandyBridge 
Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64', 
' 7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64', ' 
8:Secure Intel IvyBridge 
Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64', 
' 9:Intel Haswell 
Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64', ' 10:Secure 
Intel Haswell 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64', 
' 11:Intel Broadwell 
Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64', ' 12:Secure 
Intel Broadwell 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64', 
' 13:Intel Skylake Client 
Family:vmx,nx,model_Skylake-Client:Skylake-Client,-hle,-rtm:x86_64', ' 
14:Secure Intel Skylake Client 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64', 
' 15:Intel Skylake Server 
Family:vmx,nx,model_Skylake-Server:Skylake-Server,-hle,-rtm:x86_64', ' 
16:Secure Intel Skylake Server 
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64', 
' 17:Intel Cascadelake Server 
Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64', 
' 18:Secure Intel Cascadelake Server 
Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64', 
' 1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64', ' 2:AMD 
Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64', ' 3:AMD 
EPYC:svm,nx,model_EPYC:EPYC:x86_64', ' 4:Secure AMD 
EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64', ' 1:IBM 
POWER8:powernv,model_POWER8:POWER8:ppc64', ' 2:IBM 
POWER9:powernv,model_POWER9:POWER9:ppc64', ' 1:IBM z114, 
z196:sie,model_z196-base:z196-base:s390x', ' 2:IBM zBC12, 
zEC12:sie,model_zEC12-base:zEC12-base:s390x', ' 3:IBM z13s, 
z13:sie,model_z13-base:z13-base:s390x', ' 4:IBM 
z14:sie,model_z14-base:z14-base:s390x']", 'Actual value of 
server_cpu_dict before the set_fact is {}']


[ INFO ] TASK [ovirt.hosted_engine_setup : Parse server CPU list]

[ INFO ] ok: [localhost]

[ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]

[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an 
option with an undefined variable. The error was: 'dict object' has no 
attribute ''\n\nThe error appears to be in 
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml': 
line 110, column 15, but may\nbe elsewhere in the file depending on the 
exact syntax problem.\n\nThe offending line appears to be:\n\n - debug: 
var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v' shorthand 
syntax and YAML in this task. Only one syntax may be used.\n"}





On 9/16/2020 4:14 PM, Strahil Nikolov wrote:

You didn't mention your CPU type.

Best Regards,
Strahil Nikolov






On Wednesday, September 16, 2020, 20:44:23 GMT+3, Michael Blanton wrote:





Wondering if there are any suggestions here before I wipe these nodes
and go back to another Hypervisor.




On 9/14/2020 12:59 PM, Michael Blanton wrote:


[ovirt-users] Re: New oVirt Install - Host Engine Deployment Fails

2020-09-16 Thread Strahil Nikolov via Users
You didn't mention your CPU type.

Best Regards,
Strahil Nikolov






On Wednesday, September 16, 2020, 20:44:23 GMT+3, Michael Blanton wrote:





Wondering if there are any suggestions here before I wipe these nodes 
and go back to another Hypervisor.




On 9/14/2020 12:59 PM, Michael Blanton wrote:
> Thanks for the quick response.
> 
> Ansible task reports them as Xeon 5130.
> According to Intel Ark these fall in the Woodcrest family, which is 
> older than Nehalem.
> 
> Obviously the CPUs support virtualization.
> I also confirmed the required extensions from the oVirt documents.
> 
> # grep -E 'svm|vmx' /proc/cpuinfo | grep n
> 
> Question for my lab:
> So is this a situation where "Woodcrest" is simply not in the dictionary?
> Is there a way to manually add that or "force" it, just to get the 
> engine to deploy? That way I can kick the tires on oVirt while I 
> consider an upgrade to my lab systems. Knowing ahead of time that it is 
> a "hack" and unsupported.
> 
> Question for product:
> If this is an unsupported CPU, shouldn't the installer/Hosted Engine 
> Deployment flag that at the beginning of the process, not 45 minutes 
> later when trying to move the already created VM to shared storage?
> 
> Thanks again
> 
> 
> 
> On 9/14/2020 12:45 PM, Edward Berger wrote:
>> What is the CPU?  I'm asking because you said it was old servers, and 
>> at some point oVirt started filtering out old CPU types which were no 
>> longer supported under windows.   There was also the case where if a 
>> certain bios option wasn't enabled (AES?) a westmere(supported) 
>> reported as an older model(unsupported).
>>
>>
>> On Mon, Sep 14, 2020 at 12:20 PM > > wrote:
>>
>>     I am attempting a new oVirt install. I have two nodes installed
>>     (with oVirt Node 4.4). I have NFS shared storage for the hosted 
>> engine.
>>     Both nodes are Dell quad core Xeon CPUs with 32GB of RAM. Both have
>>     been hypervisors before, XCP-NG and Proxmox. However I'm very
>>     interested to learn oVirt now.
>>
>>     The hosted engine deployment (through cockpit) fails during the
>>     "Finish" stage.
>>     I do see the initial files created on the NFS storage.
>>
>>     [ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]
>>     [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
>>     an option with an undefined variable. The error was: 'dict object'
>>     has no attribute ''\n\nThe error appears to be in
>>    
>> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
>>  
>>
>>     line 105, column 16, but may\nbe elsewhere in the file depending on
>>     the exact syntax problem.\n\nThe offending line appears to be:\n\n#
>>     - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both
>>     'k=v' shorthand syntax and YAML in this task. Only one syntax may be
>>     used.\n"}
>>
>>     2020-09-13 17:39:56,507+ ERROR ansible failed {
>>      "ansible_host": "localhost",
>>      "ansible_playbook":
>>     "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
>>      "ansible_result": {
>>          "_ansible_no_log": false,
>>          "msg": "The task includes an option with an undefined
>>     variable. The error was: 'dict object' has no attribute ''
>>     \n\nThe error appears to be in
>>    
>> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_targ
>>  
>>
>>     et_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere
>>     in the file depending on the exact syntax problem.\
>>     n\nThe offending line appears to be:\n\n#  - debug:
>>     var=server_cpu_dict\n               ^ here\n\nThere appears to be bo
>>     th 'k=v' shorthand syntax and YAML in this task. Only one syntax may
>>     be used.\n"
>>      },
>>      "ansible_task": "Convert CPU model name",
>>      "ansible_type": "task",
>>      "status": "FAILED",
>>      "task_duration": 1
>>     }
>>
>>     I can see the host engine is created and running locally on the node.
>>     I can even SSH into the HostedEngineLocal instance.
>>
>>     [root@ovirt-node01]# virsh --readonly list
>>   Id   Name                State
>>     ---
>>   1    HostedEngineLocal   running
>>
>>
>>     Looking at the "Convert CPU model name" task:
>>    
>> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml
>>  
>>
>>    
>> 
>>  
>>
>>
>>     set_fact:
>>        cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
>>
>>     server_cpu_dict is good, I can find that in the logs, cluster_cpu is
>>     undefined.
>>     But this is normal correct? The Cluster CPU type is "undefined"
>>     

[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-16 Thread Strahil Nikolov via Users
In the VM 'edit' settings you can pick the 'Host' tab on the left, specify 'Specific Host(s)', define the migration mode (I'm using both Auto and Manual as my cluster has the same CPU type), and lastly enable 'Pass-Through Host CPU' and save the VM.

Then you can power it up and it should be good to go.
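(A hedged way to confirm the setting took effect once the VM is up; the VM name is a placeholder:)

~~~
# on the host: the domain XML should now contain mode='host-passthrough' on the <cpu> element
virsh -r dumpxml <vm-name> | grep -A2 '<cpu'
# inside the guest: lscpu should report the physical CPU model instead of a generic one
lscpu | grep 'Model name'
~~~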


Best Regards,
Strahil Nikolov






On Wednesday, September 16, 2020, 17:29:10 GMT+3, Arman Khalatyan wrote:





OK, I will try it on our env with passthrough. Could you please send how you pass through the CPU? Simply via the oVirt GUI?

Rav Ya wrote on Wed., 16 Sept. 2020, 00:56:
> 
> Hi Arman, 
> 
> Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz 
> 
> The VM is configured for host CPU pass through and pinned to 6 CPUs.
> 
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                6
> On-line CPU(s) list:   0-5
> Thread(s) per core:    1
> Core(s) per socket:    1
> Socket(s):             6
> NUMA node(s):          1
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 85
> Model name:            Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
> Stepping:              4
> CPU MHz:               2593.906
> BogoMIPS:              5187.81
> Hypervisor vendor:     KVM
> Virtualization type:   full
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              4096K
> L3 cache:              16384K
> NUMA node0 CPU(s):     0-5
> 
> Thank You
> -RY
> 
> On Tue, Sep 15, 2020 at 6:21 PM Arman Khalatyan  wrote:
>> what kind of CPUs are you using?
>> 
>> 
>>> Rav Ya wrote on Tue., 15 Sept. 2020, 16:58:
>>> Hello Everyone,
>>> Please advice. Any help will be highly appreciated. Thank you in advance.
>>> Test Setup:
>>> 1. oVirt Centos 7.8 Virtulization Host
>>> 2. Guest VM Centos 7.8 (Mutiqueue enabled 6 vCPUs with 6 Rx Tx Queues)
>>> 3. The vCPUs are configured for host pass through (Pinned CPU).
>>> The Guest VM runs the application in userspace. The Application consists of 
>>> the parent process that reads packets in raw socket mode from the interface 
>>> and forwards them to child processes (~vCPUs) via IPC (shared memory – 
>>> pipes). The performance (throughput / CPU utilization) that I get with KVM 
>>> is half of what I get with VMware.
>>> 
>>> Any thoughts on the below observations? Any suggestions? 
>>> 
>>> * KVM Guest VMs degraded performance when running multi-process 
>>> applications.
>>> * High FUTEX time (Seen on the Guest VM when passing traffic).
>>> * High SY: System CPU time spent in kernel space (Seen on both 
>>> Hypervisor and the Guest VMs only when running my application.)
>>> 
>>> -Rav Ya
>>> 
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QSEUE5VM4UCRT7MT4JLGSCABK7MDXFF4/
>>> 
>> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2435WWIOIGERH2EQQ7SOQOQGO4TDLSBU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HOFDUABWYYGKEN4LAHQSZTR3JIFAUHTZ/


[ovirt-users] How to discover why a VM is getting suspended without recovery possibility?

2020-09-16 Thread Vinícius Ferrão via Users
Hello,

I have an Exchange Server VM that keeps going into a suspended state without the possibility of recovery. I need to click shutdown and then power it on again. I can't find anything useful in the logs, except in the host's "dmesg":

[47807.747606] *** Guest State ***
[47807.747633] CR0: actual=0x00050032, shadow=0x00050032, 
gh_mask=fff7
[47807.747671] CR4: actual=0x2050, shadow=0x, 
gh_mask=f871
[47807.747721] CR3 = 0x001ad002
[47807.747739] RSP = 0xc20904fa3770  RIP = 0x8000
[47807.747766] RFLAGS=0x0002 DR7 = 0x0400
[47807.747792] Sysenter RSP= CS:RIP=:
[47807.747821] CS:   sel=0x9100, attr=0x08093, limit=0x, 
base=0x7ff91000
[47807.747855] DS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[47807.747889] SS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[47807.747923] ES:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[47807.747957] FS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[47807.747991] GS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[47807.748025] GDTR:   limit=0x0057, 
base=0x80817e7d5fb0
[47807.748059] LDTR: sel=0x, attr=0x1, limit=0x000f, 
base=0x
[47807.748093] IDTR:   limit=0x, 
base=0x
[47807.748128] TR:   sel=0x0040, attr=0x0008b, limit=0x0067, 
base=0x80817e7d4000
[47807.748162] EFER = 0x  PAT = 0x0007010600070106
[47807.748189] DebugCtl = 0x  DebugExceptions = 
0x
[47807.748221] Interruptibility = 0009  ActivityState = 
[47807.748248] *** Host State ***
[47807.748263] RIP = 0xc0c65024  RSP = 0x9252bda5fc90
[47807.748290] CS=0010 SS=0018 DS= ES= FS= GS= TR=0040
[47807.748318] FSBase=7f46d462a700 GSBase=9252ffac 
TRBase=9252ffac4000
[47807.748351] GDTBase=9252ffacc000 IDTBase=ff528000
[47807.748377] CR0=80050033 CR3=00105ac8c000 CR4=001627e0
[47807.748407] Sysenter RSP= CS:RIP=0010:8f196cc0
[47807.748435] EFER = 0x0d01  PAT = 0x0007050600070106
[47807.748461] *** Control State ***
[47807.748478] PinBased=003f CPUBased=b6a1edfa SecondaryExec=0ceb
[47807.748507] EntryControls=d1ff ExitControls=002fefff
[47807.748531] ExceptionBitmap=00060042 PFECmask= PFECmatch=
[47807.748561] VMEntry: intr_info= errcode=0006 ilen=
[47807.748589] VMExit: intr_info= errcode= ilen=0001
[47807.748618] reason=8021 qualification=
[47807.748645] IDTVectoring: info= errcode=
[47807.748669] TSC Offset = 0xf9b8c8d943b6
[47807.748699] TPR Threshold = 0x00
[47807.748715] EPT pointer = 0x00105cd5601e
[47807.748735] PLE Gap=0080 Window=1000
[47807.748755] Virtual processor ID = 0x0003

So something really went crazy. The VM has been going down at least twice a day for the last 5 days.

At first I thought it was a hardware issue, so I restarted the VM on another host, and the same thing happened.

The VM is configured with 10 CPUs and 48GB of RAM, running on oVirt 4.3.10 with iSCSI storage to a FreeNAS box, where the VM disks live; there is a 300GB disk for C:\ and a 2TB disk for D:\.

Any idea on how to start troubleshooting it?
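(A hedged starting point, assuming the default oVirt 4.3 log locations; the VM name is a placeholder:)

~~~
# on the host that was running the VM when it got paused/suspended
grep -iE 'pause|abnormal|vmstop|error' /var/log/vdsm/vdsm.log | grep -i '<vm-name-or-id>'
less /var/log/libvirt/qemu/<vm-name>.log      # qemu/libvirt view of the same event
journalctl -k | grep -iB2 -A30 'Guest State'  # the kernel/KVM dump quoted above, with context
~~~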

Thanks,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X34PTPXY5GLAULTQ2ZCB3PGZA2MON5KX/


[ovirt-users] Re: Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI

2020-09-16 Thread Strahil Nikolov via Users
What is your VM's OS type ?

There are some differences per OS version -> 
https://www.redhat.com/sysadmin/dissecting-free-command

Best Regards,
Strahil Nikolov






On Wednesday, September 16, 2020, 11:13:51 GMT+3, KISHOR K wrote:





Hi,

The Memory column for a few VMs in our oVirt (Compute -> Virtual Machines -> Memory column) shows more than 90%.
But when I checked (with free and also other commands) the actual "used" memory on those VMs, it is less than 60%. What I see (from free -h) is that oVirt seems to be counting both "used" and "buff/cache" memory and reporting that in the GUI.
Shouldn't the "available" memory be considered instead, since that is the memory actually available, while cache memory is reclaimed whenever it is needed?

:~> free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       6.8Gi       724Mi       310Mi       8.1Gi       9.2Gi
Swap:            0B          0B          0B
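(A hedged illustration of the difference, using the numbers above; the awk fields assume the standard `free -b` layout:)

~~~
# "used" only:                ~6.8  / ~15.4 GiB   -> roughly 45%
# "used" + "buff/cache":     ~14.9  / ~15.4 GiB   -> roughly 95%  (close to what the GUI shows)
# based on "available":     (15.4 - 9.2) / 15.4   -> roughly 40%
free -b | awk '/^Mem:/ {printf "used-only: %.0f%%  total-minus-available: %.0f%%\n", $3*100/$2, ($2-$7)*100/$2}'
~~~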


Can someone help answer this? Thanks!

/Kishore
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PAGYQZLHQJIDBXIPAWPO4MRIO4KJETJC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUBE5OQV5G6KCEXR6CE6GINFF7GR6A3W/


[ovirt-users] Re: hosted engine migration

2020-09-16 Thread Strahil Nikolov via Users
Can you verify the HostedEngine's CPU ?

1. ssh to the host hosting the HE
2. alias virsh='virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
3. virsh dumpxml HostedEngine


Then set the alias for virsh on all Hosts and 'virsh capabilities' should show 
the Hosts' <cpu> section.
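(Put together as one copy-paste block, run on each host; the authfile path is the one from the steps above:)

~~~
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
virsh dumpxml HostedEngine | grep -A5 '<cpu'   # on the HE host: the guest CPU definition
virsh capabilities | grep -A10 '<cpu>'         # on every host: the physical CPU model and features
~~~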

Best Regards,
Strahil Nikolov






On Wednesday, September 16, 2020, 10:16:08 GMT+3, ddqlo wrote:





My gateway was not pingable. I have fixed this problem and now both nodes have 
a score(3400).
Yet, hosted engine could not be migrated. Same log in engine.log:
host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'


On 2020-09-16 02:11:09, "Strahil Nikolov" wrote:
>Both nodes have a lower than the usual score (should be 3400 ).
>Based on the score you are probably suffering from gateway-score-penalty 
>[1][2].
>Check if your gateway is pingable.
>
>Best Regards,
>Strahil Nikolov
>
>1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
>2 - /etc/ovirt-hosted-engine-ha/agent.conf 
>
>
>
>
>
>
>On Tuesday, September 15, 2020, 04:49:48 GMT+3, ddqlo wrote:
>
>
>
>
>
>--== Host node28 (id: 1) status ==--
>
>conf_on_shared_storage             : True
>Status up-to-date                  : True
>Hostname                           : node28
>Host ID                            : 1
>Engine status                      : {"reason": "vm not running on this host", 
>"health": "bad", "vm": "down_unexpected", "detail": "unknown"}
>Score                              : 1800
>stopped                            : False
>Local maintenance                  : False
>crc32                              : 4ac6105b
>local_conf_timestamp               : 1794597
>Host timestamp                     : 1794597
>Extra metadata (valid at timestamp):
>        metadata_parse_version=1
>        metadata_feature_version=1
>        timestamp=1794597 (Tue Sep 15 09:47:17 2020)
>        host-id=1
>        score=1800
>        vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
>        conf_on_shared_storage=True
>        maintenance=False
>        state=EngineDown
>        stopped=False
>
>
>--== Host node22 (id: 2) status ==--
>
>conf_on_shared_storage             : True
>Status up-to-date                  : True
>Hostname                           : node22
>Host ID                            : 2
>Engine status                      : {"health": "good", "vm": "up", "detail": 
>"Up"}
>Score                              : 1800
>stopped                            : False
>Local maintenance                  : False
>crc32                              : ffc41893
>local_conf_timestamp               : 1877876
>Host timestamp                     : 1877876
>Extra metadata (valid at timestamp):
>        metadata_parse_version=1
>        metadata_feature_version=1
>        timestamp=1877876 (Tue Sep 15 09:47:13 2020)
>        host-id=2
>        score=1800
>        vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
>        conf_on_shared_storage=True
>        maintenance=False
>        state=EngineUp
>        stopped=False
>
>
>
>
>
>
>
>On 2020-09-09 01:32:55, "Strahil Nikolov" wrote:
>>What is the output of 'hosted-engine --vm-status' on the node where the 
>>HostedEngine is running ?
>>
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>
>>
>>
>>
>>
>>On Monday, September 7, 2020, 03:53:13 GMT+3, ddqlo wrote:
>>
>>
>>
>>
>>
>>I could not find any logs because the migration button is disabled in the web 
>>UI. It seems that the engine migration operation is prevented at first. Any 
>>other ideas? Thanks!
>>
>>
>>
>>
>>
>>
>>
>>On 2020-09-01 00:06:19, "Strahil Nikolov" wrote:
>>>I'm running oVirt 4.3.10 and I can migrate my Engine from node to node.
>>>I had one similar issue , but powering off and on the HE has fixed it.
>>>
>>>You have to check the vdsm log on the source and on destination in order to 
>>>figure out what is going on.
>>>Also you might consider checking the libvirt logs on the destination.
>>>
>>>Best Regards,
>>>Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>>On Monday, August 31, 2020, 10:47:22 GMT+3, ddqlo wrote:
>>>
>>>
>>>
>>>
>>>
>>>Thanks! The scores of all nodes are not '0'. I find that someone has already 
>>>asked a question like this. It seems that  this feature has been disabled in 
>>>4.3. I am not sure if it is enabled in 4.4.
>>>
>>>
>>>On 2020-08-29 02:27:03, "Strahil Nikolov" wrote:
Have you checked under a shell the output of 'hosted-engine --vm-status' . 
Check the Score of the hosts. Maybe there is a node with score of '0' ?

Best Regards,
Strahil Nikolov






On Tuesday, August 25, 2020, 13:46:18 GMT+3, 董青龙 wrote:





Hi all,
        I have an oVirt 4.3.10.4 environment of 2 hosts. Normal VMs in this environment can be migrated, but the hosted engine VM cannot be migrated. Can anyone help? Thanks a lot!

hosts status / normal vm migration / hosted engine migration: (inline screenshots from the original message)

[ovirt-users] Re: New oVirt Install - Host Engine Deployment Fails

2020-09-16 Thread Michael Blanton
Wondering if there are any suggestions here before I wipe these nodes 
and go back to another Hypervisor.





On 9/14/2020 12:59 PM, Michael Blanton wrote:

Thanks for the quick response.

Ansible task reports them as Xeon 5130.
According to Intel Ark these fall in the Woodcrest family, which is 
older than Nehalem.


Obviously the CPUs support virtualization.
I also confirmed the required extensions from the oVirt documents.

# grep -E 'svm|vmx' /proc/cpuinfo | grep n

Question for my lab:
So is this a situation where "Woodcrest" is simply not in the dictionary?
Is there a way to manually add that or "force" it, just to get the 
engine to deploy? That way I can kick the tires on oVirt while I 
consider an upgrade to my lab systems. Knowing ahead of time that it is 
a "hack" and unsupported.


Question for product:
If this is an unsupported CPU, shouldn't the installer/Hosted Engine 
Deployment flag that at the beginning of the process, not 45 minutes 
later when trying to move the already created VM to shared storage?


Thanks again



On 9/14/2020 12:45 PM, Edward Berger wrote:
What is the CPU?  I'm asking because you said it was old servers, and 
at some point oVirt started filtering out old CPU types which were no 
longer supported under windows.   There was also the case where if a 
certain bios option wasn't enabled (AES?) a westmere(supported) 
reported as an older model(unsupported).



On Mon, Sep 14, 2020 at 12:20 PM > wrote:


    I am attempting a new oVirt install. I have two nodes installed
    (with oVirt Node 4.4). I have NFS shared storage for the hosted 
engine.

    Both nodes are Dell quad core Xeon CPUs with 32GB of RAM. Both have
    been hypervisors before, XCP-NG and Proxmox. However I'm very
    interested to learn oVirt now.

    The hosted engine deployment (through cockpit) fails during the
    "Finish" stage.
    I do see the initial files created on the NFS storage.

    [ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]
    [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
    an option with an undefined variable. The error was: 'dict object'
    has no attribute ''\n\nThe error appears to be in

'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml': 


    line 105, column 16, but may\nbe elsewhere in the file depending on
    the exact syntax problem.\n\nThe offending line appears to be:\n\n#
    - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both
    'k=v' shorthand syntax and YAML in this task. Only one syntax may be
    used.\n"}

    2020-09-13 17:39:56,507+ ERROR ansible failed {
     "ansible_host": "localhost",
     "ansible_playbook":
    "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
     "ansible_result": {
         "_ansible_no_log": false,
         "msg": "The task includes an option with an undefined
    variable. The error was: 'dict object' has no attribute ''
    \n\nThe error appears to be in

'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_targ 


    et_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere
    in the file depending on the exact syntax problem.\
    n\nThe offending line appears to be:\n\n#  - debug:
    var=server_cpu_dict\n               ^ here\n\nThere appears to be bo
    th 'k=v' shorthand syntax and YAML in this task. Only one syntax may
    be used.\n"
     },
     "ansible_task": "Convert CPU model name",
     "ansible_type": "task",
     "status": "FAILED",
     "task_duration": 1
    }

    I can see the host engine is created and running locally on the node.
    I can even SSH into the HostedEngineLocal instance.

    [root@ovirt-node01]# virsh --readonly list
  Id   Name                State
    ---
  1    HostedEngineLocal   running


    Looking at the "Convert CPU model name" task:

https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml 


 



    set_fact:
       cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"

    server_cpu_dict looks good, I can find it in the logs; cluster_cpu is
    undefined.
    But this is normal, correct? The cluster CPU type is "undefined"
    until the first host is added to the cluster.
    The error makes it seem that server_cpu_dict, and not
    cluster_cpu.type, is the problem.
    I'm not sure this is really the problem, but that is the only undefined variable I can find.
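
    For what it's worth, here is a tiny self-contained sketch of why the lookup
    can blow up even when server_cpu_dict itself is fine; the dict entries are
    made-up samples, not the real mapping, and the point is only that an
    empty/undefined key, not the dict, is what makes the set_fact fail:

    # Hypothetical illustration: the dict is fine, the lookup key is not.
    server_cpu_dict = {
        "Intel Westmere Family": "Westmere",              # sample entries only,
        "Intel Skylake Server Family": "Skylake-Server",  # not the real mapping
    }

    cluster_cpu_type = ""   # what an undefined/empty cluster CPU type collapses to

    try:
        cluster_cpu_model = server_cpu_dict[cluster_cpu_type]
    except KeyError:
        print("lookup fails because the key is empty, not because the dict is missing")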


    Any advice or recommendation is appreciated
    -Thanks in advance
[ovirt-users] Re: Bad volume specification

2020-09-16 Thread Facundo Garat
The VM has one snapshot which I can't delete because it shows a similar
error. That doesn't allow me to attach the disks to another VM. This VM
will boot ok if the disks are deactivated.

Find the engine.log attached.

The steps associated with the engine log:

   - The VM is booted from CD with all disks deactivated
   - Try to attach all three disks (fails)
   - Power off the VM
   - Activate all three disks
   - Try to delete the snapshot.

Thanks.





On Wed, Sep 16, 2020 at 9:35 AM Ahmad Khiet  wrote:

> Hi,
>
> can you please attach the engine log? what steps did you make before
> this error is shown? did you tried to create a snapshot and failed before
>
>
> On Wed, Sep 16, 2020 at 7:49 AM Strahil Nikolov via Users 
> wrote:
>
>> What happens if you create another VM and attach the disks to it ?
>> Does it boot properly ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Wednesday, September 16, 2020 at 02:19:26 GMT+3, Facundo Garat <
>> fga...@gmail.com> wrote:
>>
>>
>>
>>
>>
>>
>> Hi all,
>>  I'm having some issues with one VM. The VM won't start and it's showing
>> problems with the virtual disks so I started the VM without any disks and
>> trying to hot adding the disk and that's fail too.
>>
>>  The servers are connected through FC; all the other VMs are working fine.
>>
>>   Any ideas?
>>
>> Thanks!!
>>
>> PS: The engine.log is showing this:
>> 2020-09-15 20:10:37,926-03 INFO
>>  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default
>> task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object
>> 'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]',
>> sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
>> 2020-09-15 20:10:38,082-03 INFO
>>  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
>> (EE-ManagedThreadFactory-engine-Thread-36528)
>> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command:
>> HotPlugDiskToVmCommand internal: false. Entities affected :  ID:
>> 71db02c2-df29-4552-8a7e-cb8bb429a2ac Type: VMAction group
>> CONFIGURE_VM_STORAGE with role type USER
>> 2020-09-15 20:10:38,117-03 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-36528)
>> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START,
>> HotPlugDiskVDSCommand(HostName = nodo2,
>> HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
>> vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
>> diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id:
>> f57ee9e
>> 2020-09-15 20:10:38,125-03 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-36528)
>> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug: > encoding="UTF-8"?>
>>   
>> 
>>   
>>   > dev="/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f">
>> 
>>   
>>   > cache="none"/>
>>   
>>   f5bd2e15-a1ab-4724-883a-988b4dc7985b
>> 
>>   
>>   http://ovirt.org/vm/1.0;>
>> 
>>   
>>
>> 0001-0001-0001-0001-0311
>>
>> bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
>>
>> f5bd2e15-a1ab-4724-883a-988b4dc7985b
>>
>> 55327311-e47c-46b5-b168-258c5924757b
>>   
>> 
>>   
>> 
>>
>> 2020-09-15 20:10:38,289-03 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-36528)
>> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
>> 2020-09-15 20:10:38,295-03 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engine-Thread-36528)
>> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
>> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS
>> failed: General Exception: ("Bad volume specification {'device': 'disk',
>> 'type': 'disk', 'diskType': 'block', 'specParams': {}, 'alias':
>> 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
>> '55327311-e47c-46b5-b168-258c5924757b', 'imageID':
>> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
>> '0001-0001-0001-0001-0311', 'volumeID':
>> 'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
>> '/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
>> 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
>> 'none', 'iface': 'virtio', 'name': 'vda', 'serial':
>> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
>> 2020-09-15 20:10:38,295-03 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-36528)
>> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return
>> value 'StatusOnlyReturn [status=Status [code=100, message=General
>> Exception: ("Bad volume 

[ovirt-users] Re: Moving VM disks from one storage domain to another. Automate?

2020-09-16 Thread Ahmad Khiet
Hi,

I suggest using the REST API to do what you described, or the Python SDK.
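
A rough sketch of that with the Python SDK (ovirt-engine-sdk-python) could look
like the one below. The connection details, VM name and target storage domain
name are placeholders, and the move behaviour should be verified on a single
non-critical VM on your own version before pointing anything at 100 VMs:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details - adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
system = connection.system_service()
disks_service = system.disks_service()

# Resolve the target storage domain once (placeholder name).
target_sd = system.storage_domains_service().list(search='name=target-domain')[0]

# Placeholder VM name; loop over your list of VMs here instead.
vm = system.vms_service().list(search='name=myvm')[0]
attachments = system.vms_service().vm_service(vm.id).disk_attachments_service().list()

for attachment in attachments:
    disk_service = disks_service.disk_service(attachment.disk.id)
    disk_service.move(storage_domain=types.StorageDomain(id=target_sd.id))
    # Move serially: wait for the disk to come back to OK before the next one,
    # to limit throughput and I/O impact.
    while disk_service.get().status != types.DiskStatus.OK:
        time.sleep(10)

connection.close()

Moving an attached disk of a running VM this way should correspond to what the
Admin Portal does for a live storage migration, but please confirm that on your
setup first.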

have a nice day


On Tue, Sep 15, 2020 at 10:53 PM Green, Jacob Allen /C <
jacob.a.gr...@exxonmobil.com> wrote:

>I am looking for an automated way, via Ansible to move a VM
> disk from one storage domain to another. I found the following,
> https://docs.ansible.com/ansible/latest/modules/ovirt_disk_module.html
> and while it mentions copying a VM disk image from one domain to another it
> does not mention a live storage migration. Which is what I am looking to
> do. I want to take roughly 100 VMs and move their disk images from one
> domain to another that is available to the datacenter in some
> automated/scripted fashion. I am just curious if anyone out there has had
> to do this and how they tackled it. Or perhaps I am missing some easy
> obvious way, other than clicking all the disks and clicking move. However,
> from the looks of it, if I did select all the disks and click move, it
> appears RHV tries to do them all at once, which is probably not ideal; I
> would like it to move the disks serially, one after another, to
> conserve throughput and IO.
>
>
>
> I also did not see anything on Ansible galaxy or the ovirt github that
> would do this.
>
>
>
>
>
>
>
> Thank you.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRKDNUBXH7HXKPNBIW6ZO2U36XIANLGO/
>


-- 

Ahmad Khiet

Red Hat 

akh...@redhat.com
M: +972-54-6225629



[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-16 Thread Arman Khalatyan
OK, will try on our env with passthrough. Could you please explain how you
pass through the CPU? Simply via the oVirt GUI?

Rav Ya  wrote on Wed, Sep 16, 2020 at 00:56:

>
> Hi Arman,
>
> Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
>
> *The VM is configured for host CPU pass through and pinned to 6 CPUs.*
>
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):6
> On-line CPU(s) list:   0-5
> Thread(s) per core:1
> Core(s) per socket:1
> Socket(s): 6
> NUMA node(s):  1
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 85
> Model name:Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
> Stepping:  4
> CPU MHz:   2593.906
> BogoMIPS:  5187.81
> Hypervisor vendor: KVM
> Virtualization type:   full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  4096K
> L3 cache:  16384K
> NUMA node0 CPU(s): 0-5
>
> Thank You
> -RY
>
> On Tue, Sep 15, 2020 at 6:21 PM Arman Khalatyan  wrote:
>
>> what kind of CPUs are you using?
>>
>>
>> Rav Ya  wrote on Tue, Sep 15, 2020 at 16:58:
>>
>>> Hello Everyone,
>>> Please advise. Any help will be highly appreciated. Thank you in advance.
>>> Test Setup:
>>>
>>>1. oVirt CentOS 7.8 virtualization host
>>>2. Guest VM CentOS 7.8 (multiqueue enabled, 6 vCPUs with 6 Rx/Tx queues)
>>>3. The vCPUs are configured for host passthrough (pinned CPUs).
>>>
>>> The Guest VM runs the application in userspace. The Application consists
>>> of the parent process that reads packets in raw socket mode from the
>>> interface and forwards them to child processes (~vCPUs) via IPC (shared
>>> memory – pipes). *The performance (throughput / CPU utilization) that I
>>> get with KVM is half of what I get with VMware.*
>>>
>>> Any thoughts on the below observations? Any suggestions?
>>>
>>>
>>>- KVM Guest VMs degraded performance when running multi-process
>>>applications.
>>>- High FUTEX time (Seen on the Guest VM when passing traffic).
>>>- *High SY: *System CPU time spent in kernel space (Seen on both
>>>Hypervisor and the Guest VMs only when running my application.)
>>>
>>>
>>> -Rav Ya
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QSEUE5VM4UCRT7MT4JLGSCABK7MDXFF4/
>>>
>>


[ovirt-users] Re: Bad volume specification

2020-09-16 Thread Ahmad Khiet
Hi,

Can you please attach the engine log? What steps did you take before
this error appeared? Did you try to create a snapshot that failed before?


On Wed, Sep 16, 2020 at 7:49 AM Strahil Nikolov via Users 
wrote:

> What happens if you create another VM and attach the disks to it ?
> Does it boot properly ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, September 16, 2020 at 02:19:26 GMT+3, Facundo Garat <
> fga...@gmail.com> wrote:
>
>
>
>
>
>
> Hi all,
>  I'm having some issues with one VM. The VM won't start and it's showing
> problems with the virtual disks, so I started the VM without any disks and
> tried hot-adding the disks, and that fails too.
>
>  The servers are connected through FC; all the other VMs are working fine.
>
>   Any ideas?
>
> Thanks!!
>
> PS: The engine.log is showing this:
> 2020-09-15 20:10:37,926-03 INFO
>  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default
> task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]',
> sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
> 2020-09-15 20:10:38,082-03 INFO
>  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command:
> HotPlugDiskToVmCommand internal: false. Entities affected :  ID:
> 71db02c2-df29-4552-8a7e-cb8bb429a2ac Type: VMAction group
> CONFIGURE_VM_STORAGE with role type USER
> 2020-09-15 20:10:38,117-03 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START,
> HotPlugDiskVDSCommand(HostName = nodo2,
> HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
> vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
> diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id:
> f57ee9e
> 2020-09-15 20:10:38,125-03 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug:  encoding="UTF-8"?>
>   
> 
>   
>dev="/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f">
> 
>   
>cache="none"/>
>   
>   f5bd2e15-a1ab-4724-883a-988b4dc7985b
> 
>   
>   http://ovirt.org/vm/1.0;>
> 
>   
>
> 0001-0001-0001-0001-0311
>
> bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
>
> f5bd2e15-a1ab-4724-883a-988b4dc7985b
>
> 55327311-e47c-46b5-b168-258c5924757b
>   
> 
>   
> 
>
> 2020-09-15 20:10:38,289-03 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
> 2020-09-15 20:10:38,295-03 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS
> failed: General Exception: ("Bad volume specification {'device': 'disk',
> 'type': 'disk', 'diskType': 'block', 'specParams': {}, 'alias':
> 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
> '55327311-e47c-46b5-b168-258c5924757b', 'imageID':
> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
> '0001-0001-0001-0001-0311', 'volumeID':
> 'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
> '/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
> 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
> 'none', 'iface': 'virtio', 'name': 'vda', 'serial':
> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
> 2020-09-15 20:10:38,295-03 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return
> value 'StatusOnlyReturn [status=Status [code=100, message=General
> Exception: ("Bad volume specification {'device': 'disk', 'type': 'disk',
> 'diskType': 'block', 'specParams': {}, 'alias':
> 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
> '55327311-e47c-46b5-b168-258c5924757b', 'imageID':
> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
> '0001-0001-0001-0001-0311', 'volumeID':
> 'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
> '/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
> 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
> 'none', 'iface': 'virtio', 'name': 'vda', 'serial':

[ovirt-users] Ovirt legacy migration policy is missing

2020-09-16 Thread ovirtand-cnj342--- via Users
Hello,

We have many cases of failed migrations, and reducing the load on the 
respective VMs made migration possible. Using "Suspend workload when needed" 
did not help with the migrations either; they only worked when stopping 
services on the VMs, thus reducing the load.

I was therefore trying to tweak the migration downtime setting. The suggested 
way of doing this is by using the Legacy migration policy.

That being said, I am unable to find the Legacy migration policy either in the VM Host 
tab or in the cluster's "Migration policy" tab. I cannot find any mention of 
it being left out of our current version (4.3.6.1) - moreover, the setting "Use 
custom migration downtime" (only available when using the Legacy migration 
policy) does exist in the interface - but I cannot find the Legacy policy 
itself.

The only policies present are Minimal downtime (default for cluster), Post-copy 
migration and Suspend workload when needed. 

Is there a way to make the Legacy migration policy available if it's missing?

Thanks!

Best regards,
Andy

PS. Setting DefaultMaximumMigrationDowntime=2 as default did not have any 
effect, presumably because the system uses one of the 3 pre-defined policies 
and ignores the "default"


[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-16 Thread Dominik Holler
On Tue, Sep 15, 2020 at 5:02 PM Ravin Ya  wrote:

> Hello Everyone,
>
> Please advise. Any help will be highly appreciated. Thank you in advance.
>
> Test Setup:
> oVirt CentOS 7.8 virtualization host
> Guest VM CentOS 7.8 (multiqueue enabled, 6 vCPUs with 6 Rx/Tx queues)
> The vCPUs are configured for host passthrough (pinned CPUs).
>
> The Guest VM runs the application in userspace. The Application consists
> of the parent process that reads packets in raw socket mode from the
> interface


Is this interface a virtual NIC? If so, please make sure to disable network
filtering on the used vNIC profile before starting the VM.
If there is a way to use an SR-IOV / passthrough vNIC profile, this would
provide nearly bare-metal performance for the virtual NIC.
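
To see which network filter each profile currently carries (the default
profiles usually come with vdsm-no-mac-spoofing), a small hedged sketch with
the Python SDK could be the following; the connection details are placeholders,
and clearing the filter itself can then be done from the vNIC profile dialog in
the Admin Portal:

import ovirtsdk4 as sdk

# Placeholder connection details - adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
profiles_service = connection.system_service().vnic_profiles_service()

for profile in profiles_service.list():
    nf = profile.network_filter
    # A profile with no filter reports None here.
    print(profile.name, '->', nf.id if nf else 'no network filter')

connection.close()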


> and forwards them to child processes (~vCPUs) via IPC (shared memory –
> pipes). The performance (throughput / CPU utilization) that I get with KVM
> is half of what I get with VMware.
>
> Any thoughts on the below observations? Any suggestions?
> KVM Guest VMs degraded performance when running multi-process applications.
> High FUTEX time (Seen on the Guest VM when passing traffic).
> High SY: System CPU time spent in kernel space (Seen on both Hypervisor
> and the Guest VMs only when running my application.)
>
> -Rav Ya
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGR367Q6QGSURVY6552JMYESGE2K3H2Y/
>


[ovirt-users] Re: cpu QoS doesn't work

2020-09-16 Thread Arik Hadas
On Tue, Sep 15, 2020 at 10:01 AM  wrote:

> Hello,
>
> I set CPU QoS to 10 and applied it to a VM on oVirt 4.2, but it doesn't seem
> to work.
> Compared to a VM without QoS, there wasn't any difference in CPU usage.
> Also, there wasn't any ... or ... field
> related to QoS in the libvirt file.
>
> Is this the expected result?
>

Well, it depends on the specific properties of your environment - it could
be that with certain values the CPU usage wouldn't change.
As for the last part, yeah - we don't set it in the cputune part of the
domain xml but rather in the metadata section.
You should look for it within the  section - mom is
supposed to detect this and make the changes in libvirt [1]

[1]
https://github.com/oVirt/mom/blob/2438f74cb9fd67f3cc5ab4fb32479b62f08001cf/mom/HypervisorInterfaces/libvirtInterface.py#L300
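
If you want to double-check where the setting actually lands, a rough sketch
(run on the hypervisor host that runs the VM; the VM name is a placeholder and
libvirt-python is assumed to be installed) could dump the domain XML and print
the lines that look related:

import libvirt

# Read-only connection, the same thing "virsh -r" uses on oVirt hosts.
conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('my-vm')   # placeholder VM name as known to libvirt

xml = dom.XMLDesc(0)
# The QoS value is expected in the metadata section rather than in cputune.
for line in xml.splitlines():
    lowered = line.lower()
    if 'metadata' in lowered or 'qos' in lowered or 'cputune' in lowered:
        print(line.strip())

conn.close()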


>
> Thanks,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
>


[ovirt-users] Re: OVN Geneve tunnels not been established

2020-09-16 Thread Dominik Holler
On Tue, Sep 15, 2020 at 6:53 PM Konstantinos Betsis 
wrote:

> So a new test-net was created under DC01 and was depicted in the networks
> tab under both DC01 and DC02.
> I believe for some reason networks are duplicated across DCs, maybe for future
> use? Don't know.
> If one tries to delete the network from the other DC it gets an error,
> while if it is deleted from the DC it was initially created in, it gets deleted from both.
>
>
In oVirt a logical network is an entity in a data center. If the automatic
synchronization is enabled on the ovirt-provider-ovn entity in oVirt
Engine, the OVN networks are reflected to all data centers. If you do not
like this, you can disable the automatic synchronization of the
ovirt-provider-ovn in Admin Portal.


> From the DC01-node02 i get the following errors:
>
> 2020-09-15T16:48:49.904Z|22748|main|INFO|OVNSB commit failed, force
> recompute next time.
> 2020-09-15T16:48:49.905Z|22749|binding|INFO|Claiming lport
> 9a6cc189-0934-4468-97ae-09f90fa4598d for this chassis.
> 2020-09-15T16:48:49.905Z|22750|binding|INFO|9a6cc189-0934-4468-97ae-09f90fa4598d:
> Claiming 56:6f:77:61:00:06
> 2020-09-15T16:48:49.905Z|22751|binding|INFO|Claiming lport
> 16162721-c815-4cd8-ab57-f22e6e482c7f for this chassis.
> 2020-09-15T16:48:49.905Z|22752|binding|INFO|16162721-c815-4cd8-ab57-f22e6e482c7f:
> Claiming 56:6f:77:61:00:03
> 2020-09-15T16:48:49.905Z|22753|binding|INFO|Claiming lport
> b88de6e4-6d77-4e42-b734-4cc676728910 for this chassis.
> 2020-09-15T16:48:49.905Z|22754|binding|INFO|b88de6e4-6d77-4e42-b734-4cc676728910:
> Claiming 56:6f:77:61:00:15
> 2020-09-15T16:48:49.905Z|22755|binding|INFO|Claiming lport
> b7ff5f2b-4bb4-4250-8ad8-8a7e19d2b4c7 for this chassis.
> 2020-09-15T16:48:49.905Z|22756|binding|INFO|b7ff5f2b-4bb4-4250-8ad8-8a7e19d2b4c7:
> Claiming 56:6f:77:61:00:0d
> 2020-09-15T16:48:49.905Z|22757|binding|INFO|Claiming lport
> 5d03a7a5-82a1-40f9-b50c-353a26167fa3 for this chassis.
> 2020-09-15T16:48:49.905Z|22758|binding|INFO|5d03a7a5-82a1-40f9-b50c-353a26167fa3:
> Claiming 56:6f:77:61:00:02
> 2020-09-15T16:48:49.905Z|22759|binding|INFO|Claiming lport
> 12d829c3-64eb-44bc-a0bd-d7219991f35f for this chassis.
> 2020-09-15T16:48:49.905Z|22760|binding|INFO|12d829c3-64eb-44bc-a0bd-d7219991f35f:
> Claiming 56:6f:77:61:00:1c
> 2020-09-15T16:48:49.959Z|22761|main|INFO|OVNSB commit failed, force
> recompute next time.
> 2020-09-15T16:48:49.960Z|22762|binding|INFO|Claiming lport
> 9a6cc189-0934-4468-97ae-09f90fa4598d for this chassis.
> 2020-09-15T16:48:49.960Z|22763|binding|INFO|9a6cc189-0934-4468-97ae-09f90fa4598d:
> Claiming 56:6f:77:61:00:06
> 2020-09-15T16:48:49.960Z|22764|binding|INFO|Claiming lport
> 16162721-c815-4cd8-ab57-f22e6e482c7f for this chassis.
> 2020-09-15T16:48:49.960Z|22765|binding|INFO|16162721-c815-4cd8-ab57-f22e6e482c7f:
> Claiming 56:6f:77:61:00:03
> 2020-09-15T16:48:49.960Z|22766|binding|INFO|Claiming lport
> b88de6e4-6d77-4e42-b734-4cc676728910 for this chassis.
> 2020-09-15T16:48:49.960Z|22767|binding|INFO|b88de6e4-6d77-4e42-b734-4cc676728910:
> Claiming 56:6f:77:61:00:15
> 2020-09-15T16:48:49.960Z|22768|binding|INFO|Claiming lport
> b7ff5f2b-4bb4-4250-8ad8-8a7e19d2b4c7 for this chassis.
> 2020-09-15T16:48:49.960Z|22769|binding|INFO|b7ff5f2b-4bb4-4250-8ad8-8a7e19d2b4c7:
> Claiming 56:6f:77:61:00:0d
> 2020-09-15T16:48:49.960Z|22770|binding|INFO|Claiming lport
> 5d03a7a5-82a1-40f9-b50c-353a26167fa3 for this chassis.
> 2020-09-15T16:48:49.960Z|22771|binding|INFO|5d03a7a5-82a1-40f9-b50c-353a26167fa3:
> Claiming 56:6f:77:61:00:02
> 2020-09-15T16:48:49.960Z|22772|binding|INFO|Claiming lport
> 12d829c3-64eb-44bc-a0bd-d7219991f35f for this chassis.
> 2020-09-15T16:48:49.960Z|22773|binding|INFO|12d829c3-64eb-44bc-a0bd-d7219991f35f:
> Claiming 56:6f:77:61:00:1c
>
>
> And this repeats forever.
>
>
Looks like the southbound db is confused.

Can you try to delete all chassis listed by
sudo ovn-sbctl show
via
sudo /usr/share/ovirt-provider-ovn/scripts/remove_chassis.sh  dev-host0
?
If the script remove_chassis.sh is not installed, you can use
https://github.com/oVirt/ovirt-provider-ovn/blob/master/provider/scripts/remove_chassis.py
instead.

Can you please also share the output of
ovs-vsctl list Interface
on the host which produced the logfile above?




> The connection to ovn-sbctl is OK and the Geneve tunnels show up
> under ovs-vsctl OK.
> VMs are still not able to ping each other.
>
> On Tue, Sep 15, 2020 at 7:22 PM Dominik Holler  wrote:
>
>>
>>
>> On Tue, Sep 15, 2020 at 6:18 PM Konstantinos Betsis 
>> wrote:
>>
>>> Hi Dominik
>>>
>>> Fixed the issue.
>>>
>>
>> Thanks.
>>
>>
>>> I believe the 
>>> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
>>> needed updating as well.
>>> The package is upgraded to the latest version.
>>>
>>> Once the provider was updated with the following it functioned perfectly:
>>>
>>> Name: ovirt-provider-ovn
>>> Description: oVirt network provider for OVN
>>> Type: External Network Provider
>>> Network Plugin: oVirt Network Provider for 

[ovirt-users] Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI

2020-09-16 Thread KISHOR K
Hi,

The Memory field/column for a few of the VMs in our oVirt (Compute -> Virtual Machines -> 
Memory column) shows more than 90%.
But when I checked (with free and also other commands) the actual "used" 
memory on those VMs, it is less than 60%. What I see (from free -h) is that 
oVirt seems to be counting both "used" and "buff/cache" memory and is 
reporting that in the GUI.
Shouldn't the "available" memory be considered instead, since that is the 
actual memory available, and the cache is something that is reclaimed as 
needed?

:~> free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       6.8Gi       724Mi       310Mi       8.1Gi       9.2Gi
Swap:            0B          0B          0B
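
For reference, a small sketch contrasting the two readings (run inside the
guest; it only parses /proc/meminfo) could be:

# Contrast "total minus free" (counts buff/cache as used) with
# "total minus available" (what free -h reports as available).
def read_meminfo():
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, rest = line.split(':', 1)
            info[key] = int(rest.split()[0])   # values are in kB
    return info

m = read_meminfo()
total = m['MemTotal']
used_including_cache = total - m['MemFree']
used_excluding_cache = total - m['MemAvailable']

print('used incl. cache: %.1f%%' % (100.0 * used_including_cache / total))
print('used excl. cache: %.1f%%' % (100.0 * used_excluding_cache / total))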


Can someone help answer this? Thanks!

/Kishore


[ovirt-users] Re: Upgrading self-Hosted engine from 4.3 to oVirt 4.4

2020-09-16 Thread Yedidyah Bar David
On Wed, Sep 16, 2020 at 10:46 AM Adam Xu  wrote:
>
> 在 2020/9/16 15:12, Yedidyah Bar David 写道:
> > On Wed, Sep 16, 2020 at 6:10 AM Adam Xu  wrote:
> >> Hi ovirt
> >>
> >> I just try to upgrade a self-Hosted engine from 4.3.10 to 4.4.1.4.  I 
> >> followed the step in the document:
> >>
> >> https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
> >>
> >> the old 4.3 env has a FC storage as engine storage domain and I have 
> >> created a new FC storage vv for the new storage domain to be used in the 
> >> next steps.
> >>
> >> I backup the old 4.3 env and prepare a total new host to restore the env.
> >>
> >> in charter 4.4 step 8, it said:
> >>
> >> "During the deployment you need to provide a new storage domain. The 
> >> deployment script renames the 4.3 storage domain and retains its data."
> >>
> >> it does rename the old storage domain. but it didn't let me choose a new 
> >> storage domain during the deployment. So the new enigne just deployed in 
> >> the new host's local storage and can not move to the FC storage domain.
> >>
> >> Can anyone tell me what the problem is?
> > What do you mean in "deployed in the new host's local storage"?
> >
> > Did deploy finish successfully?
>
> I think it was not finished yet.

You did 'hosted-engine --deploy --restore-from-file=something', right?

Did this finish?

What are the last few lines of the output?

Please also check/share logs from /var/log/ovirt-hosted-engine-setup/*
(including subdirs).

> It didn't tell me to choose a new
> storage domain and just give me the new hosts fqdn as the engine's URL.
> like host6.example.com:6900 .

Yes, that's temporarily, to let you access the engine VM (on the local network).

>
> I can login use the host6.example.com:6900 and I saw the engine vm ran
> in host6's /tmp dir.
>
> >
> > HE deploy (since 4.3) first creates a VM for the engine on local
> > storage, then prompts you to provide the storage you want to use, and
> > then moves the VM disk image there.
> >
> > Best regards,
> >
> >> Thanks
> >>
> >> --
> >> Adam Xu
> >>
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct: 
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives: 
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHDGJB2ZAFS7AJZYS4F5BAMC2ZVKCYY4/
> >
> >
> --
> Adam Xu
> Phone: 86-512-8777-3585
> Adagene (Suzhou) Limited
> C14, No. 218, Xinghu Street, Suzhou Industrial Park
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RLOBPKLW7OBZR5K4AUQWG5MZPYNYUDMI/



-- 
Didi


[ovirt-users] Re: Unable to create a node in oVirt 4.0

2020-09-16 Thread Rodrigo G . López

Thank you Didi,


We'll look for an alternative, maybe even migrate to the more recent, 
supported version.

That would be the best option IMO.



Best regards,

-rodri


On 9/16/20 9:39 AM, Yedidyah Bar David wrote:

Hi!

On Wed, Sep 16, 2020 at 10:25 AM Rodrigo G. López  wrote:

Hello,

Any idea about this problem? I don't know if the email got through to the list.

It did


Should I join the #vdsm channel and discuss it there? Is there any other place 
specific to vdsm where I could report this?

You are welcome to join #ovirt channel, but I think the list is ok as well.




Cheers,

-rodri


On 9/15/20 9:55 AM, Rodrigo G. López wrote:

Hi there,

We are trying to setup a node in the same machine where we are running the 
engine,

This is called "all-in-one". It used to be supported until 3.6, and
removed from 4.0 and later.


and noticed that the vdsmd service fails because the supervdsmd daemon can't 
authenticate against libvirtd afaict.

The error is the following on supervdsmd:

 daemonAdapter[17803]: libvirt: XML-RPC error : authentication failed: 
authentication failed
 ...

and in libvirtd:

 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.410+: 
17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
Input/output error
 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.612+: 
17776: error : virNetSASLSessionListMechanisms:393 : internal error: cannot 
list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in 
server.c near line 1757)
 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.612+: 
17776: error : remoteDispatchAuthSaslInit:3440 : authentication failed: 
authentication failed
 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.612+: 
17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
Input/output error
 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.814+: 
17778: error : virNetSASLSessionListMechanisms:393 : internal error: cannot 
list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in 
server.c near line 1757)
 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.814+: 
17778: error : remoteDispatchAuthSaslInit:3440 : authentication failed: 
authentication failed
 Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.815+: 
17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
Input/output error
 Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 07:34:19.017+: 
17780: error : virNetSASLSessionListMechanisms:393 : internal error: cannot 
list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in 
server.c near line 1757)
 Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 07:34:19.017+: 
17780: error : remoteDispatchAuthSaslInit:3440 : authentication failed: 
authentication failed
 Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 07:34:19.020+: 
17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
Input/output error


Is there any way to work around that?

We have working infra on top of 4.0 in CentOS 7 systems, and we would like to 
replicate the exact same environment for availability purposes, in case 
anything bad happened.

4.0 is old and unsupported.

I suggest to try 4.4.

You can try checking your existing setup and try to see if someone
made there specific customizations to make this work. Or perhaps you
have site-wide policy that is changing your configuration a bit (e.g.
around libvirt)?

That said, people do report occasionally that all-in-one still works,
and we even fixed a bug for it in imageio recently. That said, it's
still considered unsupported, and the "official" answer is "Use
hosted-engine (with gluster), aka HCI" (or do not use oVirt at all -
there isn't much advantage in it for a single host, compared e.g. with
virt-manager).

Best regards,




[ovirt-users] Re: hosted engine migration

2020-09-16 Thread ddqlo
My gateway was not pingable. I have fixed this problem and now both nodes have 
a score (3400).
Yet, the hosted engine still cannot be migrated. Same log in engine.log:
host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'
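
Since the scheduler log blames the CPU filter, it may be worth comparing the
cluster CPU type with what each host reports. A rough sketch with the Python
SDK (connection details are placeholders) could be:

import ovirtsdk4 as sdk

# Placeholder connection details - adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
system = connection.system_service()

for cluster in system.clusters_service().list():
    print('cluster', cluster.name, 'cpu type:', cluster.cpu.type if cluster.cpu else None)

for host in system.hosts_service().list():
    cpu = host.cpu
    print('host', host.name,
          'cpu model:', getattr(cpu, 'name', None) if cpu else None,
          'type:', getattr(cpu, 'type', None) if cpu else None)

connection.close()

If one host exposes a different or older CPU model (for example because of a
BIOS setting), the CPU filter may exclude it even though its score is 3400.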








On 2020-09-16 02:11:09, "Strahil Nikolov"  wrote:
>Both nodes have a lower-than-usual score (it should be 3400).
>Based on the score you are probably suffering from gateway-score-penalty 
>[1][2].
>Check if your gateway is pingable.
>
>Best Regards,
>Strahil Nikolov
>
>1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
>2 - /etc/ovirt-hosted-engine-ha/agent.conf 
>
>
>
>
>
>
>On Tuesday, September 15, 2020 at 04:49:48 GMT+3, ddqlo  wrote:
>
>
>
>
>
>--== Host node28 (id: 1) status ==--
>
>conf_on_shared_storage : True
>Status up-to-date  : True
>Hostname   : node28
>Host ID: 1
>Engine status  : {"reason": "vm not running on this host", 
>"health": "bad", "vm": "down_unexpected", "detail": "unknown"}
>Score  : 1800
>stopped: False
>Local maintenance  : False
>crc32  : 4ac6105b
>local_conf_timestamp   : 1794597
>Host timestamp : 1794597
>Extra metadata (valid at timestamp):
>metadata_parse_version=1
>metadata_feature_version=1
>timestamp=1794597 (Tue Sep 15 09:47:17 2020)
>host-id=1
>score=1800
>vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
>conf_on_shared_storage=True
>maintenance=False
>state=EngineDown
>stopped=False
>
>
>--== Host node22 (id: 2) status ==--
>
>conf_on_shared_storage : True
>Status up-to-date  : True
>Hostname   : node22
>Host ID: 2
>Engine status  : {"health": "good", "vm": "up", "detail": 
>"Up"}
>Score  : 1800
>stopped: False
>Local maintenance  : False
>crc32  : ffc41893
>local_conf_timestamp   : 1877876
>Host timestamp : 1877876
>Extra metadata (valid at timestamp):
>metadata_parse_version=1
>metadata_feature_version=1
>timestamp=1877876 (Tue Sep 15 09:47:13 2020)
>host-id=2
>score=1800
>vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
>conf_on_shared_storage=True
>maintenance=False
>state=EngineUp
>stopped=False
>
>
>
>
>
>
>
On 2020-09-09 01:32:55, "Strahil Nikolov"  wrote:
>>What is the output of 'hosted-engine --vm-status' on the node where the 
>>HostedEngine is running ?
>>
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>
>>
>>
>>
>>
>>On Monday, September 7, 2020 at 03:53:13 GMT+3, ddqlo  wrote:
>>
>>
>>
>>
>>
>>I could not find any logs because the migration button is disabled in the web 
>>UI. It seems the engine migration is blocked before it even starts. Any 
>>other ideas? Thanks!
>>
>>
>>
>>
>>
>>
>>
>>On 2020-09-01 00:06:19, "Strahil Nikolov"  wrote:
>>>I'm running oVirt 4.3.10 and I can migrate my Engine from node to node.
>>>I had one similar issue, but powering the HE off and on fixed it.
>>>
>>>You have to check the vdsm log on the source and on the destination in order to 
>>>figure out what is going on.
>>>Also you might consider checking the libvirt logs on the destination.
>>>
>>>Best Regards,
>>>Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>>On Monday, August 31, 2020 at 10:47:22 GMT+3, ddqlo  wrote:
>>>
>>>
>>>
>>>
>>>
>>>Thanks! The scores of all nodes are not '0'. I find that someone has already 
>>>asked a question like this. It seems that  this feature has been disabled in 
>>>4.3. I am not sure if it is enabled in 4.4.
>>>
>>>
>>>On 2020-08-29 02:27:03, "Strahil Nikolov"  wrote:
Have you checked the output of 'hosted-engine --vm-status' in a shell?
Check the score of the hosts. Maybe there is a node with a score of '0'?

Best Regards,
Strahil Nikolov






On Tuesday, August 25, 2020 at 13:46:18 GMT+3, 董青龙  wrote:





Hi all,
I have an oVirt 4.3.10.4 environment with 2 hosts. Normal VMs in this 
 environment can be migrated, but the hosted engine VM cannot be 
 migrated. Can anyone help? Thanks a lot!

hosts status:

normal vm migration:

hosted engine vm migration:



 
[ovirt-users] Re: Unable to create a node in oVirt 4.0

2020-09-16 Thread Sandro Bonazzola
Il giorno mer 16 set 2020 alle ore 09:24 Rodrigo G. López <
r.gonza...@telfy.com> ha scritto:

> Hello,
>
> Any idea about this problem? I don't know if the email got through to the
> list.
>
> Should I join the #vdsm channel and discuss it there? Is there any other
> place specific to vdsm where I could report this?
>


Hi, you are on the right mailing list.
Please note that oVirt 4.0 went EOL a long time ago; please consider upgrading to
4.4 as soon as practical.
We lack the capacity to debug issues on such old releases.



>
>
>
> Cheers,
>
> -rodri
>
>
> On 9/15/20 9:55 AM, Rodrigo G. López wrote:
>
> Hi there,
>
> We are trying to setup a node in the same machine where we are running the
> engine, and noticed that the vdsmd service fails because the supervdsmd
> daemon can't authenticate against libvirtd afaict.
>
> The error is the following on supervdsmd:
>
> daemonAdapter[17803]: libvirt: XML-RPC error : authentication failed:
> authentication failed
> ...
>
> and in libvirtd:
>
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.410+: 17775: error : virNetSocketReadWire:1806 : End of file
> while reading data: Input/output error
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.612+: 17776: error : virNetSASLSessionListMechanisms:393 :
> internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism
> available: Internal Error -4 in server.c near line 1757)
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.612+: 17776: error : remoteDispatchAuthSaslInit:3440 :
> authentication failed: authentication failed
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.612+: 17775: error : virNetSocketReadWire:1806 : End of file
> while reading data: Input/output error
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.814+: 17778: error : virNetSASLSessionListMechanisms:393 :
> internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism
> available: Internal Error -4 in server.c near line 1757)
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.814+: 17778: error : remoteDispatchAuthSaslInit:3440 :
> authentication failed: authentication failed
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:18.815+: 17775: error : virNetSocketReadWire:1806 : End of file
> while reading data: Input/output error
> Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:19.017+: 17780: error : virNetSASLSessionListMechanisms:393 :
> internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism
> available: Internal Error -4 in server.c near line 1757)
> Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:19.017+: 17780: error : remoteDispatchAuthSaslInit:3440 :
> authentication failed: authentication failed
> Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15
> 07:34:19.020+: 17775: error : virNetSocketReadWire:1806 : End of file
> while reading data: Input/output error
>
>
> Is there any way to work around that?
>
> We have working infra on top of 4.0 in CentOS 7 systems, and we would like
> to replicate the exact same environment for availability purposes, in case
> anything bad happened.
>
>
>
> Best regards,
>
> -rodri
>
>
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*


[ovirt-users] Re: Upgrading self-Hosted engine from 4.3 to oVirt 4.4

2020-09-16 Thread Adam Xu

在 2020/9/16 15:12, Yedidyah Bar David 写道:

On Wed, Sep 16, 2020 at 6:10 AM Adam Xu  wrote:

Hi ovirt

I just try to upgrade a self-Hosted engine from 4.3.10 to 4.4.1.4.  I followed 
the step in the document:

https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3

the old 4.3 env has a FC storage as engine storage domain and I have created a 
new FC storage vv for the new storage domain to be used in the next steps.

I backup the old 4.3 env and prepare a total new host to restore the env.

in charter 4.4 step 8, it said:

"During the deployment you need to provide a new storage domain. The deployment 
script renames the 4.3 storage domain and retains its data."

it does rename the old storage domain. but it didn't let me choose a new 
storage domain during the deployment. So the new enigne just deployed in the 
new host's local storage and can not move to the FC storage domain.

Can anyone tell me what the problem is?

What do you mean in "deployed in the new host's local storage"?

Did deploy finish successfully?


I think it was not finished yet. It didn't tell me to choose a new 
storage domain and just give me the new hosts fqdn as the engine's URL. 
like host6.example.com:6900 .


I can login use the host6.example.com:6900 and I saw the engine vm ran  
in host6's /tmp dir.




HE deploy (since 4.3) first creates a VM for the engine on local
storage, then prompts you to provide the storage you want to use, and
then moves the VM disk image there.

Best regards,


Thanks

--
Adam Xu

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHDGJB2ZAFS7AJZYS4F5BAMC2ZVKCYY4/




--
Adam Xu
Phone: 86-512-8777-3585
Adagene (Suzhou) Limited
C14, No. 218, Xinghu Street, Suzhou Industrial Park



[ovirt-users] Re: Unable to create a node in oVirt 4.0

2020-09-16 Thread Yedidyah Bar David
Hi!

On Wed, Sep 16, 2020 at 10:25 AM Rodrigo G. López  wrote:
>
> Hello,
>
> Any idea about this problem? I don't know if the email got through to the 
> list.

It did

>
> Should I join the #vdsm channel and discuss it there? Is there any other 
> place specific to vdsm where I could report this?

You are welcome to join #ovirt channel, but I think the list is ok as well.

>
>
>
> Cheers,
>
> -rodri
>
>
> On 9/15/20 9:55 AM, Rodrigo G. López wrote:
>
> Hi there,
>
> We are trying to setup a node in the same machine where we are running the 
> engine,

This is called "all-in-one". It used to be supported until 3.6, and was
removed in 4.0 and later.

> and noticed that the vdsmd service fails because the supervdsmd daemon can't 
> authenticate against libvirtd afaict.
>
> The error is the following on supervdsmd:
>
> daemonAdapter[17803]: libvirt: XML-RPC error : authentication failed: 
> authentication failed
> ...
>
> and in libvirtd:
>
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.410+: 
> 17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
> Input/output error
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.612+: 
> 17776: error : virNetSASLSessionListMechanisms:393 : internal error: cannot 
> list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 
> in server.c near line 1757)
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.612+: 
> 17776: error : remoteDispatchAuthSaslInit:3440 : authentication failed: 
> authentication failed
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.612+: 
> 17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
> Input/output error
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.814+: 
> 17778: error : virNetSASLSessionListMechanisms:393 : internal error: cannot 
> list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 
> in server.c near line 1757)
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.814+: 
> 17778: error : remoteDispatchAuthSaslInit:3440 : authentication failed: 
> authentication failed
> Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 07:34:18.815+: 
> 17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
> Input/output error
> Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 07:34:19.017+: 
> 17780: error : virNetSASLSessionListMechanisms:393 : internal error: cannot 
> list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 
> in server.c near line 1757)
> Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 07:34:19.017+: 
> 17780: error : remoteDispatchAuthSaslInit:3440 : authentication failed: 
> authentication failed
> Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 07:34:19.020+: 
> 17775: error : virNetSocketReadWire:1806 : End of file while reading data: 
> Input/output error
>
>
> Is there any way to work around that?
>
> We have working infra on top of 4.0 in CentOS 7 systems, and we would like to 
> replicate the exact same environment for availability purposes, in case 
> anything bad happened.

4.0 is old and unsupported.

I suggest trying 4.4.

You can check your existing setup and try to see if someone
made specific customizations there to make this work. Or perhaps you
have a site-wide policy that is changing your configuration a bit (e.g.
around libvirt)?

That said, people do occasionally report that all-in-one still works,
and we even fixed a bug for it in imageio recently. Still, it's
considered unsupported, and the "official" answer is "Use
hosted-engine (with gluster), aka HCI" (or do not use oVirt at all -
there isn't much advantage in it for a single host, compared e.g. with
virt-manager).

Best regards,
-- 
Didi


[ovirt-users] Re: Unable to create a node in oVirt 4.0

2020-09-16 Thread Rodrigo G . López

Hello,

Any idea about this problem? I don't know if the email got through to 
the list.


Should I join the #vdsm channel and discuss it there? Is there any other 
place specific to vdsm where I could report this?




Cheers,

-rodri


On 9/15/20 9:55 AM, Rodrigo G. López wrote:

Hi there,

We are trying to set up a node on the same machine where we are running 
the engine, and noticed that the vdsmd service fails because the 
supervdsmd daemon can't authenticate against libvirtd, AFAICT.


The error is the following on supervdsmd:

    daemonAdapter[17803]: libvirt: XML-RPC error : authentication 
failed: authentication failed

    ...

and in libvirtd:

    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.410+: 17775: error : virNetSocketReadWire:1806 : End of 
file while reading data: Input/output error
    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.612+: 17776: error : virNetSASLSessionListMechanisms:393 
: internal error: cannot list SASL mechanisms -4 (SASL(-4): no 
mechanism available: Internal Error -4 in server.c near line 1757)
    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.612+: 17776: error : remoteDispatchAuthSaslInit:3440 : 
authentication failed: authentication failed
    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.612+: 17775: error : virNetSocketReadWire:1806 : End of 
file while reading data: Input/output error
    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.814+: 17778: error : virNetSASLSessionListMechanisms:393 
: internal error: cannot list SASL mechanisms -4 (SASL(-4): no 
mechanism available: Internal Error -4 in server.c near line 1757)
    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.814+: 17778: error : remoteDispatchAuthSaslInit:3440 : 
authentication failed: authentication failed
    Sep 15 03:34:18 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:18.815+: 17775: error : virNetSocketReadWire:1806 : End of 
file while reading data: Input/output error
    Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:19.017+: 17780: error : virNetSASLSessionListMechanisms:393 
: internal error: cannot list SASL mechanisms -4 (SASL(-4): no 
mechanism available: Internal Error -4 in server.c near line 1757)
    Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:19.017+: 17780: error : remoteDispatchAuthSaslInit:3440 : 
authentication failed: authentication failed
    Sep 15 03:34:19 ovirt-test libvirtd[17775]: 2020-09-15 
07:34:19.020+: 17775: error : virNetSocketReadWire:1806 : End of 
file while reading data: Input/output error



Is there any way to work around that?

We have working infra on top of 4.0 in CentOS 7 systems, and we would 
like to replicate the exact same environment for availability 
purposes, in case anything bad happened.




Best regards,

-rodri




[ovirt-users] Removal of deprecated init-scripts (network-scripts)

2020-09-16 Thread Ales Musil
Hello,

network-scripts for host networking have been deprecated since oVirt 4.4.
They will be removed completely in the 4.4.3 release. There is no action
required for setups that did not change the configuration to use the
network-scripts backend (net_nmstate_enabled = false).
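
A quick way to check whether a host was ever switched to the network-scripts
backend is to look for that flag in the vdsm configuration; a minimal sketch
(the file paths below are an assumption, adjust them to wherever your vdsm
configuration lives) could be:

import glob

# Paths are an assumption - adjust if your vdsm configuration lives elsewhere.
candidates = ['/etc/vdsm/vdsm.conf'] + glob.glob('/etc/vdsm/vdsm.conf.d/*.conf')

for path in candidates:
    try:
        with open(path) as f:
            for number, line in enumerate(f, 1):
                if 'net_nmstate_enabled' in line:
                    print('%s:%d: %s' % (path, number, line.strip()))
    except OSError:
        pass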

Users who did disable nmstate should redeploy all affected hosts before
4.4.3.
Also, can you please tell us what the reason was for using network-scripts, if
that is the case?

Thank you.
Best regards,
Ales Musil

-- 

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil



[ovirt-users] Re: Upgrading self-Hosted engine from 4.3 to oVirt 4.4

2020-09-16 Thread Yedidyah Bar David
On Wed, Sep 16, 2020 at 6:10 AM Adam Xu  wrote:
>
> Hi ovirt
>
> I just try to upgrade a self-Hosted engine from 4.3.10 to 4.4.1.4.  I 
> followed the step in the document:
>
> https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
>
> the old 4.3 env has a FC storage as engine storage domain and I have created 
> a new FC storage vv for the new storage domain to be used in the next steps.
>
> I backup the old 4.3 env and prepare a total new host to restore the env.
>
> in charter 4.4 step 8, it said:
>
> "During the deployment you need to provide a new storage domain. The 
> deployment script renames the 4.3 storage domain and retains its data."
>
> it does rename the old storage domain. but it didn't let me choose a new 
> storage domain during the deployment. So the new enigne just deployed in the 
> new host's local storage and can not move to the FC storage domain.
>
> Can anyone tell me what the problem is?

What do you mean in "deployed in the new host's local storage"?

Did deploy finish successfully?

HE deploy (since 4.3) first creates a VM for the engine on local
storage, then prompts you to provide the storage you want to use, and
then moves the VM disk image there.

Best regards,

>
> Thanks
>
> --
> Adam Xu
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHDGJB2ZAFS7AJZYS4F5BAMC2ZVKCYY4/



-- 
Didi