[ovirt-users] Re: oVirt & Zabbix agent

2020-02-16 Thread Jorick Astrego
Hi,

We've been running Zabbix for years without issues.

I also really don't see what issues it could cause; it's just an agent that
uses few resources and listens on a dedicated port not used by any other
program.

I haven't heard of any issues with it running on any host in many years.
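
For reference, getting the agent onto a node is usually as simple as the
following (a sketch, assuming the Zabbix repository is already configured on
the host and the default agent port 10050 is used):

# yum install zabbix-agent
# firewall-cmd --permanent --add-port=10050/tcp && firewall-cmd --reload
# systemctl enable --now zabbix-agent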

Regards,

Jorick

On 2/14/20 7:46 PM, Diggy Mc wrote:
> Are there any known issues with running the Zabbix agent on either the Hosted 
> Engine (4.3.8) or oVirt Nodes (4.3.8) ???  I'd like to install the agent 
> while not crashing my hosting environment.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DR3LNJFP2KV53DMXESZCI4MIFR3ZCNUQ/




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.eu  Staalsteden 4-3A
KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu 7547 TA Enschede
BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QD6N7C2X7QLHRUFDRLY6ZTYPAP4A5ZA3/


[ovirt-users] What should I do to support DPDK in ovirt, any instruction?

2020-02-16 Thread lifuqi...@sunyainfo.com

Hi All, 
I found that there are hardly any topics on the Internet about DPDK support in
oVirt 4.2, apart from
https://blogs.ovirt.org/2018/07/upgraded-dpdk-support-in-ovirt/, and I can't
find information such as whether I should install DPDK or OVN manually.
I also get an error when I execute commands such as:
ansible-playbook ovirt.dpdk-setup/tasks/main.yml
ansible-playbook oVirt.dpdk-setup
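
(Note: ansible-playbook expects a playbook rather than a role or a tasks file,
so a small wrapper playbook is needed to run the role. A sketch only -- the
host name is illustrative, the role name is assumed to match the installed
ovirt-ansible-dpdk-setup package, and any variables the role requires are
omitted:)

# dpdk-setup.yml
- hosts: my_ovirt_host
  roles:
    - oVirt.dpdk-setup

# run with: ansible-playbook -i inventory dpdk-setup.yml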


I have some experience with OVN and DPDK, but I can't get oVirt to support
DPDK by following the instructions in
https://blogs.ovirt.org/2018/07/upgraded-dpdk-support-in-ovirt/.

Can anybody help me? Thank you.

Mark 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VVRC6NMM373JZZCYPQLQTOZXEK5NDZ24/


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread Adrian Quintero
Hi Strahil, no, just the .meta files, and that solved everything.

On Sun, Feb 16, 2020, 8:36 PM Strahil Nikolov  wrote:

> On February 16, 2020 9:16:14 PM GMT+02:00, adrianquint...@gmail.com wrote:
> >After a couple of hours all looking good and seems that the timestamps
> >corrected themselves and OVF errors are no more.
> >
> >Thank you all for the help.
> >
> >Regards,
> >
> >Adrian
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/57T3ELDXT5DYZA4DEQLB3ZAF4JBMWVE4/
>
> Hi Adrian,
>
> Did you rsync the brticks ?
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3KHP5XWEXTCPHCFQWJHCOUYUGAPTBFNZ/


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread Strahil Nikolov
On February 16, 2020 9:16:14 PM GMT+02:00, adrianquint...@gmail.com wrote:
>After a couple of hours all looking good and seems that the timestamps
>corrected themselves and OVF errors are no more.
>
>Thank you all for the help.
>
>Regards,
>
>Adrian 
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/57T3ELDXT5DYZA4DEQLB3ZAF4JBMWVE4/

Hi Adrian,

Did you rsync the bricks?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GYUOF6KO5VRKILPQEYOAJEV5CYWLJN7Q/


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread adrianquintero
Thanks Strahil,
I did the rsync only for the .meta files and that seems to have done the
trick. I just waited a couple of hours and the OVF error resolved itself, and
since it was the engine OVF I think Edward was right and the rest of the
issues got resolved.

regards,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AL3WBM4NR5KYS2OQWMOOF7EBJVLAKBPE/


[ovirt-users] Re: hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"

2020-02-16 Thread eevans
I tried to implement Gluster post-install. For me it was a disaster. I rebuilt
the engine host and deployed without Gluster. I will try it in the future, but
it will be a ground-up deployment.

If you disable Gluster and then update the cluster again with all hosts, it
may come back to life.
I spent hours trying to get it to work.

Eric Evans
Digital Data Services LLC.
304.660.9080


-Original Message-
From: djagoo  
Sent: Saturday, February 15, 2020 3:56 AM
To: users@ovirt.org
Subject: [ovirt-users] hosted-engine --deploy --restore-from-file fails with 
error "Domain format is different from master storage domain format"

Hi there,

For two weeks we have been trying to move our hosted engine to GlusterFS
storage, and it keeps failing with the error "Domain format is different from
master storage domain format".

The newly created storage domain has version 5 (default since compatibility 
level 4.3 according to documentation).

Hosted engine version is 4.3.8.2-1.el7
All hosts are updated to the latest versions and are rebooted.

Cluster and  DataCenter compatibility version is 4.3.

The master data domain and all other domains are format V4, and there is no V5
available in the dropdown menus. Even if I try to create a new storage domain
from the manager, only V4 is available.

The system and all hosts were installed in March 2019, so it was oVirt release
4.3.1 or 4.3.2 that created the existing domains.

Is there a way to update the master storage domain to V5? It seems I cannot 
downgrade the datacenter to compat 4.2 and then raise it again.

After two weeks I'm out of ideas.

Can anyone help please?

Regards,
Marcel

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CICLMWQYMT4TC3EJIGQK5ETSBYL6JEO/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MXJVJJDFKA2M6DKUP3AYD3M26DXRBVBZ/


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread Strahil Nikolov
On February 16, 2020 8:03:17 PM GMT+02:00, adrianquint...@gmail.com wrote:
>Ok so I ran and rsync from host2 over to host3 for the .meta files only
>and that seems to have worked:
>
>98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
>ed569aed-005e-40fd-9297-dd54a1e4946c.meta 
>
>
>[root@host1 ~]# gluster vol heal engine info
>Brick host1.grupolucerna.local:/gluster_bricks/engine/engine
>Status: Connected
>Number of entries: 0
>
>Brick host3:/gluster_bricks/engine/engine
>Status: Connected
>Number of entries: 0
>
>Brick host3:/gluster_bricks/engine/engine
>Status: Connected
>Number of entries: 0
>
>In this case I did not have to stop/start the ovirt-ha-broker and 
>ovirt-ha-agent
>
>I still see the issue of the OVF, wondering if I should just rsync the
>whole /gluster_bricks/engine/engine directory from host3 over to host1
>and 2 because of the following 1969 timestamps:
>
>I 1969 as the timestamps on some directories in host1:
>/gluster_bricks/engine/engine/7a68956e-3736-46d1-8932-8576f8ee8882/images:
>drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969
>b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
>drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969
>86196e10-8103-4b00-bd3e-0f577a8bb5b2
>
>on host2 I see the same:
>drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969
>b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
>drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969
>86196e10-8103-4b00-bd3e-0f577a8bb5b2
>
>but for  host3: I see a valid timestamp
>drwxr-xr-x. 2 vdsm kvm 149 Feb 16 09:43
>86196e10-8103-4b00-bd3e-0f577a8bb5b2
>drwxr-xr-x. 2 vdsm kvm 149 Feb 16 09:45
>b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
>
>If we take a close look it seems that host3 has a valid timestamp but
>host1 and host2 have a 1969 date.
>
>Thoughts?
>
>
>thanks,
>
>Adrian
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZXL3YXZLO4QGFXCYAMUNAK4NPJTHZT3B/

Hi Adrian,

If you are fully in sync, you can sync the brick data directly (skipping the
Gluster-internal directories). For example:
rsync -avP /gluster_bricks/engine/engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/ 
node2:/gluster_bricks/engine/engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/
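
(After such a sync it may be worth triggering a heal and confirming that no
entries remain pending -- standard gluster commands, for example:)

# gluster volume heal engine
# gluster volume heal engine info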


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N34MD6ZV2CM5NILMB7HZDVJY7IPX6XWQ/


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread adrianquintero
After a couple of hours everything is looking good; it seems the timestamps
corrected themselves and the OVF errors are gone.

Thank you all for the help.

Regards,

Adrian 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/57T3ELDXT5DYZA4DEQLB3ZAF4JBMWVE4/


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread adrianquintero
OK, so I ran an rsync from host2 over to host3 for the .meta files only, and
that seems to have worked:

98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
ed569aed-005e-40fd-9297-dd54a1e4946c.meta 


[root@host1 ~]# gluster vol heal engine info
Brick host1.grupolucerna.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick host3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick host3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

In this case I did not have to stop/start the ovirt-ha-broker and  
ovirt-ha-agent

I still see the OVF issue and am wondering if I should just rsync the whole
/gluster_bricks/engine/engine directory from host3 over to host1 and host2,
because of the following 1969 timestamps:

I see 1969 as the timestamp on some directories on host1:
/gluster_bricks/engine/engine/7a68956e-3736-46d1-8932-8576f8ee8882/images:
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 86196e10-8103-4b00-bd3e-0f577a8bb5b2

on host2 I see the same:
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 86196e10-8103-4b00-bd3e-0f577a8bb5b2

but for host3 I see a valid timestamp:
drwxr-xr-x. 2 vdsm kvm 149 Feb 16 09:43 86196e10-8103-4b00-bd3e-0f577a8bb5b2
drwxr-xr-x. 2 vdsm kvm 149 Feb 16 09:45 b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd

If we take a close look it seems that host3 has a valid timestamp but host1 and 
host2 have a 1969 date.

Thoughts?


thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZXL3YXZLO4QGFXCYAMUNAK4NPJTHZT3B/


[ovirt-users] Re: oVirt & Zabbix agent

2020-02-16 Thread Alex K
On Fri, Feb 14, 2020, 20:48 Diggy Mc  wrote:

> Are there any known issues with running the Zabbix agent on either the
> Hosted Engine (4.3.8) or oVirt Nodes (4.3.8) ???  I'd like to install the
> agent while not crashing my hosting environment.
>
I've done that and have seen no issues for several years.

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DR3LNJFP2KV53DMXESZCI4MIFR3ZCNUQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZGCPQNWV3M6PHFWMWB3WE7UAKGM77A3/


[ovirt-users] Re: HostedEngine migration fails with VM destroyed during the startup.

2020-02-16 Thread Strahil Nikolov
ssh root@engine "poweroff"
ssh host-that-holded-engine "virsh undefine HostedEngine; virsh list --all"
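
(Note that plain virsh on an oVirt host usually asks for SASL credentials;
read-only queries work with 'virsh -r', and for read-write commands on
hosted-engine hosts there is normally an auth file that can be passed -- the
path below is the commonly referenced one, please verify it exists locally:)

# virsh -r list --all
# virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf undefine HostedEngine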

Lots of virsh - less vdsm :)

Good luck

Best Regards,
Strahil Nikolov


On Sunday, 16 February 2020 at 16:01:44 GMT+2, Vrgotic, Marko
 wrote:





Hi Strahil,

Regarding step 3:  Stop and undefine the VM on the last working host
One question: How do I undefine HostedEngine from last Host? Hosted-engine 
command does not provide such option, or it's just not obvious.

Kindly awaiting your reply.


-
kind regards/met vriendelijke groeten

Marko Vrgotic
ActiveVideo


On 14/02/2020, 18:44, "Strahil Nikolov"  wrote:

    On February 14, 2020 4:19:53 PM GMT+02:00, "Vrgotic, Marko" 
 wrote:
    >Good answer Strahil,
    >
    >Thank you, I forgot.
    >
    >Libvirt logs are actually showing the reason why:
    >
    >2020-02-14T12:33:51.847970Z qemu-kvm: -drive
    
>file=/var/run/vdsm/storage/054c43fc-1924-4106-9f80-0f2ac62b9886/b019c5fa-8fb5-4bfc-8339-f5b7f590a051/f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75,format=raw,if=none,id=drive-ua-b019c5fa-8fb5-4bfc-8339-f5b7f590a051,serial=b019c5fa-8fb5-4bfc-8339-f5b7f590a051,werror=stop,rerror=stop,cache=none,aio=threads:
    >'serial' is deprecated, please use the corresponding option of
    >'-device' instead
    >Spice-Message: 04:33:51.856: setting TLS option 'CipherString' to
    >'kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL' from
    >/etc/pki/tls/spice.cnf configuration file
    >2020-02-14T12:33:51.863449Z qemu-kvm: warning: CPU(s) not present in
    >any NUMA nodes: CPU 4 [socket-id: 1, core-id: 0, thread-id: 0], CPU 5
    >[socket-id: 1, core-id: 1, thread-id: 0], CPU 6 [socket-id: 1, core-id:
    >2, thread-id: 0], CPU 7 [socket-id: 1, core-id: 3, thread-id: 0], CPU 8
    >[socket-id: 2, core-id: 0, thread-id: 0], CPU 9 [socket-id: 2, core-id:
    >1, thread-id: 0], CPU 10 [socket-id: 2, core-id: 2, thread-id: 0], CPU
    >11 [socket-id: 2, core-id: 3, thread-id: 0], CPU 12 [socket-id: 3,
    >core-id: 0, thread-id: 0], CPU 13 [socket-id: 3, core-id: 1, thread-id:
    >0], CPU 14 [socket-id: 3, core-id: 2, thread-id: 0], CPU 15 [socket-id:
    >3, core-id: 3, thread-id: 0], CPU 16 [socket-id: 4, core-id: 0,
    >thread-id: 0], CPU 17 [socket-id: 4, core-id: 1, thread-id: 0], CPU 18
    >[socket-id: 4, core-id: 2, thread-id: 0], CPU 19 [socket-id: 4,
    >core-id: 3, thread-id: 0], CPU 20 [socket-id: 5, core-id: 0, thread-id:
    >0], CPU 21 [socket-id: 5, core-id: 1, thread-id: 0], CPU 22 [socket-id:
    >5, core-id: 2, thread-id: 0], CPU 23 [socket-id: 5, core-id: 3,
    >thread-id: 0], CPU 24 [socket-id: 6, core-id: 0, thread-id: 0], CPU 25
    >[socket-id: 6, core-id: 1, thread-id: 0], CPU 26 [socket-id: 6,
    >core-id: 2, thread-id: 0], CPU 27 [socket-id: 6, core-id: 3, thread-id:
    >0], CPU 28 [socket-id: 7, core-id: 0, thread-id: 0], CPU 29 [socket-id:
    >7, core-id: 1, thread-id: 0], CPU 30 [socket-id: 7, core-id: 2,
    >thread-id: 0], CPU 31 [socket-id: 7, core-id: 3, thread-id: 0], CPU 32
    >[socket-id: 8, core-id: 0, thread-id: 0], CPU 33 [socket-id: 8,
    >core-id: 1, thread-id: 0], CPU 34 [socket-id: 8, core-id: 2, thread-id:
    >0], CPU 35 [socket-id: 8, core-id: 3, thread-id: 0], CPU 36 [socket-id:
    >9, core-id: 0, thread-id: 0], CPU 37 [socket-id: 9, core-id: 1,
    >thread-id: 0], CPU 38 [socket-id: 9, core-id: 2, thread-id: 0], CPU 39
    >[socket-id: 9, core-id: 3, thread-id: 0], CPU 40 [socket-id: 10,
    >core-id: 0, thread-id: 0], CPU 41 [socket-id: 10, core-id: 1,
    >thread-id: 0], CPU 42 [socket-id: 10, core-id: 2, thread-id: 0], CPU 43
    >[socket-id: 10, core-id: 3, thread-id: 0], CPU 44 [socket-id: 11,
    >core-id: 0, thread-id: 0], CPU 45 [socket-id: 11, core-id: 1,
    >thread-id: 0], CPU 46 [socket-id: 11, core-id: 2, thread-id: 0], CPU 47
    >[socket-id: 11, core-id: 3, thread-id: 0], CPU 48 [socket-id: 12,
    >core-id: 0, thread-id: 0], CPU 49 [socket-id: 12, core-id: 1,
    >thread-id: 0], CPU 50 [socket-id: 12, core-id: 2, thread-id: 0], CPU 51
    >[socket-id: 12, core-id: 3, thread-id: 0], CPU 52 [socket-id: 13,
    >core-id: 0, thread-id: 0], CPU 53 [socket-id: 13, core-id: 1,
    >thread-id: 0], CPU 54 [socket-id: 13, core-id: 2, thread-id: 0], CPU 55
    >[socket-id: 13, core-id: 3, thread-id: 0], CPU 56 [socket-id: 14,
    >core-id: 0, thread-id: 0], CPU 57 [socket-id: 14, core-id: 1,
    >thread-id: 0], CPU 58 [socket-id: 14, core-id: 2, thread-id: 0], CPU 59
    >[socket-id: 14, core-id: 3, thread-id: 0], CPU 60 [socket-id: 15,
    >core-id: 0, thread-id: 0], CPU 61 [socket-id: 15, core-id: 1,
    >thread-id: 0], CPU 62 [socket-id: 15, core-id: 2, thread-id: 0], CPU 63
    >[socket-id: 15, core-id: 3, thread-id: 0]
    >2020-02-14T12:33:51.863475Z qemu-kvm: warning: All CPU(s) up to maxcpus
    >should be described in NUMA config, ability to start up with partial
    >NUMA mappings is obsoleted and will be removed in future
    >2020-02-14T12:33:51.863973Z qemu-kvm: warning: host does

[ovirt-users] Re: hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"

2020-02-16 Thread Strahil Nikolov
Hi Shani,


In theory it's exactly as you have quoted.
In reality... that's not how it turned out :)
I had issues with the master domain role.

Djagoo,
if you have issues, just power off and power on the engine.
After the 5th or 6th time it will decide to change it :)

Best Regards,
Strahil Nikolov






On Sunday, 16 February 2020 at 14:53:21 GMT+2, Shani Leviim
 wrote:





Basically, in case the Master SD is down, that function should move to the 
other SD on the same DC.
Taken from [1]:

To change the Master Storage Domain to another specific Storage Domain, the 
below steps need to be followed:

* Put all storage domains except the Master storage domain and the one that 
needs to be the new master storage to maintenance mode from 
Data Centre -> Lower Sub tab -> Storage -> Right-click -> Maintenance.
(The only active storage domains now would be the Master storage domain and the 
storage domain which need to be the master.)

*Now put the "Master storage" to maintenance mode.

* The only active storage domain (which you want to make the new master 
storage) would automatically be the master storage.

[1] https://access.redhat.com/solutions/34923

Regards,
Shani Leviim


On Sun, Feb 16, 2020 at 2:20 PM djagoo  wrote:
> Hi Shani,
> 
> thanks for your help. The master SD is the one the hosted engine is on. I 
> can't put it to maintenance and attach it to another domain. But I`ll try 
> this procedure with one of the oder SDs. If it works: is there a way to 
> change the master domain to one of the others or is it always the one the 
> hosted enginge is running on?
> 
> Regards,
> 
> Marcel
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XAAO4IHSPC66ZNZUF7XXJCMJ32Y5T4H7/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQ43AKLV57VQCA6LJDSYQTACVVYYXYRA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HLDHP55J5JOYPGXKSHMHV63T3HHG6QSY/


[ovirt-users] Re: HostedEngine migration fails with VM destroyed during the startup.

2020-02-16 Thread Vrgotic, Marko
Hi Strahil,

Regarding step 3:  Stop and undefine the VM on the last working host
One question: how do I undefine the HostedEngine VM from the last host? The
hosted-engine command does not provide such an option, or it's just not obvious.

Kindly awaiting your reply.


-
kind regards/met vriendelijke groeten
 
Marko Vrgotic
ActiveVideo


On 14/02/2020, 18:44, "Strahil Nikolov"  wrote:

On February 14, 2020 4:19:53 PM GMT+02:00, "Vrgotic, Marko" 
 wrote:
>Good answer Strahil,
>
>Thank you, I forgot.
>
>Libvirt logs are actually showing the reason why:
>
>2020-02-14T12:33:51.847970Z qemu-kvm: -drive

>file=/var/run/vdsm/storage/054c43fc-1924-4106-9f80-0f2ac62b9886/b019c5fa-8fb5-4bfc-8339-f5b7f590a051/f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75,format=raw,if=none,id=drive-ua-b019c5fa-8fb5-4bfc-8339-f5b7f590a051,serial=b019c5fa-8fb5-4bfc-8339-f5b7f590a051,werror=stop,rerror=stop,cache=none,aio=threads:
>'serial' is deprecated, please use the corresponding option of
>'-device' instead
>Spice-Message: 04:33:51.856: setting TLS option 'CipherString' to
>'kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL' from
>/etc/pki/tls/spice.cnf configuration file
>2020-02-14T12:33:51.863449Z qemu-kvm: warning: CPU(s) not present in
>any NUMA nodes: CPU 4 [socket-id: 1, core-id: 0, thread-id: 0], CPU 5
>[socket-id: 1, core-id: 1, thread-id: 0], CPU 6 [socket-id: 1, core-id:
>2, thread-id: 0], CPU 7 [socket-id: 1, core-id: 3, thread-id: 0], CPU 8
>[socket-id: 2, core-id: 0, thread-id: 0], CPU 9 [socket-id: 2, core-id:
>1, thread-id: 0], CPU 10 [socket-id: 2, core-id: 2, thread-id: 0], CPU
>11 [socket-id: 2, core-id: 3, thread-id: 0], CPU 12 [socket-id: 3,
>core-id: 0, thread-id: 0], CPU 13 [socket-id: 3, core-id: 1, thread-id:
>0], CPU 14 [socket-id: 3, core-id: 2, thread-id: 0], CPU 15 [socket-id:
>3, core-id: 3, thread-id: 0], CPU 16 [socket-id: 4, core-id: 0,
>thread-id: 0], CPU 17 [socket-id: 4, core-id: 1, thread-id: 0], CPU 18
>[socket-id: 4, core-id: 2, thread-id: 0], CPU 19 [socket-id: 4,
>core-id: 3, thread-id: 0], CPU 20 [socket-id: 5, core-id: 0, thread-id:
>0], CPU 21 [socket-id: 5, core-id: 1, thread-id: 0], CPU 22 [socket-id:
>5, core-id: 2, thread-id: 0], CPU 23 [socket-id: 5, core-id: 3,
>thread-id: 0], CPU 24 [socket-id: 6, core-id: 0, thread-id: 0], CPU 25
>[socket-id: 6, core-id: 1, thread-id: 0], CPU 26 [socket-id: 6,
>core-id: 2, thread-id: 0], CPU 27 [socket-id: 6, core-id: 3, thread-id:
>0], CPU 28 [socket-id: 7, core-id: 0, thread-id: 0], CPU 29 [socket-id:
>7, core-id: 1, thread-id: 0], CPU 30 [socket-id: 7, core-id: 2,
>thread-id: 0], CPU 31 [socket-id: 7, core-id: 3, thread-id: 0], CPU 32
>[socket-id: 8, core-id: 0, thread-id: 0], CPU 33 [socket-id: 8,
>core-id: 1, thread-id: 0], CPU 34 [socket-id: 8, core-id: 2, thread-id:
>0], CPU 35 [socket-id: 8, core-id: 3, thread-id: 0], CPU 36 [socket-id:
>9, core-id: 0, thread-id: 0], CPU 37 [socket-id: 9, core-id: 1,
>thread-id: 0], CPU 38 [socket-id: 9, core-id: 2, thread-id: 0], CPU 39
>[socket-id: 9, core-id: 3, thread-id: 0], CPU 40 [socket-id: 10,
>core-id: 0, thread-id: 0], CPU 41 [socket-id: 10, core-id: 1,
>thread-id: 0], CPU 42 [socket-id: 10, core-id: 2, thread-id: 0], CPU 43
>[socket-id: 10, core-id: 3, thread-id: 0], CPU 44 [socket-id: 11,
>core-id: 0, thread-id: 0], CPU 45 [socket-id: 11, core-id: 1,
>thread-id: 0], CPU 46 [socket-id: 11, core-id: 2, thread-id: 0], CPU 47
>[socket-id: 11, core-id: 3, thread-id: 0], CPU 48 [socket-id: 12,
>core-id: 0, thread-id: 0], CPU 49 [socket-id: 12, core-id: 1,
>thread-id: 0], CPU 50 [socket-id: 12, core-id: 2, thread-id: 0], CPU 51
>[socket-id: 12, core-id: 3, thread-id: 0], CPU 52 [socket-id: 13,
>core-id: 0, thread-id: 0], CPU 53 [socket-id: 13, core-id: 1,
>thread-id: 0], CPU 54 [socket-id: 13, core-id: 2, thread-id: 0], CPU 55
>[socket-id: 13, core-id: 3, thread-id: 0], CPU 56 [socket-id: 14,
>core-id: 0, thread-id: 0], CPU 57 [socket-id: 14, core-id: 1,
>thread-id: 0], CPU 58 [socket-id: 14, core-id: 2, thread-id: 0], CPU 59
>[socket-id: 14, core-id: 3, thread-id: 0], CPU 60 [socket-id: 15,
>core-id: 0, thread-id: 0], CPU 61 [socket-id: 15, core-id: 1,
>thread-id: 0], CPU 62 [socket-id: 15, core-id: 2, thread-id: 0], CPU 63
>[socket-id: 15, core-id: 3, thread-id: 0]
>2020-02-14T12:33:51.863475Z qemu-kvm: warning: All CPU(s) up to maxcpus
>should be described in NUMA config, ability to start up with partial
>NUMA mappings is obsoleted and will be removed in future
>2020-02-14T12:33:51.863973Z qemu-kvm: warning: host doesn't support
>requested feature: CPUID.07H:EDX.md-clear [bit 10]
>2020-02-14T12:33:51.865066Z qemu-kvm: warning: host doesn't support
>requested feature: CPUID.07H:EDX.md-clear [bit 10]
>2020-02-14T12:33:51.865547Z qemu-kvm: warning: host doesn't 

[ovirt-users] Re: hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"

2020-02-16 Thread Shani Leviim
Basically, in case the Master SD is down, that function should move to the
other SD on the same DC.
Taken from [1]:

To change the Master Storage Domain to another specific Storage Domain, the
below steps need to be followed:

* Put all storage domains except the Master storage domain and the one that
needs to be the new master storage to maintenance mode from
Data Centre -> Lower Sub tab -> Storage -> Right-click -> Maintenance.
(The only active storage domains now would be the Master storage domain and
the storage domain which need to be the master.)

*Now put the "Master storage" to maintenance mode.

* The only active storage domain (which you want to make the new master
storage) would automatically be the master storage.

[1] https://access.redhat.com/solutions/34923


Regards,
Shani Leviim


On Sun, Feb 16, 2020 at 2:20 PM djagoo  wrote:

> Hi Shani,
>
> thanks for your help. The master SD is the one the hosted engine is on. I
> can't put it to maintenance and attach it to another domain. But I`ll try
> this procedure with one of the oder SDs. If it works: is there a way to
> change the master domain to one of the others or is it always the one the
> hosted enginge is running on?
>
> Regards,
>
> Marcel
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XAAO4IHSPC66ZNZUF7XXJCMJ32Y5T4H7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQ43AKLV57VQCA6LJDSYQTACVVYYXYRA/


[ovirt-users] Re: hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"

2020-02-16 Thread djagoo
Hi Shani,

Thanks for your help. The master SD is the one the hosted engine is on. I
can't put it into maintenance and attach it to another domain. But I'll try
this procedure with one of the other SDs. If it works: is there a way to
change the master domain to one of the others, or is it always the one the
hosted engine is running on?

Regards,

Marcel
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XAAO4IHSPC66ZNZUF7XXJCMJ32Y5T4H7/


[ovirt-users] Re: hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"

2020-02-16 Thread Shani Leviim
Hi Marcel,
For the 4.3 DC version, since 4.3.3, the storage format was changed to V5.
Till that version, the storage format for 4.3 DCs was V4.
Since your SDs were created before 4.3.3, their storage format should be V4.

In order to upgrade this format, you can try to detach those storage
domains and attach them to a DC whose version uses the V5 format.
(In the UI: go to the relevant SD -> Data Center tab -> Maintenance and
then Detach.)

You should get a confirmation window that asks you to confirm the storage
format's upgrade.


Regards,
Shani Leviim


On Sun, Feb 16, 2020 at 8:20 AM djagoo  wrote:

> Just created a test data center. Now, when I try to add/create a new
> storage domain there is V4 format for the existing domain and V5 for the
> new data center.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYHYHSDUP3SIEFOAIMLWDSYJCIF4U6AS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3QWMB6DH7JWOPCAKJKCA367L6DHVLRJ7/


[ovirt-users] Re: HostedEngine migration fails with VM destroyed during the startup.

2020-02-16 Thread Vrgotic, Marko
Hi Strahil,

No, not this time, and it's not the first time I am doing only a host upgrade
without upgrading the Engine packages yet.
True, I am aware that the upgrade documentation states that when the Engine is
upgraded the hosts should be updated as well.
Still, seeing the "box with CD" icon next to a host name was never an
indicator to me that I should not update just the hosts.

In complete honesty, I update the HostedEngine and its OS packages only with a
release upgrade and in global maintenance mode, but host packages I update
more frequently. Yet, this is the first time that it caused an issue.

I took a quick look at the Engine's CPU type in the WebUI, and it shows it is
using the Cluster Default (which matches the CPU type of all updated hosts),
but the XML might show differently.
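
(For a quick read-only check on the host of what the running HostedEngine VM
actually got, something along these lines should print the CPU section of the
domain XML -- a sketch:)

# virsh -r dumpxml HostedEngine | grep -A5 '<cpu'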

As soon as I know more, I will share.


-
kind regards/met vriendelijke groeten
 
Marko Vrgotic
ActiveVideo

 

On 16/02/2020, 11:18, "Strahil Nikolov"  wrote:

On February 16, 2020 11:40:37 AM GMT+02:00, "Vrgotic, Marko" 
 wrote:
>Hi Strahil,
>
>Thank you for proposed steps. If this is now the only way to go,
>without having to do restore, I will take it.
>Before I start, I will first gather information and later share the
>outcome.
>
>What does not make me happy, is the cause.
>What is the reason CPU family type of the processor would be changed
>upon upgrade? Can you please share with me?
>At least for the Hosts which are part of HA, the upgrade should check
>and if possible give a warning if any of HA cluster crucial parameters
>will be changed.
>
>
>-
>kind regards/met vriendelijke groeten
> 
>Marko Vrgotic
>ActiveVideo
>
>
>On 14/02/2020, 18:44, "Strahil Nikolov"  wrote:
>
>On February 14, 2020 4:19:53 PM GMT+02:00, "Vrgotic, Marko"
> wrote:
>>Good answer Strahil,
>>
>>Thank you, I forgot.
>>
>>Libvirt logs are actually showing the reason why:
>>
>>2020-02-14T12:33:51.847970Z qemu-kvm: -drive

>>file=/var/run/vdsm/storage/054c43fc-1924-4106-9f80-0f2ac62b9886/b019c5fa-8fb5-4bfc-8339-f5b7f590a051/f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75,format=raw,if=none,id=drive-ua-b019c5fa-8fb5-4bfc-8339-f5b7f590a051,serial=b019c5fa-8fb5-4bfc-8339-f5b7f590a051,werror=stop,rerror=stop,cache=none,aio=threads:
>>'serial' is deprecated, please use the corresponding option of
>>'-device' instead
>>Spice-Message: 04:33:51.856: setting TLS option 'CipherString' to
>>'kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL' from
>>/etc/pki/tls/spice.cnf configuration file
>  >2020-02-14T12:33:51.863449Z qemu-kvm: warning: CPU(s) not present in
> >any NUMA nodes: CPU 4 [socket-id: 1, core-id: 0, thread-id: 0], CPU 5
>>[socket-id: 1, core-id: 1, thread-id: 0], CPU 6 [socket-id: 1,
>core-id:
>>2, thread-id: 0], CPU 7 [socket-id: 1, core-id: 3, thread-id: 0], CPU
>8
>>[socket-id: 2, core-id: 0, thread-id: 0], CPU 9 [socket-id: 2,
>core-id:
>>1, thread-id: 0], CPU 10 [socket-id: 2, core-id: 2, thread-id: 0], CPU
>>11 [socket-id: 2, core-id: 3, thread-id: 0], CPU 12 [socket-id: 3,
>>core-id: 0, thread-id: 0], CPU 13 [socket-id: 3, core-id: 1,
>thread-id:
>>0], CPU 14 [socket-id: 3, core-id: 2, thread-id: 0], CPU 15
>[socket-id:
>>3, core-id: 3, thread-id: 0], CPU 16 [socket-id: 4, core-id: 0,
>>thread-id: 0], CPU 17 [socket-id: 4, core-id: 1, thread-id: 0], CPU 18
>>[socket-id: 4, core-id: 2, thread-id: 0], CPU 19 [socket-id: 4,
>>core-id: 3, thread-id: 0], CPU 20 [socket-id: 5, core-id: 0,
>thread-id:
>>0], CPU 21 [socket-id: 5, core-id: 1, thread-id: 0], CPU 22
>[socket-id:
>>5, core-id: 2, thread-id: 0], CPU 23 [socket-id: 5, core-id: 3,
>>thread-id: 0], CPU 24 [socket-id: 6, core-id: 0, thread-id: 0], CPU 25
>>[socket-id: 6, core-id: 1, thread-id: 0], CPU 26 [socket-id: 6,
>>core-id: 2, thread-id: 0], CPU 27 [socket-id: 6, core-id: 3,
>thread-id:
>>0], CPU 28 [socket-id: 7, core-id: 0, thread-id: 0], CPU 29
>[socket-id:
>>7, core-id: 1, thread-id: 0], CPU 30 [socket-id: 7, core-id: 2,
>>thread-id: 0], CPU 31 [socket-id: 7, core-id: 3, thread-id: 0], CPU 32
>>[socket-id: 8, core-id: 0, thread-id: 0], CPU 33 [socket-id: 8,
>>core-id: 1, thread-id: 0], CPU 34 [socket-id: 8, core-id: 2,
>thread-id:
>>0], CPU 35 [socket-id: 8, core-id: 3, thread-id: 0], CPU 36
>[socket-id:
>>9, core-id: 0, thread-id: 0], CPU 37 [socket-id: 9, core-id: 1,
>>thread-id: 0], CPU 38 [socket-id: 9, core-id: 2, thread-id: 0], CPU 39
>>[socket-id: 9, core-id: 3, thread-id: 0], CPU 40 [socket-id: 10,
>>core-id: 0, thread-id: 0], CPU 41 [socket-id: 10, core-id: 1,
>>thread-id: 0], CPU 42 [socket-id: 10, core-id: 2, thread-id: 0], CPU
>43
>>[socket-id: 10, core-id: 3, thread-id: 0], CPU 44 [socket-id: 

[ovirt-users] Add LDAP user : ERROR: null value in column "external_id" violates not-null constraint

2020-02-16 Thread lucaslamy87
Hi,
I have previously configured LDAP through ovirt-engine-extension-aaa-ldap-setup.
The only working configuration was IBM Security Directory Server (the IBM
Security Directory Server RFC-2307 schema doesn't work), with ldaps and an
anonymous search user.
With this one, the search and login work fine when I test them with
ovirt-engine-extensions-tool aaa.
But when I try to add an LDAP user in the User Administration panel I get this
error message: "Error while executing action AddUser: Internal Engine Error"

None of the solutions I've found in previous threads seems to work.

Does someone have an idea please ? 
Please find the logs attached.
Thank you beforehand.
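
(One thing that might be worth checking: as far as I understand, external_id
is filled from the ID the authz extension returns for the principal, so
verifying that a directory search returns an ID for the user could help narrow
this down -- a sketch, with the profile and user names illustrative:)

# ovirt-engine-extensions-tool aaa search --extension-name=myprofile-authz --entity=principal --entity-name=someuser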

Caused by: org.postgresql.util.PSQLException: ERROR: null value in column 
"external_id" violates not-null constraint
  Detail: Failing row contains (**user info**).
  Where: SQL statement "INSERT INTO users (
department,
domain,
email,
name,
note,
surname,
user_id,
username,
external_id,
namespace
)
VALUES (
v_department,
v_domain,
v_email,
v_name,
v_note,
v_surname,
v_user_id,
v_username,
v_external_id,
v_namespace
)"
PL/pgSQL function insertuser(character varying,character varying,character 
varying,character varying,character varying,character varying,uuid,character 
varying,text,character varying) line 3 at SQL state$
at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
at 
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at 
org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
at 
org.postgresql.jdbc.PgCallableStatement.executeWithFlags(PgCallableStatement.java:78)
at 
org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:144)
at 
org.jboss.jca.adapters.jdbc.CachedPreparedStatement.execute(CachedPreparedStatement.java:303)
at 
org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.execute(WrappedPreparedStatement.java:442)
at 
org.springframework.jdbc.core.JdbcTemplate.lambda$call$4(JdbcTemplate.java:1105)
 [spring-jdbc.jar:5.0.4.RELEASE]
at 
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1050) 
[spring-jdbc.jar:5.0.4.RELEASE]
... 162 more

2020-02-15 10:16:53,337+01 ERROR [org.ovirt.engine.core.bll.aaa.AddUserCommand] 
(default task-4) [222f7ca7-b669-40e0-b152-2ca898ebde09] Transaction rolled-back 
for command 'org.ovirt.engine.core.bll.aaa.$
2020-02-15 10:16:53,341+01 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-4) [222f7ca7-b669-40e0-b152-2ca898ebde09] EVENT_ID: 
USER_FAILED_ADD_ADUSER(327), Fail, Failed to add User 'user' to the system.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5KREMZWPSSOXZMVYQDIZ2HNOCDLNIGX6/


[ovirt-users] Re: HostedEngine migration fails with VM destroyed during the startup.

2020-02-16 Thread Strahil Nikolov
On February 16, 2020 11:40:37 AM GMT+02:00, "Vrgotic, Marko" 
 wrote:
>Hi Strahil,
>
>Thank you for proposed steps. If this is now the only way to go,
>without having to do restore, I will take it.
>Before I start, I will first gather information and later share the
>outcome.
>
>What does not make me happy, is the cause.
>What is the reason CPU family type of the processor would be changed
>upon upgrade? Can you please share with me?
>At least for the Hosts which are part of HA, the upgrade should check
>and if possible give a warning if any of HA cluster crucial parameters
>will be changed.
>
>
>-
>kind regards/met vriendelijke groeten
> 
>Marko Vrgotic
>ActiveVideo
>
>
>On 14/02/2020, 18:44, "Strahil Nikolov"  wrote:
>
>On February 14, 2020 4:19:53 PM GMT+02:00, "Vrgotic, Marko"
> wrote:
>>Good answer Strahil,
>>
>>Thank you, I forgot.
>>
>>Libvirt logs are actually showing the reason why:
>>
>>2020-02-14T12:33:51.847970Z qemu-kvm: -drive
>>file=/var/run/vdsm/storage/054c43fc-1924-4106-9f80-0f2ac62b9886/b019c5fa-8fb5-4bfc-8339-f5b7f590a051/f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75,format=raw,if=none,id=drive-ua-b019c5fa-8fb5-4bfc-8339-f5b7f590a051,serial=b019c5fa-8fb5-4bfc-8339-f5b7f590a051,werror=stop,rerror=stop,cache=none,aio=threads:
>>'serial' is deprecated, please use the corresponding option of
>>'-device' instead
>>Spice-Message: 04:33:51.856: setting TLS option 'CipherString' to
>>'kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL' from
>>/etc/pki/tls/spice.cnf configuration file
>  >2020-02-14T12:33:51.863449Z qemu-kvm: warning: CPU(s) not present in
> >any NUMA nodes: CPU 4 [socket-id: 1, core-id: 0, thread-id: 0], CPU 5
>>[socket-id: 1, core-id: 1, thread-id: 0], CPU 6 [socket-id: 1,
>core-id:
>>2, thread-id: 0], CPU 7 [socket-id: 1, core-id: 3, thread-id: 0], CPU
>8
>>[socket-id: 2, core-id: 0, thread-id: 0], CPU 9 [socket-id: 2,
>core-id:
>>1, thread-id: 0], CPU 10 [socket-id: 2, core-id: 2, thread-id: 0], CPU
>>11 [socket-id: 2, core-id: 3, thread-id: 0], CPU 12 [socket-id: 3,
>>core-id: 0, thread-id: 0], CPU 13 [socket-id: 3, core-id: 1,
>thread-id:
>>0], CPU 14 [socket-id: 3, core-id: 2, thread-id: 0], CPU 15
>[socket-id:
>>3, core-id: 3, thread-id: 0], CPU 16 [socket-id: 4, core-id: 0,
>>thread-id: 0], CPU 17 [socket-id: 4, core-id: 1, thread-id: 0], CPU 18
>>[socket-id: 4, core-id: 2, thread-id: 0], CPU 19 [socket-id: 4,
>>core-id: 3, thread-id: 0], CPU 20 [socket-id: 5, core-id: 0,
>thread-id:
>>0], CPU 21 [socket-id: 5, core-id: 1, thread-id: 0], CPU 22
>[socket-id:
>>5, core-id: 2, thread-id: 0], CPU 23 [socket-id: 5, core-id: 3,
>>thread-id: 0], CPU 24 [socket-id: 6, core-id: 0, thread-id: 0], CPU 25
>>[socket-id: 6, core-id: 1, thread-id: 0], CPU 26 [socket-id: 6,
>>core-id: 2, thread-id: 0], CPU 27 [socket-id: 6, core-id: 3,
>thread-id:
>>0], CPU 28 [socket-id: 7, core-id: 0, thread-id: 0], CPU 29
>[socket-id:
>>7, core-id: 1, thread-id: 0], CPU 30 [socket-id: 7, core-id: 2,
>>thread-id: 0], CPU 31 [socket-id: 7, core-id: 3, thread-id: 0], CPU 32
>>[socket-id: 8, core-id: 0, thread-id: 0], CPU 33 [socket-id: 8,
>>core-id: 1, thread-id: 0], CPU 34 [socket-id: 8, core-id: 2,
>thread-id:
>>0], CPU 35 [socket-id: 8, core-id: 3, thread-id: 0], CPU 36
>[socket-id:
>>9, core-id: 0, thread-id: 0], CPU 37 [socket-id: 9, core-id: 1,
>>thread-id: 0], CPU 38 [socket-id: 9, core-id: 2, thread-id: 0], CPU 39
>>[socket-id: 9, core-id: 3, thread-id: 0], CPU 40 [socket-id: 10,
>>core-id: 0, thread-id: 0], CPU 41 [socket-id: 10, core-id: 1,
>>thread-id: 0], CPU 42 [socket-id: 10, core-id: 2, thread-id: 0], CPU
>43
>>[socket-id: 10, core-id: 3, thread-id: 0], CPU 44 [socket-id: 11,
>>core-id: 0, thread-id: 0], CPU 45 [socket-id: 11, core-id: 1,
>>thread-id: 0], CPU 46 [socket-id: 11, core-id: 2, thread-id: 0], CPU
>47
>>[socket-id: 11, core-id: 3, thread-id: 0], CPU 48 [socket-id: 12,
>>core-id: 0, thread-id: 0], CPU 49 [socket-id: 12, core-id: 1,
>>thread-id: 0], CPU 50 [socket-id: 12, core-id: 2, thread-id: 0], CPU
>51
>>[socket-id: 12, core-id: 3, thread-id: 0], CPU 52 [socket-id: 13,
>>core-id: 0, thread-id: 0], CPU 53 [socket-id: 13, core-id: 1,
>>thread-id: 0], CPU 54 [socket-id: 13, core-id: 2, thread-id: 0], CPU
>55
>>[socket-id: 13, core-id: 3, thread-id: 0], CPU 56 [socket-id: 14,
>>core-id: 0, thread-id: 0], CPU 57 [socket-id: 14, core-id: 1,
>>thread-id: 0], CPU 58 [socket-id: 14, core-id: 2, thread-id: 0], CPU
>59
>>[socket-id: 14, core-id: 3, thread-id: 0], CPU 60 [socket-id: 15,
>>core-id: 0, thread-id: 0], CPU 61 [socket-id: 15, core-id: 1,
>>thread-id: 0], CPU 62 [socket-id: 15, core-id: 2, thread-id: 0], CPU
>63
>>[socket-id: 15, core-id: 3, thread-id: 0]
>>2020-02-14T12:33:51.863475Z qemu-kvm: warning: All CPU(s) up to
>maxcpus
>  >should be described in NUMA config, ability to start up with partial
>>NUMA mappings is obsoleted and will be removed in

[ovirt-users] Re: HostedEngine migration fails with VM destroyed during the startup.

2020-02-16 Thread Vrgotic, Marko
Hi Strahil,

Thank you for the proposed steps. If this is now the only way to go, without
having to do a restore, I will take it.
Before I start, I will first gather information and later share the outcome.

What does not make me happy is the cause.
Why would the CPU family type of the processor be changed upon upgrade? Can
you please share that with me?
At least for the hosts which are part of HA, the upgrade should check and, if
possible, give a warning if any of the HA cluster's crucial parameters will be
changed.


-
kind regards/met vriendelijke groeten
 
Marko Vrgotic
ActiveVideo


On 14/02/2020, 18:44, "Strahil Nikolov"  wrote:

On February 14, 2020 4:19:53 PM GMT+02:00, "Vrgotic, Marko" 
 wrote:
>Good answer Strahil,
>
>Thank you, I forgot.
>
>Libvirt logs are actually showing the reason why:
>
>2020-02-14T12:33:51.847970Z qemu-kvm: -drive

>file=/var/run/vdsm/storage/054c43fc-1924-4106-9f80-0f2ac62b9886/b019c5fa-8fb5-4bfc-8339-f5b7f590a051/f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75,format=raw,if=none,id=drive-ua-b019c5fa-8fb5-4bfc-8339-f5b7f590a051,serial=b019c5fa-8fb5-4bfc-8339-f5b7f590a051,werror=stop,rerror=stop,cache=none,aio=threads:
>'serial' is deprecated, please use the corresponding option of
>'-device' instead
>Spice-Message: 04:33:51.856: setting TLS option 'CipherString' to
>'kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL' from
>/etc/pki/tls/spice.cnf configuration file
>2020-02-14T12:33:51.863449Z qemu-kvm: warning: CPU(s) not present in
>any NUMA nodes: CPU 4 [socket-id: 1, core-id: 0, thread-id: 0], CPU 5
>[socket-id: 1, core-id: 1, thread-id: 0], CPU 6 [socket-id: 1, core-id:
>2, thread-id: 0], CPU 7 [socket-id: 1, core-id: 3, thread-id: 0], CPU 8
>[socket-id: 2, core-id: 0, thread-id: 0], CPU 9 [socket-id: 2, core-id:
>1, thread-id: 0], CPU 10 [socket-id: 2, core-id: 2, thread-id: 0], CPU
>11 [socket-id: 2, core-id: 3, thread-id: 0], CPU 12 [socket-id: 3,
>core-id: 0, thread-id: 0], CPU 13 [socket-id: 3, core-id: 1, thread-id:
>0], CPU 14 [socket-id: 3, core-id: 2, thread-id: 0], CPU 15 [socket-id:
>3, core-id: 3, thread-id: 0], CPU 16 [socket-id: 4, core-id: 0,
>thread-id: 0], CPU 17 [socket-id: 4, core-id: 1, thread-id: 0], CPU 18
>[socket-id: 4, core-id: 2, thread-id: 0], CPU 19 [socket-id: 4,
>core-id: 3, thread-id: 0], CPU 20 [socket-id: 5, core-id: 0, thread-id:
>0], CPU 21 [socket-id: 5, core-id: 1, thread-id: 0], CPU 22 [socket-id:
>5, core-id: 2, thread-id: 0], CPU 23 [socket-id: 5, core-id: 3,
>thread-id: 0], CPU 24 [socket-id: 6, core-id: 0, thread-id: 0], CPU 25
>[socket-id: 6, core-id: 1, thread-id: 0], CPU 26 [socket-id: 6,
>core-id: 2, thread-id: 0], CPU 27 [socket-id: 6, core-id: 3, thread-id:
>0], CPU 28 [socket-id: 7, core-id: 0, thread-id: 0], CPU 29 [socket-id:
>7, core-id: 1, thread-id: 0], CPU 30 [socket-id: 7, core-id: 2,
>thread-id: 0], CPU 31 [socket-id: 7, core-id: 3, thread-id: 0], CPU 32
>[socket-id: 8, core-id: 0, thread-id: 0], CPU 33 [socket-id: 8,
>core-id: 1, thread-id: 0], CPU 34 [socket-id: 8, core-id: 2, thread-id:
>0], CPU 35 [socket-id: 8, core-id: 3, thread-id: 0], CPU 36 [socket-id:
>9, core-id: 0, thread-id: 0], CPU 37 [socket-id: 9, core-id: 1,
>thread-id: 0], CPU 38 [socket-id: 9, core-id: 2, thread-id: 0], CPU 39
>[socket-id: 9, core-id: 3, thread-id: 0], CPU 40 [socket-id: 10,
>core-id: 0, thread-id: 0], CPU 41 [socket-id: 10, core-id: 1,
>thread-id: 0], CPU 42 [socket-id: 10, core-id: 2, thread-id: 0], CPU 43
>[socket-id: 10, core-id: 3, thread-id: 0], CPU 44 [socket-id: 11,
>core-id: 0, thread-id: 0], CPU 45 [socket-id: 11, core-id: 1,
>thread-id: 0], CPU 46 [socket-id: 11, core-id: 2, thread-id: 0], CPU 47
>[socket-id: 11, core-id: 3, thread-id: 0], CPU 48 [socket-id: 12,
>core-id: 0, thread-id: 0], CPU 49 [socket-id: 12, core-id: 1,
>thread-id: 0], CPU 50 [socket-id: 12, core-id: 2, thread-id: 0], CPU 51
>[socket-id: 12, core-id: 3, thread-id: 0], CPU 52 [socket-id: 13,
>core-id: 0, thread-id: 0], CPU 53 [socket-id: 13, core-id: 1,
>thread-id: 0], CPU 54 [socket-id: 13, core-id: 2, thread-id: 0], CPU 55
>[socket-id: 13, core-id: 3, thread-id: 0], CPU 56 [socket-id: 14,
>core-id: 0, thread-id: 0], CPU 57 [socket-id: 14, core-id: 1,
>thread-id: 0], CPU 58 [socket-id: 14, core-id: 2, thread-id: 0], CPU 59
>[socket-id: 14, core-id: 3, thread-id: 0], CPU 60 [socket-id: 15,
>core-id: 0, thread-id: 0], CPU 61 [socket-id: 15, core-id: 1,
>thread-id: 0], CPU 62 [socket-id: 15, core-id: 2, thread-id: 0], CPU 63
>[socket-id: 15, core-id: 3, thread-id: 0]
>2020-02-14T12:33:51.863475Z qemu-kvm: warning: All CPU(s) up to maxcpus
>should be described in NUMA config, ability to start up with partial
>NUMA mappings is obsoleted and will be removed in future
>2020-02-14T12:33:51.863973Z qemu-kvm: warning: hos

[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2020-02-16 Thread Serhiy Morhun
Hi, can you explain your workaround a little more?
I cannot take a snapshot through the UI; I get a message that this VM is not
managed by the engine.
Also, do you edit the properties in the Web UI? Every time I do that, my
"Memory Size" keeps the old value.
And what do you mean by "cold reboot"? Is it "reboot" from the hosted engine
session, or "hosted-engine --vm-shutdown" followed by "hosted-engine
--vm-start" from a node session?

Thank you. I would like to solve this issue. I just upgraded to 4.3.8 hoping
it was fixed, but no luck.



On Thu, Dec 12, 2019 at 2:44 AM Michaël Couren  wrote:

>
>
> >
> > On Wed, 11 Dec 2019 at 09:36, Serhiy Morhun  >
> > wrote:
> >
> >> Hello, did anyone find a resolution for this issue? I'm having exactly
> the
> >> same problem:
>
> Hi, same issue for us, the solution was:
> Make a snapshot
> Edit the properties, putting 32768 MB for "Mem", 131072 MB for "Max" and
> 32768 MB for "Guaranteed"
> Then reboot the server (cold reboot)
>
> --
> Cordialement / Best regards, Michaël Couren,
> ABES, Montpellier, France.
>

-- 



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2VLAZCWVUAHSLEUWSAEVXDO5VJSO7Z5T/


[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-16 Thread Yedidyah Bar David
Hi all,

On Fri, Feb 14, 2020 at 6:45 PM Florian Nolden  wrote:

> Thanks, Fredy for your great help. Setting the Banner and PrintMotd
> options on all 3 nodes helped me to succeed with the installation.
>

Thanks a lot for the report!


> Am Fr., 14. Feb. 2020 um 16:23 Uhr schrieb Fredy Sanchez <
> fredy.sanc...@modmed.com>:
>
>> Banner none
>> PrintMotd no
>>
>> # systemctl restart sshd
>>
>
> That should be fixed in the ovirt-node images.
>

I think I agree. Would you like to open a bug about this?

I wonder what we can/should do with EL7 hosts (non-ovirt-node).
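
(On an EL7 host the manual workaround would presumably be the same as above --
a sketch, adjusting the stock sshd_config options:)

# sed -i -e 's/^#\?Banner .*/Banner none/' -e 's/^#\?PrintMotd .*/PrintMotd no/' /etc/ssh/sshd_config
# systemctl restart sshd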

Also need to check how 4.4 behaves - there, host-deploy was fully rewritten
using ansible. No idea how sensitive ansible is to these banners (compared
with otopi, which is very). Adding Dana.

Best regards,


>
>
>> If gluster installed successfully, you don't have to reinstall it.
>> Just run the hyperconverged install again from cockpit, and it will
>> detect the existing gluster install, and ask you if you want to re-use it;
>> re-using worked for me. Only thing I'd point out here is that gluster
>> didn't enable in my servers automagically; I had to enable it and start it
>> by hand before cockpit picked it up.
>> # systemctl enable glusterd --now
>> # systemctl status glusterd
>>
>> Gluster was running fine for me. For me that was not needed.
>
> Also,
>> # tail -f /var/log/secure
>> while the install is going will help you see if there's a problem with
>> ssh, other than the banners.
>>
>> --
>> Fredy
>>
>> On Fri, Feb 14, 2020 at 9:32 AM Florian Nolden 
>> wrote:
>>
>>>
>>> Am Fr., 14. Feb. 2020 um 12:21 Uhr schrieb Fredy Sanchez <
>>> fredy.sanc...@modmed.com>:
>>>
 Hi Florian,

>>>
 In my case, Didi's suggestions got me thinking, and I ultimately traced
 this to the ssh banners; they must be disabled. You can do this in
 sshd_config. I do think that logging could be better for this issue, and
 that the host up check should incorporate things other than ssh, even if
 just a ping. Good luck.

 Hi Fredy,
>>>
>>> thanks for the reply.
>>>
>>> I just have to uncomment "Banners none" in the /etc/ssh/sshd_config on
>>> all 3 nodes, and run redeploy in the cockpit?
>>> Or have you also reinstalled the nodes and the gluster storage?
>>>
 --
 Fredy

 On Fri, Feb 14, 2020, 4:55 AM Florian Nolden 
 wrote:

> I'also stuck with that issue.
>
> I have
> 3x  HP ProLiant DL360 G7
>
> 1x 1gbit => as control network
> 3x 1gbit => bond0 as Lan
> 2x 10gbit => bond1 as gluster network
>
> I installed on all 3 servers Ovirt Node 4.3.8
> configured the networks using cockpit.
> followed this guide for the gluster setup with cockpit:
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>
> the installed the hosted engine with cockpit ->:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": 
> {"ovirt_hosts": [{"address": "x-c01-n01.lan.xilloc.com", 
> "affinity_labels": [], "auto_numa_status": "unknown", "certificate": 
> {"organization": "lan.xilloc.com", "subject": 
> "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, 
> "devices": [], "external_network_provider_configurations": [], 
> "external_status": "ok", "hardware_information": 
> {"supported_rng_sources": []}, "hooks": [], "href": 
> "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id": 
> "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
> "kdump_status": "unknown", "ksm": {"enabled": false}, 
> "max_scheduling_memory": 0, "memory": 0, "name": 
> "x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [], 
> "numa_nodes": [], "numa_supported": false, "os": 
> {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, 
> "power_management": {"automatic_pm_enabled": true, "enabled": false, 
> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", 
> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": 
> {"fingerprint": "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", 
> "port": 22}, "statistics": [], "status": "non_responsive", 
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
> 'ovirt_host_info', and the renamed one no longer returns