[ovirt-users] engine.log flooded with "Field 'foo' can not be updated when status is 'Up'"

2019-08-09 Thread Matthias Leopold

Hi,

I updated my production oVirt environment from 4.3.3 to 4.3.5 today. 
Everything went fine so far, but there's one annoying phenomenon:


When I log into the "Administration Portal" and request the VM list 
("/ovirt-engine/webadmin/?locale=en_US#vms") engine.log is flooded with 
lines like


WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default 
task-10618) [54d8c375-aa72-42f8-876e-8777d9d1a08a] Field 
'balloonEnabled' can not be updated when status is 'Up'


"Field", task and UUID vary and the flood stops after a while. Also 
listing or trying to edit other entities seems to trigger this "storm" 
or loop over and over again to a point that log file size is becoming an 
issue and interface is becoming sluggish. I can also see that CPU usage 
of engine java process goes up. When I log out everything is quiet and 
"VM Portal" is not affected at all.


I have seen lines like that before and know that they are usually OK 
(when changing VM properties), but these logs used to be linked to 
singular events. I suspect that the present behaviour might be linked to 
VMs that have "Pending Virtual Machine changes", which are in most cases 
"Custom Compatibility Version" changes that still stem from the upgrade 
to Cluster Version 4.3. I can't be sure and I can't resolve all these 
pending changes now, but these should not be causing such annoying 
behaviour in the first place.
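
A quick way to see which fields and how many requests are involved is to count 
the WARN lines per field name. A rough sketch (the engine.log path below is just 
the default location):

  grep "can not be updated when status is 'Up'" /var/log/ovirt-engine/engine.log \
    | sed -n "s/.*Field '\([^']*\)'.*/\1/p" \
    | sort | uniq -c | sort -rn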


I resorted to setting the engine log level to "ERROR" for now to at least 
stop the log file from growing, but this is not a solution. I can still see 
the CPU load going up when using the interface. I very much hope that 
someone can explain what's happening and tell me how to resolve this.


thanks a lot
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKFQRCKHRT6NJUHF7URJTQG753MND6PJ/


[ovirt-users] Re: oVirt 4.3.5.1 failed to configure management network on the host

2019-08-09 Thread Strahil
You can't bond a bond, nor layer a teaming device on top of a bond ... There was 
an article about that.

Even if it seemed to work, it most probably won't behave as you wish.

LACP can play active-backup based on the aggregation group, but you need all 
NICs in the same LACP.

Best Regards,
Strahil Nikolov

On Aug 9, 2019 10:22, Mitja Pirih  wrote:
>
> On 08. 08. 2019 11:42, Strahil wrote: 
> > 
> > LACP works with 2 switches, but if you wish to aggregate all 
> > links - you need switch support (high-end hardware). 
> > 
> > Best Regards, 
> > Strahil Nikolov 
> > 
>
> I am aware of that. That's why my idea was to use bond1 (LACP) on eth1+2 
> on switch1 and bond2 (LACP) on eth3+4 on switch2 and then team together 
> bond1 + bond2. With this config theoretically I should get bonding 
> spanned over two switches. Technically it worked, redundancy and 
> aggregation. 
>
> The problem was deploying self-hosted engine, because the script was 
> unable to configure management network. 
>
> If I use bonding spanned over two switches as you suggest, based on 
> documentation 
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/administration_guide/sect-Bonding#Bonding_Modes
>  
> my options are: 
> - Mode 2 (XOR policy) 
> - Mode 5 (adaptive transmit load-balancing policy): no use of bridges 
> - Mode 6 (adaptive load-balancing policy): same limitation of mode 5 
>
> Basically only Mode 2 looks usable for us. 
>
>
> Regards, 
> Mitja
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CCFVROZHDJ5UPYYFIRFX7JIL2T7HBO3O/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNIEEY5GRKOJFNFVQGTTH5CPT7UNMORS/


[ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

2019-08-09 Thread Vrgotic, Marko
Hey Shani,

Thank you for the reply.
Sure, I will attach the full logs asap.
What do you mean by “flow you are doing”?

Kindly awaiting your reply.

Marko Vrgotic

From: Shani Leviim 
Date: Thursday, 8 August 2019 at 00:01
To: "Vrgotic, Marko" 
Cc: "users@ovirt.org" 
Subject: Re: [ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

Hi,
Can you please clarify the flow you're doing?
Also, can you please attach full vdsm and engine logs?

Regards,
Shani Leviim


On Thu, Aug 8, 2019 at 6:25 AM Vrgotic, Marko wrote:
Log lines from VDSM:

“[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
2019-08-07 09:40:03,556-0700 WARN  (check/loop) [storage.check] Checker 
u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
 is blocked for 20.00 seconds (check:282)
2019-08-07 09:40:47,132-0700 WARN  (monitor/bda9727) [storage.Monitor] Host id 
for domain bda97276-a399-448f-9113-017972f6b55a was released (id: 5) 
(monitor:445)
2019-08-07 09:44:53,564-0700 WARN  (check/loop) [storage.check] Checker 
u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
 is blocked for 20.00 seconds (check:282)
2019-08-07 09:46:38,604-0700 WARN  (monitor/bda9727) [storage.Monitor] Host id 
for domain bda97276-a399-448f-9113-017972f6b55a was released (id: 5) 
(monitor:445)”
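
A rough way to reproduce what that checker measures is to time a direct read of 
the domain metadata file on the host (sketch only, reusing the path from the log 
above; a read that regularly takes more than a few seconds would explain the WARNs):

  time dd if=/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata \
          of=/dev/null bs=4096 count=1 iflag=direct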



From: "Vrgotic, Marko" 
Date: Wednesday, 7 August 2019 at 09:09
To: "users@ovirt.org" 
Subject: oVirt 4.3.5 potential issue with NFS storage

Dear oVirt,

This is my third oVirt platform in the company, but it is the first time I am 
seeing the following logs:

“2019-08-07 16:00:16,099Z INFO  
[org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]', 
sharedLocks=''}'
2019-08-07 16:00:25,618Z WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37723) [] domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem 
'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,630Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37735) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. 
vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,652Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. 
vds: 'ovirt-sj-01.ictv.com'
2019-08-07 16:00:40,652Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from 
problem. No active host in the DC is reporting it as problematic, so clearing 
the domain recovery timer.”

Can you help me understand why this is being reported?

This setup is:

5 hosts, 3 in HA
Self-Hosted Engine
Version 4.3.5
NFS-based NetApp storage, NFS version 4.1
“10.210.13.64:/ovirt_hosted_engine on 
/rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)

10.210.13.64:/ovirt_production on 
/rhev/data-center/mnt/10.210.13.64:_ovirt__production type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
tmpfs on /run/user/0 type tmpfs 
(rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”

The first mount is the SHE dedicated storage.
The second mount, “ovirt_production”, is for the other VM guests.
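
While the WARNs are appearing, per-mount NFS round-trip times can be watched 
with nfsiostat from nfs-utils (a sketch, using the production mount point from 
the output above and a 5-second interval):

  nfsiostat 5 /rhev/data-center/mnt/10.210.13.64:_ovirt__production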

Kindly awaiting your reply.

Marko Vrgotic
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICRKHD3GXTPQEZN2T6LJBS6YIVLER6TP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Does cluster upgrade wait for heal before proceeding to next host?

2019-08-09 Thread Laura Wright
I'd be happy to take a look at the process from a UX perspective. Would
anyone be able to document a series of screenshots or a video of the
end-to-end experience of it?

On Fri, Aug 9, 2019 at 6:11 AM Martin Perina  wrote:

>
>
> On Thu, Aug 8, 2019 at 10:25 AM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Tue, Aug 6, 2019 at 23:17 Jayme  wrote:
>>
>>> I’m aware of the heal process but it’s unclear to me if the update
>>> continues to run while the volumes are healing and resumes when they are
>>> done. There doesn’t seem to be any indication in the ui (unless I’m
>>> mistaken)
>>>
>>
>> Adding @Martin Perina, @Sahina Bose and @Laura Wright on this;
>> hyperconverged deployments using the cluster upgrade command would probably
>> need some improvement.
>>
>
> The cluster upgrade process continues to the 2nd host after the 1st host
> becomes Up. If the 2nd host then fails to switch to maintenance, we stop the
> upgrade process to prevent breakage.
> Sahina, is the gluster healing status exposed in the REST API? If so, does
> it make sense to wait for healing to finish before trying to move the
> next host to maintenance? Or any other ideas on how to improve this?
>
>>
>>
>>
>>>
>>> On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane  wrote:
>>>
 Hello,

 Often(?), updates to a hypervisor that also has (provides) a Gluster
 brick takes the hypervisor offline (updates often require a reboot).

 This reboot then makes the brick "out of sync" and it has to be
 resync'd.

 I find it a "feature" that another host that is also part of a gluster
 domain cannot be updated (rebooted) before all the bricks are updated,
 in order to guarantee there is no data loss. It is called Quorum, or?

 Always let the heal process end. Then the next update can start.
 For me there is ALWAYS a healing time before Gluster is happy again.

 Cheers,

 Robert O'Kane


 On 06.08.2019 at 16:38, Shani Leviim wrote:
 > Hi Jayme,
 > I can't recall such a healing time.
 > Can you please retry and attach the engine & vdsm logs so we'll be
 smarter?
 >
 > Regards,
 > Shani Leviim
 >
 >
 > On Tue, Aug 6, 2019 at 5:24 PM Jayme wrote:
 > I've yet to have cluster upgrade finish updating my three host HCI
 > cluster.  The most recent try was today moving from oVirt 4.3.3 to
 > 4.3.5.5.  The first host updates normally, but when it moves on to
 > the second host it fails to put it in maintenance and the cluster
 > upgrade stops.
 >
 > I suspect this is due to the fact that after my hosts are updated
 > it takes 10 minutes or more for all volumes to sync/heal.  I have
 > 2Tb SSDs.
 >
 > Does the cluster upgrade process take heal time into account
 before
 > attempting to place the next host in maintenance to upgrade it? Or
 > is there something else that may be at fault here, or perhaps a
 > reason why the heal process takes 10 minutes after reboot to
 complete?
 > ___
 > Users mailing list -- users@ovirt.org 
 > To unsubscribe send an email to users-le...@ovirt.org
 > 
 > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 > oVirt Code of Conduct:
 > https://www.ovirt.org/community/about/community-guidelines/
 > List Archives:
 >
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/5XM3QB3364ZYIPAKY4KTTOSJZMCWHUPD/
 >
 >
 > ___
 > Users mailing list -- users@ovirt.org
 > To unsubscribe send an email to users-le...@ovirt.org
 > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 > oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 > List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBX3L23MWGMTF7Q4KGVR63RIQZFYXGWK/
 >

 --
 Systems Administrator
 Kunsthochschule für Medien Köln
 Peter-Welter-Platz 2
 50676 Köln
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBAHFFFTDOI7LHAH5AVI5OPUQUQTABWM/

>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:

[ovirt-users] Re: Does cluster upgrade wait for heal before proceeding to next host?

2019-08-09 Thread Martin Perina
On Thu, Aug 8, 2019 at 10:25 AM Sandro Bonazzola 
wrote:

>
>
> On Tue, Aug 6, 2019 at 23:17 Jayme  wrote:
>
>> I’m aware of the heal process but it’s unclear to me if the update
>> continues to run while the volumes are healing and resumes when they are
>> done. There doesn’t seem to be any indication in the ui (unless I’m
>> mistaken)
>>
>
> Adding @Martin Perina, @Sahina Bose and @Laura Wright on this;
> hyperconverged deployments using the cluster upgrade command would probably
> need some improvement.
>

The cluster upgrade process continues to the 2nd host after the 1st host
becomes Up. If the 2nd host then fails to switch to maintenance, we stop the
upgrade process to prevent breakage.
Sahina, is the gluster healing status exposed in the REST API? If so, does
it make sense to wait for healing to finish before trying to move the
next host to maintenance? Or any other ideas on how to improve this?
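
As a rough sketch of what such a wait could look like on a host (the volume name
"vmstore" is only an example, and it assumes a gluster release that supports
"heal info summary"):

  # keep waiting while any brick still reports entries to heal
  until gluster volume heal vmstore info summary \
        | awk -F: '/Total Number of entries/ {s+=$2} END {exit (s>0)}'
  do
      echo "heal still in progress, waiting..."
      sleep 60
  done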

>
>
>
>>
>> On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane  wrote:
>>
>>> Hello,
>>>
>>> Often(?), updates to a hypervisor that also has (provides) a Gluster
>>> brick takes the hypervisor offline (updates often require a reboot).
>>>
>>> This reboot then makes the brick "out of sync" and it has to be resync'd.
>>>
>>> I find it a "feature" that another host that is also part of a gluster
>>> domain cannot be updated (rebooted) before all the bricks are updated,
>>> in order to guarantee there is no data loss. It is called Quorum, or?
>>>
>>> Always let the heal process end. Then the next update can start.
>>> For me there is ALWAYS a healing time before Gluster is happy again.
>>>
>>> Cheers,
>>>
>>> Robert O'Kane
>>>
>>>
>>> On 06.08.2019 at 16:38, Shani Leviim wrote:
>>> > Hi Jayme,
>>> > I can't recall such a healing time.
>>> > Can you please retry and attach the engine & vdsm logs so we'll be
>>> smarter?
>>> >
>>> > Regards,
>>> > Shani Leviim
>>> >
>>> >
>>> > On Tue, Aug 6, 2019 at 5:24 PM Jayme wrote:
>>> > I've yet to have cluster upgrade finish updating my three host HCI
>>> > cluster.  The most recent try was today moving from oVirt 4.3.3 to
>>> > 4.3.5.5.  The first host updates normally, but when it moves on to
>>> > the second host it fails to put it in maintenance and the cluster
>>> > upgrade stops.
>>> >
>>> > I suspect this is due to the fact that after my hosts are updated
>>> > it takes 10 minutes or more for all volumes to sync/heal.  I have
>>> > 2Tb SSDs.
>>> >
>>> > Does the cluster upgrade process take heal time into account
>>> before
>>> > attempting to place the next host in maintenance to upgrade it? Or
>>> > is there something else that may be at fault here, or perhaps a
>>> > reason why the heal process takes 10 minutes after reboot to
>>> complete?
>>> > ___
>>> > Users mailing list -- users@ovirt.org 
>>> > To unsubscribe send an email to users-le...@ovirt.org
>>> > 
>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> > oVirt Code of Conduct:
>>> > https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives:
>>> >
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5XM3QB3364ZYIPAKY4KTTOSJZMCWHUPD/
>>> >
>>> >
>>> > ___
>>> > Users mailing list -- users@ovirt.org
>>> > To unsubscribe send an email to users-le...@ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> > oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBX3L23MWGMTF7Q4KGVR63RIQZFYXGWK/
>>> >
>>>
>>> --
>>> Systems Administrator
>>> Kunsthochschule für Medien Köln
>>> Peter-Welter-Platz 2
>>> 50676 Köln
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBAHFFFTDOI7LHAH5AVI5OPUQUQTABWM/
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/T27ROHWZPJL475HBHTFDGRBSYHJMWYDR/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat 

[ovirt-users] Re: RFE: Add the ability to the engine to serve as a fencing proxy

2019-08-09 Thread Martin Perina
On Thu, Aug 8, 2019 at 8:04 PM Strahil  wrote:

> I think poison pill-based  fencing is easier  to implement but it requires
> either  Network-based  (iSCSI or NFS)  or FC-based  shared  storage.
>
> It is used  in corosync/pacemaker clusters and is easier to implement.
>

Corosync/pacemaker uses a completely different way of performing fencing, and
this is not applicable to oVirt.
But oVirt also uses shared storage information (we call it storage leases),
which can detect that a host is still running and only the connection between
the engine and the host is broken. For details about VM leases please take a look:

https://ovirt.org/documentation/vmm-guide/chap-Administrative_Tasks.html#configuring-a-highly-available-virtual-machine
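
For reference, such a lease can also be set through the REST API; a minimal
sketch (engine URL, credentials and UUIDs below are placeholders):

  curl -k -u 'admin@internal:PASSWORD' -X PUT \
       -H 'Content-Type: application/xml' \
       -d '<vm><lease><storage_domain id="STORAGE_DOMAIN_UUID"/></lease></vm>' \
       'https://engine.example.com/ovirt-engine/api/vms/VM_UUID'
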

> Best Regards,
> Strahil Nikolov
> On Aug 8, 2019 11:29, Sandro Bonazzola  wrote:
>
>
>
> On Fri, Aug 2, 2019 at 10:50 Sandro E  wrote:
>
> Hi,
>
> I hope that this reaches the right people. I found an RFE (Bug 1373957) which
> would be a really nice feature for my company, as we have to request firewall
> rules for every new host and this ends up in a lot of mess and work. Is
> there any chance that this RFE gets implemented?
>
>
You can specify custom firewalld rules, which are applied during host
installation/reinstallation:

https://ovirt.org/documentation/admin-guide/chap-Hosts.html#configuring-host-firewall-rules

So is there anything you are missing?

>
> Thanks for any help or tips
>
>
> This RFE has been filed in 2016 and didn't get much interest so far. Can
> you elaborate a bit on the user story for this?
>
>
>
>
>
> BR,
> Sandro
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UP7NZWXZBNHM7B7MNY5NMCAUK6UBPXXD/
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
>
>

-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N7BXHMFXSFMSUOEZK66POQOIU63TMCPL/


[ovirt-users] Re: VDSM ovirt-node-2 command Get Host Capabilities failed: Internal JSON-RPC error: {'reason': "invalid argument: KVM is not supported by '/usr/libexec/qemu-kvm' on this host"}

2019-08-09 Thread Sandro Bonazzola
On Fri, Aug 9, 2019 at 04:34,  wrote:

> The version of ovirt-engine  is 4.3.5.5-1.el7
>
>
> The version of ovirt-node-2  is 4.3.5.2-1.el7
>
> When I add ovirt-node-2 to ovirt-engine, it reports:
> VDSM ovirt-node-2 command Get Host Capabilities failed: Internal JSON-RPC
> error: {'reason': "invalid argument: KVM is not supported by
> '/usr/libexec/qemu-kvm' on this host"}
>
> What's the root cause of this problem? How can I solve this problem?
>

Can you please share a sos report of the ovirt-node-2 host?
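
In the meantime, a few generic host-side checks usually narrow this down
(the commands below are a sketch, not taken from the node itself):

  grep -cE 'vmx|svm' /proc/cpuinfo   # >0 means the CPU exposes HW virtualization
  lsmod | grep kvm                   # kvm_intel or kvm_amd should be loaded
  ls -l /dev/kvm                     # the device node must exist
  virt-host-validate qemu            # libvirt's own capability check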



> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/O4FTJ7F5JVMMJB622WZWE5FAU2UYBJG4/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VUHVM3AATJ35FKPU7AVPIYXF4HJDT63X/


[ovirt-users] Re: oVirt 4.3.5.1 failed to configure management network on the host

2019-08-09 Thread Mitja Pirih
On 08. 08. 2019 11:42, Strahil wrote:
>
> LACP works with 2 switches, but if you wish to aggregate all
> links - you need switch support (high-end hardware).
>
> Best Regards,
> Strahil Nikolov
>

I am aware of that. That's why my idea was to use bond1 (LACP) on eth1+2
on switch1 and bond2 (LACP) on eth3+4 on switch2 and then team together
bond1 + bond2. With this config theoretically I should get bonding
spanned over two switches. Technically it worked, redundancy and
aggregation.
 
The problem was deploying self-hosted engine, because the script was
unable to configure management network.

If I use bonding spanned over two switches as you suggest, based on
documentation
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/administration_guide/sect-Bonding#Bonding_Modes
my options are:
- Mode 2 (XOR policy)
- Mode 5 (adaptive transmit load-balancing policy): no use of bridges
- Mode 6 (adaptive load-balancing policy): same limitation of mode 5

Basically only Mode 2 looks usable for us.
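
For completeness, a mode 2 setup over all four NICs could be sketched with nmcli
roughly like this (interface names as above, everything else is an assumption,
and the switch-side configuration still needs to be verified):

  # one balance-xor bond with two legs per switch
  nmcli con add type bond ifname bond0 con-name bond0 \
        bond.options "mode=balance-xor,xmit_hash_policy=layer2+3,miimon=100"
  nmcli con add type ethernet ifname eth1 master bond0 con-name bond0-eth1
  nmcli con add type ethernet ifname eth2 master bond0 con-name bond0-eth2
  nmcli con add type ethernet ifname eth3 master bond0 con-name bond0-eth3
  nmcli con add type ethernet ifname eth4 master bond0 con-name bond0-eth4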


Regards,
Mitja
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CCFVROZHDJ5UPYYFIRFX7JIL2T7HBO3O/