[ovirt-users] 4.4 -> 4.3

2021-03-24 Thread KSNull Zero
Hi, 
What is the simplest way to migrate VMs back from oVirt 4.4 to oVirt 4.3?
Is it possible to use an export domain, or are there other ways?
Thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NTCSHYAGWZXOJSAMFYHSZAKU4WU2IOTL/


[ovirt-users] Re: Problem to create provider

2021-03-24 Thread Ales Musil
On Wed, Mar 24, 2021 at 3:27 PM  wrote:

> Funny though there are no certificates at
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
>
> I guess need to reinstall ovirt engine from scratch, correct?
>

It might be an option, but you can try to fix it manually first.

The basic config is usually not changed by the setup, so you can try replacing
it with the default:
https://github.com/oVirt/ovirt-provider-ovn/blob/master/provider/ovirt-provider-ovn.conf
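For reference, the setup-generated override in /etc/ovirt-provider-ovn/conf.d/
normally just points the provider at the engine's PKI. A hedged sketch of what
the SSL part typically looks like (paths and key names here are illustrative,
not copied from a live system; compare them against the default config linked
above):

```ini
[SSL]
https-enabled=true
ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem
```

If those files are missing, re-running engine-setup is usually enough to
regenerate the provider configuration, without reinstalling the engine from
scratch.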




-- 

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com  IM: amusil



[ovirt-users] Hosted-Engine vs Standalone Engine

2021-03-24 Thread Ian Easter
Hello Folks,

I have had to install a Hosted-Engine a few times in my environment.  There
have been some hardware issues and power issues that left the HE
unrecoverable.

In this situation, would the Standalone Engine install be more viable and
less prone to become inoperable due to the previous issues?

My assumption would be to have a head baremetal server run the Engine to
control and maintain my blades.

*Thank you,*
*Ian*


[ovirt-users] Re: websockify + ovirt

2021-03-24 Thread Pascal D
Finally got it. I needed to force --ssl-version=tls1.2. I will write a summary
of my findings for anyone interested.


[ovirt-users] Re: websockify + ovirt

2021-03-24 Thread Pascal D
I think part of my misunderstanding is that ovirt-websocket-proxy does a few
things behind the scenes, as it is not the source of the initial connection to
oVirt.

I am going another route. My proxies are servers which obtain the console.vv
file from oVirt when they are alerted that someone wants to open a web view to
a particular VM. At that point the proxy requests the console.vv file from
oVirt using the REST API, then creates a websockify process on a random port,
which it sends back to the requesting app via another secure channel. The
receiving app then launches a browser tab connecting the web SPICE client to
the address of the web proxy and the port.
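An untested sketch of that flow (the file contents and helper names here are
hypothetical; console.vv is an INI file, so the standard library can parse it):

```python
import configparser

# Hypothetical console.vv content, as returned by the oVirt REST API
SAMPLE_VV = """\
[virt-viewer]
type=spice
host=203.0.113.10
port=-1
tls-port=5901
password=abc123
host-subject=O=example.com,CN=ovirt-host.example.com
"""

def parse_console_vv(text):
    """Pull the SPICE endpoint details out of a console.vv file."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    viewer = cp["virt-viewer"]
    return {
        "host": viewer.get("host"),
        "tls_port": viewer.getint("tls-port"),
        "host_subject": viewer.get("host-subject"),
    }

def websockify_target(details):
    """Target address to hand to the websockify process for this session."""
    return "{host}:{tls_port}".format(**details)
```

The proxy would then spawn websockify on a free local port with this target and
return that port to the requesting app.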

The HTTP connection is encrypted using a Let's Encrypt certificate, and that is
working fine. The part I am having difficulty with is the connection between
the web proxy and the oVirt host. oVirt expects it to be encapsulated in
TLS 1.2 if I am not mistaken, but I can't figure out how to make websockify use
the cafile, ssl-ciphers and host-subject to do so. I am missing a part which I
think should be simple for someone who understands SSL better than I do.
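For illustration, here is roughly what pinning the connection to TLS 1.2 means
in Python's ssl module. This is a sketch only, not websockify's actual
implementation; cafile and ciphers stand in for the values that would come from
the engine CA and the .vv file:

```python
import ssl

def tls12_client_context(cafile=None, ciphers=None):
    """Client-side TLS context pinned to TLS 1.2, roughly what forcing
    --ssl-version=tls1.2 in websockify asks for on the target side."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    if cafile:
        # Verify the host against the engine CA (e.g. its ca.pem)
        ctx.load_verify_locations(cafile=cafile)
    else:
        # No CA supplied in this sketch: disable verification (testing only!)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    if ciphers:
        ctx.set_ciphers(ciphers)
    return ctx
```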

Thanks


[ovirt-users] VM listed as unmanaged external

2021-03-24 Thread miguel . garcia
We have a VM that got renamed with the prefix "external-", and it is shown as
unmanaged in the resources view.

Is there a way to turn it back into a managed VM and roll it back to how it was
before?


[ovirt-users] Re: Problem to create provider

2021-03-24 Thread miguel . garcia
Funny though, there are no certificates at
/etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf.

I guess I need to reinstall the oVirt engine from scratch, correct?


[ovirt-users] Re: Ovirt Node (iso) - OK to enable Centos-BaseOS/Centos-Appstream repos ?

2021-03-24 Thread jb


Am 23.03.21 um 12:45 schrieb Vojtech Juranek:

On Tuesday, 23 March 2021 11:56:26 CET morgan cox wrote:

Hi.

I have installed Ovirt nodes via the ovirt-node iso (centos8 based) - on a
fresh install the standard CentOS repos are disabled (the ovirt 4-4  repo
is enabled)
  

As part of our company hardening we need to install a few packages from the
Centos repos.
  

Can I enable the CentOS-Linux-AppStream.repo + CentOS-Linux-BaseOS.repo
repos or will this cause issues when we update the node ?

AFAIK it shouldn't break anything. oVirt repos have newer versions, so
anything required by oVirt should be installed from the oVirt repo during upgrade.

In my experience, after installing an upgrade, everything I installed from
those repos disappears.
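A hedged explanation for that: oVirt Node is an image-based distribution, so a
node upgrade ships a whole new image layer, and extra RPMs installed on the old
layer do not carry over; they have to be reinstalled (or otherwise persisted)
after each node upgrade. Enabling the repos themselves is just the usual edit
(excerpt shown for illustration; check the actual repo file on your node):

```ini
# /etc/yum.repos.d/CentOS-Linux-BaseOS.repo (excerpt)
[baseos]
name=CentOS Linux $releasever - BaseOS
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
gpgcheck=1
enabled=1
```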



[ovirt-users] Re: Ovirt Node (iso) - OK to enable Centos-BaseOS/Centos-Appstream repos ?

2021-03-24 Thread morgan cox
Thank you for confirming.


[ovirt-users] Re: supervdsm failing during network_caps

2021-03-24 Thread Ales Musil
On Wed, Mar 24, 2021 at 1:24 PM Alan G  wrote:

> Looking back in the logs, in fact the first error we get is Out of memory.
> So it seems we're hitting
> https://bugzilla.redhat.com/show_bug.cgi?id=1623851
>
> It's not clear from the ticket. Is there an explicit fix for this in 4.4,
> or did the problem just kind of go away?
>

If it is the described issue, the problem seems to go away in 4.4. The
reason might be a newer kernel and libnl3.




>
>
>
>  On Wed, 24 Mar 2021 11:18:57 + *Alan G  >* wrote 
>
> Hi,
>
> I sent this a while back and never got a response. We've since upgraded to
> 4.3 and the issue persists.
>
> 2021-03-24 10:53:48,934+ ERROR (periodic/2) [virt.periodic.Operation]
>  operation failed
> (periodic:188)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py", line 186,
> in __call__
> self._func()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py", line 481,
> in __call__
> stats = hostapi.get_stats(self._cif, self._samples.stats())
>   File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 50, in
> get_stats
> decStats = stats.produce(first_sample, last_sample)
>   File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line 72, in
> produce
> stats.update(get_interfaces_stats())
>   File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line 154, in
> get_interfaces_stats
> return net_api.network_stats()
>   File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 63, in
> network_stats
> return netstats.report()
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netstats.py", line
> 32, in report
> stats = link_stats.report()
>   File "/usr/lib/python2.7/site-packages/vdsm/network/link/stats.py", line
> 34, in report
> for iface_properties in iface.list():
>   File "/usr/lib/python2.7/site-packages/vdsm/network/link/iface.py", line
> 257, in list
> for properties in itertools.chain(link.iter_links(), dpdk_links):
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/link.py",
> line 47, in iter_links
> with _nl_link_cache(sock) as cache:
>   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
> return self.gen.next()
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", line
> 108, in _cache_manager
> cache = cache_allocator(sock)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/link.py",
> line 157, in _rtnl_link_alloc_cache
> return libnl.rtnl_link_alloc_cache(socket, AF_UNSPEC)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py",
> line 578, in rtnl_link_alloc_cache
> raise IOError(-err, nl_geterror(err))
> IOError: [Errno 16] Message sequence number mismatch
>
> This occurs on both nodes in the cluster. A restart of vdsm/supervdsm will
> sort it for a while, but within 24 hours it occurs again. We run a number
> of clusters and it only occurs on one so must be some specific corner case
> we're triggering.
>
> I can find almost no information on this. The best I could find was this
> https://linuxlizard.com/2020/10/18/message-sequence-number-mismatch-in-libnl/
> which details a sequence number issue. I'm guessing I'm experiencing the
> same issue in that the nl sequence numbers are getting out of sync and
> closing/re-opening the nl socket (aka restart vdsm) is the only way to
> resolve.
>
> I've completely hit a brick wall with it. We've had to disable fencing on
> both nodes as sometimes they get erroneously fenced when vdsm stops
> functioning correctly. At this point I'm thinking about replacing the servers
> with different models in case it's something in the NIC drivers...
>
> Alan
>
>
>  On Mon, 06 Jan 2020 10:54:52 + *Alan G  >* wrote 
>
> Hi,
>
> I have issues with one host where supervdsm is failing in network_caps.
>
> I see the following trace in the log.
>
> MainProcess|jsonrpc/1::ERROR::2020-01-06
> 03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> Error in network_caps
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 98, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 56, in
> network_caps
> return netswitch.configurator.netcaps(compatibility=30600)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py",
> line 317, in netcaps
> net_caps = netinfo(compatibility=compatibility)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py",
> line 325, in netinfo
> _netinfo = netinfo_get(vdsmnets, compatibility)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py",
> line 150, in get
> return _stringify_mtus(_get(vdsmnets))
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py",
> line 59, in _get
> ipaddrs = getIpAddrs()
>   File
> "/usr/lib/python2.

[ovirt-users] Re: supervdsm failing during network_caps

2021-03-24 Thread Alan G
Looking back in the logs, in fact the first error we get is Out of memory. So 
it seems we're hitting https://bugzilla.redhat.com/show_bug.cgi?id=1623851



It's not clear from the ticket. Is there an explicit fix for this in 4.4, or
did the problem just kind of go away?







 On Wed, 24 Mar 2021 11:18:57 + Alan G  wrote 




Hi,



I sent this a while back and never got a response. We've since upgraded to 4.3
and the issue persists.



2021-03-24 10:53:48,934+ ERROR (periodic/2) [virt.periodic.Operation] 
 operation failed 
(periodic:188)

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py", line 186, in 
__call__

    self._func()

  File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py", line 481, in 
__call__

    stats = hostapi.get_stats(self._cif, self._samples.stats())

  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 50, in 
get_stats

    decStats = stats.produce(first_sample, last_sample)

  File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line 72, in 
produce

    stats.update(get_interfaces_stats())

  File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line 154, in 
get_interfaces_stats

    return net_api.network_stats()

  File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 63, in 
network_stats

    return netstats.report()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netstats.py", line 32, in 
report

    stats = link_stats.report()

  File "/usr/lib/python2.7/site-packages/vdsm/network/link/stats.py", line 34, 
in report

    for iface_properties in iface.list():

  File "/usr/lib/python2.7/site-packages/vdsm/network/link/iface.py", line 257, 
in list

    for properties in itertools.chain(link.iter_links(), dpdk_links):

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/link.py", line 
47, in iter_links

    with _nl_link_cache(sock) as cache:

  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__

    return self.gen.next()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", 
line 108, in _cache_manager

    cache = cache_allocator(sock)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/link.py", line 
157, in _rtnl_link_alloc_cache

    return libnl.rtnl_link_alloc_cache(socket, AF_UNSPEC)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py", line 
578, in rtnl_link_alloc_cache

    raise IOError(-err, nl_geterror(err))

IOError: [Errno 16] Message sequence number mismatch



This occurs on both nodes in the cluster. A restart of vdsm/supervdsm will sort 
it for a while, but within 24 hours it occurs again. We run a number of 
clusters and it only occurs on one so must be some specific corner case we're 
triggering.



I can find almost no information on this. The best I could find was this 
https://linuxlizard.com/2020/10/18/message-sequence-number-mismatch-in-libnl/ 
which details a sequence number issue. I'm guessing I'm experiencing the same 
issue in that the nl sequence numbers are getting out of sync and 
closing/re-opening the nl socket (aka restart vdsm) is the only way to resolve.
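Until the root cause is fixed, the restart workaround can at least be
automated. A minimal watchdog sketch; the journalctl/systemctl wiring around it
is assumed, not shown:

```python
ERROR_MARKER = "Message sequence number mismatch"

def should_restart_vdsm(log_text, threshold=3):
    """Return True once the libnl error has repeated `threshold` times.

    In practice log_text would come from something like
    `journalctl -u vdsmd --since -10min`, and a True result would trigger
    `systemctl restart supervdsmd vdsmd`.
    """
    return log_text.count(ERROR_MARKER) >= threshold
```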



I've completely hit a brick wall with it. We've had to disable fencing on both
nodes as sometimes they get erroneously fenced when vdsm stops functioning
correctly. At this point I'm thinking about replacing the servers with
different models in case it's something in the NIC drivers...



Alan





 On Mon, 06 Jan 2020 10:54:52 + Alan G  
wrote 



Hi,



I have issues with one host where supervdsm is failing in network_caps.



I see the following trace in the log.



MainProcess|jsonrpc/1::ERROR::2020-01-06 
03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) Error 
in network_caps

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 98, in 
wrapper

    res = func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 56, in 
network_caps

    return netswitch.configurator.netcaps(compatibility=30600)

  File 
"/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 
317, in netcaps

    net_caps = netinfo(compatibility=compatibility)

  File 
"/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 
325, in netinfo

    _netinfo = netinfo_get(vdsmnets, compatibility)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 
150, in get

    return _stringify_mtus(_get(vdsmnets))

  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 
59, in _get

    ipaddrs = getIpAddrs()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/addresses.py", 
line 72, in getIpAddrs

    for addr in nl_addr.iter_addrs():

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/addr.py", line 
33, in iter_addrs

    with _nl_addr_cache(sock) as addr

[ovirt-users] Re: [ANN] Async release for oVirt 4.4.5

2021-03-24 Thread Sandro Bonazzola
Il giorno mar 23 mar 2021 alle ore 19:17 Sandro Bonazzola <
sbona...@redhat.com> ha scritto:

> On March 23rd 2021 the oVirt project released an async update to the
> following packages:
>
> - ovirt-ansible-collection-1.4.1
> - vdsm-4.40.50.9
> - ovirt-engine-4.4.5.11
> - ovirt-release44-4.4.5.1
> - ovirt-engine-appliance-4.4-20210323171213.1
>
> oVirt Node is still building, will follow tomorrow.
>
>
oVirt node update has been published.



>
> Fixing the following bugs:
>
> - Bug 1940438 - Revoking a token using ovirt_auth module fails
>   hosted_engine_setup ansible role
> - Bug 1941311 - Live merge after extend disk fails - 'Vm' object has no
>   attribute 'refreshDriveVolume'
> - Bug 1940448 - Upgrade to 4.4.5 fails schema upgrade if user_profiles table
>   contains duplicate entries
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-users] Re: supervdsm failing during network_caps

2021-03-24 Thread Alan G
Hi,



I sent this a while back and never got a response. We've since upgraded to 4.3
and the issue persists.



2021-03-24 10:53:48,934+ ERROR (periodic/2) [virt.periodic.Operation] 
 operation failed 
(periodic:188)

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py", line 186, in 
__call__

    self._func()

  File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py", line 481, in 
__call__

    stats = hostapi.get_stats(self._cif, self._samples.stats())

  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 50, in 
get_stats

    decStats = stats.produce(first_sample, last_sample)

  File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line 72, in 
produce

    stats.update(get_interfaces_stats())

  File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line 154, in 
get_interfaces_stats

    return net_api.network_stats()

  File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 63, in 
network_stats

    return netstats.report()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netstats.py", line 32, in 
report

    stats = link_stats.report()

  File "/usr/lib/python2.7/site-packages/vdsm/network/link/stats.py", line 34, 
in report

    for iface_properties in iface.list():

  File "/usr/lib/python2.7/site-packages/vdsm/network/link/iface.py", line 257, 
in list

    for properties in itertools.chain(link.iter_links(), dpdk_links):

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/link.py", line 
47, in iter_links

    with _nl_link_cache(sock) as cache:

  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__

    return self.gen.next()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", 
line 108, in _cache_manager

    cache = cache_allocator(sock)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/link.py", line 
157, in _rtnl_link_alloc_cache

    return libnl.rtnl_link_alloc_cache(socket, AF_UNSPEC)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py", line 
578, in rtnl_link_alloc_cache

    raise IOError(-err, nl_geterror(err))

IOError: [Errno 16] Message sequence number mismatch


This occurs on both nodes in the cluster. A restart of vdsm/supervdsm will sort 
it for a while, but within 24 hours it occurs again. We run a number of 
clusters and it only occurs on one so must be some specific corner case we're 
triggering.

I can find almost no information on this. The best I could find was this 
https://linuxlizard.com/2020/10/18/message-sequence-number-mismatch-in-libnl/ 
which details a sequence number issue. I'm guessing I'm experiencing the same 
issue in that the nl sequence numbers are getting out of sync and 
closing/re-opening the nl socket (aka restart vdsm) is the only way to resolve.

I've completely hit a brick wall with it. We've had to disable fencing on both
nodes as sometimes they get erroneously fenced when vdsm stops functioning
correctly. At this point I'm thinking about replacing the servers with
different models in case it's something in the NIC drivers...

Alan



 On Mon, 06 Jan 2020 10:54:52 + Alan G  
wrote 


Hi,



I have issues with one host where supervdsm is failing in network_caps.



I see the following trace in the log.



MainProcess|jsonrpc/1::ERROR::2020-01-06 
03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) Error 
in network_caps

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 98, in 
wrapper

    res = func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 56, in 
network_caps

    return netswitch.configurator.netcaps(compatibility=30600)

  File 
"/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 
317, in netcaps

    net_caps = netinfo(compatibility=compatibility)

  File 
"/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 
325, in netinfo

    _netinfo = netinfo_get(vdsmnets, compatibility)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 
150, in get

    return _stringify_mtus(_get(vdsmnets))

  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 
59, in _get

    ipaddrs = getIpAddrs()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/addresses.py", 
line 72, in getIpAddrs

    for addr in nl_addr.iter_addrs():

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/addr.py", line 
33, in iter_addrs

    with _nl_addr_cache(sock) as addr_cache:

  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__

    return self.gen.next()

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", 
line 92, in _cache_manager

    cache = cache_allocator(sock)

  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py", line 
469, in rtnl_addr_alloc_cac

[ovirt-users] Restored engine backup: The provided authorization grant for the auth code has expired.

2021-03-24 Thread Nicolás

Hi,

I'm restoring a full oVirt engine backup for oVirt 4.3, taken with the
--scope=all option.


I restored the backup on a fresh CentOS 7 machine. The process went well,
but when trying to log into the restored authentication system I get the
following message, which won't allow me to log in:


  The provided authorization grant for the auth code has expired.

What does that mean and how can it be fixed?

Thanks.

Nicolás


[ovirt-users] Re: Problem to create provider

2021-03-24 Thread Ales Musil
On Tue, Mar 23, 2021 at 8:27 PM  wrote:

> We have a problem to create the first provider in the ovirt cluster by
> following next steps:
>
> Go to Administration - Provider - click Add
> Filled up fields for new provider with next information:
>
> Name: ovirt-provider-ovn
> Type: External Network Provider
> Network Plugin: oVirt Network Provider for OVN
> Provider URL: https://ovirthostname:9696
> Uncheck Read-only
>
> Check Request authentication
> username : admin@internal
> Protocol HTTPS
> Hostname ovirthostname
> API port: 35357
>
> Click Test
>
> Result:
> Test Failed (unknow error)
>
> Log file:
> 2021-03-23 15:23:50,986-04 ERROR
> [org.ovirt.engine.core.bll.provider.network.openstack.ExternalNetworkProviderProxy]
> (default task-118) [501852ef-cd76-40d0-8ef3-6b0826b3c0dd] Failed to
> communicate with external provider 'ovirt-provider-ovn' due to error
> 'ConnectException: Connection refused (Connection refused)'
> [org.ovirt.engine.core.utils.servlet.ServletUtils] (default t2021-03-23
> 15:23:50,986-04 ERROR
> [org.ovirt.engine.core.bll.GetProviderCertificateChainQuery] (default
> task-118) [501852ef-cd76-40d0-8ef3-6b0826b3c0dd] Error in encoding
> certificate: EngineException: Connection refused (Connection refused)
> (Failed with error PROVIDER_FAILURE and code 5050)
>
> I would expect it to ask for the certificate to complete the connection, but
> it does not ask for a certificate at all.
>
> Any idea how to solve this?

Hi,

it seems like the error is related to parsing the certificate.
Is the provider API reachable from the browser?

Best regards,
Ales
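To probe reachability outside the browser, the provider's Keystone-style auth
endpoint on port 35357 can be exercised directly. A sketch of building the
token request body; the v2.0 passwordCredentials shape is what
ovirt-provider-ovn's Keystone-like API is commonly shown accepting, but treat
the exact endpoint and schema as assumptions to verify against your version:

```python
import json

# Hypothetical endpoint, matching the settings from the original report
AUTH_URL = "https://ovirthostname:35357/v2.0/tokens"

def token_request_body(username, password):
    """Keystone-v2-style credentials payload for the OVN provider's
    authentication endpoint."""
    return json.dumps({
        "auth": {
            "passwordCredentials": {
                "username": username,
                "password": password,
            }
        }
    })
```

POSTing that body with Content-Type: application/json (e.g. via curl -k) should
return a token; a "Connection refused" at this step points at the
ovirt-provider-ovn service not listening, rather than at certificates.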

-- 

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com  IM: amusil
