[ovirt-users] Re: ovirt-engine unresponsive - how to rescue?

2020-04-09 Thread eevans
Do these files exist on the hosted engine? I am an oVirt newbie, but it sounds 
like file or disk corruption. 
How much actual storage space is left on the volume with the problem files? Or on 
the hosted engine disk?
Can you ssh into the hosted engine, put it in global maintenance and rerun 
engine-setup? 
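
Something along these lines might be a starting point (a rough sketch, not a 
verified procedure; the engine FQDN is a placeholder):

hosted-engine --vm-status                       # on a host: check HA state and score
hosted-engine --set-maintenance --mode=global   # on a host: enter global maintenance
ssh root@<engine-fqdn>                          # placeholder, use your engine's FQDN
df -h                                           # inside the engine VM: check free space
engine-setup                                    # inside the engine VM: rerun setup
hosted-engine --set-maintenance --mode=none     # back on a host: leave maintenance afterwards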

What happened to trigger these errors? I'm coming in a bit late to the 
conversation.


Mar 23 18:02:59 ovirt-node-01.phoelex.com supervdsmd[29409]: *failed 
>to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object 
>file:
>No
>such file or directory*
>
>c) Apr 09 08:05:13 ovirt-node-01.phoelex.com vdsm[4801]: *ERROR failed 
>to retrieve Hosted Engine HA score '[Errno 2] No such file or 
>directory'Is the Hosted Engine setup finished?*

Eric Evans
Digital Data Services LLC.
304.660.9080


-Original Message-
From: Strahil Nikolov  
Sent: Thursday, April 9, 2020 12:57 PM
To: Shareef Jalloq 
Cc: eev...@digitaldatatechs.com; Ovirt Users 
Subject: [ovirt-users] Re: ovirt-engine unresponsive - how to rescue?

On April 9, 2020 11:12:30 AM GMT+03:00, Shareef Jalloq  
wrote:
>OK, let's go through this.  I'm looking at the node that at least still 
>has some VMs running.  virsh also tells me that the HostedEngine VM is 
>running but it's unresponsive and I can't shut it down.
>
>1. All storage domains exist and are mounted.
>2. The ha_agent exists:
>
>[root@ovirt-node-01 ovirt-hosted-engine-ha]# ls /rhev/data-center/mnt/ 
>nas-01.phoelex.com\:_volume2_vmstore/a6cea67d-dbfb-45cf-a775-b4d0d47b26
>f2/
>
>dom_md  ha_agent  images  master
>
>3.  There are two links
>
>[root@ovirt-node-01 ovirt-hosted-engine-ha]# ll /rhev/data-center/mnt/ 
>nas-01.phoelex.com 
>\:_volume2_vmstore/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/ha_agent/
>
>total 8
>
>lrwxrwxrwx. 1 vdsm kvm 132 Apr  2 14:50 hosted-engine.lockspace ->
>/var/run/vdsm/storage/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/ffb90b82-42f
>e-4253-85d5-aaec8c280aaf/90e68791-0c6f-406a-89ac-e0d86c631604
>
>lrwxrwxrwx. 1 vdsm kvm 132 Apr  2 14:50 hosted-engine.metadata ->
>/var/run/vdsm/storage/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/2161aed0-725
>0-4c1d-b667-ac94f60af17e/6b818e33-f80a-48cc-a59c-bba641e027d4
>
>4. The services exist but all seem to have some sort of warning:
>
>a) Apr 08 18:10:55 ovirt-node-01.phoelex.com sanlock[1728]: *2020-04-08
>18:10:55 1744152 [36796]: s16 delta_renew long write time 10 sec*
>
>b) Mar 23 18:02:59 ovirt-node-01.phoelex.com supervdsmd[29409]: *failed 
>to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object 
>file:
>No
>such file or directory*
>
>c) Apr 09 08:05:13 ovirt-node-01.phoelex.com vdsm[4801]: *ERROR failed 
>to retrieve Hosted Engine HA score '[Errno 2] No such file or 
>directory'Is the Hosted Engine setup finished?*
>
>d)Apr 08 22:48:27 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
>22:48:27.134+: 29309: warning : qemuGetProcessInfo:1404 : cannot 
>parse process status data
>
>Apr 08 22:48:27 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
>22:48:27.134+: 29309: error : virNetDevTapInterfaceStats:764 :
>internal
>error: /proc/net/dev: Interface not found
>
>Apr 08 23:09:39 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
>23:09:39.844+: 29307: error : virNetSocketReadWire:1806 : End of 
>file while reading data: Input/output error
>
>Apr 09 01:05:26 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-09
>01:05:26.660+: 29307: error : virNetSocketReadWire:1806 : End of 
>file while reading data: Input/output error
>
>5 & 6.  The broker log is continually printing this error:
>
>MainThread::INFO::2020-04-09
>08:07:31,438::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::
>(run) ovirt-hosted-engine-ha broker 2.3.6 started
>
>MainThread::DEBUG::2020-04-09
>08:07:31,438::broker::55::ovirt_hosted_engine_ha.broker.broker.Broker::
>(run)
>Running broker
>
>MainThread::DEBUG::2020-04-09
>08:07:31,438::broker::120::ovirt_hosted_engine_ha.broker.broker.Broker:
>:(_get_monitor)
>Starting monitor
>
>MainThread::INFO::2020-04-09
>08:07:31,438::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monito
>r::(_discover_submonitors)
>Searching for submonitors in
>/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker
>
>/submonitors
>
>MainThread::INFO::2020-04-09
>08:07:31,439::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monito
>r::(_discover_submonitors)
>Loaded submonitor network
>
>MainThread::INFO::2020-04-09
>08:07:31,440::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monito
>r::(_discover_submonitors)
>Loaded submonitor cpu-load-no-engine
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monito
>r::(_discover_submonitors)
>Loaded submonitor mgmt-bridge
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monito
>r::(_discover_submonitors)
>Loaded submonitor network
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monito

[ovirt-users] Re: ovirt-engine unresponsive - how to rescue?

2020-04-09 Thread Strahil Nikolov
On April 9, 2020 11:12:30 AM GMT+03:00, Shareef Jalloq  
wrote:
>OK, let's go through this.  I'm looking at the node that at least still
>has
>some VMs running.  virsh also tells me that the HostedEngine VM is
>running
>but it's unresponsive and I can't shut it down.
>
>1. All storage domains exist and are mounted.
>2. The ha_agent exists:
>
>[root@ovirt-node-01 ovirt-hosted-engine-ha]# ls /rhev/data-center/mnt/
>nas-01.phoelex.com\:_volume2_vmstore/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/
>
>dom_md  ha_agent  images  master
>
>3.  There are two links
>
>[root@ovirt-node-01 ovirt-hosted-engine-ha]# ll /rhev/data-center/mnt/
>nas-01.phoelex.com
>\:_volume2_vmstore/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/ha_agent/
>
>total 8
>
>lrwxrwxrwx. 1 vdsm kvm 132 Apr  2 14:50 hosted-engine.lockspace ->
>/var/run/vdsm/storage/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/ffb90b82-42fe-4253-85d5-aaec8c280aaf/90e68791-0c6f-406a-89ac-e0d86c631604
>
>lrwxrwxrwx. 1 vdsm kvm 132 Apr  2 14:50 hosted-engine.metadata ->
>/var/run/vdsm/storage/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/2161aed0-7250-4c1d-b667-ac94f60af17e/6b818e33-f80a-48cc-a59c-bba641e027d4
>
>4. The services exist but all seem to have some sort of warning:
>
>a) Apr 08 18:10:55 ovirt-node-01.phoelex.com sanlock[1728]: *2020-04-08
>18:10:55 1744152 [36796]: s16 delta_renew long write time 10 sec*
>
>b) Mar 23 18:02:59 ovirt-node-01.phoelex.com supervdsmd[29409]: *failed
>to
>load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file:
>No
>such file or directory*
>
>c) Apr 09 08:05:13 ovirt-node-01.phoelex.com vdsm[4801]: *ERROR failed
>to
>retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is
>the
>Hosted Engine setup finished?*
>
>d)Apr 08 22:48:27 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
>22:48:27.134+: 29309: warning : qemuGetProcessInfo:1404 : cannot
>parse
>process status data
>
>Apr 08 22:48:27 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
>22:48:27.134+: 29309: error : virNetDevTapInterfaceStats:764 :
>internal
>error: /proc/net/dev: Interface not found
>
>Apr 08 23:09:39 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
>23:09:39.844+: 29307: error : virNetSocketReadWire:1806 : End of
>file
>while reading data: Input/output error
>
>Apr 09 01:05:26 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-09
>01:05:26.660+: 29307: error : virNetSocketReadWire:1806 : End of
>file
>while reading data: Input/output error
>
>5 & 6.  The broker log is continually printing this error:
>
>MainThread::INFO::2020-04-09
>08:07:31,438::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
>ovirt-hosted-engine-ha broker 2.3.6 started
>
>MainThread::DEBUG::2020-04-09
>08:07:31,438::broker::55::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
>Running broker
>
>MainThread::DEBUG::2020-04-09
>08:07:31,438::broker::120::ovirt_hosted_engine_ha.broker.broker.Broker::(_get_monitor)
>Starting monitor
>
>MainThread::INFO::2020-04-09
>08:07:31,438::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Searching for submonitors in
>/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker
>
>/submonitors
>
>MainThread::INFO::2020-04-09
>08:07:31,439::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor network
>
>MainThread::INFO::2020-04-09
>08:07:31,440::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor cpu-load-no-engine
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor mgmt-bridge
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor network
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor cpu-load
>
>MainThread::INFO::2020-04-09
>08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor engine-health
>
>MainThread::INFO::2020-04-09
>08:07:31,442::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor mgmt-bridge
>
>MainThread::INFO::2020-04-09
>08:07:31,442::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor cpu-load-no-engine
>
>MainThread::INFO::2020-04-09
>08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor cpu-load
>
>MainThread::INFO::2020-04-09
>08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor mem-free
>
>MainThread::INFO::2020-04-09
>08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
>Loaded submonitor storage-domain
>
>MainThread::INFO::2020-04-09

[ovirt-users] Re: Ovirt and Dell Compellent in ISCSI

2020-04-09 Thread Vinícius Ferrão
It’s the same problem all over again.

iSCSI in oVirt/RHV is broken. For years.

Reported this a while ago: https://bugzilla.redhat.com/show_bug.cgi?id=1474904

iSCSI Multipath in the engine does not mean a thing. It’s broken.

I don’t know why the oVirt team does not acknowledge this. I feel like an idiot 
always answering and inquiring about this, but I can’t understand why nobody 
takes this seriously. It’s real, it affects everyone, and it simply does not 
make sense.


Sent from my iPhone

On 9 Apr 2020, at 10:39, "dalma...@cines.fr"  wrote:


Hi Shani,
thanks for the reply.
In this case, bonding, I think, is inappropriate.
The Dell Compellent has 2 "fault domains" with different IP networks.
This is an iSCSI array with 8 front-end ports (4 per controller). The iSCSI 
network is simple: two independent switches with a single VLAN, front-end ports 
are split equally between the two switches.
And for each server one Ethernet controller is connected to each switch. So, 
bonding seems inappropriate.
(see this Dell documentation: 
https://downloads.dell.com/manuals/common/scv30x0iscsi-setup_en.pdf )
Maybe I misunderstood how iSCSI bonding works in oVirt?

Regards,
Sylvain


De: "Shani Leviim" 
À: dalma...@cines.fr
Cc: "users" 
Envoyé: Mardi 7 Avril 2020 15:41:16
Objet: Re: [ovirt-users] Ovirt and Dell Compellent in ISCSI

Hi Sylvain,
Not sure that's exactly what you're looking for, but you can define an iSCSI 
bond (iSCSI multipath) using the UI and REST API:
https://www.ovirt.org/develop/release-management/features/storage/iscsi-multipath.html

Note that this is a characteristic of the DC.

Hope it helps.

Regards,
Shani Leviim


On Wed, Apr 1, 2020 at 12:35 PM dalma...@cines.fr wrote:
hi all,
we use oVirt 4.3 on Dell R640 servers running CentOS 7.7 and a Dell Compellent 
SCv3020 storage array over iSCSI.
We use two 10Gb interfaces for the iSCSI connection on each Dell server.
If we configure the iSCSI connection directly from the web UI, we can’t specify 
the two physical Ethernet interfaces, and paths are missing (only 4 paths out of 8).
So, on the hypervisor shell we use these commands to configure the connections:
iscsiadm -m iface -I em1 --op=new   # 1st Ethernet interface
iscsiadm -m iface -I p3p1 --op=new  # 2nd Ethernet interface
iscsiadm -m discovery -t sendtargets -p xx.xx.xx.xx
iscsiadm -m node -o show
iscsiadm -m node --login
After this, in the web UI we can connect our LUN with all paths.

Also, I don’t understand how to configure multipath in the web UI. By default 
the configuration is failover:
multipath -ll :
36000d3100457e405 dm-3 COMPELNT,Compellent Vol
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 23:0:0:1 sdb 8:16   active ready running
  |- 24:0:0:1 sdd 8:48   active ready running
  |- 25:0:0:1 sdc 8:32   active ready running
  |- 26:0:0:1 sde 8:64   active ready running
  |- 31:0:0:1 sdf 8:80   active ready running
  |- 32:0:0:1 sdg 8:96   active ready running
  |- 33:0:0:1 sdh 8:112  active ready running
  |- 34:0:0:1 sdi 8:128  active ready running

I think round-robin or another configuration would perform better.

So can we make this configuration (select the physical interfaces and configure 
multipath) in the web UI, for easier maintenance and for adding other servers?

Thank you.

Sylvain.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDKK3ZW7QCWHXQL2SXHAL3EN5SHZNRM4/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5E4FUJXAOPVDFYTUB6HWK3LUDSO4A5V6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MFOLP37VWRHR3HL7LV4VZUBNEDCJ4DA2/


[ovirt-users] Re: Ovirt and Dell Compellent in ISCSI

2020-04-09 Thread dalmasso
Hi Shani,
thanks for the reply.
In this case, bonding, I think, is inappropriate.
The Dell Compellent has 2 "fault domains" with different IP networks.
This is an iSCSI array with 8 front-end ports (4 per controller). The iSCSI 
network is simple: two independent switches with a single VLAN, front-end ports 
are split equally between the two switches.
And for each server one Ethernet controller is connected to each switch. So, 
bonding seems inappropriate.
(see this Dell documentation: 
https://downloads.dell.com/manuals/common/scv30x0iscsi-setup_en.pdf )
Maybe I misunderstood how iSCSI bonding works in oVirt?

Regards, 
Sylvain 


De: "Shani Leviim"  
À: dalma...@cines.fr 
Cc: "users"  
Envoyé: Mardi 7 Avril 2020 15:41:16 
Objet: Re: [ovirt-users] Ovirt and Dell Compellent in ISCSI 

Hi Sylvain, 
Not sure that's exactly what you're looking for, but you can define an iSCSI 
bond (iSCSI multipath) using the UI and REST API: 
https://www.ovirt.org/develop/release-management/features/storage/iscsi-multipath.html

Note that this is a characteristic of the DC. 

Hope it helps. 

Regards, 
Shani Leviim 


On Wed, Apr 1, 2020 at 12:35 PM dalma...@cines.fr wrote:


hi all,
we use oVirt 4.3 on Dell R640 servers running CentOS 7.7 and a Dell Compellent 
SCv3020 storage array over iSCSI.
We use two 10Gb interfaces for the iSCSI connection on each Dell server.
If we configure the iSCSI connection directly from the web UI, we can’t specify 
the two physical Ethernet interfaces, and paths are missing (only 4 paths out of 8).
So, on the hypervisor shell we use these commands to configure the connections:
iscsiadm -m iface -I em1 --op=new   # 1st Ethernet interface
iscsiadm -m iface -I p3p1 --op=new  # 2nd Ethernet interface
iscsiadm -m discovery -t sendtargets -p xx.xx.xx.xx
iscsiadm -m node -o show
iscsiadm -m node --login
After this, in the web UI we can connect our LUN with all paths.

Also, I don’t understand how to configure multipath in the web UI. By default 
the configuration is failover:
multipath -ll : 
36000d3100457e405 dm-3 COMPELNT,Compellent Vol 
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw 
`-+- policy='service-time 0' prio=1 status=active 
  |- 23:0:0:1 sdb 8:16   active ready running 
  |- 24:0:0:1 sdd 8:48   active ready running 
  |- 25:0:0:1 sdc 8:32   active ready running 
  |- 26:0:0:1 sde 8:64   active ready running 
  |- 31:0:0:1 sdf 8:80   active ready running 
  |- 32:0:0:1 sdg 8:96   active ready running 
  |- 33:0:0:1 sdh 8:112  active ready running 
  |- 34:0:0:1 sdi 8:128  active ready running 

I think round-robin or another configuration would perform better. 
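
For what it's worth, a rough sketch of what a round-robin policy for this array 
might look like in /etc/multipath.conf (values are illustrative, not Dell's 
official recommendation; check the Compellent best-practices guide). On an oVirt 
host vdsm manages this file, so as far as I know you need to mark it private 
(a "# VDSM PRIVATE" line near the top) to keep your changes from being overwritten:

devices {
    device {
        vendor                 "COMPELNT"
        product                "Compellent Vol"
        path_grouping_policy   "multibus"
        path_selector          "round-robin 0"
        no_path_retry          16
    }
}

Then apply it with: multipathd reconfigure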

So can we make this configuration (select the physical interfaces and configure 
multipath) in the web UI, for easier maintenance and for adding other servers? 

Thank you. 

Sylvain. 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDKK3ZW7QCWHXQL2SXHAL3EN5SHZNRM4/ 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5E4FUJXAOPVDFYTUB6HWK3LUDSO4A5V6/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-09 Thread Martin Perina
On Thu, Apr 9, 2020 at 10:49 AM Sandro Bonazzola 
wrote:

> oVirt 4.4.0 Beta release refresh is now available for testing
>
> The oVirt Project is excited to announce the availability of the beta
> release of oVirt 4.4.0 refresh for testing, as of April 9th, 2020
>
> This release unleashes an altogether more powerful and flexible open
> source virtualization solution that encompasses hundreds of individual
> changes and a wide range of enhancements across the engine, storage,
> network, user interface, and analytics on top of oVirt 4.3.
>
> Important notes before you try it
>
> Please note this is a Beta release.
>
> The oVirt Project makes no guarantees as to its suitability or usefulness.
>
> This pre-release must not to be used in production.
>
> In particular, please note that upgrades from 4.3 and future upgrades from
> this beta to the final 4.4 release from this version are not supported.
>
> Some of the features included in oVirt 4.4.0 Beta require content that
> will be available in CentOS Linux 8.2 but can’t be tested on RHEL 8.2 beta
> yet due to some incompatibility in openvswitch package shipped in CentOS
> Virt SIG which requires to rebuild openvswitch on top of CentOS 8.2.
>
> Known Issues
>
>-
>
>ovirt-imageio development is still in progress. In this beta you can’t
>upload images to data domains using the engine web application. You can
>still copy iso images into the deprecated ISO domain for installing VMs or
>upload and download to/from data domains is fully functional via the REST
>API and SDK.
>For uploading and downloading via the SDK, please see:
>  -
>
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
>  -
>
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py
>Both scripts are standalone command line tool, try --help for more
>info.
>
>
> Installation instructions
>
> For the engine: either use appliance or:
>
> - Install CentOS Linux 8 minimal from
> http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
>
> - dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
> - dnf update (reboot if needed)
>
> - dnf module enable -y javapackages-tools pki-deps 389-ds
>

This is not correct, we should use:

  dnf module enable -y javapackages-tools pki-deps postgresql:12
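
(If in doubt about which streams ended up active, a quick generic check before
installing the engine, nothing oVirt-specific:

dnf module list --enabled
dnf module list postgresql

the second one should show the postgresql:12 stream enabled after the command
above.)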

- dnf install ovirt-engine
>
> - engine-setup
>
> For the nodes:
>
> Either use oVirt Node ISO or:
>
> - Install CentOS Linux 8 from
> http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
> ; select minimal installation
>
> - dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
> - dnf update (reboot if needed)
>
> - Attach the host to engine and let it be deployed.
>
> What’s new in oVirt 4.4.0 Beta?
>
>-
>
>Hypervisors based on CentOS Linux 8 (rebuilt from award winning
>RHEL8), for both oVirt Node and standalone CentOS Linux hosts
>-
>
>Easier network management and configuration flexibility with
>NetworkManager
>-
>
>VMs based on a more modern Q35 chipset with legacy seabios and UEFI
>firmware
>-
>
>Support for direct passthrough of local host disks to VMs
>-
>
>Live migration improvements for High Performance guests.
>-
>
>New Windows Guest tools installer based on WiX framework now moved to
>VirtioWin project
>-
>
>Dropped support for cluster level prior to 4.2
>-
>
>Dropped SDK3 support
>-
>
>4K disks support only for file based storage. iSCSI/FC storage do not
>support 4k disks yet.
>-
>
>Exporting a VM to a data domain
>-
>
>Editing of floating disks
>-
>
>Integrating ansible-runner into engine, which allows a more detailed
>monitoring of playbooks executed from engine
>-
>
>Adding/reinstalling hosts are now completely based on Ansible
>-
>
>The OpenStack Neutron Agent cannot be configured by oVirt anymore, it
>should be configured by TripleO instead
>
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 8.1
>
> * CentOS Linux (or similar) 8.1
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
>
> * Red Hat Enterprise Linux 8.1
>
> * CentOS Linux (or similar) 8.1
>
> * oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
>
> See the release notes [1] for installation instructions and a list of new
> features and bugs fixed.
>
> If you manage more than one oVirt instance, OKD or RDO we also recommend
> to try ManageIQ .
>
> In such a case, please be sure  to take the qc2 image and not the ova
> image.
>
> Notes:
>
> - oVirt Appliance is already available for CentOS Linux 8
>
> - oVirt Node NG is already available for CentOS Linux 8
>
> Additional Resources:
>
> * Read more about the oVirt 4.4.0 release highlights:
> 

[ovirt-users] Firewall GARP not reachable to VM

2020-04-09 Thread k . betsis
Hi all

Does anyone know how I can allow my firewall VM cluster to act as the default 
gateway for VMs within the same network?
I've configured the GARP functionality on the OPNsense firewalls (a pfSense fork).
VMs within the same network can ping the firewall IP addresses successfully, but 
not the GARP IP.
The oVirt network has been configured with MAC address anti-spoofing set to 
false.
One firewall has been configured with virtio network drivers and the other with 
e1000; both exhibit the same behavior.

Currently all VMs have been configured with the primary firewall as their 
default gateway.
Network workarounds using BGP and attributes can work, but they are way too 
complicated to streamline for all VMs when simple VRRP can do the job.

Any ideas what I am missing?
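
One way to narrow this down would be to check whether the CARP/VRRP
advertisements for the shared IP actually reach the VM network; a rough sketch,
with interface names as placeholders:

tcpdump -i <vm-network-bridge> -nn 'ip proto 112 or host 224.0.0.18'   # on the host
tcpdump -i eth0 -nn 'ip proto 112'                                     # inside a guest VM
arping -I eth0 <shared-gateway-ip>                                     # does anything answer for the VIP?

If the advertisements show up on the host bridge but not inside the guests, the
vNIC profile's network filter would be the next thing to look at.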
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JL25NRQOTDQKKEKMLFGXFSEFNMG6SEBE/


[ovirt-users] Re: Sometimes paused due to unknown storage error on gluster

2020-04-09 Thread Gianluca Cecchi
On Thu, Apr 9, 2020 at 7:46 AM Krutika Dhananjay 
wrote:

>
>
> On Tue, Apr 7, 2020 at 7:36 PM Gianluca Cecchi 
> wrote:
>
>>
>> OK. So I set log at least at INFO level on all subsystems and tried a
>> redeploy of Openshift with 3 mater nodes and 7 worker nodes.
>> One worker got the error and VM in paused mode
>>
>> Apr 7, 2020, 3:27:28 PM VM worker-6 has been paused due to unknown
>> storage error.
>>
>> The vm has only one 100Gb virtual disk on gluster volume named vmstore
>>
>>
>> Here below all the logs around time at the different layers.
>> Let me know if you need another log file not yet considered.
>>
>> From what I see, the matching error is found in
>>
>> - rhev-data-center-mnt-glusterSD-ovirtst.mydomain.storage:_vmstore.log
>>
>> [2020-04-07 13:27:28.721262] E [MSGID: 133010]
>> [shard.c:2327:shard_common_lookup_shards_cbk] 0-vmstore-shard: Lookup on
>> shard 523 failed. Base file gfid = d22530cf-2e50-4059-8924-0aafe38497b1 [No
>> such file or directory]
>> [2020-04-07 13:27:28.721432] W [fuse-bridge.c:2918:fuse_writev_cbk]
>> 0-glusterfs-fuse: 4435189: WRITE => -1
>> gfid=d22530cf-2e50-4059-8924-0aafe38497b1 fd=0x7f3c4c07ab38 (No such file
>> or directory)
>>
>>
> This ^^, right here is the reason the VM paused. Are you using a plain
> distribute volume here?
> Can you share some of the log messages that occur right above these errors?
> Also, can you check if the file
> $VMSTORE_BRICKPATH/.glusterfs/d2/25/d22530cf-2e50-4059-8924-0aafe38497b1
> exists on the brick?
>
> -Krutika
>
>

Thanks for answering, Krutika.
To verify that sharding was in some way "involved" in the problem, I
executed a new re-deploy of the 9 OpenShift OCP servers, this time without
receiving any error,
while with sharding enabled I received at least 3-4 errors on every deployment
run.
In particular, I deleted the VM disks of the previous VMs and put them on a
volume without sharding.
Right now the directory is empty:

[root@ovirt ~]# ll -a /gluster_bricks/vmstore/vmstore/.glusterfs/d2/25/
total 8
drwx--.   2 root root6 Apr  8 16:59 .
drwx--. 105 root root 8192 Apr  9 00:50 ..
[root@ovirt ~]#
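
For reference, the shard settings can be checked per volume with the gluster
CLI; a quick check, assuming the volume is still named vmstore as above:

gluster volume get vmstore features.shard
gluster volume get vmstore features.shard-block-size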

Here you can find the entire log (in gzip format) from [2020-04-05
01:20:02.978429] to [2020-04-09 10:45:36.734079] of the vmstore volume
https://drive.google.com/file/d/1Dqr7KJMqKdMFg-jvhsDAzvr1xgWtvtnQ/view?usp=sharing

You will find the same error at least at the timestamps below, corresponding
to the engine webadmin events "unknown storage error". Keep in mind that inside
the log file the time is UTC, so you have to shift 2 hours back (03:27:28
PM in the engine webadmin event corresponds to 13:27:28 in the log file).

Apr 7, 2020, 3:27:28 PM

Apr 7, 2020, 4:38:55 PM

Apr 7, 2020, 5:31:02 PM

Apr 8, 2020, 8:52:49 AM

Apr 8, 2020, 12:05:17 PM

Apr 8, 2020, 3:11:10 PM

Apr 8, 2020, 3:20:30 PM

Apr 8, 2020, 3:26:54 PM

Thanks again. I'm available to re-try on a sharding-enabled volume after
any changes, if needed.
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7V47CCSRJWEUTOAULCSZZM7MZRJJXYCF/


[ovirt-users] access to QEMU monitor

2020-04-09 Thread Matthias Leopold

Hi,

for educational purposes I'm trying to access the QEMU monitor of oVirt 
VMs. Can someone tell me how this can be done? Connecting to the unix 
socket with socat doesn't work, probably because it's not started with 
"server,nowait". Can the QEMU monitor be reached with the SPICE console?


thx
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRATK2MJ76DMDQRXMYZ7QPHUH2TLVMP2/


[ovirt-users] oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-09 Thread Sandro Bonazzola
oVirt 4.4.0 Beta release refresh is now available for testing

The oVirt Project is excited to announce the availability of the beta
release of oVirt 4.4.0 refresh for testing, as of April 9th, 2020

This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.

Important notes before you try it

Please note this is a Beta release.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.

In particular, please note that upgrades from 4.3, as well as future upgrades
from this beta to the final 4.4 release, are not supported.

Some of the features included in oVirt 4.4.0 Beta require content that will
be available in CentOS Linux 8.2, but they can’t be tested on RHEL 8.2 beta yet
due to an incompatibility in the openvswitch package shipped in the CentOS Virt
SIG, which requires rebuilding openvswitch on top of CentOS 8.2.

Known Issues

   -

   ovirt-imageio development is still in progress. In this beta you can’t
   upload images to data domains using the engine web application. You can
   still copy ISO images into the deprecated ISO domain for installing VMs;
   upload and download to/from data domains is fully functional via the REST
   API and SDK.
   For uploading and downloading via the SDK, please see:
 -
   
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
 -
   
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py
   Both scripts are standalone command line tools; try --help for more info.


Installation instructions

For the engine: either use appliance or:

- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso

- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm

- dnf update (reboot if needed)

- dnf module enable -y javapackages-tools pki-deps 389-ds

- dnf install ovirt-engine

- engine-setup

For the nodes:

Either use oVirt Node ISO or:

- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
; select minimal installation

- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm

- dnf update (reboot if needed)

- Attach the host to engine and let it be deployed.

What’s new in oVirt 4.4.0 Beta?

   -

   Hypervisors based on CentOS Linux 8 (rebuilt from award winning RHEL8),
   for both oVirt Node and standalone CentOS Linux hosts
   -

   Easier network management and configuration flexibility with
   NetworkManager
   -

   VMs based on a more modern Q35 chipset with legacy seabios and UEFI
   firmware
   -

   Support for direct passthrough of local host disks to VMs
   -

   Live migration improvements for High Performance guests.
   -

   New Windows Guest tools installer based on WiX framework now moved to
   VirtioWin project
   -

   Dropped support for cluster level prior to 4.2
   -

   Dropped SDK3 support
   -

   4K disks support only for file based storage. iSCSI/FC storage do not
   support 4k disks yet.
   -

   Exporting a VM to a data domain
   -

   Editing of floating disks
   -

   Integrating ansible-runner into engine, which allows a more detailed
   monitoring of playbooks executed from engine
   -

   Adding/reinstalling hosts are now completely based on Ansible
   -

   The OpenStack Neutron Agent cannot be configured by oVirt anymore, it
   should be configured by TripleO instead


This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.1

* CentOS Linux (or similar) 8.1

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.1

* CentOS Linux (or similar) 8.1

* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

If you manage more than one oVirt instance, OKD or RDO, we also recommend
trying ManageIQ.

In such a case, please be sure to take the qc2 image and not the ova image.

Notes:

- oVirt Appliance is already available for CentOS Linux 8

- oVirt Node NG is already available for CentOS Linux 8

Additional Resources:

* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

[ovirt-users] Re: ovirt-engine unresponsive - how to rescue?

2020-04-09 Thread Shareef Jalloq
OK, let's go through this.  I'm looking at the node that at least still has
some VMs running.  virsh also tells me that the HostedEngine VM is running
but it's unresponsive and I can't shut it down.

1. All storage domains exist and are mounted.
2. The ha_agent exists:

[root@ovirt-node-01 ovirt-hosted-engine-ha]# ls /rhev/data-center/mnt/
nas-01.phoelex.com\:_volume2_vmstore/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/

dom_md  ha_agent  images  master

3.  There are two links

[root@ovirt-node-01 ovirt-hosted-engine-ha]# ll /rhev/data-center/mnt/
nas-01.phoelex.com
\:_volume2_vmstore/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/ha_agent/

total 8

lrwxrwxrwx. 1 vdsm kvm 132 Apr  2 14:50 hosted-engine.lockspace ->
/var/run/vdsm/storage/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/ffb90b82-42fe-4253-85d5-aaec8c280aaf/90e68791-0c6f-406a-89ac-e0d86c631604

lrwxrwxrwx. 1 vdsm kvm 132 Apr  2 14:50 hosted-engine.metadata ->
/var/run/vdsm/storage/a6cea67d-dbfb-45cf-a775-b4d0d47b26f2/2161aed0-7250-4c1d-b667-ac94f60af17e/6b818e33-f80a-48cc-a59c-bba641e027d4

4. The services exist but all seem to have some sort of warning:

a) Apr 08 18:10:55 ovirt-node-01.phoelex.com sanlock[1728]: *2020-04-08
18:10:55 1744152 [36796]: s16 delta_renew long write time 10 sec*

b) Mar 23 18:02:59 ovirt-node-01.phoelex.com supervdsmd[29409]: *failed to
load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No
such file or directory*

c) Apr 09 08:05:13 ovirt-node-01.phoelex.com vdsm[4801]: *ERROR failed to
retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the
Hosted Engine setup finished?*

d)Apr 08 22:48:27 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
22:48:27.134+: 29309: warning : qemuGetProcessInfo:1404 : cannot parse
process status data

Apr 08 22:48:27 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
22:48:27.134+: 29309: error : virNetDevTapInterfaceStats:764 : internal
error: /proc/net/dev: Interface not found

Apr 08 23:09:39 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-08
23:09:39.844+: 29307: error : virNetSocketReadWire:1806 : End of file
while reading data: Input/output error

Apr 09 01:05:26 ovirt-node-01.phoelex.com libvirtd[29307]: 2020-04-09
01:05:26.660+: 29307: error : virNetSocketReadWire:1806 : End of file
while reading data: Input/output error

5 & 6.  The broker log is continually printing this error:

MainThread::INFO::2020-04-09
08:07:31,438::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
ovirt-hosted-engine-ha broker 2.3.6 started

MainThread::DEBUG::2020-04-09
08:07:31,438::broker::55::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
Running broker

MainThread::DEBUG::2020-04-09
08:07:31,438::broker::120::ovirt_hosted_engine_ha.broker.broker.Broker::(_get_monitor)
Starting monitor

MainThread::INFO::2020-04-09
08:07:31,438::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Searching for submonitors in
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker

/submonitors

MainThread::INFO::2020-04-09
08:07:31,439::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor network

MainThread::INFO::2020-04-09
08:07:31,440::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor cpu-load-no-engine

MainThread::INFO::2020-04-09
08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor mgmt-bridge

MainThread::INFO::2020-04-09
08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor network

MainThread::INFO::2020-04-09
08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor cpu-load

MainThread::INFO::2020-04-09
08:07:31,441::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor engine-health

MainThread::INFO::2020-04-09
08:07:31,442::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor mgmt-bridge

MainThread::INFO::2020-04-09
08:07:31,442::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor cpu-load-no-engine

MainThread::INFO::2020-04-09
08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor cpu-load

MainThread::INFO::2020-04-09
08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor mem-free

MainThread::INFO::2020-04-09
08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor storage-domain

MainThread::INFO::2020-04-09
08:07:31,443::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Loaded submonitor storage-domain

MainThread::INFO::2020-04-09