[ovirt-users] Re: VM HostedEngine is down with error

2020-09-01 Thread Yedidyah Bar David
On Tue, Sep 1, 2020 at 7:17 PM  wrote:
>
> Hello everyone,
>
> I have a replica 2 + arbiter installation and this morning the Hosted Engine
> gave the following error on the UI and resumed on a different node (node3)
> than the one it was originally running on (node1). (The original node has more
> memory than the one it ended up on, but it had a better memory usage
> percentage at the time.) Also, the only way I discovered that the migration
> had happened and there was an Error in Events was because I logged in to the
> web interface of oVirt for a routine inspection. Besides that, everything was
> working properly and still is.
>
> The error that popped is the following:
>
> VM HostedEngine is down with error. Exit message: internal error: qemu 
> unexpectedly closed the monitor:
> 2020-09-01T06:49:20.749126Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config, ability to start up with partial NUMA 
> mappings is obsoleted and will be removed in future
> 2020-09-01T06:49:20.927274Z qemu-kvm: -device 
> virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,id=ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,bootindex=1,write-cache=on:
>  Failed to get "write" lock
> Is another process using the image?.

It's quite likely that this isn't the root cause.

Please check your logs from before that.

The above looks like something (ovirt-ha-agent?) tried to start the hosted
engine VM but failed due to locking - most likely because it was already up
elsewhere (on some other host?).

So you want to check when/where the VM was started before this error, and
then carefully check any errors before it was started.
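To narrow down who holds the lock, you can probe the image file directly. This is a hedged sketch, not an oVirt tool: qemu's raw-file locking uses OFD/POSIX advisory locks, which a non-blocking exclusive-lock attempt from another process will conflict with. The path below is the one from the error message; run this on each host.

```python
import fcntl

def is_write_locked(path):
    """Best-effort probe: try to take a non-blocking exclusive advisory
    lock on the image file; failure suggests another process (e.g. a
    running qemu) holds a conflicting lock."""
    with open(path, "rb+") as f:
        try:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            return True  # somebody else holds a conflicting lock
        fcntl.lockf(f, fcntl.LOCK_UN)  # release immediately
        return False

# Example (path taken from the error above):
# is_write_locked('/var/run/vdsm/storage/80f6e393-9718-4738-a14a-64cf43c3d8c2/'
#                 'd5de54b6-9f8e-4fba-819b-ebf6780757d2/'
#                 'a48555f4-be23-4467-8a54-400ae7baf9d7')
```

A True result on a host where the VM is supposedly down would support the "already up elsewhere" theory.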

Also, check that the clocks on all your machines are in sync.

>
> Which from what I could gather concerns the following snippet from the
> HostedEngine.xml, and it's the virtio disk of the Hosted Engine:
>
> <disk type='file' device='disk'>
>   <driver name='qemu' io='threads' iothread='1'/>
>   <source file='/var/run/vdsm/storage/80f6e393-9718-4738-a14a-64cf43c3d8c2/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7'/>
>   <target dev='vda' bus='virtio'/>
>   <serial>d5de54b6-9f8e-4fba-819b-ebf6780757d2</serial>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
> </disk>
>
> I've tried looking into the logs and the sar command, but I couldn't find
> anything to relate to the above errors or determine the reason for this to
> happen. Is this a Gluster or a QEMU problem?

Likely, but hard to tell without more information.

>
> The Hosted Engine had been manually migrated to node1 five days earlier.
>
> Is there a standard practice I could follow to determine what happened and 
> secure my system?

Nothing, other than checking the logs.

Check, on all of your hosts:

/var/log/messages
/var/log/vdsm/*
/var/log/ovirt-hosted-engine-ha/*

And on the engine (likely won't help in this case, but just in case):

/var/log/ovirt-engine/*

>
> Thank you very much for your time,

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JANMNMJRXADGQIT4R2H2NNYLYCX3FSBS/


[ovirt-users] Re: How can Gluster be a HCI default, when it's hardly ever working?

2020-09-01 Thread thomas
In this specific case I even used virgin hardware originally.

Once I managed to kill the hosted-engine by downgrading the datacenter cluster
to legacy, I re-installed all gluster storage from the VDO level up. No traces
of a file system should be left with LVM and XFS on top, even if I didn't
actually null the SSD. (Does writing nulls to an SSD actually cost you an
overwrite these days, or is that translated into a trim by the firmware?)

No difference in terms of faults between the virgin hardware and the
re-install, so stale Gluster extended file attributes etc. (your error theory,
I believe) are not a factor.

Choosing between the 'vmstore' and 'data' domains for the imports makes no
difference, nor does full allocation over thin allocation. But I didn't just
see write errors from qemu-img - there were also read errors, which had me
concerned about some other corruption source. That was another motivation to
start with a fresh source, which meant a backup domain instead of an export
domain or OVAs.

The storage underneath the backup domain is NFS (POSIX has a 4k issue, and I'm
not sure I want to try moving Glusters between farms just yet), which is easy
to detach at the source and import at the target. If NFS is your default,
oVirt can be so much easier, but in that more 'professional' domain we use
vSphere and actual SAN storage. The attraction of oVirt for the lab use case
critically depends on HCI and Gluster.

The VMs were fine running from the backup domain (which incidentally must have
lost its backup attribute at the target, because otherwise it should have kept
the VMs from launching...), but once I tried moving their disks to the Gluster
storage, I got empty or unusable disks again, or errors while moving.

The only way I found to transfer gluster to gluster was to use disk uploads,
either via the GUI or via Python, but that results in fully allocated images
and is very slow at 50MB/s, even with Python. BTW, sparsifying does nothing to
those images, I guess because sectors full of nulls aren't actually the same
as a logically unused sector. At least the VDO underneath should reduce some
of the overhead.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TTQE7YLN5JKABRGSNOFTV3FMMZNO2DRC/


[ovirt-users] Re: Mellanox OFED with oVirt

2020-09-01 Thread Strahil Nikolov via Users
Have you tried enabling the gluster repos from the CentOS Storage SIG?
I think it was something like: yum install centos-release-gluster7

Best Regards,
Strahil Nikolov

On Wednesday, September 2, 2020 at 06:05:03 GMT+3, Vinícius Ferrão via Users wrote:

Hello,

Has anyone had success using Mellanox OFED with oVirt? I've already learned
some things:

1. I can't use oVirt Node.
2. Mellanox OFED cannot be installed with mlnx-ofed-all, since it breaks dnf.
We need to rely on the upstream RDMA implementation.
3. The way to go is running: dnf install mlnx-ofed-dpdk-upstream-libs

But after the installation I ended up with broken dnf:

[root@c4140 ~]# dnf update
Updating Subscription Management repositories.
Last metadata expiration check: 0:03:54 ago on Tue 01 Sep 2020 11:52:41 PM -03.
Error: 
Problem: both package mlnx-ofed-all-user-only-5.1-0.6.6.0.rhel8.2.noarch and 
mlnx-ofed-all-5.1-0.6.6.0.rhel8.2.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package 
glusterfs-rdma-6.0-37.el8.x86_64
  - package ovirt-host-4.4.1-4.el8ev.x86_64 requires glusterfs-rdma, but none 
of the providers can be installed
  - package mlnx-ofed-all-5.1-0.6.6.0.rhel8.2.noarch obsoletes glusterfs-rdma 
provided by glusterfs-rdma-6.0-37.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 
3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 
6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 
6.0-20.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and 
glusterfs-6.0-37.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and 
glusterfs-6.0-37.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and 
glusterfs-6.0-37.el8.x86_64
  - cannot install the best update candidate for package 
ovirt-host-4.4.1-4.el8ev.x86_64
  - cannot install the best update candidate for package 
glusterfs-6.0-37.el8.x86_64
(try to add '--allowerasing' to command line to replace conflicting packages or 
'--skip-broken' to skip uninstallable packages or '--nobest' to use not only 
best candidate packages)
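Since the depsolver error comes from the mlnx-ofed-all metapackages obsoleting glusterfs-rdma (which ovirt-host requires), one possible workaround - an untested sketch, assuming you don't need anything that only those metapackages pull in - is to keep dnf from ever considering them, by adding an exclude to the Mellanox repo definition:

```ini
[mlnx_ofed_latest_base]
name=Mellanox Technologies rhel8.2-$basearch mlnx_ofed latest
baseurl=http://linux.mellanox.com/public/repo/mlnx_ofed/latest/rhel8.2/$basearch
enabled=1
gpgkey=http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox
gpgcheck=1
# Hypothetical workaround: never let the metapackages that obsolete
# glusterfs-rdma enter the transaction
exclude=mlnx-ofed-all*
```

With that in place, `dnf update` should no longer try to replace glusterfs-rdma; whether the remaining OFED packages still work for your use case would need testing.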

These are the packages installed:

[root@c4140 ~]# rpm -qa *mlnx*
mlnx-dpdk-19.11.0-1.51066.x86_64
mlnx-ofa_kernel-devel-5.1-OFED.5.1.0.6.6.1.rhel8u2.x86_64
mlnx-ethtool-5.4-1.51066.x86_64
mlnx-dpdk-devel-19.11.0-1.51066.x86_64
mlnx-ofa_kernel-5.1-OFED.5.1.0.6.6.1.rhel8u2.x86_64
mlnx-dpdk-doc-19.11.0-1.51066.noarch
mlnx-dpdk-tools-19.11.0-1.51066.x86_64
mlnx-ofed-dpdk-upstream-libs-5.1-0.6.6.0.rhel8.2.noarch
kmod-mlnx-ofa_kernel-5.1-OFED.5.1.0.6.6.1.rhel8u2.x86_64
mlnx-iproute2-5.6.0-1.51066.x86_64

And finally this is the repo that I’m using:
[root@c4140 ~]# cat /etc/yum.repos.d/mellanox_mlnx_ofed.repo 
#
# Mellanox Technologies Ltd. public repository configuration file.
# For more information, refer to http://linux.mellanox.com
#

[mlnx_ofed_latest_base]
name=Mellanox Technologies rhel8.2-$basearch mlnx_ofed latest
baseurl=http://linux.mellanox.com/public/repo/mlnx_ofed/latest/rhel8.2/$basearch
enabled=1
gpgkey=http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox
gpgcheck=1


So, has anyone had success with this?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5575GDGYZ5DQPWAZMSAFHUG6EXKZTF5V/




[ovirt-users] Re: How to Backup a VM

2020-09-01 Thread Nir Soffer
On Sun, Aug 30, 2020 at 7:13 PM  wrote:
>
> Struggling with bugs and issues on OVA export/import (my clear favorite 
> otherwise, especially when moving VMs between different types of 
> hypervisors), I've tried pretty much everything else, too.
>
> Export domains are deprecated and require quite a bit of manual handling. 
> Unfortunately the buttons for the various operations are all over the place 
> e.g. the activation and maintenance toggles are in different pages.

Using an export domain is not a single click, but it is not that complicated.
This is good feedback anyway.

> In the end the mechanisms underneath (qemu-img) seem very much the same and 
> suffer from the same issues (I have larger VMs that keep failing on imports).

I think the issue is gluster, not qemu-img.

> So far the only fool-proof method has been to use the imageio daemon to 
> upload and download disk images, either via the Python API or the Web-GUI.

How did you try? Transfer via the UI is completely different from transfer
using the Python API.

From the UI, you get the image content as it is on storage, without sparseness
support. If you download a 500g raw sparse disk (e.g. gluster with thin
allocation policy) holding 50g of data and 450g of unallocated space, you will
get 50g of data and 450g of zeroes. This is very slow. If you then upload the
image to another system, you will upload 500g of data, which will again be
very slow.

From the Python API, download and upload support sparseness, so you will
transfer only the 50g. Both upload and download use 4 connections, so you can
maximize the throughput you can get from the storage. The Python API can also
convert the image format during download/upload automatically, for example
downloading a raw disk to a qcow2 image.

Gluster is a challenge (as usual), since when using sharding (enabled by
default for oVirt) it does not report sparseness, so even from the Python API
you will download the entire 500g. We could improve this using zero detection,
but that is not implemented yet.
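The zero-detection idea mentioned above can be sketched in a few lines of Python. This is an illustration of the technique only, not oVirt/imageio code: scan the stream in fixed-size blocks and punch holes in the destination for all-zero blocks, so the copy stays sparse even when the source storage (e.g. sharded Gluster) does not report holes.

```python
import os

def copy_with_zero_detection(src_path, dst_path, block_size=1024 * 1024):
    """Copy a raw image, turning all-zero blocks into holes in the output
    so the destination stays sparse even if the source doesn't report holes."""
    zero = b"\x00" * block_size
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            if chunk == zero[: len(chunk)]:
                dst.seek(len(chunk), os.SEEK_CUR)  # skip write: leaves a hole
            else:
                dst.write(chunk)
        dst.truncate()  # extend the file if it ends in a hole
```

The cost is one memory compare per block on the read path, which is usually cheap next to network and disk I/O.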

> Transfer times are terrible though, 50MB/s is quite low when the network 
> below is 2.5-10Gbit and SSDs all around.

In our lab we tested an upload of a single 100 GiB image and 10 concurrent
uploads of 100 GiB images, and we measured a throughput of 1 GiB/s:
https://bugzilla.redhat.com/show_bug.cgi?id=1591439#c24

I would like to understand the setup better:

- upload or download?
- disk format?
- disk storage?
- how is storage connected to host?
- how do you access the host (1g network? 10g?)
- image format?
- image storage?

> Obviously, with Python as everybody's favorite GUI these days, you can also
> copy and transfer the VM's complete definition, but I am one of those old
> guys who might even prefer a real GUI to mouse clicks in a browser.
>
> The documentation on backup domains is terrible. What's missing behind the 
> 404 link in oVirt becomes a very terse section in the RHV manuals, where 
> you're basically just told that after cloning the VM, you should then move 
> its disks to the backup domain...

The backup domain is a partly-cooked feature and is not very useful. There is
no reason to use it for moving VMs from one environment to another.

I already explained how to move VMs using a data domain. Check here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ULLFLFKBAW7T7B6OD63BMNZXJK6EU6AI/
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFOK55O5N4SRU5PA32P3LATW74E7WKT6/

I'm not sure this is documented properly; please file a documentation bug if
we need to add something to the documentation.

> What you are then supposed to do with the cloned VM, if it's ok to simply
> throw it away, because the definition is silently copied to the OVF_STORE on
> the backup... none of that is explained or mentioned.

If you clone a VM to a data domain and then detach the data domain,
there is nothing to clean up in the source system.

> There is also no procedure for restoring a machine from a backup domain, when 
> really a cloning process that allows a target domain would be pretty much 
> what I'd vote for.

We have this in 4.4: select a VM and click "Export".

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UB2YZK3DD3KDHZYQQW4TVYCKASRRSOK4/


[ovirt-users] Re: HP blade G7, with be2net.

2020-09-01 Thread Strahil Nikolov via Users
Have you checked elrepo ?

Best Regards,
Strahil Nikolov

On Tuesday, September 1, 2020 at 08:27:10 GMT+3, Remulo wrote:

Hello

I have some blades with a 10GbE interface that have/need the Emulex be2net
driver; however, it is no longer available on RHEL 8 / CentOS 8. Is there any
way to install oVirt to work on these machines?

Here are some links to related problems at Red Hat:

Emulex NIC using be2net driver
https://access.redhat.com/solutions/1229853
https://access.redhat.com/solutions/514353

I did a lot of research and couldn't get a functional driver for ovirt.

Thank you.

--
Atenciosamente,
Rêmulo Ferreira.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BK7L7XOFT4DXN52JSOE3XBWY27NY6QCB/


[ovirt-users] Re: How can Gluster be a HCI default, when it's hardly ever working?

2020-09-01 Thread Strahil Nikolov via Users
Are you reusing a gluster volume, or have you created a fresh one?

Best Regards,
Strahil Nikolov

On Tuesday, September 1, 2020 at 02:58:19 GMT+3, tho...@hoberg.net wrote:

I've just tried to verify what you said here.

As a baseline I started with the 1nHCI Gluster setup. From four VMs (two
legacy, two Q35) on the single-node Gluster, one survived the import, one
failed silently with an empty disk, and two failed somewhere in the middle of
qemu-img trying to write the image to the Gluster storage. For each of those
two, this always happened at the same block number, a unique one per machine,
not in random places - as if qemu-img reading and writing the very same image
could not agree. That's two types of error and a 75% failure rate.

I created another domain, basically using an NFS automount export from one of
the HCI nodes (a 4.3 node serving as 4.4 storage), and imported the very same
VMs (source all 4.3), transported via a re-attached export domain to 4.4.
Three of the four imports worked fine, with no error from qemu-img writing to
NFS. All VMs had full disk images and launched, which verified that there is
nothing wrong with the exports at least.

But there was still one that failed with the same qemu-img error.

I then tried to move the disks from NFS to Gluster, which internally is also
done via qemu-img, and those moves failed every time.

Gluster or HCI seems a bit of Russian roulette for migrations, and I am
wondering how much better it is for normal operations.
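One way to pin down whether such a qemu-img copy actually corrupted data, rather than merely erroring out, is to checksum the full logical content of the source and the copied volume and compare. A plain-Python sketch (not oVirt tooling) that works for raw images; holes read back as zeros, so a sparse copy and a fully allocated copy of the same content hash identically:

```python
import hashlib

def image_digest(path, block_size=1024 * 1024):
    """SHA-256 over the full logical content of a raw image file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel reads until EOF in fixed-size chunks
        for chunk in iter(lambda: f.read(block_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Matching digests before and after a transfer rule out silent corruption on that path; a mismatch tells you which hop to blame.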

I'm still going to try moving via a backup domain (on NFS) and moving between 
that and Gluster, to see if it makes any difference.

I really haven't done a lot of stress testing yet with oVirt, but this 
experience doesn't build confidence.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/73RTGJ3K66HSFARUCGAA2OIR22HCDTCB/


[ovirt-users] VM HostedEngine is down with error

2020-09-01 Thread souvaliotimaria
Hello everyone, 

I have a replica 2 + arbiter installation and this morning the Hosted Engine
gave the following error on the UI and resumed on a different node (node3)
than the one it was originally running on (node1). (The original node has more
memory than the one it ended up on, but it had a better memory usage
percentage at the time.) Also, the only way I discovered that the migration
had happened and there was an Error in Events was because I logged in to the
web interface of oVirt for a routine inspection. Besides that, everything was
working properly and still is.

The error that popped is the following:

VM HostedEngine is down with error. Exit message: internal error: qemu 
unexpectedly closed the monitor: 
2020-09-01T06:49:20.749126Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config, ability to start up with partial NUMA mappings is 
obsoleted and will be removed in future
2020-09-01T06:49:20.927274Z qemu-kvm: -device 
virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,id=ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,bootindex=1,write-cache=on:
 Failed to get "write" lock
Is another process using the image?.

Which from what I could gather concerns the following snippet from the
HostedEngine.xml, and it's the virtio disk of the Hosted Engine:

<disk type='file' device='disk'>
  <driver name='qemu' io='threads' iothread='1'/>
  <source file='/var/run/vdsm/storage/80f6e393-9718-4738-a14a-64cf43c3d8c2/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7'/>
  <target dev='vda' bus='virtio'/>
  <serial>d5de54b6-9f8e-4fba-819b-ebf6780757d2</serial>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>

I've tried looking into the logs and the sar command, but I couldn't find
anything to relate to the above errors or determine the reason for this to
happen. Is this a Gluster or a QEMU problem?

The Hosted Engine had been manually migrated to node1 five days earlier.

Is there a standard practice I could follow to determine what happened and 
secure my system?

Thank you very much for your time, 
Maria Souvalioti
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HBU4P4E5ECOA6BNNFVLK2Y44ZX5UHYYE/


[ovirt-users] Re: Problem installing Windows VM on 4.4.1

2020-09-01 Thread Facundo Garat
Arik, I'm finishing validating the problem: the ISO image I was using is the
problem.

These ISOs didn't work:
SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_Spanish_-4_MLF_X19-82897.ISO
SW_DVD9_Win_Server_STD_CORE_2019_1809.5_64Bit_Spanish_DC_STD_MLF_X22-34336.ISO

but with these ones I was able to install:
9600.17050.WINBLUE_REFRESH.140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9.ISO
9600.17050.WINBLUE_REFRESH.140317-1640_X64FRE_SERVER_EVAL_ES-ES-IR3_SSS_X64FREE_ES-ES_DV9.ISO

If you want to debug this using the ISOs that didn't work, let me know and we
can try it.

I don't know the difference between those ISOs and why the first ones don't
work.



On Tue, Sep 1, 2020 at 4:28 AM Arik Hadas  wrote:

> Facundo, can you please provide the output of
> virsh -r dumpxml 
> on the host that the VM runs on when you start it with an IDE disk and
> Windows installer doesn't detect it?
>
> On Tue, Sep 1, 2020 at 9:29 AM Sandro Bonazzola 
> wrote:
>
>> +Arik Hadas  can you help debugging this?
>>
>> On Thu, 27 Aug 2020 at 14:21,  wrote:
>>
>>> Hi,
>>>  I'm having problem after I upgraded to 4.4.1 with Windows machines.
>>>
>>>  The installation sees no disk. Even IDE disk doesn't get detected and
>>> installation won't move forward no matter what driver i use for the disk.
>>>
>>>   Any one else having this issue?.
>>>
>>> Regards,
>>> Facundo
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWECE32ZPJKUCN7CZK45A3MXCHZPI5CX/
>>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.*
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PQG4OWKRTZDB73WOVEMAQQXXLBOIEL3L/


[ovirt-users] Re: How can you avoid breaking 4.3.11 legacy VMs imported in 4.4.1 during a migration?

2020-09-01 Thread Michal Skrivanek


> On 31 Aug 2020, at 21:20, Arik Hadas  wrote:
> 
> 
> 
> On Mon, Aug 31, 2020 at 8:41 PM  wrote:
> Testing the 4.3 to 4.4 migration... what I describe here as facts is mostly
> observations and conjecture; it could be wrong, this just makes writing easier...
> 
> While 4.3 seems to maintain a default emulated machine type 
> (pc-i440fx-rhel7.6.0 by default), it doesn't actually allow setting it in the 
> cluster settings: Could be built-in, could be inherited from the default 
> template... Most of my VMs were created with the default on 4.3.
> 
> oVirt 4.4 presets that to pc-q35-rhel8.1.0 and that has implications:
> 1. Any VM imported from an export on a 4.3 farm, will get upgraded to Q35, 
> which unfortunately breaks things, e.g. network adapters getting renamed as 
> the first issue I stumbled on some Debian machines 
> 2. If you try to compensate by lowering the cluster default from Q35 to 
> pc-i440fx, the hosted-engine will fail, because it was either built or came as 
> Q35 and can no longer find critical devices: It evidently doesn't take/use 
> the VM configuration data it had at the last shutdown, but seems to 
> re-generate it according to some obscure logic, which fails here.

That is currently the case, yes. We have
https://bugzilla.redhat.com/show_bug.cgi?id=1871694 - it should be fixed by
that, right?

> 
> I've tried creating a bit of backward compatibility by creating another 
> template based on pc-i440fx, but at the time of the import, I cannot switch 
> the template.
> If I try to downgrade the cluster, the hosted-engine will fail to start and I 
> can't change the template of the hosted-engine to something Q35.
> 
> Currently this leaves me in a position where I can't separate the move of VMs 
> from 4.3 to 4.4 and the upgrade of the virtual hardware, which is a different 
> mess for every OS in the mix of VMs.
> 
> Recommendations, tips anyone?
> 
> If you have to preserve the i440fx chipset, you can create another cluster 
> that is set with legacy bios and import the VMs to that cluster.
> The drawback in the alternative you tested (importing it to a q35 cluster and 
> override the chipset/emulated machine before launching the VM) is that on 
> import we 
> convert the VM to q35 (changing devices, clearing PCI addresses) and later 
> the VM is converted back to i440fx - so it's less recommended.

Once the HE problem goes away, it should be perfectly fine to just use that
one cluster in i440fx mode if you prefer. It's q35 only because it's a new
cluster and we assume a new one is for new stuff. For upgraded setups the
cluster is left as it was.

>  
> 
> P.S. A hypervisor reconstructing the virtual hardware from anywhere but 
> storage at every launch, is difficult to trust IMHO.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QDWSRU6XSBZ6BSPQNE3ZKJCDKKTHK6G7/


[ovirt-users] Re: HP blade G7, with be2net.

2020-09-01 Thread Dominik Holler
On Tue, Sep 1, 2020 at 7:27 AM Remulo  wrote:

> Hello
>
> I have some blades with 10GE interface that have / need the Emulex be2net
> driver, however it is no longer available on Redhat8 / Centos8.
>


Why do you think the be2net is not available on CentOS8?
"modprobe be2net" is succeeding for me.
Which NICs are you referring to?



> Is there any way to install ovirt to work on these machines?
>
>
Maybe not required here, but there is also the CentOS plus kernel, which
contains additional drivers.


> Here are some links to problems with Redhat.
>
> Emulex NIC using be2net driver
> https://access.redhat.com/solutions/1229853
> https://access.redhat.com/solutions/514353
>
> I did a lot of research and couldn't get a functional driver for ovirt.
>
> Thank you.
>
> --
> Atenciosamente,
> Rêmulo Ferreira.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JZY7GYTWN3VQDKD3X75AAD2CWSHHIP5I/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJ7XEA2QACNMRC3I73PC2ELPFV5Z7EZZ/




[ovirt-users] Re: How can you avoid breaking 4.3.11 legacy VMs imported in 4.4.1 during a migration?

2020-09-01 Thread Arik Hadas
On Tue, Sep 1, 2020 at 1:22 AM  wrote:

> Thanks for the suggestion, I tried that, but I didn't get very far on a
> single node HCI cluster...
> And I'm afraid it won't be much better on HCI in general, which is really
> the use case I am most interested in.
>

Yes, that requires at least one more host for the second cluster


>
> Silently converting VMs is something rather unexpected from a hypervisor;
> doing it twice may result in the same machine here only by accident.
>
> That type of design decision needs highlighting in the documentation,
> because users just won't be expecting it.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUGBA2TBEEIHIJJLBYVQTYQKUTOJ2MWK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6GZIVEFL6MJLUSSHRHMUIYPXDZPOFKER/