[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-25 Thread Nir Soffer
On Mon, Jan 25, 2021 at 7:23 PM Matt Snow  wrote:
>
> I can confirm that removing "--manage-gids" flag from RPCMOUNTDOPTS in 
> /etc/default/nfs-kernel-server now allows me to add the ZFS backed NFS share.
>
> From the man page:
>
> -g or --manage-gids
> Accept requests from the kernel to map user id numbers into lists of group id 
> numbers for use in access control. An NFS request will normally (except when 
> using Kerberos or other cryptographic authentication) contain a user-id and 
> a list of group-ids. Due to a limitation in the NFS protocol, at most 16 
> group ids can be listed. If you use the -g flag, then the list of group ids 
> received from the client will be replaced by a list of group ids determined 
> by an appropriate lookup on the server. Note that the 'primary' group id is 
> not affected, so a newgroup command on the client will still be effective. 
> This function requires a Linux kernel with version at least 2.6.21.
>
>
> I speculate that if I had directory services set up and the NFS server 
> pointed at them, this would be a non-issue.
>
> Thank you so much for your help on this, Nir & Team!

You can contribute by updating the NFS troubleshooting page with this info:
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html

See the "Edit this page" link at the bottom of the page.

> On Mon, Jan 25, 2021 at 1:00 AM Nir Soffer  wrote:
>>
>> On Mon, Jan 25, 2021 at 12:33 AM Matt Snow  wrote:
>>
>> I reproduced the issue with an Ubuntu Server 20.04 NFS server.
>>
>> The root cause is this setting in /etc/default/nfs-kernel-server:
>> RPCMOUNTDOPTS="--manage-gids"
>>
>> Looking at https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1454112
>> it seems that the purpose of this option is to ignore client groups, which
>> breaks oVirt.
>>
>> After removing this option:
>> RPCMOUNTDOPTS=""
>>
>> and restarting the nfs-kernel-server service, creating the storage domain works.
>>
>> You can check with Ubuntu folks why they are enabling this configuration
>> by default, and if disabling it has any unwanted side effects.
>>
>> Nir
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PHTQVU5GZCQ5AEMT53ISFXLR2IRU7T2Y/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-25 Thread Nir Soffer
On Mon, Jan 25, 2021 at 12:33 AM Matt Snow  wrote:

I reproduced the issue with an Ubuntu Server 20.04 NFS server.

The root cause is this setting in /etc/default/nfs-kernel-server:
RPCMOUNTDOPTS="--manage-gids"

Looking at https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1454112
it seems that the purpose of this option is to ignore client groups, which
breaks oVirt.

After removing this option:
RPCMOUNTDOPTS=""

and restarting the nfs-kernel-server service, creating the storage domain works.
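
For reference, a minimal sketch of applying this on an Ubuntu 20.04 server
(file and service names per the default nfs-kernel-server package; adjust if
your setup differs):

$ sudo sed -i 's/^RPCMOUNTDOPTS=.*/RPCMOUNTDOPTS=""/' /etc/default/nfs-kernel-server
$ sudo systemctl restart nfs-kernel-server
$ ps -ef | grep rpc.mountd | grep -v grep   # verify --manage-gids is gone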

You can check with Ubuntu folks why they are enabling this configuration
by default, and if disabling it has any unwanted side effects.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDRVTC675KQX2RX5P42YNNGYG7Z75UBC/


[ovirt-users] Re: split brain scenario?

2021-01-23 Thread Nir Soffer
On Sat, Jan 23, 2021 at 5:18 PM Henry lol  wrote:
>
> Hi,
>
> according to HA VM documentation, a paused VM may be started on another host 
> and later resumed on the original host.
> - https://www.ovirt.org/develop/ha-vms.html
> here, I'm assuming the HA VM was paused due to I/O error.
>
> but I'm wondering how it can happen because I guess the HA VM will be 
> restarted on another host only after it's completely killed from the original 
> host.

This is true for normal VMs, but not for HA VMs. These can be started
on another host even if we don't know if the VM is still running on the
original host. An example use case is a host becoming disconnected from
the management network, or a host having a hardware issue.

> can you give the split brain scenario?

An HA VM uses a storage lease, so it cannot have a split brain.

When a VM is paused, it releases the lease. When the VM is resumed,
it tries to acquire the lease before resuming, and the resume will fail
if the lease is owned by another host.

If you start the HA VM on another host, the other host will acquire
the storage lease.  Resuming the original paused VM will fail.
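
If you want to see this on a host, sanlock can report the lockspaces and
resources it currently holds; a quick sketch (run as root on the host):

# sanlock client status

A running VM with a storage lease shows a resource entry for its lease
volume; after the VM is paused and the lease is released, that entry is gone.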

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPB6FMFVIFX3FFFL64BQXVPGATLSS6TQ/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Nir Soffer
On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin  wrote:
>
> I understood, more than the code that works with qemu already exists for 
> openstack integration

We have code in vdsm and engine to support librbd, but using it in a cinderlib
based volume is not a trivial change.

On the engine side, this means changing the flow: instead of attaching
a device to a host, engine would configure the VM XML with a network disk,
using the rbd url, the same way the old cinder support did.

To make this work, engine needs to configure the ceph authentication
secrets on all hosts in the DC. We have code to do this for the old cinder
storage domain, but it is not used for the new cinderlib setup. I'm not sure
how easy it is to use the same mechanism for cinderlib.
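
For context, at the libvirt level such a host-side ceph secret boils down to
something like this (a hand-written sketch; the uuid, name and key are
placeholders, not something engine generates today for cinderlib):

$ cat > ceph-secret.xml <<'EOF'
<secret ephemeral='no' private='yes'>
  <uuid>SECRET-UUID</uuid>
  <usage type='ceph'>
    <name>client.ovirt secret</name>
  </usage>
</secret>
EOF
$ virsh secret-define ceph-secret.xml
$ virsh secret-set-value SECRET-UUID BASE64-CEPH-KEY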

Generally, we don't want to spend time on special code for ceph, and prefer
to outsource this to os brick and the kernel, so we have a uniform way to
use volumes. But if the special code gives important benefits, we can
consider it.

I think OpenShift Virtualization is using the same solution (kernel-based rbd)
for ceph. An important requirement for us is having an easy way to migrate
VMs from oVirt to OpenShift Virtualization. Using the same ceph configuration
can make this migration easier.

I'm also not sure about the future of librbd support in qemu. I know that
qemu folks also want to get rid of such code. For example libgfapi
(the Gluster native driver) is not maintained and likely to be removed soon.

If this feature is important to you, please open an RFE for this, and explain
why it is needed.

We can consider it for future 4.4.z release.

Adding some storage and qemu folks to get more info on this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KBJWIHO4TYLHUPPFVKS5LTZRMU454W53/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-21 Thread Nir Soffer
On Wed, Jan 20, 2021 at 7:36 PM Matt Snow  wrote:
>
>
>
> On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:
>>
>> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
>> >
>> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same 
>> > problem with both versions. The issue occurs in both cockpit UI and tmux'd 
>> > CLI of ovirt-hosted-engine-setup. I get past the point where the VM is 
>> > created and running.
>> > I tried to do some debugging on my own before reaching out to this list. 
>> > Any help is much appreciated!
>> >
>> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4 
>> > cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but 
>> > I believe it meets the minimum requirements.
>> >
>> > NFS server:
>> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
>>
>> We don't test ZFS, but since you also tried a non-ZFS server and hit the
>> same issue, the issue is probably not ZFS.
>>
>> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried 
>> > 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
>>
>> You are missing anonuid=36,anongid=36. This should not affect sanlock, but
>> you will have issues with libvirt without this.
>
>
> I found this setting in another email thread and have applied it
>
> root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs
>
> tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
>
> root@stumpy:/tanker/ovirt/host_storage#
>>
>>
>> Here is working export from my test system:
>>
>> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>>
> I have updated both ZFS server and the Ubuntu 20.04 NFS server to use the 
> above settings.
>  # ZFS Server
>
> root@stumpy:/tanker/ovirt/host_storage# zfs set 
> sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
>
> root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs
>
> tanker/ovirt  sharenfs  
> rw,sync,anonuid=36,anongid=36,no_subtree_check  local
>
> root@stumpy:/tanker/ovirt/host_storage#
>
> # Ubuntu laptop
>
> ls -ld /exports/host_storage
>
> drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
>
> root@msnowubntlt:/exports/host_storage# showmount -e dongle
>
> Export list for dongle:
>
> /exports/host_storage *
>
> root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
>
> /exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>
> root@msnowubntlt:/exports/host_storage#
>
>
>
> Interesting point - upon being re-prompted to configure storage after initial 
> failure I provided *different* NFS server information (the ubuntu laptop)
> **snip**
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[]". HTTP response code is 400.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}
>
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]: nfs
>
>   Please specify the nfs version you would like to use (auto, v3, v4, 
> v4_0, v4_1, v4_2)[auto]:
>
>   Please specify the full shared storage connection path to use 
> (example: host:/path): dongle:/exports/host_storage
>
>   If needed, specify additional mount options for the connection to 
> the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
>
> [ INFO  ] Creating Storage Domain
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
> of steps]
>
> **snip**
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[]". HTTP response code is 400.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}
>
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]:
>
> **snip**
> Upon checking mounts I see that the original server is still being used.
>
> [root@brick ~]# mount | grep nfs
>
> sunrpc on /var/lib/n

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Nir Soffer
On Wed, Jan 20, 2021 at 3:52 AM Matt Snow  wrote:
>
> [root@brick ~]# ps -efz | grep sanlock

Sorry, it's "ps -efZ", but we already know it's not selinux.

> [root@brick ~]# ps -ef | grep sanlock
> sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock daemon

Does sanlock run with the right groups?

On a working system:

$ ps -efZ | grep sanlock | grep -v grep
system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 983 1  0 11:23 ?
00:00:03 /usr/sbin/sanlock daemon
system_u:system_r:sanlock_t:s0-s0:c0.c1023 root 986  983  0 11:23 ?
00:00:00 /usr/sbin/sanlock daemon

The sanlock process running as the "sanlock" user (pid=983) is the
interesting one.
The other one is a helper that never accesses storage.

$ grep Groups: /proc/983/status
Groups: 6 36 107 179

Vdsm verifies this on startup using "vdsm-tool is-configured". On a working system:

$ sudo vdsm-tool is-configured
lvm is configured for vdsm
libvirt is already configured for vdsm
sanlock is configured for vdsm
Managed volume database is already configured
Current revision of multipath.conf detected, preserving
abrt is already configured for vdsm
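
If any of these is reported as not configured, it can usually be fixed with
something like (a sketch; reconfiguring requires stopping the related
services first):

$ sudo vdsm-tool configure --module sanlock
$ sudo vdsm-tool configure --force    # or reconfigure all modules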

> [root@brick ~]# ausearch -m avc
> 

Looks good.

> [root@brick ~]# ls -lhZ 
> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
> total 278K
> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 ids

Looks correct.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M5GPNUKUOSM252Z4WSQY3AOLEUBCM3II/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Nir Soffer
On Wed, Jan 20, 2021 at 6:45 AM Matt Snow  wrote:
>
> Hi Nir, Yedidyah,
> for what it's worth I ran through the steps outlined here: 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/troubleshooting-problems-related-to-selinux_using-selinux
>  and eventually got to running
> `setenforce 0` and the issue still persists.

Good, this and no reports in "ausearch -m avc" show that this is
not a selinux issue.

> On Tue, Jan 19, 2021 at 6:52 PM Matt Snow  wrote:
>>
>> [root@brick ~]# ps -efz | grep sanlock
>>
>> error: unsupported SysV option
>>
>>
>> Usage:
>>
>>  ps [options]
>>
>>
>>  Try 'ps --help <simple|list|output|threads|misc|all>'
>>
>>   or 'ps --help <s|l|o|t|m|a>'
>>
>>  for additional help text.
>>
>>
>> For more details see ps(1).
>>
>> [root@brick ~]# ps -ef | grep sanlock
>>
>> sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock daemon
>>
>> root13091308  0 10:21 ?00:00:00 /usr/sbin/sanlock daemon
>>
>> root   68086   67674  0 13:38 pts/400:00:00 tail -f sanlock.log
>>
>> root   73724   68214  0 18:49 pts/500:00:00 grep --color=auto sanlock
>>
>>
>> [root@brick ~]# ausearch -m avc
>>
>> 
>>
>>
>> [root@brick ~]# ls -lhZ 
>> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
>>
>> total 278K
>>
>> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 ids
>>
>> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 inbox
>>
>> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 leases
>>
>> -rw-rw-r--. 1 vdsm kvm system_u:object_r:nfs_t:s0  342 Jan 19 13:38 metadata
>>
>> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 outbox
>>
>> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0 1.3M Jan 19 13:38 xleases
>>
>> [root@brick ~]#
>>
>>
>> On Tue, Jan 19, 2021 at 4:13 PM Nir Soffer  wrote:
>>>
>>> On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>>> >
>>> >
>>> > [root@brick log]# cat sanlock.log
>>> >
>>> > 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host 
>>> > 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
>>> > 2021-01-15 19:17:31 7497 [36293]: s1 lockspace 
>>> > 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
>>> > 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to 
>>> > open 
>>> > /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
>>>
>>> Smells like selinux issue.
>>>
>>> What do you see in "ausearch -m avc"?
>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B5KWEDIY4LS4GYNGQDKJQJYMDICGHPVE/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Nir Soffer
On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
>
> I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same 
> problem with both versions. The issue occurs in both cockpit UI and tmux'd 
> CLI of ovirt-hosted-engine-setup. I get past the point where the VM is 
> created and running.
> I tried to do some debugging on my own before reaching out to this list. Any 
> help is much appreciated!
>
> ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4 
> cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I 
> believe it meets the minimum requirements.
>
> NFS server:
> * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.

We don't test ZFS, but since you also tried a non-ZFS server and hit the same
issue, the issue is probably not ZFS.

> * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried 
> 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'

You are missing anonuid=36,anongid=36. This should not affect sanlock, but
you will have issues with libvirt without this.

Here is working export from my test system:

/export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)

> * The target directory is always empty and chown'd 36:36 with 0755 
> permissions.

Correct.
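
For reference, preparing such an export on a plain NFS server is roughly
(a sketch; paths are examples):

# mkdir -p /export/00
# chown 36:36 /export/00
# chmod 0755 /export/00
# echo '/export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)' >> /etc/exports
# exportfs -ra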

> * I have tried using both IP and DNS names. forward and reverse DNS works 
> from ovirt host and other systems on the network.
> * The NFS share always gets mounted successfully on the ovirt node system.
> * I have tried auto and v3 NFS versions in other various combinations.
> * I have also tried setting up an NFS server on a non-ZFS backed storage 
> system that is open to any host and get the same errors as shown below.
> * I ran nfs-check.py script without issue against both NFS servers and 
> followed other verification steps listed on 
> https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
> ***Snip from ovirt-hosted-engine-setup***
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]: nfs
>   Please specify the nfs version you would like to use (auto, v3, v4, 
> v4_0, v4_1, v4_2)[auto]:
>   Please specify the full shared storage connection path to use 
> (example: host:/path): stumpy.mydomain.com:/tanker/ovirt/host_storage
>   If needed, specify additional mount options for the connection to 
> the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []: rw
> [ INFO  ] Creating Storage Domain
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
> of steps]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage 
> interface to be up]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir 
> existence]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using 
> username/password credentials]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage 
> domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage 
> domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Parse OVF]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage 
> domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free 
> space]
> [ INFO  ] 

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Nir Soffer
On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>
>
> [root@brick log]# cat sanlock.log
>
> 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host 
> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
> 2021-01-15 19:17:31 7497 [36293]: s1 lockspace 
> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
> 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to 
> open 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids

Smells like a selinux issue.

What do you see in "ausearch -m avc"?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVYDPG2SIAU7AAHFHKC63MDN2NAINWTI/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Nir Soffer
On Tue, Jan 19, 2021 at 3:43 PM Yedidyah Bar David  wrote:
...
> > 2021-01-18 08:43:25,524-0700 INFO  (jsonrpc/0) [storage.SANLock] 
> > Initializing sanlock for domain 4b3fb9a9-6975-4b80-a2c1-af4e30865088 
> > path=/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/4b3fb9a9-6975-4b80-a2c1-af4e30865088/dom_md/ids
> >  alignment=1048576 block_size=512 io_timeout=10 (clusterlock:286)
> > 2021-01-18 08:43:25,533-0700 ERROR (jsonrpc/0) [storage.SANLock] Cannot 
> > initialize lock for domain 4b3fb9a9-6975-4b80-a2c1-af4e30865088 
> > (clusterlock:305)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> > 295, in initLock
> > sector=self._block_size)
> > sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> > device')
> > 2021-01-18 08:43:25,534-0700 INFO  (jsonrpc/0) [vdsm.api] FINISH 
> > createStorageDomain error=Could not initialize cluster lock: () 
> > from=:::192.168.222.53,39612, flow_id=5618fb28, 
> > task_id=49a1bc04-91d0-4d8f-b847-b6461d980495 (api:52)
> > 2021-01-18 08:43:25,534-0700 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
> > (Task='49a1bc04-91d0-4d8f-b847-b6461d980495') Unexpected error (task:880)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> > 295, in initLock
> > sector=self._block_size)
> > sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> > device')
>
> I think this ^^ is the issue. Can you please check /var/log/sanlock.log?

This is a symptom of inaccessible storage. Sanlock failed to write to
the "ids" file.

We need to understand why sanlock failed. If the partially created domain is
still available, this may explain the issue:

ls -lhZ /rhev/data-center/mnt/server:_path/storage-domain-uuid/dom_md

ps -efz | grep sanlock

ausearch -m avc

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QT7I2TWFCTTQDI37W3RIW6AQQZAMIHWB/


[ovirt-users] Re: not able to upload disks, iso - Connection to ovirt-imageio service has failed. Ensure that ovirt-engine certificate is registered as a valid CA in the browser.

2021-01-12 Thread Nir Soffer
On Tue, Jan 12, 2021 at 11:46 AM dhanaraj.ramesh--- via Users
 wrote:
>
> I fixed it with below solutions
>
> # cp -p /etc/ovirt-imageio/conf.d/50-engine.conf 
> /etc/ovirt-imageio/conf.d/99-local.conf

It seems that you have an all-in-one setup, running engine and vdsm on
the same host.
This setup is not supported since ovirt-4.0 (or earlier), but it used
to work on 4.3.

This is unlikely to work, since the engine configuration will not work for
vdsm. If ovirt-imageio runs with the engine configuration, it will not work
with vdsm, and if it runs with the vdsm configuration, it will not work with
the engine.

The way to fix this is to use the vdsm configuration for ovirt-imageio,
and disable the proxy on the
engine side, so the engine UI works directly with the local imageio daemon.

engine-config -s "ImageTransferProxyEnabled=false"
systemctl restart ovirt-engine
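
On the host side, assuming the 99-local.conf copy created above, I would also
revert that override so ovirt-imageio falls back to the vdsm configuration
(a sketch):

rm /etc/ovirt-imageio/conf.d/99-local.conf
systemctl restart ovirt-imageio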

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6SM3BCGW7M2X4MG7OHDHZUB5Y34VZKT/


[ovirt-users] Re: QEMU error qemuDomainAgentAvailable in /var/log/messages

2021-01-07 Thread Nir Soffer
On Thu, Jan 7, 2021 at 2:51 PM  wrote:
>
> Hello everyone and a happy new year!
>
> I have a question which might be silly but I am stumped. I keep getting the 
> following error in my /var/log/messages
>
> Jan 5 12:20:30 ovno3 libvirtd: 2021-01-05 10:20:30.481+: 5283: error : 
> qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest 
> agent is not connected
>
> This entry appears on the arbitrary node only and it has a recurrence of 5 
> minutes.

Did you install qemu-guest-agent in the guest? Is the
qemu-guest-agent.service running?

> I have GlusterFS on the ovirt environment (production environment) and it's 
> serving several vital services. The VMs are running ok and I haven't noticed 
> any discrepancy. Almost a week ago there was a disconnection on the gluster 
> storage but since then everything works as expected.
>
> Does anyone know what this error is and if there is a guide or something to 
> fix it? I have no idea what to search and where.

If you don't use snapshots or backups, and you don't need to know the
guest IP address, you can ignore this issue. Otherwise you should install the
agent and make sure the guest agent service is running.
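
Installing and enabling it in the guest is typically just (package and service
names for an EL/Fedora guest; adjust for other distros):

# dnf install qemu-guest-agent
# systemctl enable --now qemu-guest-agent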

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PGGZ75XUCEHP4YEFPNZY6N3WVKLYGQZ4/


[ovirt-users] Re: how to build oVirt engine on eclipse?

2021-01-07 Thread Nir Soffer
On Thu, Jan 7, 2021 at 8:20 PM Henry lol  wrote:
...
> I've only imported the existing maven projects from engine's git, but there 
> were so many errors and the build failed.
>
> Should I do some configurations before importing?

There are some dependencies mentioned in README.adoc.

I think most developers are using IntelliJ IDEA
https://www.jetbrains.com/idea/

But I don't think anyone is building from the IDE. You can build using:

make clean install-dev PREFIX="$HOME/ovirt-engine"

See README.adoc for more info on building engine.

Adding de...@ovirt.org since this question is not about using ovirt.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JLYQWKE7KLQ64ZXJPXDE4MLSCMRB6HJ6/


[ovirt-users] Re: VMs shut down after backup: "moved from 'Up' --> 'Down'" only on RHEL hosts

2021-01-07 Thread Nir Soffer
On Thu, Jan 7, 2021, 16:48 Łukasz Kołaciński 
wrote:
...

> *2020-12-08 10:18:35,451+01 INFO
>  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-31) [] VM
> '35183baa-1c70-4016-b7cd-528889876f19'(stor2rrd) moved from 'Up' --> 'Down'*
> *2020-12-08 10:18:35,466+01 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ForkJoinPool-1-worker-31) [] EVENT_ID: VM_DOWN_ERROR(119), VM stor2rrd is
> down with error. Exit message: Lost connection with qemu process.*
>

Looks like qemu crashed when stopping the backup.

Can you file a bug for this, and attach logs?

We need these logs from the host the VM was running on:
- /var/log/vdsm/vdsm.log (the right log showing the time backup was stopped)
- /var/log/ovirt-imageio/daemon.log
- /var/log/libvirt/libvirtd.log (may be missing)
- /var/log/libvirt/qemu/<vm-name>.log

Also the vm xml (virsh -r dumpxml vm-name).

...

> The environment of faulty host is:
> OS Version: RHEL - 8.3 - 1.0.el8
> OS Description: Red Hat Enterprise Linux 8.3 (Ootpa)
> Kernel Version: 4.18.0 - 240.1.1.el8_3.x86_64
> KVM Version: 5.1.0 - 14.module+el8.3.0+8438+644aff69
> LIBVIRT Version: libvirt-6.6.0-7.module+el8.3.0+8424+5ea525c5
>

I never had any issue with these versions:

# rpm -q libvirt-daemon qemu-kvm
libvirt-daemon-6.6.0-8.module+el8.3.1+8648+130818f2.x86_64
qemu-kvm-5.1.0-13.module+el8.3.0+8424+e82f331d.x86_64

Looks like you are running an older libvirt version, but a newer qemu version.
Where did you get these packages?

Are you running on RHEL or CentOS?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RC4BUWJ4W4CY5C5BJGNOEQRSHIGYPKEU/


[ovirt-users] Re: upload_disk.py - CLI Upload Disk

2021-01-03 Thread Nir Soffer
On Sun, Jan 3, 2021 at 11:08 AM Steven Rosenberg  wrote:
>
> Dear Strahil,
>
> For 4.4.4 the sdk examples do require that you create a configuration file. 
> Here is an example:
>
> ~/config/ovirt.conf:
>
>
> [engine1]
>
> engine_url = http://localhost:8080
>
> username = admin@internal
>
> password = 123
>
> secure = yes
>
> cafile = /home/user/ovirt_engine_master/etc/pki/ovirt-engine/ca.pem

Correct. There is an example configuration file here:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/ovirt.conf

It should be installed with the example scripts. We will fix this in the next
release.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7M2NU5KPSGVJF3XD52OENP4DLLCGZWWD/


[ovirt-users] Re: Cannot copy or move disks

2020-11-16 Thread Nir Soffer
On Sun, Nov 15, 2020 at 10:27 PM  wrote:
>
> So, you think it's really a bug?

I'm pretty sure this is a bug on the gluster side.

>
> ____
> De: "Nir Soffer" 
> Para: supo...@logicworks.pt
> Cc: "users" , "Sahina Bose" , "Krutika 
> Dhananjay" , "Nisan, Tal" 
> Enviadas: Domingo, 15 De Novembro de 2020 15:03:21
> Assunto: Re: [ovirt-users] Cannot copy or move disks
>
> On Sat, Nov 14, 2020 at 4:45 PM  wrote:
> >
> > Hello,
> >
> > I just update to Version 4.4.3.11-1.el8. Engine and host
> >
> > and now I cannot copy or move disks.
> >
> > Storage domains are glusterfs
> >
> > # gluster --version
> > glusterfs 7.8
> >
> > Here is what I found on vdsm.log
> >
> > 2020-11-14 14:08:16,917+ INFO  (tasks/5) [storage.SANLock] Releasing 
> > Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', 
> > path='/rhev/data-center/mnt/glusterSD/node1-teste.aclou
> > d.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease',
> >  offset=0) (clusterlock:530)
> > 2020-11-14 14:08:17,015+ INFO  (tasks/5) [storage.SANLock] Successfully 
> > released Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', 
> > path='/rhev/data-center/mnt/glusterSD/node1
> > -teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease',
> >  offset=0) (clusterlock:540)
> > 2020-11-14 14:08:17,016+ ERROR (tasks/5) [root] Job 
> > '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
> > Traceback (most recent call last):
> >  File "/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run
> >self._run()
> >  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/copy_data.py", 
> > line 110, in _run
> >self._operation.run()
> >  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 374, 
> > in run
> >for data in self._operation.watch():
> >  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 
> > 106, in watch
> >self._finalize(b"", err)
> >  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 
> > 179, in _finalize
> >raise cmdutils.Error(self._cmd, rc, out, err)
> > vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', 
> > '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', 
> > '/rhev/data-center/mnt/glusterSD/node1-teste.aclou
> > d.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051',
> >  '/rhev/data-center/mnt/glusterSD/node1-teste.ac
> > loud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b']
> >  failed with rc=1 out=b'' err=bytearray(b'qem
> > u-img: error while reading sector 260177858: No such file or directory\n')
>
> This is an impossible error for read(), preadv() etc.
>
> > 2020-11-14 14:08:17,017+ INFO  (tasks/5) [root] Job 
> > '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' will be deleted in 3600 seconds 
> > (jobs:251)
> > 2020-11-14 14:08:17,017+ INFO  (tasks/5) 
> > [storage.ThreadPool.WorkerThread] FINISH task 
> > 6cb1d496-d1ca-40b5-a488-a72982738bab (threadPool:151)
> > 2020-11-14 14:08:17,316+ INFO  (jsonrpc/2) [api.host] START 
> > getJobs(job_type='storage', 
> > job_ids=['8cd732fc-d69b-4c32-8b35-e4a8e47396fb']) 
> > from=:::192.168.5.165,36616, flow
> > _id=49320e0a-14fb-4cbb-bdfd-b2546c260bf7 (api:48)
>
> This was reported here a long time ago with various versions of gluster.
> I don't think we got any response from gluster folks about it yet.
>
> Can you file an oVirt bug about this?
>
> Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QAY6PKCNVB7ZIUUF5GNPNWVLSTDE7WRW/


[ovirt-users] Re: Cannot copy or move disks

2020-11-15 Thread Nir Soffer
On Sun, Nov 15, 2020 at 6:16 PM  wrote:
>
> I have an alert:
>
> Data Center Default compatibility version is 4.4, which is lower than latest 
> available version 4.5. Please upgrade your Data Center to latest version to 
> successfully finish upgrade of your setup.
>
> But when trying to update it to 4.5 always get this error:
>
> Host  is compatible with versions (4.2,4.3,4.4) and cannot join Cluster 
> Default which is set to version 4.5.
>
> How can I change host to 4.5?

Cluster version 4.5 is available only on RHEL 8.3. When CentOS 8.3 is
released, and RHEL AV 8.3 is built for CentOS, you will be able to
upgrade your hosts. When all hosts support cluster version 4.5, you will be
able to upgrade the cluster and DC.

This should not be related to the qemu-img error.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HN53ZQ5JHICPZSKOHS5WVVSKO22T5WJ/


[ovirt-users] Re: Cannot copy or move disks

2020-11-15 Thread Nir Soffer
On Sat, Nov 14, 2020 at 4:45 PM  wrote:
>
> Hello,
>
> I just update to Version 4.4.3.11-1.el8. Engine and host
>
> and now I cannot copy or move disks.
>
> Storage domains are glusterfs
>
> # gluster --version
> glusterfs 7.8
>
> Here is what I found on vdsm.log
>
> 2020-11-14 14:08:16,917+ INFO  (tasks/5) [storage.SANLock] Releasing 
> Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', 
> path='/rhev/data-center/mnt/glusterSD/node1-teste.aclou
> d.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease',
>  offset=0) (clusterlock:530)
> 2020-11-14 14:08:17,015+ INFO  (tasks/5) [storage.SANLock] Successfully 
> released Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', 
> path='/rhev/data-center/mnt/glusterSD/node1
> -teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease',
>  offset=0) (clusterlock:540)
> 2020-11-14 14:08:17,016+ ERROR (tasks/5) [root] Job 
> '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
> Traceback (most recent call last):
>  File "/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run
>self._run()
>  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/copy_data.py", 
> line 110, in _run
>self._operation.run()
>  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 374, 
> in run
>for data in self._operation.watch():
>  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 106, 
> in watch
>self._finalize(b"", err)
>  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 179, 
> in _finalize
>raise cmdutils.Error(self._cmd, rc, out, err)
> vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', 
> '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', 
> '/rhev/data-center/mnt/glusterSD/node1-teste.aclou
> d.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051',
>  '/rhev/data-center/mnt/glusterSD/node1-teste.ac
> loud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b']
>  failed with rc=1 out=b'' err=bytearray(b'qem
> u-img: error while reading sector 260177858: No such file or directory\n')

This is an impossible error for read(), preadv() etc.

> 2020-11-14 14:08:17,017+ INFO  (tasks/5) [root] Job 
> '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' will be deleted in 3600 seconds 
> (jobs:251)
> 2020-11-14 14:08:17,017+ INFO  (tasks/5) 
> [storage.ThreadPool.WorkerThread] FINISH task 
> 6cb1d496-d1ca-40b5-a488-a72982738bab (threadPool:151)
> 2020-11-14 14:08:17,316+ INFO  (jsonrpc/2) [api.host] START 
> getJobs(job_type='storage', job_ids=['8cd732fc-d69b-4c32-8b35-e4a8e47396fb']) 
> from=:::192.168.5.165,36616, flow
> _id=49320e0a-14fb-4cbb-bdfd-b2546c260bf7 (api:48)

This was reported here a long time ago with various versions of gluster.
I don't think we got any response from gluster folks about it yet.

Can you file an oVirt bug about this?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WMD7HJQQSIIQ7B3XKPF77JHELRRBJHYN/


[ovirt-users] Re: Removal of dpdk

2020-11-03 Thread Nir Soffer
On Tue, Nov 3, 2020 at 1:07 PM Ales Musil  wrote:

> Hello,
> we have decided to remove dpdk in the upcoming version of oVirt namely
> 4.4.4. Let us know if there are any concerns about this.
>

Can you give more info on why we want to remove this feature, and what
the replacement is for existing users?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X6K4I6OZM6Z2J7L3ZQHIYMQG3Q63VX77/


[ovirt-users] Re: Select host for image upload

2020-11-02 Thread Nir Soffer
On Mon, Nov 2, 2020 at 3:33 AM C Williams  wrote:
>
> Nir,
>
> Thank you for getting back with me !  Sorry to be slow responding.
>
> Indeed, when I pick the host that I want to use for the upload, oVirt ignores 
> me and picks the first host in my data center.

I tested with 4.4.3 and it works.

Which oVirt version are you using?

> Thank You For Your Help
>
> On Sun, Nov 1, 2020 at 4:29 AM Nir Soffer  wrote:
>>
>>
>>
>> On Mon, Oct 26, 2020, 21:34 C Williams  wrote:
>>>
>>> Hello,
>>>
>>> I am having problems selecting the host that I want to use for .iso and 
>>> other image uploads  via the oVirt Web Console. I have 20+ servers of 
>>> varying speeds.
>>>
>>> Somehow oVirt keeps wanting to select my oldest and slowest system to use 
>>> for image uploads.
>>
>>
>> oVirt picks a random host from the hosts that can access the storage 
>> domain. If you have a better way to do this, you can specify the host in the 
>> UI.
>>
>> If you select a host and oVirt ignores your selection, it sounds like a bug.
>>
>> Nir
>>
>>>
>>> Please Help !
>>>
>>> Thank You !
>>>
>>> C Williams
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RY6SCAMTCA4W5MALU5QFGSR33F6LDXG6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OP4NIQTYM2BLALFNV553UNONBHJ73IV/


[ovirt-users] Re: Q: Failed creating snapshot

2020-11-02 Thread Nir Soffer
On Fri, Oct 30, 2020 at 10:43 AM Andrei Verovski  wrote:
>
> Hi,
>
> I have oVirt 4.4 with nodes 4.2.8
>
> Attempt to create snapshot failed with:
> EventID 10802 VDSM node11 command SnapshotVDS failed: Message timeout can be 
> caused by communication issues.
> Event ID 2022 Add-disk operation failed to complete.
>
> However, the snapshot is on disk, since the VM now takes 1200GB instead of 500.
> Yet the snapshot list in VM details is empty.
>
> How to recover disk space occupied by unsuccessful snapshot ?
> I looked at /vmraid/nfs/disks/--xxx/images

Is /vmraid/nfs/disks/ your storage domain directory on the server?

You can find the disk uuid on the engine side, and search for the disk
snapshots the engine knows about:

$ sudo -u postgres psql -d engine

engine=# select image_group_id,image_guid from images where
image_group_id = '4b62aa6d-3bdd-4db3-b26f-0484c4124631';
image_group_id|  image_guid
--+--
 4b62aa6d-3bdd-4db3-b26f-0484c4124631 | 20391fbc-fa77-4fef-9aea-cf59f27f90b5
 4b62aa6d-3bdd-4db3-b26f-0484c4124631 | 13ad63bf-71e2-4243-8967-ab4324818b31
 4b62aa6d-3bdd-4db3-b26f-0484c4124631 | b5e56953-bf86-4496-a6bd-19132cba9763
(3 rows)

Looking up the same image on storage will be:

$ find /rhev/data-center/mnt/nfs1\:_export_3/ -name
4b62aa6d-3bdd-4db3-b26f-0484c4124631
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631

$ ls -1 
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631
13ad63bf-71e2-4243-8967-ab4324818b31
13ad63bf-71e2-4243-8967-ab4324818b31.lease
13ad63bf-71e2-4243-8967-ab4324818b31.meta
20391fbc-fa77-4fef-9aea-cf59f27f90b5
20391fbc-fa77-4fef-9aea-cf59f27f90b5.lease
20391fbc-fa77-4fef-9aea-cf59f27f90b5.meta
b5e56953-bf86-4496-a6bd-19132cba9763
b5e56953-bf86-4496-a6bd-19132cba9763.lease
b5e56953-bf86-4496-a6bd-19132cba9763.meta

Note that for every image we have a file without extension (the actual
image data)
and .meta and .lease files.

You can find the leaf volume - the volume used by the VM - like this:

$ grep LEAF 
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/*.meta
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/b5e56953-bf86-4496-a6bd-19132cba9763.meta:VOLTYPE=LEAF

From this you can get the image chain:

$ sudo -u vdsm qemu-img info --backing-chain
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/b5e56953-bf86-4496-a6bd-19132cba9763
[sudo] password for nsoffer:
image: 
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/b5e56953-bf86-4496-a6bd-19132cba9763
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 103 MiB
cluster_size: 65536
backing file: 13ad63bf-71e2-4243-8967-ab4324818b31 (actual path:
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/13ad63bf-71e2-4243-8967-ab4324818b31)
backing file format: qcow2
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/13ad63bf-71e2-4243-8967-ab4324818b31
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 4.88 MiB
cluster_size: 65536
backing file: 20391fbc-fa77-4fef-9aea-cf59f27f90b5 (actual path:
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/20391fbc-fa77-4fef-9aea-cf59f27f90b5)
backing file format: qcow2
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 
/rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631/20391fbc-fa77-4fef-9aea-cf59f27f90b5
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 1.65 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false

Any other image in this directory which engine does not know about and
is not part
of the backing chain is safe to delete.

For example if you found unknown files:

3b37d7e3-a6d7-4f76-9145-2732669d2ebd
3b37d7e3-a6d7-4f76-9145-2732669d2ebd.meta
3b37d7e3-a6d7-4f76-9145-2732669d2ebd.lease

You can safely delete them.
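
Using the example names above, removing such leftovers would look like this
(a sketch; double-check the UUIDs against what engine reports before deleting
anything):

$ cd /rhev/data-center/mnt/nfs1:_export_3/f5915245-0ac5-4712-b8b2-dd4d4be7cdc4/images/4b62aa6d-3bdd-4db3-b26f-0484c4124631
$ rm 3b37d7e3-a6d7-4f76-9145-2732669d2ebd \
     3b37d7e3-a6d7-4f76-9145-2732669d2ebd.meta \
     3b37d7e3-a6d7-4f76-9145-2732669d2ebd.lease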

This may be more complicated if the snapshot failed after changing the .meta
file, and your leaf is the image that engine does not know about. In this case
you will have to fix the metadata.

If you are not sure, 

[ovirt-users] Re: Select host for image upload

2020-11-01 Thread Nir Soffer
On Mon, Oct 26, 2020, 21:34 C Williams  wrote:

> Hello,
>
> I am having problems selecting the host that I want to use for .iso and
> other image uploads  via the oVirt Web Console. I have 20+ servers of
> varying speeds.
>
> Somehow oVirt keeps wanting to select my oldest and slowest system to use
> for image uploads.
>

oVirt picks a random host from the hosts that can access the storage
domain. If you have a better way to do this, you can specify the host in
the UI.

If you select a host and oVirt ignores your selection, it sounds like a bug.

Nir


> Please Help !
>
> Thank You !
>
> C Williams
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RY6SCAMTCA4W5MALU5QFGSR33F6LDXG6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGOKPH3372LSCEARRNWIHN6S2KC5552F/


[ovirt-users] Re: Discard after Delete

2020-11-01 Thread Nir Soffer
On Sun, Nov 1, 2020, 08:56 Joris DEDIEU  wrote:

> Hi list,
> I forgot to check "Discard after Delete" when creating a new volume. Is
> there a way (other than to empty the volume) to reclaim free blocks.
>

You can enable discard after delete by putting the storage domain into
maintenance and modifying the domain's advanced options.

If the storage supports discard, you did not select the wipe after delete
storage domain option, and you use the virtio-scsi interface, you can use
"fstrim" in the guest to free unused space.

Using fstrim does not reduce the logical volume allocation size; it only frees
blocks in the underlying storage.
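
In the guest that is something like:

# fstrim -av    # trim all mounted filesystems that support discard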

Nir


> In https://bugzilla.redhat.com/show_bug.cgi?id=981626 I read about
> blkdiscard, but as far as I understand it will erase my VMs ...
>
> Thanks for your answers
> Joris
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LNTZMV2VFQGCZF5GHNGEZ63JFOSOPKPH/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IHGCQ57KOSKBGQY5GWNQQGXYMKNWTCU5/


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-15 Thread Nir Soffer
On Thu, Oct 15, 2020 at 3:20 PM Gilboa Davara  wrote:
>
> On Thu, Oct 15, 2020 at 2:38 PM Nir Soffer  wrote:
>>
>> > I've got room to spare.
>> > Any documentation on how to achieve this (or some pointers where to look)?
>>
>> It should be documented in ovirt.org, and in RHV documentation:
>> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/
>>
>> > I couldn't find LVM / block device under host devices / storage domain / 
>> > etc and Google search returned irrelevant results.
>>
>> I tested locally, LVM devices are not available in:
>> Compute > Hosts > {hostname} > Host Devices
>>
>> Looks like libvirt does not support device mapper devices. You can try:
>> # virsh -r nodedev-list
>>
>> To see supported devices. The list seems to match what oVirt displays
>> in the Host Devices tab.
>>
>> So your only option is to attach the entire local device to the VM, either 
>> using
>> PCI passthrough or as a SCSI disk.
>>
>> Nir
>
>
> Full SCSI passthrough per "desktop" VM is overkill for this use case. 
> (Plus, I don't see MD devices in the list, only pure SATA/SAS devices).
> Any idea if there are plans to add support for LVM devices (or any other 
> block device)?

I don't think there is such a plan, but it makes sense to support such usage.

Please file an ovirt-engine RFE explaining the use case, and we can consider
it for a future version.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WHDHGZ6TZ5S4VQT3FTYI7COYTIEX2DQ/


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-15 Thread Nir Soffer
On Thu, Oct 15, 2020 at 8:45 AM Gilboa Davara  wrote:
>
> Hello Nir,
>
> Thanks for the prompt answer.
>
> On Wed, Oct 14, 2020 at 1:02 PM Nir Soffer  wrote:
>>
>>
>> GlusterFS?
>
>
> Yep, GlusterFS. Sorry, wrong abbreviation on my end..
>
>>
>>
>>
>> This will not be as fast as a local device passed through to the VM.
>>
>> It will also be problematic, since all hosts will mount, monitor, and 
>> maintain leases on this NFS storage, since it is considered as shared 
>> storage.
>> If a host fails to access this NFS storage, that host will be 
>> deactivated and all its VMs will migrate to other hosts. This migration 
>> storm can cause a lot of trouble.
>> In the worst case, if no other host can access this NFS storage, all other 
>> hosts will be deactivated.
>>
>> This is the same as NFS (internally this is the same code). It will work only if 
>> you can mount the same device/export on all hosts. This is even worse than 
>> NFS.
>
>
> OK. Understood. No NFS / POSIXFS storage than.
>>
>>
>>>
>>> A. Am I barking at the wrong tree here? Is this setup even possible?
>>
>>
>> This is possible using a host device.
>> You can attach a host device to a VM. This will pin the VM to the host, and 
>> give the best performance.
>> It may not be flexible enough since you need to attach the entire device. Maybe 
>> it can work with LVM logical volumes.
>
>
> I've got room to spare.
> Any documentation on how to achieve this (or some pointers where to look)?

It should be documented in ovirt.org, and in RHV documentation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/

> I couldn't find LVM / block device under host devices / storage domain / etc 
> and Google search returned irrelevant results.

I tested locally, LVM devices are not available in:
Compute > Hosts > {hostname} > Host Devices

Looks like libvirt does not support device mapper devices. You can try:
# virsh -r nodedev-list

To see supported devices. The list seems to match what oVirt displays
in the Host Devices tab.

So your only option is to attach the entire local device to the VM, either using
PCI passthrough or as a SCSI disk.
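
If the listing is long, it can be narrowed by capability, for example
(a sketch):

# virsh -r nodedev-list --cap storage
# virsh -r nodedev-list --cap scsi
# virsh -r nodedev-list --cap pci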

Nir




> - Gilboa
>
>>
>> Nir
>>
>>> B. If it is even possible, any documentation / pointers on setting up
>>> per-host private storage?
>>>
>>> I should mention that these workstations are quite beefy (64-128GB
>>> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
>>> can even split the local storage and GFS to different arrays).
>>>
>>> - Gilboa


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Nir Soffer
On Wed, Oct 14, 2020 at 1:29 PM Strahil Nikolov via Users
 wrote:
>
> Hi Gilboa,
>
> I think that storage domains need to be accessible from all nodes in the 
> cluster - and as yours will be using local storage and yet be in a 2-node 
> cluster that will be hard.
>
> My guess is that you can try the following cheat:
>
>  Create a single brick gluster volume and do some modifications:
> - volume type 'replica 1'
> - cluster.choose-local should be set to yes , once you apply the virt group 
> of settings (cause it's setting it to no)
> - Set your VMs in such way that they don't failover

This has the same issues as a "shared" NFS domain served
by one host.

> Of course, creation of new VMs will happen from the host with the SPM flag, 
> but the good thing is that you can change the host with that flag. So if your 
> gluster has a brick ovirt1:/local_brick/brick , you can set the host ovirt1 
> to 'SPM' and then create your VM.
>
> Of course the above is just pure speculation as I picked my setup to be 
> 'replica 3 arbiter 1' and trade storage for live migration.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, 14 October 2020 at 12:42:44 GMT+3, Gilboa Davara wrote:
>
>
>
>
>
> Hello all,
>
> I'm thinking about converting a couple of old dual Xeon V2
> workstations into (yet another) oVirt setup.
> However, the use case for this cluster is somewhat different:
> While I do want most of the VMs to be highly available (Via 2+1 GFS
> storage domain), I'd also want pin at least one "desktop" VM to each
> host (possibly with vGPU) and let this VM access the local storage
> directly in-order to get near bare metal performance.
>
> Now, I am aware that I can simply share an LVM LV over NFS / localhost
> and pin a specific VM to each specific host, and the performance will
> be acceptable, I seem to remember that there's a POSIX-FS storage
> domain that at least in theory should be able to give me per-host
> private storage.
>
> A. Am I barking at the wrong tree here? Is this setup even possible?
> B. If it is even possible, any documentation / pointers on setting up
> per-host private storage?
>
> I should mention that these workstations are quite beefy (64-128GB
> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
> can even split the local storage and GFS to different arrays).
>
> - Gilboa


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Nir Soffer
On Wed, Oct 14, 2020, 12:42 Gilboa Davara  wrote:

> Hello all,
>
> I'm thinking about converting a couple of old dual Xeon V2
> workstations into (yet another) oVirt setup.
> However, the use case for this cluster is somewhat different:
> While I do want most of the VMs to be highly available (Via 2+1 GFS
>

GlusterFS?

storage domain), I'd also want pin at least one "desktop" VM to each
> host (possibly with vGPU) and let this VM access the local storage
> directly in-order to get near bare metal performance.
>

This makes sense.

Now, I am aware that I can simply share an LVM LV over NFS / localhost
> and pin a specific VM to each specific host, and the performance will
> be acceptable,


This will not be as fast as a local device passed through to the VM.

It will also be problematic, since all hosts will mount, monitor, and
maintain leases on this NFS storage, since it is considered shared
storage.

If a host fails to access this NFS storage, that host will be
deactivated and all its VMs will migrate to other hosts. This migration
storm can cause a lot of trouble.

In the worst case, if no host can access this NFS storage, all
hosts will be deactivated.


I seem to remember that there's a POSIX-FS storage
> domain that at least in theory should be able to give me per-host
> private storage.
>

This is the same as NFS (internally it is the same code). It will work only
if you can mount the same device/export on all hosts. This is even worse
than NFS.


> A. Am I barking at the wrong tree here? Is this setup even possible?
>

This is possible using a host device.

You can attach a host device to a VM. This will pin the VM to the host, and
give the best performance.

It may not be flexible enough since you need to attach the entire device. Maybe
it can work with LVM logical volumes.

Nir

B. If it is even possible, any documentation / pointers on setting up
> per-host private storage?
>
> I should mention that these workstations are quite beefy (64-128GB
> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
> can even split the local storage and GFS to different arrays).
>
> - Gilboa


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Nir Soffer
On Tue, Oct 13, 2020 at 5:56 PM Nir Levy  wrote:
>
>
>
> On Tue, Oct 13, 2020 at 5:51 PM Nir Soffer  wrote:
>>
>> On Tue, Oct 13, 2020 at 4:20 PM Yedidyah Bar David  wrote:
>> >
>> > On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David  wrote:
>> > >
>> > > On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
>> > >  wrote:
>> > > >
>> > > > So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we 
>> > > > are attempting to upgrade it to latest version, 4.4.2, but it fails as 
>> > > > shown below. Problem is that the Storage domains listed are all 
>> > > > located on an external iSCSI SAN. The Storage Domains were created in 
>> > > > another cluster we had (oVirt Node 4.3 based) and detached from the 
>> > > > old cluster and imported successfully into the new cluster through the 
>> > > > oVirt Management interface. As I understand, oVirt itself has created 
>> > > > the mount points under /rhev/data-center/mnt/blockSD/ for each of the 
>> > > > iSCSI domains, and as such they are not really storaged domains on the 
>> > > > / filesystem.
>> > > >
>> > > > I do believe the solution to the mentioned BugZilla bug has caused a 
>> > > > new bug, but I may be wrong. I cannot see what we have done wrong when 
>> > > > importing these storage domains to the cluster (well, actually, some 
>> > > > were freshly created in this cluster, thus fully managed by oVirt 4.4 
>> > > > manager interface).
>> > >
>> > > This is likely caused by the fix for:
>> > > https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .
>> > >
>> > > Adding Nir.
>> > >
>> > > >
>> > > > What can we do to proceed in upgrading the hosts to latest oVirt Node?
>> > >
>> > > Right now, without another fix? Make sure that the following command:
>> > >
>> > > find / -xdev -path "*/dom_md/metadata" -not -empty
>> > >
>> > > Returns an empty output.
>> > >
>> > > You might need to move the host to maintenance and then manually
>> > > umount your SDs, or something like that.
>> > >
>> > > Please open a bug so that we can refine this command further.
>> >
>> > Nir (Levy) - perhaps we should change this command to something like:
>> >
>> > find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l
>> >
>> > >
>> > > Thanks and best regards,
>> > >
>> > > >
>> > > > Dependencies resolved.
>> > > > ================================================================================
>> > > >  Package                     Architecture  Version      Repository  Size
>> > > > ================================================================================
>> > > > Upgrading:
>> > > >  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4   782 M
>> > > >      replacing  ovirt-node-ng-image-u

[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Nir Soffer
---
> > > Total                                                8.6 MB/s | 782 MB  01:31
> > > Running transaction check
> > > Transaction check succeeded.
> > > Running transaction test
> > > Transaction test succeeded.
> > > Running transaction
> > >   Preparing:                                                              1/1
> > >   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch        1/3
> > > Local storage domains were found on the same filesystem as / ! Please 
> > > migrate the data to a new LV before upgrading, or you will lose the VMs
> > > See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
> > > Storage domains were found in:
> > > 
> > > /rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/90a52746-e0cb-4884-825d-32a9d94710ff/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/74673f68-e1fa-46cf-b0ac-a35f05d42a7a/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/f5fe00ba-c899-428f-96a2-e8d5e5707905/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/5c3d9aff-66a3-4555-a17d-172fbf043505/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/4cc6074b-a5f5-4337-a32f-0ace577e5e47/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/a7658abd-e605-455e-9253-69d7e59ff50a/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/f18e6e5c-124b-4a66-ae98-2088c87de42b/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/f431e29b-77cd-4e51-8f7f-dd73543dfce6/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/0f53281c-c756-4171-bcd2-8946956ebbd0/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/9fad9f9b-c549-4226-9278-51208411b2ac/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/c64006e7-e22c-486f-82a5-20d2b9431299/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/509de8b4-bc41-40fa-9354-16c24ae16442/dom_md
> > > 
> > > /rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md
>
> Adding also Nir Soffer for a workaround. Nir - is it safe, in this
> case, to remove
> these dead symlinks? Or even all of /rhev/data-center/mnt/blockSD/* ? Will 
> VDSM
> re-create them upon return from maintenance?

It is safe to remove /rhev/data-center/mnt/blockSD/* when the host is
in maintenance, but the contents of /rhev/data-center are not meant to be
modified by users. This area is managed by vdsm and nobody should touch it.

The issue is a wrong search: the check looks into /rhev/* when looking for
local storage domains, but local storage domains cannot be under /rhev.
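
A possible refinement of the check discussed above, keeping the earlier
"-not -type l" idea and also skipping /rhev entirely (a sketch, not the
shipped scriptlet):

# find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l -not -path "/rhev/*"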

>
> Any other workaround?
>
> Thanks and best regards,
>
> > > error: %prein(ovirt-node-ng-image-update-4.4.2-1.el8.noarch) scriptlet 
> > > failed, exit status 1
> > >
> > > Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update
> > >   Verifying: ovirt-node-ng-image-update-4.4.2-1.el8.noarch               1/3
> > >   Verifying: ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch

[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-13 Thread Nir Soffer
On Tue, Oct 13, 2020 at 12:25 PM Tim Bordemann via Users
 wrote:
>
>
>
> > On 13. Oct 2020, at 10:01, Vojtech Juranek  wrote:
> >
> > On úterý 13. října 2020 9:37:11 CEST Tim Bordemann wrote:
> >>> On 12. Oct 2020, at 12:15, Vojtech Juranek  wrote:
> >>>
> >>> On pátek 9. října 2020 19:02:32 CEST tim-nospam--- via Users wrote:
>  Hello.
> 
>  After an upgrade I am not able to upload images anymore via the ovirt ui.
>  When testing the connection, I always get the error message "Connection
>  to
>  ovirt-imageio-proxy service has failed. Make sure the service is
>  installed,
>  configured, and ovirt-engine certificate is registered as a valid CA in
>  the
>  browser.".
> 
>  I found out that the imageio daemon doesn't listen on port 54323 anymore,
>  so the browser can not connect to it. The daemon is configured to listen
>  on port 54323 though:
> 
>  # cat /etc/ovirt-imageio/conf.d/50-engine.conf
>  [...]
>  [remote]
>  port = 54323
>  [...]
> 
>  The imageio daemon has been started successfully on the engine host as
>  well
>  as on the other hosts.
> 
>  I am currently stuck, what should I do next?
>  The ovirt version I am using is 4.4.
> >>>
> >>> what is exact version of imageio? (rpm -qa|grep imageio)
> >>
> >> # rpm -qa|grep imageio
> >> ovirt-engine-setup-plugin-imageio-4.4.2.6-1.el8.noarch
> >> ovirt-imageio-daemon-2.0.10-1.el8.x86_64
> >> ovirt-imageio-common-2.0.10-1.el8.x86_64
> >> ovirt-imageio-client-2.0.10-1.el8.x86_64
> >>
> >>> On which port imageio listens? You can use e.g. netstat etc. Also plase
> >>> check imageio logs (/var/log/ovirt-imageio/daemon.log), what is there,
> >>> there shold
> >>> be something like this:
> >> # netstat -tulpn | grep 543
> >> tcp0  0 0.0.0.0:54322   0.0.0.0:*   LISTEN
> >>   2527872/platform-py tcp0  0 127.0.0.1:54324
> >> 0.0.0.0:*   LISTEN  2527872/platform-py
> >>> 2020-10-08 08:37:48,906 INFO(MainThread) [services] remote.service
> >>> listening on ('::', 54323)
> >>
> >> 2020-10-09 17:46:38,216 INFO(MainThread) [server] Starting 
> >> (pid=2527872,
> >> version=2.0.10) 2020-10-09 17:46:38,220 INFO(MainThread) [services]
> >> remote.service listening on ('0.0.0.0', 54322) 2020-10-09 17:46:38,221 INFO
> >>   (MainThread) [services] control.service listening on ('127.0.0.1',
> >> 54324) 2020-10-09 17:46:38,227 INFO(MainThread) [server] Ready for
> >> requests
> >>
> >> No entries for Port 54323 in the last 3 months. I found logentries in july
> >> though:
> >>
> >> [root@helios ~]# cat /var/log/ovirt-imageio/daemon.log | grep 54323
> >> 2020-07-11 10:13:24,777 INFO(MainThread) [services] remote.service
> >> listening on ('::', 54323) [...]
> >> 2020-07-16 19:54:13,398 INFO(MainThread) [services] remote.service
> >> listening on ('::', 54323) 2020-07-16 19:54:36,715 INFO(MainThread)
> >> [services] remote.service listening on ('::', 54323)
> >>> Also, please check if there are any other config files (*.conf) in
> >>> /etc/ovirt- imageio/conf.d or in /usr/lib/ovirt-imageio/conf.d
> >>
> >> I have, but I couldn't find anything interesting in those two files:
> >>
> >> # ls -l /etc/ovirt-imageio/conf.d/
> >> total 8
> >> -rw-r--r--. 1 root root 1458 Oct  9 17:54 50-engine.conf
> >> -rw-r--r--. 1 root root 1014 Sep 15 11:16 50-vdsm.conf
> >
> > 50-vdsm.conf overwrites 50-engine.conf (later config taken alphabetically 
> > overwrites previous config if there is any).
> >
> > If you don't use engine as a host at the same time, stop an uninstall vdsm 
> > from engine (should remove also 50-vdsm.conf) and restart ovirt-imageio 
> > service.
> >
> > If you use engine as a host at the same time, note that this is 
> > unsupported. However, there were some patches in this area recently, but 
> > they are not released yet AFAICT. See [1, 2] for more details.
> >
> > [1] https://bugzilla.redhat.com/1871348
> > [2] 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W4OTINLXYDWG3YSF2OUQU3NW3ADRPGUR/
>
> Uninstalling vdsm would also remove nearly 500 packages on the machine. I 
> have disabled and masked the vdsm service for now and restarted the imageio 
> service. Uploading an image via the webui now works again.
> I will remove vdsm during a maintainence window and maybe even reinstall the 
> machine completely.

Uninstalling vdsm and its 500 dependencies is safe; it is not needed
on the engine side unless you are using the same host as both engine and
hypervisor (all-in-one setup).

But using a maintenance window is always safer.
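
On the engine machine that would be roughly (a sketch; only do this when the
engine is not also used as a hypervisor, as discussed above):

# systemctl stop vdsmd supervdsmd
# dnf remove vdsm\*
# systemctl restart ovirt-imageio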

Nir

[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-12 Thread Nir Soffer
On Sun, Oct 11, 2020, 11:58 tim-nospam--- via Users  wrote:

> Hello.
>
> After an upgrade I am not able to upload images anymore via the ovirt ui.
> When testing the connection, I always get the error message "Connection to
> ovirt-imageio-proxy service has failed. Make sure the service is installed,
> configured, and ovirt-engine certificate is registered as a valid CA in the
> browser.".
>
> I found out that the imageio daemon doesn't listen on port 54323 anymore,
> so the browser can not connect to it. The daemon is configured to listen on
> port 54323 though:
>
> # cat /etc/ovirt-imageio/conf.d/50-engine.conf
> [...]
> [remote]
> port = 54323
> [...]
>
> The imageio daemon has been started successfully on the engine host as
> well as on the other hosts.
>
> I am currently stuck, what should I do next?
> The ovirt version I am using is 4.4.


4.4 is not specific enough; can you share the complete package versions?

There is one machine running the ovirt engine and there are 2 additional
> hosts. The OS on the machines is Centos 8.
>

Do you use the engine host as another hypervisor, or is it used only for the engine?

If you used the engine host as a hypervisor in the past, you will have a vdsm
configuration (60-vdsm.conf) overriding the engine configuration.
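
A quick way to check which configuration file wins and whether the remote
service is actually listening (a sketch; the conf.d paths are the standard
ovirt-imageio locations mentioned in this thread):

# grep -rn "^port" /usr/lib/ovirt-imageio/conf.d /etc/ovirt-imageio/conf.d
# ss -tlnp | grep -E "5432[234]"

The files are applied in alphabetical order, so the last file setting "port"
under [remote] wins.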

Nir


> Thank you,
> Tim


[ovirt-users] Re: Is it possible to backup stopped vm? (ovirt 4.4)

2020-10-06 Thread Nir Soffer
On Tue, Oct 6, 2020 at 4:08 PM Łukasz Kołaciński
 wrote:
>
> Hello,
> While trying to backup stopped VM (with POST on 
> /ovirt-engine/api/vms/b79b34c0-d8db-43e5-916e-5528ff7bcfbe/backups) I got the 
> response:
>
> 
> 
> [Cannot backup VM. The Virtual Machine should be in Up 
> status.]
> Operation Failed
> 

Yes, this is not supported now.

> I found here: 
> https://www.ovirt.org/develop/release-management/features/storage/incremental-backup.html
>  that it should be possible to back up a virtual machine that is not running.
> "If the VM is not running, the system will create a paused, stripped-down 
> version of the VM, with only backup disks attached, and use libvirt API to 
> start and stop the backup."

This is a possible solution we considered, but it is not implemented yet.

The next line is:

   Since creating special paused VM for backing up non-running VM is a lot of work,
   we may defer support for backing up non-running VMs.

We got a request to implement this from another backup vendor, and we are
discussing the possible solutions with the platform folks.

Nir
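
For reference, backing up a running VM uses the same endpoint as the request
quoted above. A minimal sketch with curl (the XML body follows the incremental
backup feature page, and the engine URL and UUIDs are placeholders):

curl -s -k --user admin@internal:password \
    -H "Content-Type: application/xml" \
    -d '<backup><disks><disk id="DISK-UUID"/></disks></backup>' \
    https://engine.example.com/ovirt-engine/api/vms/VM-UUID/backups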


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Nir Soffer
On Mon, Oct 5, 2020 at 9:06 AM Gianluca Cecchi
 wrote:
>
> On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:
>>
>> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>> >
>> >
>> >
>> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
>> > wrote:
>> >>
>> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>> >>>
>> >>>
>> >>>
>> >>> Since there wasn't a filter set on the node, the 4.4.2 update added the 
>> >>> default filter for the root-lv pv
>> >>> if there was some filter set before the upgrade, it would not have been 
>> >>> added by the 4.4.2 update.
>> >>>>
>> >>>>
>> >>
>> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an 
>> >> upcoming 4.4.3, as also now I don't have any filter set?
>> >> This would not be desirable
>> >
>> > Once you have got back into 4.4.2, it's recommended to set the lvm filter 
>> > to fit the pvs you use on your node
>> > for the local root pv you can run
>> > # vdsm-tool config-lvm-filter -y
>> > For the gluster bricks you'll need to add their uuids to the filter as 
>> > well.
>>
>> vdsm-tool is expected to add all the devices needed by the mounted
>> logical volumes, so adding devices manually should not be needed.
>>
>> If this does not work please file a bug and include all the info to reproduce
>> the issue.
>>
>
> I don't know what exactly happened when I installed ovirt-ng-node in 4.4.0, 
> but the effect was that no filter at all was set up in lvm.conf, and so the 
> problem I had upgrading to 4.4.2.
> Any way to see related logs for 4.4.0? In which phase of the install of the 
> node itself or of the gluster based wizard is it supposed to run the 
> vdsm-tool command?
>
> Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:
>
> "
> [root@ovirt01 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
>   mountpoint:  /gluster_bricks/data
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
>   mountpoint:  /gluster_bricks/engine
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
>   mountpoint:  /gluster_bricks/vmstore
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/onn-home
>   mountpoint:  /home
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
>   mountpoint:  /
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-swap
>   mountpoint:  [SWAP]
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-tmp
>   mountpoint:  /tmp
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var
>   mountpoint:  /var
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_crash
>   mountpoint:  /var/crash
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log
>   mountpoint:  /var/log
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
> This is the recommended LVM filter for this host:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", 
> "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> To use the recommended filter we need to

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Nir Soffer
On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>
>
>
> On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
> wrote:
>>
>> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>>>
>>>
>>>
>>> Since there wasn't a filter set on the node, the 4.4.2 update added the 
>>> default filter for the root-lv pv
>>> if there was some filter set before the upgrade, it would not have been 
>>> added by the 4.4.2 update.


>>
>> Do you mean that I will get the same problem upgrading from 4.4.2 to an 
>> upcoming 4.4.3, as also now I don't have any filter set?
>> This would not be desirable
>
> Once you have got back into 4.4.2, it's recommended to set the lvm filter to 
> fit the pvs you use on your node
> for the local root pv you can run
> # vdsm-tool config-lvm-filter -y
> For the gluster bricks you'll need to add their uuids to the filter as well.

vdsm-tool is expected to add all the devices needed by the mounted
logical volumes, so adding devices manually should not be needed.

If this does not work please file a bug and include all the info to reproduce
the issue.

> The next upgrade should not set a filter on its own if one is already set.
>
>>
>>


 Right now only two problems:

 1) a long running problem that from engine web admin all the volumes are 
 seen as up and also the storage domains up, while only the hosted engine 
 one is up, while "data" and vmstore" are down, as I can verify from the 
 host, only one /rhev/data-center/ mount:

>> [snip]


 I already reported this, but I don't know if there is yet a bugzilla open 
 for it.
>>>
>>> Did you get any response for the original mail? haven't seen it on the 
>>> users-list.
>>
>>
>> I think it was this thread related to 4.4.0 released and question about 
>> auto-start of VMs.
>> A script from Derek that tested if domains were active and got false 
>> positive, and my comments about the same registered behaviour:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/
>>
>> But I think there was no answer on that particular item/problem.
>> Indeed I think you can easily reproduce, I don't know if only with Gluster 
>> or also with other storage domains.
>> I don't know if it can have a part the fact that on the last host during a 
>> whole shutdown (and the only host in case of single host) you have to run  
>> the script
>> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>> otherwise you risk not to get a complete shutdown sometimes.
>> And perhaps this stop can have an influence on the following startup.
>> In any case the web admin gui (and the API access) should not show the 
>> domains active when they are not. I think there is a bug in the code that 
>> checks this.
>
> If it got no response so far, I think it could be helpful to file a bug with 
> the details of the setup and the steps involved here so it will get tracked.
>
>>
>>>

 2) I see that I cannot connect to cockpit console of node.

>> [snip]

 NOTE: the ost is not resolved by DNS but I put an entry in my hosts client.
>>>
>>> Might be required to set DNS for authenticity, maybe other members on the 
>>> list could tell better.
>>
>>
>> It would be the first time I see it. The access to web admin GUI works ok 
>> even without DNS resolution.
>> I'm not sure if I had the same problem with the cockpit host console on 
>> 4.4.0.
>
> Perhaps +Yedidyah Bar David  could help regarding cockpit web access.
>
>>
>> Gianluca
>>
>>


[ovirt-users] Re: ISO Repo

2020-09-23 Thread Nir Soffer
On Wed, Sep 23, 2020 at 8:26 PM Philip Brown  wrote:
>
> well, I havent used it before :)
>
> I just discovered that some windows VMs of ours needed guest additions, and 
> looked for the right way to handle it for ovirt.
>
> OH. I think you are implying perhaps that modern windows VMs automatically 
> have hooks for virtual hosting?
> Sad to say, these are ... "older"*cough*   Vms.

I don't know how the windows guest tools are shipped now, but Gal should know.

If you have this ISO, you can upload it to some data domain. After that you can
attach it to any VM.

> ----- Original Message -
> From: "Nir Soffer" 
> To: "Philip Brown" 
> Cc: "Jeremey Wise" , "users" , "Arik 
> Hadas" 
> Sent: Wednesday, September 23, 2020 10:09:36 AM
> Subject: Re: [ovirt-users] Re: ISO Repo
>
>
> Maybe it is not needed now?
>
> How did you use this ISO before? uploaded it to ISO domain?
>


[ovirt-users] Re: ISO Repo

2020-09-23 Thread Nir Soffer
On Wed, Sep 23, 2020 at 7:59 PM Philip Brown  wrote:
>
>
>
> > The note is about ISO storage domain - this is a special storage
> > domain that can be created
> > only on NFS, and can hold only ISO images. To add ISO images to this
> > special storage domain,
> > you had to use a special iso uploader program, or add manually to the
> > right place in the storage
> > domain directory, and make sure the permissions are correct. This
> > storage domain type was
> > deprecated several version ago,
>
>
>
>
> Might I suggest that you folks revisit the issue of what to do with the
> "guest tools iso". Since supposedly they are,
>
> "provided by the oVirt-guest-tools-iso package installed as a dependency
>  to the oVirt Engine. This ISO file is located in
>  /usr/share/oVirt-guest-tools-iso/oVirt-tools-setup.iso on the system
>  on which the oVirt Engine is installed."
>
>
> https://www.ovirt.org/documentation/vmm-guide/chap-Installing_Windows_Virtual_Machines.html
>
> (the package is no longer an automatic dependancy either. at least for 
> hosted-engine)

Maybe it is not needed now?

How did you use this ISO before? Did you upload it to an ISO domain?


[ovirt-users] Re: ISO Repo

2020-09-23 Thread Nir Soffer
On Wed, Sep 23, 2020 at 6:13 PM Jeremey Wise  wrote:
>
>
> I saw notes about oVirt 4.4 may no longer support ISO images... but there are 
> times like now I need to build based on specific ISO images.

4.4 supports ISO images, of course.

The note is about the ISO storage domain - this is a special storage domain
that can be created only on NFS, and can hold only ISO images. To add ISO
images to this special storage domain, you had to use a special ISO uploader
program, or add them manually to the right place in the storage domain
directory, and make sure the permissions are correct. This storage domain
type was deprecated several versions ago, but it is still available.

Since 4.2 (or maybe even before that) you can upload ISO images to a data
domain of any type (e.g. NFS, Gluster, iSCSI, FC). Upload is available via
the administration portal, or via the API/SDK, same as for other image types.

Note that there is an issue on block storage data domains that prevents
changing the CD on a running VM. We are working on fixing this for 4.4.3.

> I tried to do a cycle to create an image file 8GB  then do dd if=blah.iso 
> of=/
>
> Created a new vm with this as boot disk and it fails to boot...  so.. back to 
> "create volume for iso images"

I'm not sure what you tried to do.

> But when I do that I get error
>
> New Domain -> "Domain function" =iso  Storage type = glusterFs
> Use Managed Gluster volume -> Select already working gluster file space 
> "thor.penguinpages.local:/iso
> VFS Type: glusterfs
> mount options: 
> backup-volfile-servers=odin.penguinpages.local:medusa.penguinpages.local
>
>
> Error:
> Error while executing action: Cannot add Storage Connection. Performance 
> o_direct option is not enabled for storage domain.

Your gluster volume is not configured properly for oVirt. Did you add
the volume via the administration portal?

Why do you need a special gluster volume for ISO images?
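
If you do want a dedicated volume, the usual cause of that error is a volume
created outside the wizard without the virt settings applied. A sketch of the
relevant options (the volume name "iso" is a placeholder; verify the exact
option set against your gluster and oVirt versions):

# gluster volume set iso group virt
# gluster volume set iso performance.strict-o-direct on
# gluster volume set iso network.remote-dio off
# gluster volume set iso storage.owner-uid 36
# gluster volume set iso storage.owner-gid 36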

> Questions:
> 1) Why did the image and dd of iso to boot disk not work?

We don't have enough information here to tell.

> 2) Any ideas about create of iso mount volume?

Yes:
1. Visit  Storage > Disks
2. Click Upload
3. Select ISO image
4. Select storage domain to upload the image
5. Click "Upload"

Nir

>
> --
> penguinpages


[ovirt-users] Re: info on iSCSI connection setup in oVirt 4.4

2020-09-23 Thread Nir Soffer
On Wed, Sep 23, 2020 at 1:29 PM Gianluca Cecchi
 wrote:
>
> Hello,
> supposing to have a node that connects to an iSCSI storage domain in oVirt 
> 4.4, is there any particular requirement in the configuration of the network 
> adapter (ifcfg-eno1 file) when I pre-configure the server OS?
> Eg, do I need to have it managed by NetworkManager in 4.4? Can I instead set 
> NM_CONTROLLED=no for this controller?

4.4 is using NetworkManager, so it is unlikely to be able to manage the NIC in
the suggested configuration.
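
A quick way to see whether NetworkManager actually manages the interface on a
4.4 host (a sketch; the NIC name eno1 comes from the question above):

# nmcli device status
# nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show eno1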

Adding Ales to add more info on this.

Nir


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-22 Thread Nir Soffer
On Tue, Sep 22, 2020 at 11:23 PM Strahil Nikolov  wrote:
>
> In my setup , I got no filter at all (yet, I'm on 4.3.10):
> [root@ovirt ~]# lvmconfig | grep -i filter

We create the LVM filter automatically since 4.4.1. If you don't use block
storage (FC, iSCSI) you don't need an LVM filter. If you do, you can create it
manually using vdsm-tool.

> [root@ovirt ~]#
>
> P.S.: Don't forget to 'dracut -f' due to the fact that the initramfs has a 
> local copy of the lvm.conf

Good point
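
In practice that means roughly (both commands appear elsewhere in this thread;
run them on the host, not the engine):

# vdsm-tool config-lvm-filter -y   # writes the recommended filter to /etc/lvm/lvm.conf
# dracut -f                        # rebuild the initramfs so it picks up the new lvm.conf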

>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
> В вторник, 22 септември 2020 г., 23:05:29 Гринуич+3, Jeremey Wise 
>  написа:
>
>
>
>
>
>
>
> Correct..  on wwid
>
>
> I do want to make clear here.  that to geta around the error you must ADD  
> (not remove ) drives to /etc/lvm/lvm.conf  so oVirt Gluster can complete 
> setup of drives.
>
> [root@thor log]# cat /etc/lvm/lvm.conf |grep filter
> # Broken for gluster in oVirt
> #filter = 
> ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", 
> "r|.*|"]
> # working for gluster wizard in oVirt
> filter = 
> ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", 
> "a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]
>
>
>
> On Tue, Sep 22, 2020 at 3:57 PM Strahil Nikolov  wrote:
> > Obtaining the wwid is not exactly correct.
> > You can identify them via:
> >
> > multipath -v4 | grep 'got wwid of'
> >
> > Short example:
> > [root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
> > Sep 22 22:55:58 | nvme0n1: got wwid of 
> > 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
> > Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
> > Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
> > Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
> > Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'
> >
> > Of course if you are planing to use only gluster it could be far easier to 
> > set:
> >
> > [root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
> > blacklist {
> > devnode "*"
> > }
> >
> >
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > On Tuesday, 22 September 2020 at 22:12:21 GMT+3, Nir Soffer wrote:
> >
> >
> >
> >
> >
> > On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise  wrote:
> >>
> >>
> >> Agree about an NVMe Card being put under mpath control.
> >
> > NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
> > https://bugzilla.redhat.com/1498546
> >
> > Of course when the NVMe device is local there is no point to use it
> > via multipath.
> > To avoid this, you need to blacklist the devices like this:
> >
> > 1. Find the device wwid
> >
> > For NVMe, you need the device ID_WWN:
> >
> > $ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
> > ID_WWN=eui.5cd2e42a81a11f69
> >
> > 2. Add local blacklist file:
> >
> > $ mkdir /etc/multipath/conf.d
> > $ cat /etc/multipath/conf.d/local.conf
> > blacklist {
> > wwid "eui.5cd2e42a81a11f69"
> > }
> >
> > 3. Reconfigure multipath
> >
> > $ multipathd reconfigure
> >
> > Gluster should do this for you automatically during installation, but
> > it does not
> > you can do this manually.
> >
> >> I have not even gotten to that volume / issue.  My guess is something 
> >> weird in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block 
> >> devices.
> >>
> >> I will post once I cross bridge of getting standard SSD volumes working
> >>
> >> On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov  
> >> wrote:
> >>>
> >>> Why is your NVME under multipath ? That doesn't make sense at all .
> >>> I have modified my multipath.conf to block all local disks . Also ,don't 
> >>> forget the '# VDSM PRIVATE' line somewhere in the top of the file.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Monday, 21 September 2020 at 09:04:28 GMT+3, Jeremey Wise wrote:

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-22 Thread Nir Soffer
On Tue, Sep 22, 2020 at 11:05 PM Jeremey Wise  wrote:
>
>
>
> Correct..  on wwid
>
>
> I do want to make clear here.  that to geta around the error you must ADD  
> (not remove ) drives to /etc/lvm/lvm.conf  so oVirt Gluster can complete 
> setup of drives.
>
> [root@thor log]# cat /etc/lvm/lvm.conf |grep filter
> # Broken for gluster in oVirt
> #filter = 
> ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", 
> "r|.*|"]
> # working for gluster wizard in oVirt
> filter = 
> ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", 
> "a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]

Yes, you need to add the devices gluster is going to use to the
filter. The easiest way is to remove the filter before you install gluster,
and then create the filter using:

vdsm-tool config-lvm-filter

It should add all the devices needed for the mounted logical volumes
automatically.
Please file a bug if it does not do this.

> On Tue, Sep 22, 2020 at 3:57 PM Strahil Nikolov  wrote:
>>
>> Obtaining the wwid is not exactly correct.
>> You can identify them via:
>>
>> multipath -v4 | grep 'got wwid of'
>>
>> Short example:
>> [root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
>> Sep 22 22:55:58 | nvme0n1: got wwid of 
>> 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
>> Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
>> Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
>> Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
>> Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'
>>
>> Of course if you are planing to use only gluster it could be far easier to 
>> set:
>>
>> [root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
>> blacklist {
>> devnode "*"
>> }
>>
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tuesday, 22 September 2020 at 22:12:21 GMT+3, Nir Soffer wrote:
>>
>>
>>
>>
>>
>> On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise  wrote:
>> >
>> >
>> > Agree about an NVMe Card being put under mpath control.
>>
>> NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
>> https://bugzilla.redhat.com/1498546
>>
>> Of course when the NVMe device is local there is no point to use it
>> via multipath.
>> To avoid this, you need to blacklist the devices like this:
>>
>> 1. Find the device wwid
>>
>> For NVMe, you need the device ID_WWN:
>>
>> $ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
>> ID_WWN=eui.5cd2e42a81a11f69
>>
>> 2. Add local blacklist file:
>>
>> $ mkdir /etc/multipath/conf.d
>> $ cat /etc/multipath/conf.d/local.conf
>> blacklist {
>> wwid "eui.5cd2e42a81a11f69"
>> }
>>
>> 3. Reconfigure multipath
>>
>> $ multipathd reconfigure
>>
>> Gluster should do this for you automatically during installation, but
>> it does not
>> you can do this manually.
>>
>> > I have not even gotten to that volume / issue.  My guess is something 
>> > weird in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block 
>> > devices.
>> >
>> > I will post once I cross bridge of getting standard SSD volumes working
>> >
>> > On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov  
>> > wrote:
>> >>
>> >> Why is your NVME under multipath ? That doesn't make sense at all .
>> >> I have modified my multipath.conf to block all local disks . Also ,don't 
>> >> forget the '# VDSM PRIVATE' line somewhere in the top of the file.
>> >>
>> >> Best Regards,
>> >> Strahil Nikolov
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Monday, 21 September 2020 at 09:04:28 GMT+3, Jeremey Wise wrote:
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> vdo: ERROR - Device /dev/sdc excluded by a filter
>> >>
>> >>
>> >>
>> >>
>> >> Other

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-22 Thread Nir Soffer
On Tue, Sep 22, 2020 at 10:57 PM Strahil Nikolov  wrote:
>
> Obtaining the wwid is not exactly correct.

It is correct - for nvme devices, see:
https://github.com/oVirt/vdsm/blob/353e7b1e322aa02d4767b6617ed094be0643b094/lib/vdsm/storage/lvmfilter.py#L300

This matches the way that multipath looks up device wwids.

> You can identify them via:
>
> multipath -v4 | grep 'got wwid of'
>
> Short example:
> [root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
> Sep 22 22:55:58 | nvme0n1: got wwid of 
> 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
> Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
> Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
> Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
> Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'

There are 2 issues with this:
- It detects and sets up maps for all devices in the system, which is unwanted
  when you want to blacklist devices
- It depends on debug output that may change, not on a public documented API

You can use these commands:

Show devices that multipath does not use yet without setting up maps:

$ sudo multipath -d

Show devices that multipath is already using:

$ sudo multipath -ll

But I'm not sure if these commands work if the dm_multipath kernel module
is not loaded or multipathd is not running.

Getting the device wwid using udevadm works regardless of
multipathd/dm_multipath module.

> Of course if you are planing to use only gluster it could be far easier to 
> set:
>
> [root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
> blacklist {
> devnode "*"
> }
>
>
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 22 September 2020 at 22:12:21 GMT+3, Nir Soffer wrote:
>
>
>
>
>
> On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise  wrote:
> >
> >
> > Agree about an NVMe Card being put under mpath control.
>
> NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
> https://bugzilla.redhat.com/1498546
>
> Of course when the NVMe device is local there is no point to use it
> via multipath.
> To avoid this, you need to blacklist the devices like this:
>
> 1. Find the device wwid
>
> For NVMe, you need the device ID_WWN:
>
> $ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
> ID_WWN=eui.5cd2e42a81a11f69
>
> 2. Add local blacklist file:
>
> $ mkdir /etc/multipath/conf.d
> $ cat /etc/multipath/conf.d/local.conf
> blacklist {
> wwid "eui.5cd2e42a81a11f69"
> }
>
> 3. Reconfigure multipath
>
> $ multipathd reconfigure
>
> Gluster should do this for you automatically during installation, but
> it does not
> you can do this manually.
>
> > I have not even gotten to that volume / issue.  My guess is something weird 
> > in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block devices.
> >
> > I will post once I cross bridge of getting standard SSD volumes working
> >
> > On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov  
> > wrote:
> >>
> >> Why is your NVME under multipath ? That doesn't make sense at all .
> >> I have modified my multipath.conf to block all local disks . Also ,don't 
> >> forget the '# VDSM PRIVATE' line somewhere in the top of the file.
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Monday, 21 September 2020 at 09:04:28 GMT+3, Jeremey Wise wrote:
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> vdo: ERROR - Device /dev/sdc excluded by a filter
> >>
> >>
> >>
> >>
> >> Other server
> >> vdo: ERROR - Device 
> >> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> >>  excluded by a filter.
> >>
> >>
> >> All systems when I go to create VDO volume on blank drives.. I get this 
> >> filter error.  All disk outside of the HCI wizard setup are now blocked 
> >> from creating new Gluster volume group.
> >>
> >> Here is what I see in /dev/lvm/lvm.conf |grep filter
> >> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> >> filter = 
> >> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|",
> >>  
> >> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
> >> "r|.*|"]
> >>
&g

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-22 Thread Nir Soffer
On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise  wrote:
>
>
> Agree about an NVMe Card being put under mpath control.

NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
https://bugzilla.redhat.com/1498546

Of course when the NVMe device is local there is no point in using it
via multipath.
To avoid this, you need to blacklist the devices like this:

1. Find the device wwid

For NVMe, you need the device ID_WWN:

$ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
ID_WWN=eui.5cd2e42a81a11f69

2. Add local blacklist file:

$ mkdir /etc/multipath/conf.d
$ cat /etc/multipath/conf.d/local.conf
blacklist {
wwid "eui.5cd2e42a81a11f69"
}

3. Reconfigure multipath

$ multipathd reconfigure

Gluster should do this for you automatically during installation, but since
it does not, you can do it manually.

> I have not even gotten to that volume / issue.   My guess is something weird 
> in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block devices.
>
> I will post once I cross bridge of getting standard SSD volumes working
>
> On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov  wrote:
>>
>> Why is your NVME under multipath ? That doesn't make sense at all .
>> I have modified my multipath.conf to block all local disks . Also ,don't 
>> forget the '# VDSM PRIVATE' line somewhere in the top of the file.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Monday, 21 September 2020 at 09:04:28 GMT+3, Jeremey Wise wrote:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> vdo: ERROR - Device /dev/sdc excluded by a filter
>>
>>
>>
>>
>> Other server
>> vdo: ERROR - Device 
>> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>>  excluded by a filter.
>>
>>
>> All systems when I go to create VDO volume on blank drives.. I get this 
>> filter error.  All disk outside of the HCI wizard setup are now blocked from 
>> creating new Gluster volume group.
>>
>> Here is what I see in /dev/lvm/lvm.conf |grep filter
>> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
>> filter = 
>> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", 
>> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
>> "r|.*|"]
>>
>> [root@odin ~]# ls -al /dev/disk/by-id/
>> total 0
>> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
>> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
>> lrwxrwxrwx. 1 root root9 Sep 18 22:40 
>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
>> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
>> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
>> lrwxrwxrwx. 1 root root9 Sep 18 14:32 
>> ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
>> lrwxrwxrwx. 1 root root9 Sep 18 22:40 
>> ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
>> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
>> dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
>> dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
>> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
>> dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
>> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
>> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>>  -> ../../dm-3
>> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
>> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>>  -> ../../dm-4
>> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
>> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT 
>> -> ../../dm-1
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
>> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU 
>> -> ../../dm-2
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
>> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r 
>> -> ../../dm-0
>> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
>> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 
>> -> ../../dm-6
>> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
>> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L 
>> -> ../../dm-11
>> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
>> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz 
>> -> ../../dm-12
>> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
>> dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>>  -> ../../dm-3
>> lrwxrwxrwx. 1 root root   10 

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-22 Thread Nir Soffer
On Tue, Sep 22, 2020 at 4:18 AM Jeremey Wise  wrote:
>
>
> Well.. to know how to do it with Curl is helpful.. but I think I did
>
> [root@odin ~]#  curl -s -k --user admin@internal:blahblah 
> https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ |grep 
> ''
> data
> hosted_storage
> ovirt-image-repository
>
> What I guess I did is translated that field --sd-name my-storage-domain \
> to " volume" name... My question is .. where do those fields come from?  And 
> which would you typically place all your VMs into?
>
>
>
>
> I just took a guess..  and figured "data" sounded like a good place to stick 
> raw images to build into VM...
>
> [root@medusa thorst.penguinpages.local:_vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
>  --cafile 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
>  --sd-name data --disk-sparse 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 21474836480
> Disk initial size: 11574706176
> Disk name: ns02.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Disk ID: 9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
> Creating image transfer...
> Transfer ID: 3a382f0b-1e7d-4397-ab16-4def0e9fe890
> Transfer host name: medusa
> Uploading image...
> [ 100.00% ] 20.00 GiB, 249.86 seconds, 81.97 MiB/s
> Finalizing image transfer...
> Upload completed successfully
> [root@medusa thorst.penguinpages.local:_vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
>  --cafile 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
>  --sd-name data --disk-sparse 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_v^C
> [root@medusa thorst.penguinpages.local:_vmstore]# ls
> example.log  f118dcae-6162-4e9a-89e4-f30ffcfb9ccf  ns02_20200910.tgz  
> ns02.qcow2  ns02_var.qcow2
> [root@medusa thorst.penguinpages.local:_vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
>  --cafile 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
>  --sd-name data --disk-sparse 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_var.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 107374182400
> Disk initial size: 107390828544
> Disk name: ns02_var.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Disk ID: 26def4e7-1153-417c-88c1-fd3dfe2b0fb9
> Creating image transfer...
> Transfer ID: 41518eac-8881-453e-acc0-45391fd23bc7
> Transfer host name: medusa
> Uploading image...
> [  16.50% ] 16.50 GiB, 556.42 seconds, 30.37 MiB/s
>
> Now with those ID numbers and that it kept its name (very helpful)... I am 
> able to re-constitute the VM
>
>
> VM boots fine.  Fixing VLANs and manual macs on vNICs.. but this process 
> worked fine.
>
> Thanks for input.   Would be nice to have a GUI "upload" via http into system 
> :)

We have upload via the GUI, but from your mail I understood the images are on
the hypervisor, so copying them to the machine running the browser would be a
waste of time.

Go to Storage > Disks and click "Upload" or "Download".

But this is less efficient, less correct, and does not support all the
features, like converting the image format and controlling sparseness.

For uploading and downloading qcow2 images it should be fine, but if you have a
qcow2 image and want to upload it in raw format, this can be done only using
the API, for example with upload_disk.py.
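
For example, to upload a qcow2 file and create a raw disk, something like this
(the invocation mirrors the upload example elsewhere in this thread; the engine
URL, paths and storage domain name are placeholders for your own values):

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
    --engine-url https://my.engine/ \
    --username admin@internal \
    --password-file /path/to/password/file \
    --cafile /path/to/cafile \
    --sd-name my-storage-domain \
    --disk-format raw \
    --disk-sparse \
    /path/to/image.qcow2

The client converts the qcow2 data on the fly, so the disk created in oVirt is raw.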

> On Mon, Sep 21, 2020 at 2:19 PM Nir Soffer  wrote:
>>
>> On Mon, Sep 21, 2020 at 8:37 PM penguin pages  wrote:
>> >
>> >
>> > I pasted old / file path not right example above.. But here is a cleaner 
>> > version with error i am trying to root cause
>> >
>> > [root@odin vmstore]# pyth

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 8:37 PM penguin pages  wrote:
>
>
> I pasted old / file path not right example above.. But here is a cleaner 
> version with error i am trying to root cause
>
> [root@odin vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file /gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
> /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name vmstore 
> --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 21474836480
> Disk initial size: 431751168
> Disk name: ns01.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Traceback (most recent call last):
>   File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", 
> line 262, in <module>
> name=args.sd_name
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7697, 
> in add
> return self._internal_add(disk, headers, query, wait)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, 
> in _internal_add
> return future.wait() if wait else future
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in 
> wait
> return self._code(response)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, 
> in callback
> self._check_fault(response)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, 
> in _check_fault
> self._raise_error(response, body)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, 
> in _raise_error
> raise error
> ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is 
> "Entity not found: vmstore". HTTP response code is 404.

You used:

--sd-name vmstore

But there is no such storage domain in this setup.

Check the storage domains on this setup. One (ugly) way is:

$ curl -s -k --user admin@internal:password \
    https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'
<name>export1</name>
<name>iscsi1</name>
<name>iscsi2</name>
<name>nfs1</name>
<name>nfs2</name>
<name>ovirt-image-repository</name>

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7K2LBSN5POKIKYQ3CXJZEJQCGNG26VFV/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 3:30 PM Jeremey Wise  wrote:
>
>
> Old System
> Three servers..  Centos 7 -> Lay down VDO (dedup / compression) add those VDO 
> volumes as bricks to gluster.
>
> New cluster (remove boot drives and run wipe of all data drives)
>
> Goal: use first 512GB Drives to ignite the cluster and get things on feet and 
> stage infrastructure things.  Then use one of the 1TB drives in each server 
> for my "production" volume.  And second 1TB drive in each server as staging.  
> I want to be able to "learn" and not loose days / weeks of data... so disk 
> level rather give up capacity for sake of "oh.. well .. that messed up.. 
> rebuild.
>
> After minimal install.  Setup of network..  run HCI wizard.
>
> It failed various times along build... lack SELInux permissive, .. did not 
> wipe 1TB drives with hope of importing old Gluster file system / VDO voluemes 
> to import my five or six custom and important VMs. (OCP cluster bootstrap 
> environment, Plex servers, DNS / DHCP / Proxy HA cluster nodes et)
>
> Gave up on too many HCI failures about disk.. so wiped drives (will use 
> external NAS to repopulate important VMs back (or so is plan... see other 
> posting on no import of qcow2 images / xml :P )
>
> Ran into next batch of issues about use of true device ID  ... as name too 
> long... but /dev/sd?  makes me nervious as I have seen many systems with 
> issues when they use this old and should be depricated means to address disk 
> ID:  use UUID  or raw ID... 
> "/dev/disk/by-id/ata-Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
>
> Started getting errors about HCI failing with "excluded by filter" errors.

I'm not sure I follow your long story, but this error is caused by a too-strict
lvm filter in /etc/lvm/lvm.conf.

Edit this file and remove the line that looks like this:

filter = 
["a|^/dev/disk/by-id/lvm-pv-uuid-80ovnb-mZIO-J65Y-rl9n-YAY7-h0Q9-Aezk8D$|",
"r|.*|"]

Then install gluster; it will stop complaining about the filter.

At the end of the installation, you are going to add the hosts to engine. At
this point a new lvm filter will be created, taking all the mounted logical
volumes into account.

Maybe gluster setup should warn about lvm filter or remove it before
the installation.
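
In other words, the workflow is roughly (a sketch, not exact commands -- the
filter line differs per host, and the last step assumes a host with vdsm
installed):

vi /etc/lvm/lvm.conf           # remove the generated "filter = [...]" line
# ... run the gluster / HCI deployment ...
vdsm-tool config-lvm-filter    # recreate a filter covering the mounted logical volumes

The last step is also done for you when the host is added to engine, as
described above.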

> wiped drives ( gdisk /dev/sd?  => x => z => y => y)
>
> filters errors I could not fiture out what they were.. .. error of "filter 
> exists"  to me meant ..  you have one.. remove it so I can remove drive.
>
> Did full dd if=/dev/zero of=dev/sd? ..  still same issue
> filtered in multipath just for grins still same issue.
>
> Posted to forums.. nobody had ideas 
> https://forums.centos.org/viewtopic.php?f=54=75687   Posted to slack 
> gluster channel.. they looked at it and could not figure out...
>
> Wiped systems.. started over.   This time the HCI wizard deployed.
>
> My guess... is once I polished setup to make sure wizard did not attempt 
> before SELinux set to permissive (vs disable)  drives all wiped (even though 
> they SHOULD just be ignored..  I I think VDO scanned and saw VDO definition 
> on drive so freeked some ansible wizard script out).
>
> Now cluster is up..  but then went to add "production"  gluster +VDO and 
> "staging"  gluster + vdo volumes... and having issues.
>
> Sorry for long back story but I think this will add color to issues.
>
> My Thoughts as to root issues
> 1) HCI wizard has issues just using drives told, and ignoring other data 
> drives in system ... VDO as example I saw notes about failed attempt ... but 
> it should not have touched that volume... just used one it needed and igored 
> rest.
> 2) HCI wizard bug of ignoring user set /dev/sd?  for each server again, was 
> another failure attempt where clean up may not have run. (noted this in 
> posting about manual edit .. and apply button :P to ingest)
> 3) HCI wizard bug of name I was using of device ID vs /sd?  which is IMAO ... 
> bad form.. but name too long.. again. another cleanup where things may not 
> have fully cleaned.. or I forgot to click clean ...  where system was left in 
> non-pristine state
> 2) HCI wizard does NOT clean itself up properly if it fails ... or when I ran 
> clean up, maybe it did not complete and I closed wizard which then created 
> this orphaned state.
> 3) HCI Setup and post setup needs to add filtering
>
>
>   With a perfect and pristine process  .. it ran.   But only when all other 
> learning and requirements to get it just right were setup first.  oVirt HCI 
> is S very close to being a great platform , well thought out and 
> production class.  Just needs some more nerds beating on it to find these 
> cracks, and ge

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 9:11 AM Jeremey Wise  wrote:
>
>
> I rebuilt my lab environment.   And their are four or five VMs that really 
> would help if I did not have to rebuild.
>
> oVirt as I am now finding when it creates infrastructure, sets it out such 
> that I cannot just use older  means of placing .qcow2 files in folders and 
> .xml files in other folders and they show up on services restarting.
>
> How do I import VMs from files?

You did not share the oVirt version, so I'm assuming 4.4.

The simplest way is to upload the qcow2 images to oVirt, and create a new
VM with the new disk.

On the hypervisor where the files are located, install the required packages:

dnf install ovirt-imageio-client python3-ovirt-engine-sdk4

And upload the image:

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
--engine-url https://my.engine/ \
--username admin@internal \
--password-file /path/to/password/file \
--cafile /path/to/cafile \
--sd-name my-storage-domain \
--disk-sparse \
/path/to/image.qcow2

This will upload the file in qcow2 format to whatever type of storage you
have. You can change the format if you like using --disk-format. See --help
for all the options.

We also support importing from libvirt, but for this you need to have the VM
defined in libvirt. If you don't have this, it will probably be easier to
upload the images and create a new VM in oVirt.
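
If you do still have the old libvirt XML files, a rough sketch (assuming the
XML is self-contained and the disk paths it references exist on that host; the
file name is an example):

virsh define /path/to/ns02.xml
virsh list --all

Then add that host as a KVM external provider in the engine and import the VM
from it, as described in the links below.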

Nir

> I found this article but implies VM is running: 
> https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider
>
> I need a way to import a file.  Even if it means temporarily hosting on "KVM 
> on one of the hosts to then bring in once it is up.
>
>
> Thanks
> --
>
> penguinpages
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6LSE4MNEBGODIRPVAQCUNBO2KGCCQTM5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H25R25XLNHEP2EQQ4X62PESPRXUTGX4Y/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise  wrote:
>
>
>
>
>
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
>
>
>
>
> Other server
>
> vdo: ERROR - Device 
> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>  excluded by a filter.
>
>
>
> All systems when I go to create VDO volume on blank drives.. I get this 
> filter error.  All disk outside of the HCI wizard setup are now blocked from 
> creating new Gluster volume group.
>
> Here is what I see in /dev/lvm/lvm.conf |grep filter
> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> filter = 
> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
> "r|.*|"]

This filter is correct for a normal oVirt host. But gluster wants to
use more local disks,
so you should:

1. remove the lvm filter
2. configure gluster
3. create the lvm filter

This will create a filter including all the mounted logical volumes
created by gluster.

Can you explain how you reproduce this?

The lvm filter is created when you add a host to engine. Did you add the host
to engine before configuring gluster? Or maybe you are trying to add a host that
was used previously by oVirt?

In the latter case, removing the filter before installing gluster will fix the
issue.

Nir

> [root@odin ~]# ls -al /dev/disk/by-id/
> total 0
> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
> lrwxrwxrwx. 1 root root9 Sep 18 22:40 
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root9 Sep 18 14:32 
> ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root9 Sep 18 22:40 
> ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>  -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT 
> -> ../../dm-1
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU 
> -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r 
> -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 
> -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L 
> -> ../../dm-11
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz 
> -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
> dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
> dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 
> dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 
> lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
> lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32 
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32 
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001-part1
>  -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32 
> nvme-SPCC_M.2_PCIe_SSD_AA002458 -> ../../nvme0n1
> lrwxrwxrwx. 1 root 

[ovirt-users] Re: Any eta for 4.4.2 final?

2020-09-15 Thread Nir Soffer
On Tue, Sep 15, 2020 at 12:35 PM Gianluca Cecchi
 wrote:
>
> Hello,
> I would like to upgrade a 4.4.0 environment to the latest 4.4.2 when 
> available.
> Any indication if there are any show stoppers after the rc5 released on 27th 
> of August

This blocks the release:
https://bugzilla.redhat.com/1837864

> or any eta about other release candidates?

Sandro may have more info.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: Testing ovirt 4.4.1 Nested KVM on Skylake-client (core i5) does not work

2020-09-14 Thread Nir Soffer
On Mon, Sep 14, 2020 at 8:42 AM Yedidyah Bar David  wrote:
>
> On Mon, Sep 14, 2020 at 12:28 AM wodel youchi  wrote:
> >
> > Hi,
> >
> > Thanks for the help, I think I found the solution using this link : 
> > https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/
> >
> > When executing : virsh dumpxml on my ovirt hypervisor I saw that the mpx 
> > flag was disabled, so I edited the XML file of the hypervisor VM and I did 
> > this : add the already enabled features and enable mpx with them. I 
> > stopped/started my hyerpvisor VM and voila, le nested VM-Manager has booted 
> > successfully.
> >
> >
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> >   
> Thanks for the report!
>
> Would you like to open a bug about this?
>
> A possible fix is probably to pass relevant options to the
> virt-install command in ovirt-ansible-hosted-engine-setup.
> Either always - no idea what the implications are - or
> optionally, or even allow the user to pass arbitrary options.

I don't think we need to make such a change on our side. This seems like a
hard-to-reproduce libvirt bug.

The strange thing is that after playing with the XML generated by
virt-manager, using

[x] Copy host CPU configuration

Creating this XML:

  
Skylake-Client-IBRS
Intel



















  

Or using this XML in virt-manager:

  

Both work with these cluster CPU types:

- Secure Intel Skylake Client Family
- Intel Skylake Client Family
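
For reference, roughly the same effect can be had from the command line with
virt-xml (part of the virt-install package); the VM name is an example and the
option syntax may differ between versions, so check man virt-xml first:

virt-xml ovirt-node --edit --cpu host-model

This rewrites the <cpu> element of the L1 hypervisor VM to follow the host CPU
model, which is roughly what the "Copy host CPU configuration" checkbox does.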

I think the best place to discuss this is the libvirt-users mailing list:
https://www.redhat.com/mailman/listinfo/libvirt-users

Nir

> Thanks and best regards,
>
> >
> >
> > Regards.
> >
> > Le dim. 13 sept. 2020 à 19:47, Nir Soffer  a écrit :
> >>
> >> On Sun, Sep 13, 2020 at 8:32 PM wodel youchi  
> >> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I've been using my core i5 6500 (skylake-client) for some time now to 
> >> > test oVirt on my machine.
> >> > However this is no longer the case.
> >> >
> >> > I am using Fedora 32 as my base system with nested-kvm enabled, when I 
> >> > try to install oVirt 4.4 as HCI single node, I get an error in the last 
> >> > phase which consists of copying the VM-Manager to the engine volume and 
> >> > boot it.
> >> > It is the boot that causes the problem, I get an error about the CPU :
> >> > the CPU is incompatible with host CPU: Host CPU does not provide 
> >> > required features: mpx
> >> >
> >> > This is the CPU part from virsh domcapabilities on my physical machine
> >> > 
> >> >
> >> >
> >> >  Skylake-Client-IBRS
> >> >  Intel
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >  
> >> >
> >> >
> >> >  qemu64
> >> >  qemu32
> >> >  phenom
> >> >  pentium3
> >> >  pentium2
> >> >  pentium
> >> >  n270
> >> >  kvm64
> >> >  kvm32
> >> >  coreduo
> >> >  core2duo
> >> >  athlon
> >> >  Westmere-IBRS
> >> >  Westmere
> >> >  Skylake-Server-IBRS
> >> >  Skylake-Server
> >> >  Skylake-Client-IBRS
> >> >  Skylake-Client
> >> >  SandyBridge-IBRS
> >> >  SandyBridge
> >> >  Penryn
> >> >  Opteron_G5
> >> >  Opteron_G4
> >> >  Opteron_G3
> >> >  Opteron_G2
> >> >  Opteron_G1
> >> >  Nehalem-IBRS
> >> >  Nehalem
> >> >  IvyBridge-IBRS
> >> >  IvyBridge
> >> >  Icelake-Server
> >> >  Icelake-Client
> >> >  Haswell-noTSX-IBRS
> >> >  Haswell-noTSX
> >> >  Haswell-IBRS

[ovirt-users] Re: Testing ovirt 4.4.1 Nested KVM on Skylake-client (core i5) does not work

2020-09-13 Thread Nir Soffer
On Sun, Sep 13, 2020 at 8:32 PM wodel youchi  wrote:
>
> Hi,
>
> I've been using my core i5 6500 (skylake-client) for some time now to test 
> oVirt on my machine.
> However this is no longer the case.
>
> I am using Fedora 32 as my base system with nested-kvm enabled, when I try to 
> install oVirt 4.4 as HCI single node, I get an error in the last phase which 
> consists of copying the VM-Manager to the engine volume and boot it.
> It is the boot that causes the problem, I get an error about the CPU :
> the CPU is incompatible with host CPU: Host CPU does not provide required 
> features: mpx
>
> This is the CPU part from virsh domcapabilities on my physical machine
> 
>
>
>  Skylake-Client-IBRS
>  Intel
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>
>
>  qemu64
>  qemu32
>  phenom
>  pentium3
>  pentium2
>  pentium
>  n270
>  kvm64
>  kvm32
>  coreduo
>  core2duo
>  athlon
>  Westmere-IBRS
>  Westmere
>  Skylake-Server-IBRS
>  Skylake-Server
>  Skylake-Client-IBRS
>  Skylake-Client
>  SandyBridge-IBRS
>  SandyBridge
>  Penryn
>  Opteron_G5
>  Opteron_G4
>  Opteron_G3
>  Opteron_G2
>  Opteron_G1
>  Nehalem-IBRS
>  Nehalem
>  IvyBridge-IBRS
>  IvyBridge
>  Icelake-Server
>  Icelake-Client
>  Haswell-noTSX-IBRS
>  Haswell-noTSX
>  Haswell-IBRS
>  Haswell
>  EPYC-IBPB
>  EPYC
>  Dhyana
>  Conroe
>  Cascadelake-Server
>  Broadwell-noTSX-IBRS
>  Broadwell-noTSX
>  Broadwell-IBRS
>  Broadwell
>  486
>
>  
>
> Here is the lscpu of my physical machine
> # lscpu
> Architecture:x86_64
> CPU op-mode(s):  32-bit, 64-bit
> Byte Order:  Little Endian
> Address sizes:   39 bits physical, 48 bits virtual
> CPU(s):  4
> On-line CPU(s) list: 0-3
> Thread(s) per core:  1
> Core(s) per socket:  4
> Socket(s):   1
> NUMA node(s):1
> Vendor ID:   GenuineIntel
> CPU family:  6
> Model:   94
> Model name:  Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz
> Stepping:3
> CPU MHz: 954.588
> CPU max MHz: 3600.
> CPU min MHz: 800.
> BogoMIPS:6399.96
> Virtualization:  VT-x
> L1d cache:   128 KiB
> L1i cache:   128 KiB
> L2 cache:1 MiB
> L3 cache:6 MiB
> NUMA node0 CPU(s):   0-3
> Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
> Vulnerability L1tf:  Mitigation; PTE Inversion; VMX conditional 
> cache flushes, SMT disabled
> Vulnerability Mds:   Mitigation; Clear CPU buffers; SMT disabled
> Vulnerability Meltdown:  Mitigation; PTI
> Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass 
> disabled via prctl and seccomp
> Vulnerability Spectre v1:Mitigation; usercopy/swapgs barriers and 
> __user pointer sanitization
> Vulnerability Spectre v2:Mitigation; Full generic retpoline, IBPB 
> conditional, IBRS_FW, STIBP disabled, RSB filling
> Vulnerability Srbds: Vulnerable: No microcode
> Vulnerability Tsx async abort:   Mitigation; Clear CPU buffers; SMT disabled
> Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep 
> mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe 
> syscall nx pdpe1gb rdtscp lm constan
> t_tsc art arch_perfmon pebs bts rep_good nopl 
> xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl 
> vmx smx est tm2 ssse3 sdbg fma cx16
>  xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe 
> popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch 
> cpuid_fault invpcid_single pti ssbd
>  ibrs ibpb stibp tpr_shadow vnmi flexpriority 
> ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm 
> mpx rdseed adx smap clflushopt in
> tel_pt xsaveopt xsavec xgetbv1 xsaves dtherm 
> ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
>
>
>
> Here is the CPU part from virsh dumpxml of my ovirt hypervisor
> 
>Skylake-Client-IBRS
>Intel
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>  
>
> Here is the lcpu of my ovirt hypervisor
> [root@node1 ~]# lscpu
> Architecture :  x86_64
> Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit

[ovirt-users] Re: [EXTERNAL] Re: Storage Domain won't activate

2020-09-05 Thread Nir Soffer
On Sat, Sep 5, 2020 at 1:49 AM Gillingham, Eric J (US 393D)
 wrote:
>
> On 9/4/20, 2:26 PM, "Nir Soffer"  wrote:
> On Fri, Sep 4, 2020 at 5:43 PM Gillingham, Eric J (US 393D) via Users
>  wrote:
> >
> > On 9/4/20, 4:50 AM, "Vojtech Juranek"  wrote:
> >
> > On čtvrtek 3. září 2020 22:49:17 CEST Gillingham, Eric J (US 393D) 
> via Users
> > wrote:
> >
> > how do you remove the fist host, did you put it into maintenance 
> first? I
> > wonder, how this situation (two lockspaces with conflicting names) 
> can occur.
> >
> > You can try to re-initialize the lockspace directly using sanlock 
> command (see
> > man sanlock), but it would be good to understand the situation 
> first.
> >
> >
> > Just as you said, put into maintenance mode, shut it down, removed it 
> via the engine UI.
>
> Eric, it is possible that you shutdown the host too quickly, before it 
> actually
> disconnected from the lockspace?
>
> When engine move a host to maintenance, it does not wait until the host 
> actually
> move into maintenance. This is actually a bug, so it would be good idea 
> to file
> a bug about this.
>
>
> That is a possibility, from the UI view it usually takes a bit for the host 
> to show is in maintenance, so I assumed it was an accurate representation of 
> the state. Unfortunately all hosts have since been completely wiped and 
> re-installed, this issue  brought down the entire cluster for over a day so I 
> needed to get everything up again ASAP.
>
> I did not archive/backup the sanlock logs beforehand, so I can't check for 
> the sanlock events David mentioned. When I cleared the sanlock there were no 
> s or r entries listed in sanlock client status, and there were no other 
> running hosts to obtain other locks, but I don’t fully grok sanlock if there 
> was maybe some lock that existed only on the iscsi space separate from any 
> current or past hosts.

Looks like we lost all evidence. If this happens again, please file a
bug and attach
the logs.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKBGG6MA3VORUD5KP3JSILZ3VYVIJ2PL/


[ovirt-users] Re: [EXTERNAL] Re: Storage Domain won't activate

2020-09-05 Thread Nir Soffer
On Sat, Sep 5, 2020 at 12:45 AM David Teigland  wrote:
>
> On Sat, Sep 05, 2020 at 12:25:45AM +0300, Nir Soffer wrote:
> > > > /var/log/sanlock.log contains a repeating:
> > > > add_lockspace
> > > > 
> > > e1270474-108c-4cae-83d6-51698cffebbf:1:/dev/e1270474-108c-4cae-83d6-51698cf
> > > > febbf/ids:0 conflicts with name of list1 s1
> > > > 
> > > e1270474-108c-4cae-83d6-51698cffebbf:3:/dev/e1270474-108c-4cae-83d6-51698cf
> > > > febbf/ids:0
> >
> > David, what does this message mean?
> >
> > It is clear that there is a conflict, but not clear what is the
> > conflicting item. The host id in the
> > request is 1, and in the conflicting item, 3. No conflicting data is
> > displayed in the error message.
>
> The lockspace being added is already being managed by sanlock, but using
> host_id 3.  sanlock.log should show when lockspace e1270474 with host_id 3
> was added.

Do you mean that the host reporting this already joined the lockspace
with host_id=3,
and  then tried to join again with host_id=1?
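
For reference, the lockspaces a host currently holds, and the host_id used for
each, can be listed on the host with:

sanlock client status

The "s" lines are the joined lockspaces, in the form
<lockspace>:<host_id>:<path>:<offset>.
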
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AISVNBSU7OA57EURGUMQZWCNMBDDZY3C/


[ovirt-users] Re: [EXTERNAL] Re: Storage Domain won't activate

2020-09-04 Thread Nir Soffer
On Fri, Sep 4, 2020 at 5:43 PM Gillingham, Eric J (US 393D) via Users
 wrote:
>
> On 9/4/20, 4:50 AM, "Vojtech Juranek"  wrote:
>
> On čtvrtek 3. září 2020 22:49:17 CEST Gillingham, Eric J (US 393D) via 
> Users
> wrote:
> > I recently removed a host from my cluster to upgrade it to 4.4, after I
> > removed the host from the datacenter VMs started to pause on the second
> > system they all migrated to. Investigating via the engine showed the
> > storage domain was showing as "unknown", when I try to activate it via 
> the
> > engine it cycles to locked then to unknown again.
>
> > /var/log/sanlock.log contains a repeating:
> > add_lockspace
> > 
> e1270474-108c-4cae-83d6-51698cffebbf:1:/dev/e1270474-108c-4cae-83d6-51698cf
> > febbf/ids:0 conflicts with name of list1 s1
> > 
> e1270474-108c-4cae-83d6-51698cffebbf:3:/dev/e1270474-108c-4cae-83d6-51698cf
> > febbf/ids:0

David, what does this message mean?

It is clear that there is a conflict, but not clear what the conflicting item
is. The host id in the request is 1, and in the conflicting item it is 3. No
conflicting data is displayed in the error message.

> how do you remove the fist host, did you put it into maintenance first? I
> wonder, how this situation (two lockspaces with conflicting names) can 
> occur.
>
> You can try to re-initialize the lockspace directly using sanlock command 
> (see
> man sanlock), but it would be good to understand the situation first.
>
>
> Just as you said, put into maintenance mode, shut it down, removed it via the 
> engine UI.

Eric, is it possible that you shut down the host too quickly, before it
actually disconnected from the lockspace?

When engine moves a host to maintenance, it does not wait until the host has
actually moved into maintenance. This is actually a bug, so it would be a good
idea to file a bug about this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DF5JRTXDQOHVTVNQ7BT3SF564GSQA4ZX/


[ovirt-users] Re: How to Backup a VM

2020-09-02 Thread Nir Soffer
On Tue, Sep 1, 2020 at 11:26 PM Nir Soffer  wrote:
>
> On Sun, Aug 30, 2020 at 7:13 PM  wrote:
> >
> > Struggling with bugs and issues on OVA export/import (my clear favorite 
> > otherwise, especially when moving VMs between different types of 
> > hypervisors), I've tried pretty much everything else, too.
> >
> > Export domains are deprecated and require quite a bit of manual handling. 
> > Unfortunately the buttons for the various operations are all over the place 
> > e.g. the activation and maintenance toggles are in different pages.
>
> Using export domain is not a single click, but it is not that complicated.
> But this is good feedback anyway.
>
> > In the end the mechanisms underneath (qemu-img) seem very much the same and 
> > suffer from the same issues (I have larger VMs that keep failing on 
> > imports).
>
> I think the issue is gluster, not qemu-img.
>
> > So far the only fool-proof method has been to use the imageio daemon to 
> > upload and download disk images, either via the Python API or the Web-GUI.
>
> How did you try? transfer via the UI is completely different than
> transfer using the python API.
>
> From the UI, you get the image content on storage, without sparseness
> support. If you
> download 500g raw sparse disk (e.g. gluster with allocation policy
> thin) with 50g of data
> and 450g of unallocated space, you will get 50g of data, and 450g of
> zeroes. This is very
> slow. If you upload the image to another system you will upload 500g
> of data, which will
> again be very slow.
>
> From the python API, download and upload support sparseness, so you
> will download and
> upload only 50g. Both upload and download use 4 connections, so you
> can maximize the
> throughput that you can get from the storage. From python API, you can
> convert the image
> format during download/upload automatically, for example download raw
> disk to qcow2
> image.
>
> Gluster is a challenge (as usual), since when using sharding (enabled
> by default for ovirt),
> it does not report sparness. So even from the python API you will
> download the entire 500g.
> We can improve this using zero detection but this is not implemented yet.

I forgot to add that NFS < 4.2 is also a challenge, and will cause very slow
downloads, creating fully allocated files, for the same reason. If you use an
export domain to move VMs, you should use NFS 4.2.

Unfortunately oVirt tries hard to prevent you from using NFS 4.2. Not only is
this not the default, the settings to select version 4.2 are hidden under:

 Storage > Domains >  domain-name > Manage Domain > Custom Connection Parameters

Select "V4.2" for NFS Version.

All this can be done only when the storage domain is in maintenance.

With this, creating preallocated disks is infinitely faster (using
fallocate()), copying disks from this domain can be much faster, and
downloading a raw sparse disk will be much faster and more correct, preserving
sparseness.
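
A quick way to check which NFS version a mounted domain is actually using is to
look at the mount options on the host, for example:

nfsstat -m

or

grep ' nfs' /proc/mounts

and look for vers=4.2 in the options.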

> > Transfer times are terrible though, 50MB/s is quite low when the network 
> > below is 2.5-10Gbit and SSDs all around.
>
> In our lab we tested upload of 100 GiB image and 10 concurrent uploads
> of 100 GiB
> images, and we measured throughput of 1 GiB/s:
> https://bugzilla.redhat.com/show_bug.cgi?id=1591439#c24
>
> I would like to understand the setup better:
>
> - upload or download?
> - disk format?
> - disk storage?
> - how is storage connected to host?
> - how do you access the host (1g network? 10g?)
> - image format?
> - image storage?

If NFS, which version?

>
> > Obviously with Python as everybody's favorite GUI these days, you can also 
> > copy and transfer the VMs complete definition, but I am one of those old 
> > guys, who might even prefer a real GUI to mouse clicks on a browser.
> >
> > The documentation on backup domains is terrible. What's missing behind the 
> > 404 link in oVirt becomes a very terse section in the RHV manuals, where 
> > you're basically just told that after cloning the VM, you should then move 
> > its disks to the backup domain...
>
> backup domain is a partly cooked feature and it is not very useful.
> There is no reason
> to use it for moving VMs from one environment to another.
>
> I already explained how to move vms using a data domain. Check here:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ULLFLFKBAW7T7B6OD63BMNZXJK6EU6AI/
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFOK55O5N4SRU5PA32P3LATW74E7WKT6/
>
> I'm not sure it is documented properly, please file a documentation
> bug if we need to
> add something to the documentation.
>
> > What you are then supposed to

[ovirt-users] Re: How to Backup a VM

2020-09-01 Thread Nir Soffer
On Sun, Aug 30, 2020 at 7:13 PM  wrote:
>
> Struggling with bugs and issues on OVA export/import (my clear favorite 
> otherwise, especially when moving VMs between different types of 
> hypervisors), I've tried pretty much everything else, too.
>
> Export domains are deprecated and require quite a bit of manual handling. 
> Unfortunately the buttons for the various operations are all over the place 
> e.g. the activation and maintenance toggles are in different pages.

Using export domain is not a single click, but it is not that complicated.
But this is good feedback anyway.

> In the end the mechanisms underneath (qemu-img) seem very much the same and 
> suffer from the same issues (I have larger VMs that keep failing on imports).

I think the issue is gluster, not qemu-img.

> So far the only fool-proof method has been to use the imageio daemon to 
> upload and download disk images, either via the Python API or the Web-GUI.

How did you try? Transfer via the UI is completely different from transfer
using the python API.

From the UI, you get the image content as it is on storage, without sparseness
support. If you download a 500g raw sparse disk (e.g. gluster with allocation
policy thin) with 50g of data and 450g of unallocated space, you will get 50g
of data and 450g of zeroes. This is very slow. If you upload the image to
another system, you will upload 500g of data, which will again be very slow.

From the python API, download and upload support sparseness, so you
will download and
upload only 50g. Both upload and download use 4 connections, so you
can maximize the
throughput that you can get from the storage. From the python API, you can
convert the image format during download/upload automatically, for example
downloading a raw disk to a qcow2 image.

Gluster is a challenge (as usual), since when using sharding (enabled by
default for oVirt), it does not report sparseness. So even from the python API
you will download the entire 500g. We can improve this using zero detection,
but this is not implemented yet.
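
For downloads there is a matching example script. Something like this (the
option names here follow the upload example and are assumptions -- run the
script with --help to see the exact ones for your SDK version; the disk UUID
and output path are placeholders):

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py \
    --engine-url https://my.engine/ \
    --username admin@internal \
    --password-file /path/to/password/file \
    --cafile /path/to/cafile \
    <disk-uuid> /path/to/output.qcow2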

> Transfer times are terrible though, 50MB/s is quite low when the network 
> below is 2.5-10Gbit and SSDs all around.

In our lab we tested upload of 100 GiB image and 10 concurrent uploads
of 100 GiB
images, and we measured throughput of 1 GiB/s:
https://bugzilla.redhat.com/show_bug.cgi?id=1591439#c24

I would like to understand the setup better:

- upload or download?
- disk format?
- disk storage?
- how is storage connected to host?
- how do you access the host (1g network? 10g?)
- image format?
- image storage?

> Obviously with Python as everybody's favorite GUI these days, you can also 
> copy and transfer the VMs complete definition, but I am one of those old 
> guys, who might even prefer a real GUI to mouse clicks on a browser.
>
> The documentation on backup domains is terrible. What's missing behind the 
> 404 link in oVirt becomes a very terse section in the RHV manuals, where 
> you're basically just told that after cloning the VM, you should then move 
> its disks to the backup domain...

backup domain is a partly cooked feature and it is not very useful.
There is no reason
to use it for moving VMs from one environment to another.

I already explained how to move vms using a data domain. Check here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ULLFLFKBAW7T7B6OD63BMNZXJK6EU6AI/
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFOK55O5N4SRU5PA32P3LATW74E7WKT6/

I'm not sure it is documented properly; please file a documentation bug if we
need to add something to the documentation.

> What you are then supposed to do with the cloned VM, if it's ok to simplay 
> throw it away, because the definition is silently copied to the OVF_STORE on 
> the backup... none of that is explained or mentioned.

If you cloned a VM to a data domain and then detached the data domain, there is
nothing to clean up in the source system.

> There is also no procedure for restoring a machine from a backup domain, when 
> really a cloning process that allows a target domain would be pretty much 
> what I'd vote for.

We have this in 4.4; try selecting a VM and clicking "Export".

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UB2YZK3DD3KDHZYQQW4TVYCKASRRSOK4/


[ovirt-users] Re: can't mount an export domain with 4K block size (VDO) as PosixFS (xfs)

2020-08-31 Thread Nir Soffer
On Mon, Aug 31, 2020 at 8:17 PM  wrote:
>
> After this 
> (https://devconfcz2020a.sched.com/event/YOtG/ovirt-4k-teaching-an-old-dog-new-tricks)

Vdsm learned a few tricks but is still old :-)

> I sure do not expect this (log below):
>
> Actually I am trying to evaluate just how portable oVirt storage is and this 
> case I had prepared a USB3 HDD with VDO,
> which I could literally move between farms to transport VMs.
> Logical disks are typically large for simplicity within the VMs, QCOW2 and 
> VDO assumed to compensate for this 'lack of planning' while the allocated 
> storage easily fits the HDD.
>
> Once I got beyond the initial issues, I came across this somewhat unexpected 
> issue: VDO storage uses 4k blocks all around,

VDO has a 512-byte emulation mode. This is how gluster used VDO before 4.3.8.
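
You can check which block size a VDO volume actually exposes with blockdev (the
device path is an example):

blockdev --getss --getpbsz /dev/mapper/vdo_sdb

--getss is the logical sector size (512 in emulation mode) and --getpbsz the
physical block size. If I remember correctly, vdo create has an --emulate512
option to enable the emulation mode.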

> but evidently when you mount an export domain (or I guess any domain) as 
> POSIX, 512byte blocks are assumed somewhere and 4k blocks rejected.

Yes, NFS and POSIX (internally NFS) storage domains do not support 4k.

With NFS we cannot detect storage domain block size, since NFS does not use
direct I/O on the server side, so we kept the old configuration, 512.
Theoretically
we can change the code to default to 4k in this case, but it was never tested.

POSIX was never tested with 4k, and since internally it is NFS, it has the same
limits. It is likely to support 4k if we create a specific storage class for
it, like local storage.

Why do you need VDO for the export domain? qcow2 keeps only the used data.
Don't you have enough space on the disk to keep all VMs in qcow2 format?

> I'd say that is a bug in VDSM, right?

No, this is an unsupported configuration.

The export domain is using safelease, which does not support 4k storage, and
since the export domain is deprecated and safelease is unmaintained, it will
never support 4k.

If you need 4k support for a POSIX or NFS storage *data* domain, this is
something that can be considered for a future version; please file an RFE for
this.

Nir

> Or is there anything in the mount options to fix this?
> 020-08-31 18:44:40,424+0200 INFO  (periodic/0) [vdsm.api] START 
> repoStats(domains=()) from=internal, 
> task_id=7a293dec-85b3-4b82-92b7-4e7d03b40343 (api:48)
> 2020-08-31 18:44:40,425+0200 INFO  (periodic/0) [vdsm.api] FINISH repoStats 
> return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': 
> '1.6', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 
> 'actual': True}, '25d95783-44df-4fda-b642-55fe09162149': {'code': 0, 
> 'lastCheck': '2.2', 'delay': '0.00256944', 'valid': True, 'version': 5, 
> 'acquired': True, 'actual': True}, '148d9e9e-d7b8-4220-9ee8-057da96e608c': 
> {'code': 0, 'lastCheck': '2.2', 'delay': '0.00050217', 'valid': True, 
> 'version': 5, 'acquired': True, 'actual': True}} from=internal, 
> task_id=7a293dec-85b3-4b82-92b7-4e7d03b40343 (api:54)
> 2020-08-31 18:44:41,121+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC 
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:41,412+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC 
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:41,414+0200 INFO  (jsonrpc/2) [vdsm.api] START 
> repoStats(domains=['9992dc21-edf2-4951-9020-7c78f1220e02']) from=::1,44782, 
> task_id=657c820e-e2eb-4b1d-b2fb-1772b3af6f32 (api:48)
> 2020-08-31 18:44:41,414+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats 
> return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': 
> '2.6', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 
> 'actual': True}} from=::1,44782, task_id=657c820e-e2eb-4b1d-b2fb-1772b3af6f32 
> (api:54)
> 2020-08-31 18:44:41,414+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC 
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:41,965+0200 INFO  (jsonrpc/3) [IOProcessClient] (Global) 
> Starting client (__init__:308)
> 2020-08-31 18:44:41,984+0200 INFO  (ioprocess/87332) [IOProcess] (Global) 
> Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,006+0200 INFO  (jsonrpc/3) [storage.StorageDomainCache] 
> Removing domain fe9fb0db-2743-457a-80f0-9a4edc509e9d from storage domain 
> cache (sdc:211)
> 2020-08-31 18:44:42,006+0200 INFO  (jsonrpc/3) [storage.StorageDomainCache] 
> Invalidating storage domain cache (sdc:74)
> 2020-08-31 18:44:42,006+0200 INFO  (jsonrpc/3) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> '----', 'status': 0}]} 
> from=:::192.168.0.87,40378, flow_id=05fe72ef-c8ac-4e03-8453-5171e5fc5f8b, 
> task_id=b92d4144-a9db-4423-ba6b-35934d4f9200 (api:54)
> 2020-08-31 18:44:42,008+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC 
> call StoragePool.connectStorageServer succeeded in 3.11 seconds (__init__:312)
> 2020-08-31 18:44:42,063+0200 INFO  (jsonrpc/5) [vdsm.api] START 
> getStorageDomainsList(spUUID='----', 
> domainClass=3, 

[ovirt-users] Re: Error exporting into ova

2020-08-31 Thread Nir Soffer
On Sun, Aug 30, 2020 at 11:46 PM  wrote:
>
> BTW: This is the message I get on the import:
> VDSM nucvirt command HSMGetAllTasksStatusesVDS failed: value=low level Image 
> copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', 
> '-T', 'none', '-f', 'qcow2', 
> '/rhev/data-center/mnt/petitcent.mtk.hoberg.net:_flash_export/fe9fb0db-2743-457a-80f0-9a4edc509e9d/images/3be7c1bb-377c-4d5e-b4f6-1a6574b8a52b/845cdd93-def8-4d84-9a08-f8c991f89fe3',
>  '-O', 'raw', 
> '/rhev/data-center/mnt/glusterSD/nucvirt.mtk.hoberg.net:_vmstore/ba410e27-458d-4b32-969c-ad0c37edaceb/images/3be7c1bb-377c-4d5e-b4f6-1a6574b8a52b/845cdd93-def8-4d84-9a08-f8c991f89fe3']
>  failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing 
> sector 9566208: No such file or directory\\n')",) abortedcode=261

This is a gluster error that was already reported here several times.

ENOENT is not a valid error for write. Please report a gluster bug for this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HJP4MU5UURFZRZWQLXQRD6DVHBZD27LR/


[ovirt-users] Re: Error exporting into ova

2020-08-30 Thread Nir Soffer
On Fri, Aug 28, 2020 at 2:31 AM  wrote:
>
> I am testing the migration from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4.
>
> Exporting all VMs to OVAs, and re-importing them on a new cluster built from 
> scratch seems the safest and best method, because in the step-by-step 
> migration, there is simply far too many things that can go wrong and no easy 
> way to fail-back after each step.

You should really try the attach/detach storage domain flow; this is the
recommended way to move VMs from one oVirt system to another.

You could detach the entire domain with all VMs from the old system, and
connect it to the new system, without copying even one bit.

I guess you cannot do this because you don't use shared storage?

...
> So I have manually put the single-line fix in, which settles udev to ensure 
> that disks are not exported as zeros. That's the bug which renders the final 
> release oVirt 4.3 forever unfit, 4 years before the end of maintenance of 
> CentOS7, because it won't be fixed there.

Using oVirt 4.3 when 4.4 is already released is going to be painful; don't do this.

...
> But just as I was exporting not one of the trivial machines, that I have been 
> using for testing, but one of the bigger ones, that actually contain a 
> significant amout of data, I find myself hitting this timeout bug.
>
> The disks for both the trival and less-trivial are defined at 500GB, thinly 
> allocated. The trivial is the naked OS at something like 7GB actually 
> allocated, the 'real' has 113GB allocated. In both cases the OVA export file 
> to a local SSD xfs partition is 500GB, with lots of zeros and sparse 
> allocation in the case of the first one.
>
> The second came to 72GB of 500GB actually allocated, which didn't seem like a 
> good sign already, but perhaps there was some compression involved?
>
> Still the export finished without error or incident and the import on the 
> other side went just as well. The machine even boots and runs, it was only 
> once I started using it, that I suddenly had all types of file system 
> errors... it turns out 113-73GB were actually really cut off and missing from 
> the OVA export, and there is nobody and nothing checking for that.

You are hitting https://bugzilla.redhat.com/1854888

...
> I have the export domain backup running right now, but I'm not sure it's not 
> using the same mechanism under the cover with potentially similar results.

No, export domain is using qemu-img, which is the best tool for copying images.
This is how all disks are copied in oVirt in all flows. There are no
issues like ignored
errors or silent failures in storage code.

...
> P.P.S. So just where (and on which machine) do I need to change the timeout?

There are no timeouts in storage code, e.g. attach/detach domain, or
export to export
domain.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5BMSR6OGW7I4AU363QH562MY7HJ57NV/


[ovirt-users] Re: shutdown virtual machine did not display in the list when using virsh list --all and virsh list --state-shutoff

2020-08-30 Thread Nir Soffer
On Sun, Aug 30, 2020 at 11:49 AM Arik Hadas  wrote:
>
>
>
> On Sun, Aug 30, 2020 at 10:26 AM Yedidyah Bar David  wrote:
>>
>> Hi,
>>
>> On Sun, Aug 30, 2020 at 9:55 AM Kyre  wrote:
>> >
>> > Hello, I am using ovirt 4.3.8. I want to use the virsh command to boot the 
>> > shutdown virtual machine, but when using virsh list --all and virsh list 
>> > --state-shutoff, the shutdown virtual machine did not display in the list. 
>> > What is the cause of this, and how can I display the shutdown virtual 
>> > machine.
>>
>> You can't.
>>
>> All oVirt-managed VMs are, in libvirt terms, transient. The meta-data
>> about them is kept in the engine's database, not in libvirt.
>
>
> This is no longer the case - in 2017 we switched to persistent domains 
> (https://gerrit.ovirt.org/c/78046)
> When the engine issues a 'destroy' call, VDSM undefines the VM

Which makes the VM transient.

I think the reason we switched to "persistent" VMs was to use VM metadata, and
to use more common code paths that are less likely to break.

Even if we kept the VMs defined, you may not be able to start them later, since
the storage required by the VM may not be available on the host at the point
you start the VM (e.g. after a reboot). Even if the storage is available,
starting a VM behind the engine's back may lead to a split brain, since the VM
may be running on another host.

The right way to manage oVirt VMs is via the engine API. Since we removed the
oVirt shell, there is no easy equivalent to virsh, but one can script this
using the SDK.
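
For example, a one-off start of a VM can also be done with plain REST (the
engine address, credentials and VM id are placeholders):

curl -s -k --user admin@internal:password \
    -X POST \
    -H "Content-Type: application/xml" \
    -d "<action/>" \
    https://my.engine/ovirt-engine/api/vms/<vm-id>/start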

Nir

>> If you want this to plan for (or already are in) a case where the
>> engine is dead/unavailable, then you should make sure you keep regular
>> engine backups, so that you can restore it, and/or check about DR.
>>
>> Best regards,
>>
>> >
>> > virsh # list --all
>> > Please enter your authentication name: root
>> > Please enter your password:
>> >  IdName   State
>> > 
>> >  4 win10E2  running
>> >
>> > virsh # list --state-shutoff
>> >  IdName   State
>> > 
>> >
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> > oVirt Code of Conduct: 
>> > https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives: 
>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGFSRX5YF3GK5DBI6EVQBSEJ34VUM66E/
>>
>>
>>
>> --
>> Didi
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PD6Y4LWKVVNI4NPQBBJEPQEWROF7ZPYQ/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXV3XVLZZQ7627VWPA2VAUA4S5YSPG3M/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OSAI7XPC6V2RMISYVJJVYCYI7PR6LKPB/


[ovirt-users] Re: ovirt-imageio : can't upload / download

2020-08-22 Thread Nir Soffer
On Sat, Aug 22, 2020 at 6:26 PM Michael Jones  wrote:
>
> On 20/08/2020 20:55, Michael Jones wrote:
>
> On 19/08/2020 14:48, Michael Jones wrote:
>
> On 19/08/2020 12:12, Michael Jones wrote:
>
> On 19/08/2020 10:41, Nir Soffer wrote:
>
> There is no warning the method was deprecated and will be missing 
> functionality.
>
> The steps detailed on the alt install page are for the all-in-one running 
> engine-setup.
>
> It's also worth noting this works fine in;
>
> Version 4.3.1.1-1.el7
>
> but not in;
>
> Version 4.4.1.10-1.el8
>
> (el8 has the change in imageio daemons)
>
> The alternate install method is still useful to have, but i think a red 
> warning about all-in-one on el8 on that page would be good.
>
> Kind Regards,
> Michael Jones
>
> Micheal, can you file a bug for this?
>
> If you have a good use case for all-in-one deployment (not using
> hosted engine), please explain
> it in the bug.
>
> Personally I think simple all-in-one deployment without the complexity
> of hosted engine is better,
> and we should keep it, but for this we need to teach engine to handle
> the case when the proxy
> and the daemon are the same server.
>
> In this case engine will not try to setup a proxy ticket, and image
> transfer would work directly
> with the host daemon.
>
> I'm not very optimistic that we will support this again, since this
> feature is not needed for RHV
> customers, but for oVirt this makes sense.
>
> Nir
>
> Yes, I can file a bug,
>
> The main usage / setup's I have are;
>
> on-prem installs:
>
> - hosted engine
> - gluster
> - high availiblity
> - internal ip address
> - easy great...
>
> dedicated host provider for example OVH single machine:
>
> - alternate install
> - all-in-one
>
> The main reason for the separation is that using the cockpit install / hosted 
> engine install causes problems with ip allocations;
>
> cockpit method requires 1x ip for host, and 1x ip for engine vm, and both ip 
> must be in the same subnet...
>
> applying internal ip would cut off access, and to make it even harder, 
> getting public ip blocks didn't work as the box main ip wouldn't be in the 
> same subnet, adding nic alias ip doesn't work either (fail on install due to 
> failing to setup ovirtmgmt network).
>
> atm, i'll struggle with changing the machine's main ip to be one of the same 
> subnet with the engine one... (currently causes host to be taken offline due 
> to hosting provider, health checks)
>
> provided i can change the host primary ip to be one of the OVH failover ip 
> allocated in a block... i will be able to install using the cockpit.
>
> and after the install i can setup internal ip with the network tag.
>
> Kind Regards,
>
> Mike
>
> despite managing to get OVH to disable monitoring (ping to the main ip, and 
> rebooting host) and getting the host in the same ip range as the engine vm...
>
> ie:
>
> host ip: 158.x.x.13/32 = not used anymore
>
> new subnet: 54.x.x.x/28
>
> and reserving;
> host = 54.x.x.16
> engine = 54.x.x.17
>
> [ ERROR ] The Engine VM (54.x.x.17/28) and the default gateway (158.x.x.254) 
> will not be in the same IP subnet.
>
> the hosted engine installer crashes due to the gw being in a different 
> subnet, so all three;
>
> - host
> - engine
> - gateway
>
> must be in the same subnet...
>
> this rules out an install on ovh dedicated server.
>
> unless... I can install the all-in-one again (this bit works), and then 
> install the engine vm in an existing all-in-one setup...
>
> essentially the cockpit installation is not compatible with this infra setup.
>
> After going through the documentation again, I understand the best way to 
> approach this would be to have a remote manager, ie;
>
> self hosted engine (on-prem) > host/2nd DC/Cluster (remote/ovh)
>
> standalone manager (on-prem) > host/2nd DC/Cluster (remote/ovh)
>
> That way resolves the ip issues (only need host ip, just don't install the 
> manager on the remote server)
>
> outstanding... i need to workout the security implications of this.
>
> shame all-in-one is gone, but the above does work, and even means the remote 
> host can again use local storage.
>
> I'll raise the bug report now i've finished testing, as I think stand alone, 
> all-in-one, dedicated hosts are affordable and open ovirt to a wider user 
> base (keeping hardware requirements minimal).
>
> Thanks again,
>
> Mike
>
> Bug raised:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1871348

Thanks, but we need more info on why you cannot use the recommended deployment.
See my questions in the bug.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2TGH3E4MZWFKIQGID64MDRBHYMHT3USZ/


[ovirt-users] Re: Incremental VM backups

2020-08-19 Thread Nir Soffer
On Wed, Aug 19, 2020 at 1:04 PM Kevin Doyle
 wrote:
>
> Hi
> I am looking at ways to backup VM's, ideally that support incremental 
> backups. I have found a couple of python scripts that snapshot a VM and back 
> it up but not incremental. The question is what do you use to backup the VM's 
> ? (both linux and windows)

Full backup has been supported for a while and can be used now. Incremental
backup can also be used now on CentOS 8.2, but there are many rough edges that
may break your incremental backup, like creating and deleting snapshots or
moving disks.

Restoring backups requires manual work, since we don't really have a backup
solution but a backup API. The example backup scripts only do the minimal
work to show how to use the API, and require some scripting to build a simple
solution.

What you run on the VM does not matter, since the backup is in the disk
level, not in the guest filesystem level. However, to get consistent backups,
you must run qemu-guest-agent in the guest.
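
For example, on an EL or Fedora guest this is just (a sketch; the package and
service name may differ on other distributions):

# install and start the guest agent inside the guest
dnf install qemu-guest-agent
systemctl enable --now qemu-guest-agent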

With these notes, here is an example backup for a vm.

The VM has  2 disks:
- 6g qcow2 - os disk
- 100g qcow2 - data disk (empty file system)

qcow2 is important: incremental backup is possible only for qcow2 disks. To
enable incremental backup when creating a disk, you need to check this:

[ ] Enable incremental backup

This will use qcow2 format for the disk, regardless of the allocation policy
(thin or preallocated).

I'm running this on a RHEL AV 8.3 nightly build, but this used to work on
RHEL AV 8.2 a few weeks ago, so CentOS 8.2 should work. If not, you will have
to wait for the next CentOS update, since we don't test CentOS.

Install packages on the host running the backup script:

dnf install python3-ovirt-engine-sdk4 ovirt-imageio-client

If you run this on an oVirt host, the packages are already installed, and the
backup will be faster.

Creating a full backup:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py full \
--engine-url https://engine3 \
--username admin@internal \
--password-file password \
--cafile engine3.pem \
--backup-dir /var/tmp/backups \
ed2e2c59-36d3-41e2-ac7e-f4d33eb69ad4
[   0.0 ] Starting full backup for VM ed2e2c59-36d3-41e2-ac7e-f4d33eb69ad4
[   0.2 ] Waiting until backup e0de71d6-146a-40f6-80f6-7c6b2969aa3f is ready
[   1.2 ] Created checkpoint 'c13420d7-ab33-4545-acad-4fe1417cc61f'
(to use in --from-checkpoint-uuid for the next incremental backup)
[   1.2 ] Creating image transfer for disk 419b83e7-a6b5-4445-9324-a85e27b134d2
[   2.3 ] Image transfer 6fc1944c-ce4c-40e7-8119-61208f6489fe is ready
Formatting 
'/var/tmp/backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008192037.full.qcow2',
fmt=qcow2 cluster_size=65536 compression_type=zlib size=107374182400
lazy_refcounts=off refcount_bits=16
[ 100.00% ] 100.00 GiB, 0.79 seconds, 126.17 GiB/s
[   3.1 ] Finalizing image transfer
[   5.2 ] Creating image transfer for disk 58daea80-1229-4c6b-b33c-1a4e568c8ad7
[   6.2 ] Image transfer 63e1f899-7add-46a8-8d68-4ef3e6a1385b is ready
Formatting 
'/var/tmp/backups/58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008192037.full.qcow2',
fmt=qcow2 cluster_size=65536 compression_type=zlib size=6442450944
lazy_refcounts=off refcount_bits=16
[ 100.00% ] 6.00 GiB, 8.39 seconds, 731.93 MiB/s
[  14.6 ] Finalizing image transfer
[  19.8 ] Full backup completed successfully

The backup created these files:

$ qemu-img info
/var/tmp/backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008192037.full.qcow2
image: 
/var/tmp/backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008192037.full.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 51 MiB
...

$ qemu-img info
/var/tmp/backups/58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008192037.full.qcow2
image: 
/var/tmp/backups/58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008192037.full.qcow2
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 2.2 GiB
...

Now let's create an incremental backup - we need to use the checkpoint id created
during the previous backup:

[   1.2 ] Created checkpoint
'c13420d7-ab33-4545-acad-4fe1417cc61f' (to use in
--from-checkpoint-uuid for the next incremental backup)

And the "incremental" sub command:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py
incremental \
--engine-url https://engine3 \
--username admin@internal \
--password-file password \
--cafile engine3.pem \
--backup-dir /var/tmp/backups \
--from-checkpoint-uuid c13420d7-ab33-4545-acad-4fe1417cc61f \
ed2e2c59-36d3-41e2-ac7e-f4d33eb69ad4
[   0.0 ] Starting incremental backup for VM
ed2e2c59-36d3-41e2-ac7e-f4d33eb69ad4
[   0.2 ] Waiting until backup 76fbea79-b92a-4970-a1bf-f60904dabc6a is ready
[   1.2 ] Created checkpoint '68936483-6c5c-41e8-9f0c-b00ff963edfc'
(to use in --from-checkpoint-uuid for the next incremental backup)
[   1.2 ] Creating image transfer for disk 419b83e7-a6b5-4445-9324-a85e27b134d2
[   2.3 ] Image transfer 

[ovirt-users] Re: ovirt-imageio : can't upload / download

2020-08-19 Thread Nir Soffer
On Tue, Aug 18, 2020 at 8:51 PM Michael Jones  wrote:
>
> On 18/08/2020 17:58, Nir Soffer wrote:
>
> it does sound as if, my problems are around the fact that i am using an
> all-in-one box, (host and engine all in one);
>
> https://www.ovirt.org/download/alternate_downloads.html
>
> This explains that you can install all-in-one when the engine is a VM
> running on the single host, not as a program running on the host.
>
> How did you manage to get engine installed on the same host?
>
> I would expect that the installer would fail or at least warn about this.
>
> The alternate install method is essentially, install packages and run 
> engine-setup, which doesn't setup the hosted vm. The default installer via 
> cockpit always installs engine as vm.
>
> perhaps a warning is needed on the alternate installer page for epel8.
>
> at the moment there is only a recommendation to use the normal installer.
>
> On 18/08/2020 17:58, Nir Soffer wrote:
>
> the download / upload function is important to me as my backup solution
> is dependent on this.
>
> Until this is sorted out, you should know that you can transfer images
> without the proxy. The proxy is needed only for the UI, for cases when engine
> and hosts are on different networks, so the only way to transfer images is via
> the engine host.
>
> I use vprotect which i think is dependent on the proxy,

This is worth a bug for vprotect; it should never use a proxy if it can access
the host directly. This is very bad for the backup use case, especially with
the old proxy.

>  but I'll definitely checkout the sdk/scripts you linked.
>
> I'll post an update once the new host is setup the normal way / working.
>
> Thanks again.
>
> Michael Jones
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKPIVAWZ22W5PWJYSWJ3WB4QW6SAF74I/


[ovirt-users] Re: ovirt-imageio : can't upload / download

2020-08-19 Thread Nir Soffer
On Wed, Aug 19, 2020 at 12:29 PM Michael Jones  wrote:
>
> On 19/08/2020 06:52, Yedidyah Bar David wrote:
>
> On Tue, Aug 18, 2020 at 8:51 PM Michael Jones  wrote:
>
> On 18/08/2020 17:58, Nir Soffer wrote:
>
> it does sound as if, my problems are around the fact that i am using an
> all-in-one box, (host and engine all in one);
>
> https://www.ovirt.org/download/alternate_downloads.html
>
> This explains that you can install all-in-one when the engine is a VM
> running on the single host, not as a program running on the host.
>
> Let's clarify the terminology first. I admit we are not always super-clear
> about this.
>
> - Standalone engine - Engine is running on some machine, which it does
> not manage by itself. Normally, this is a physical machine, but can be
> a VM managed by something else (virsh/virt-manager, another ovirt engine,
> vmware/virtualbox/hyperv/xen, etc.).
>
> - Hosted-engine - An engine that is running in a VM, that runs inside
> a host, that this engine manages. If it sounds like a chicken-and-egg
> problem, it indeed is... See documentation and some presentation slides
> on the website for the architecture, if interested.
>
> - All-In-One - a Standalone engine that also manages, as a host, the
> machine on which it runs. This used to have official support in the
> past, in terms of code helping to implement it (in engine-setup):
>
> https://www.ovirt.org/develop/release-management/features/integration/allinone.html
>
> As above page states, this is long gone. However, over the years,
> people did report successes in doing this manually - install and
> setup an engine, then add it to itself. I agree it would be nice to
> keep it working, and the discussion below indeed clarifies that it's
> currently broken, but this is definitely very low priority IMO. The
> official answer to the question "How can I setup oVirt on a single
> machine?" is: Use Hosted-engine with gluster, a.k.a HCI.
>
> from the link you shared, the status is;
>
> Current status
>
> Included since 3.1
> Deprecated since 3.6.0
> Removed since 4.0.0
>
> I think there should be a warning that the install is deprecated, traversing;
>
> - https://www.ovirt.org/
> - https://www.ovirt.org/download/ (download button)
> - https://www.ovirt.org/download/alternate_downloads.html (Alternate download 
> options)
>
> There is no warning the method was deprecated and will be missing 
> functionality.
>
> The steps detailed on the alt install page are for the all-in-one running 
> engine-setup.
>
> It's also worth noting this works fine in;
>
> Version 4.3.1.1-1.el7
>
> but not in;
>
> Version 4.4.1.10-1.el8
>
> (el8 has the change in imageio daemons)
>
> The alternate install method is still useful to have, but i think a red 
> warning about all-in-one on el8 on that page would be good.
>
> Kind Regards,
> Michael Jones

Michael, can you file a bug for this?

If you have a good use case for all-in-one deployment (not using
hosted engine), please explain
it in the bug.

Personally I think simple all-in-one deployment without the complexity
of hosted engine is better,
and we should keep it, but for this we need to teach engine to handle
the case when the proxy
and the daemon are the same server.

In this case engine will not try to setup a proxy ticket, and image
transfer would work directly
with the host daemon.

I'm not very optimistic that we will support this again, since this
feature is not needed for RHV
customers, but for oVirt this makes sense.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UUITU44JQTTALJOVAH6GQVY54N3JN7L/


[ovirt-users] Re: ovirt-imageio : can't upload / download

2020-08-18 Thread Nir Soffer
On Tue, Aug 18, 2020 at 7:10 PM Michael Jones  wrote:
> Thanks for the help, not sure if this is a top post or bottom post mail
> list, feel free to tell me off;

We don't have strict rules, but I think that bottom posting will be better, even
better selecting the relevant parts in your reply.

> it does sound as if, my problems are around the fact that i am using an
> all-in-one box, (host and engine all in one);
>
> https://www.ovirt.org/download/alternate_downloads.html

This explains that you can install all-in-one when the engine is a VM
running on the single host, not as a program running on the host.

How did you manage to get engine installed on the same host?

I would expect that the installer would fail or at least warn about this.

> I'm about to setup another server, so i'll perhaps try engine as vm
> setup through the cockpit installer, which i've only used on ovirt
> clusters so far, not on single host machines.
>
> If that works, perhaps i'll work out a way to port the server i'm having
> issues with.

Taking a backup of the engine database, installing engine on another host or
as a VM, and restoring the backup should be the easiest way to move the engine.
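
For example, a minimal sketch using the engine-backup tool (check
"engine-backup --help" for the exact options available in your version):

# on the old engine host
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

# on the new engine host or VM, after installing ovirt-engine
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
    --provision-db --restore-permissions

and then run engine-setup to complete the configuration.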

> the download / upload function is important to me as my backup solution
> is dependent on this.

Until this is sorted out, you should know that you can transfer images without
the proxy. The proxy is needed only for the UI, for cases when engine and hosts
are on different networks, so the only way to transfer images is via the
engine host.

To get this working on your setup, you need to restore 50-vdsm.conf to the
default configuration and restart the ovirt-imageio service.
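
For reference, the default remote section on a host (matching the host settings
listed later in this thread) is:

[remote]
port = 54322

After restoring it, restarting the service should be enough, for example:

systemctl restart ovirt-imageio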

There is no way to disable the proxy in engine, so for every transfer engine
will try to add a ticket to the proxy and will fail, but this should not fail
the transfer, only log the error.

To transfer images, use upload_disk.py and download_disk.py from
ovirt-engine-sdk.

dnf install python3-ovirt-engine-sdk4

And try:

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py -h
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
-h

There is also a backup_vm.py example that can be very useful for
backup, supporting
both full and incremental backup:

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py -h

Nir

> Thanks,
>
> Kind Regards,
>
> Michael Jones
>
> On 18/08/2020 16:55, Nir Soffer wrote:
> > On Tue, Aug 18, 2020 at 4:06 PM Michael Jones  
> > wrote:
> >> I have image upload/download working on some older ovirt servers where
> >> it still has the split daemon/proxy...
> >>
> >> on one newer one this feature is not working;
> >>
> >> software in use:
> >>
> >> CentOS 8
> >> ovirt-engine-4.4.1.10-1.el8.noarch
> >> ovirt-imageio-client-2.0.9-1.el8.x86_64
> >> ovirt-imageio-daemon-2.0.9-1.el8.x86_64
> >> ovirt-imageio-common-2.0.9-1.el8.x86_64
> > Both host and engine running these versions?
> >
> >> the ui test button allowed me to work out 50-vdsm.conf was setting the
> >> wrong remote port... (was 54322, changed to 54323)
> > 50-vdsm.conf cannot break the connection test, since it should be
> > installed only on the hosts. On engine you have 50-engine.conf.
> >
> > The correct configuration for engine is:
> >
> > [tls]
> > enable = true
> > key_file = /etc/pki/ovirt-engine/keys/apache.key.nopass
> > cert_file = /etc/pki/ovirt-engine/certs/apache.cer
> > ca_file = /etc/pki/ovirt-engine/apache-ca.pem
> >
> > [remote]
> > port = 54323
> >
> > [local]
> > enable = false
> >
> > [control]
> > transport = tcp
> > port = 54324
> >
> > The correct settings for the hosts are:
> >
> > [tls]
> > enable = true
> > key_file = /etc/pki/vdsm/keys/vdsmkey.pem
> > cert_file = /etc/pki/vdsm/certs/vdsmcert.pem
> > ca_file = /etc/pki/vdsm/certs/cacert.pem
> >
> > [remote]
> > port = 54322
> >
> > These files belong to engine and vdsm and you should not change them.
> > Your changes will be overwritten on the next upgrade.
> >
> > The top of the file explains how to change the configuration.
> >
> >> updated remote with;
> >>
> >> [remote]
> >> host = 0.0.0.0
> >> port = 54323
> >>
> >> the test now passes, but on upload or download it still fails.
> > Do you have 50-vdsm.conf on the engine host?!
> >
> > It sounds like you have an all-in-one configuration where the engine host
> > is also the single host. This configuration has not been supported for
> > about 5 years.
> >
> > Or you 

[ovirt-users] Re: ovirt-imageio : can't upload / download

2020-08-18 Thread Nir Soffer
On Tue, Aug 18, 2020 at 4:06 PM Michael Jones  wrote:
> I have image upload/download working on some older ovirt servers where
> it still has the split daemon/proxy...
>
> on one newer one this feature is not working;
>
> software in use:
>
> CentOS 8
> ovirt-engine-4.4.1.10-1.el8.noarch
> ovirt-imageio-client-2.0.9-1.el8.x86_64
> ovirt-imageio-daemon-2.0.9-1.el8.x86_64
> ovirt-imageio-common-2.0.9-1.el8.x86_64

Both host and engine running these versions?

> the ui test button allowed me to work out 50-vdsm.conf was setting the
> wrong remote port... (was 54322, changed to 54323)

50-vdsm.conf cannot break the connection test, since it should be installed
only on the hosts. On engine you have 50-engine.conf.

The correct configuration for engine is:

[tls]
enable = true
key_file = /etc/pki/ovirt-engine/keys/apache.key.nopass
cert_file = /etc/pki/ovirt-engine/certs/apache.cer
ca_file = /etc/pki/ovirt-engine/apache-ca.pem

[remote]
port = 54323

[local]
enable = false

[control]
transport = tcp
port = 54324

The correct settings for the hosts are:

[tls]
enable = true
key_file = /etc/pki/vdsm/keys/vdsmkey.pem
cert_file = /etc/pki/vdsm/certs/vdsmcert.pem
ca_file = /etc/pki/vdsm/certs/cacert.pem

[remote]
port = 54322

These files belong to engine and vdsm and you should not change them.
Your changes will be overwritten on the next upgrade.

The top of the file explains how to change the configuration.

> updated remote with;
>
> [remote]
> host = 0.0.0.0
> port = 54323
>
> the test now passes, but on upload or download it still fails.

Do you have 50-vdsm.conf on the engine host?!

It sounds like you have an all-in-one configuration where the engine host is
also the single host. This configuration has not been supported for about 5
years.

Or you installed vdsm by mistake on the engine host; in this case you will have
both 50-vdsm.conf and 50-engine.conf, and because "vdsm" sorts after "engine",
its configuration will win.

> Next i changed the control to be unix socket instead of tcp port 54324
> (vdsm was giving an error: Image daemon is unsupported);
>
> I looked up the error line in the vdsm code, and found it was looking
> for unix socket: DAEMON_SOCK=/run/ovirt-imageio/sock
>
> switching to sock seemed to resolve all errors in the vdsm log;

Expected, using tcp for the host is not supported.

>
> ---
>
> content of the imageio log;

imageio log on the engine or the host?

> no errors as far as i can see:
>
> 2020-08-18 12:49:56,109 INFO(MainThread) [server] Starting
> (pid=2696562, version=2.0.9)
> 2020-08-18 12:49:56,109 DEBUG   (MainThread) [services] Creating
> remote.service on port 54323
> 2020-08-18 12:49:56,111 DEBUG   (MainThread) [http] Prefer IPv4: False
> 2020-08-18 12:49:56,111 DEBUG   (MainThread) [http] Available network
> interfaces: [(, ,
> 6, '', ('0.0.0.0', 54323))]
> 2020-08-18 12:49:56,111 DEBUG   (MainThread) [http] Creating server
> socket with family=AddressFamily.AF_INET and type=SocketKind.SOCK_STREAM
> 2020-08-18 12:49:56,111 DEBUG   (MainThread) [services] Securing server
> (cafile=/etc/pki/vdsm/certs/cacert.pem,
> certfile=/etc/pki/vdsm/certs/vdsmcert.pem,
> keyfile=/etc/pki/vdsm/keys/vdsmkey.pem)

So this is the log from the host

> 2020-08-18 12:49:56,113 INFO(MainThread) [services] remote.service
> listening on ('0.0.0.0', 54323)

This port is wrong, you will not be able to transfer anything since engine
assumes port 54322.

> 2020-08-18 12:49:56,113 DEBUG   (MainThread) [services] Creating
> local.service on socket '\x00/org/ovirt/imageio'
> 2020-08-18 12:49:56,113 INFO(MainThread) [services] local.service
> listening on '\x00/org/ovirt/imageio'
> 2020-08-18 12:49:56,113 DEBUG   (MainThread) [services] Creating
> control.service on socket '/run/ovirt-imageio/sock'
> 2020-08-18 12:49:56,113 DEBUG   (MainThread) [uhttp] Removing socket
> '/run/ovirt-imageio/sock'
> 2020-08-18 12:49:56,113 INFO(MainThread) [services] control.service
> listening on '/run/ovirt-imageio/sock'
> 2020-08-18 12:49:56,115 DEBUG   (MainThread) [server] Changing ownership
> of /run/ovirt-imageio to 988:984
> 2020-08-18 12:49:56,115 DEBUG   (MainThread) [server] Changing ownership
> of /var/log/ovirt-imageio/daemon.log to 988:984
> 2020-08-18 12:49:56,115 DEBUG   (MainThread) [server] Dropping root
> privileges, running as 988:984
> 2020-08-18 12:49:56,116 DEBUG   (MainThread) [services] Starting
> remote.service
> 2020-08-18 12:49:56,116 DEBUG   (remote.service) [services]
> remote.service started
> 2020-08-18 12:49:56,116 DEBUG   (MainThread) [services] Starting
> local.service
> 2020-08-18 12:49:56,117 DEBUG   (local.service) [services] local.service
> started
> 2020-08-18 12:49:56,117 DEBUG   (MainThread) [services] Starting
> control.service
> 2020-08-18 12:49:56,117 DEBUG   (control.service) [services]
> control.service started
> 2020-08-18 12:49:56,117 INFO(MainThread) [server] Ready for requests
> 2020-08-18 12:51:34,602 INFO(Thread-1) [http] OPEN client=local
> 2020-08-18 12:51:34,603 INFO

[ovirt-users] Re: Help with problem in oVirt 4.4.

2020-08-18 Thread Nir Soffer
On Tue, Aug 18, 2020 at 6:07 AM Rodrigo Mauriz  wrote:

Hi Rodrigo, I'm moving the discussion to ovirt users list, which is the right
place to discuss this:
https://lists.ovirt.org/archives/list/users%40ovirt.org/

> When I try to upload an ISO file by:
>
> Storage -> Storage Domains -> Disks
> I get an error:
> "Connection to ovirt-imageio-proxy service failed. Make sure service is 
> installed, configured, and ovirt-engine certificate is registered as valid CA 
> in browser."
>
> Review and note that the services that perform this task were not installed 
> in the ovirt-engine:
> • ovirt-imageio-proxy

This package does not exist and is not needed in oVirt 4.4.

> • ovirt-imageio-daemon
> • ovirt-imageio-common

These packages are needed, and are installed when you install ovirt-engine.
The ovirt-imageio service is configured automatically when you run engine-setup.
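
To verify, you can check the unified service (which replaced the old split
daemon and proxy services) on the engine host, for example:

systemctl status ovirt-imageio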

> Example:
> #systemctl status ovirt-imageio-proxy (not installed)
>
> I tried to install it but it throws it an error:
>
> #yum install ovirt-imageio-proxy engine-setup
>
> Updating Subscription Management repositories.
> Unable to read consumer identity
> This system is not registered to Red Hat Subscription Management. You can use 
> subscription-manager to register.
> Last metadata expiration check: 0:41:46 Aug on Fri 07 Aug 2020 10:20:42 AM 
> -04.
> No match for argument: ovirt-imageio-proxy
> No match for argument: engine-setup
> Error: Unable to find a match: ovirt-imageio-proxy engine-setup

This is correct, ovirt-imageio-proxy is not needed and does not exist.

> The same, for the engine-iso-uploader

I think the iso uploader was deprecated a long time ago, and it is not
needed to upload images.

> I asked a person who works at Red Hat and they told me that this  apps 
> (services) are not released yet.

Uploading images from the UI was available in ovirt 4.4.0, but is fully
functional since ovirt 4.4.1.

We know about one issue - if you replaced the apache certificates with 3rd
party certificates, uploading via the ovirt-imageio service on the engine host
(as a proxy) will not work:
https://bugzilla.redhat.com/1866745
https://bugzilla.redhat.com/1862107

If you are using the engine-generated certificates, upload and download should work.
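
If the browser does not trust the engine CA yet, you can fetch the CA from the
engine and import it into the browser. A sketch, using what is normally the
engine CA download URL (adjust the engine host name):

curl -k -o ca.pem 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'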

We may have more info about the failure in the engine log. Please share the
log covering the time when you tried to upload an image.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJP2EPTUMW6EJJMZMOFXMSYX3HIOVCAI/


[ovirt-users] Re: Is the udev settling issue more wide spread? Getting failed 'qemu-img -convert' also while copying disks between data and vmstore domains

2020-08-12 Thread Nir Soffer
On Wed, Aug 12, 2020 at 2:25 AM  wrote:

> While trying to diagnose an issue with a set of VMs that get stopped for
> I/O problems at startup, I try to deal with the fact that their boot disks
> cause this issue, no matter where I connect them. They might have been the
> first disks I ever tried to sparsify and I was afraid that might have
> messed them up. The images are for a nested oVirt deployment and they
> worked just fine, before I shut down those VMs...
>
> So I first tried to hook them as secondary disks to another VM to have a
> look, but that just cause the other VM to stop at boot.
>
> Also tried downloading, exporting, and plain copying the disks to no
> avail, OVA exports on the entire VM fail again (fix is in!).
>
> So to make sure copying disks between volumes *generally* work, I tried
> copying a disk from a working (but stopped) VM from 'vmstore' to 'data' on
> my 3nHCI farm, but that failed, too!
>
> Plenty of space all around, but all disks are using thin/sparse/VDO on SSD
> underneath.
>
> Before I open a bug, I'd like to have some feedback if this is a standard
> QA test, this is happening to you etc.
>
> Still on oVirt 4.3.11 with pack_ova.py patched to wait for the udev
> settle,
>
> This is from the engine.log on the hosted-engine:
>
> 2020-08-12 00:04:15,870+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM gem2 command
> HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ("Command
> ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
> 'raw', 
> u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14',
> '-O', 'raw', 
> u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5']
> failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
> sector 131072: Transport endpoint is not connected\\nqemu-img: error while
> reading sector 135168: Transport endpoint is not connected\\nqemu-img:
> error while reading sector 139264: Transport
>   endpoint is not connected\\nqemu-img: error while reading sector 143360:
> Transport endpoint is not connected\\nqemu-img: error while reading sector
> 147456: Transport endpoint is not connected\\nqemu-img: error while reading
> sector 151552: Transport endpoint is not connected\\n')",)
>
> and this is from the vdsm.log on the gem2 node:
> Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T',
> 'none', '-f', 'raw', 
> u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14',
> '-O', 'raw', 
> u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5']
> failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
> sector 131072: Transport endpoint is not connected\nqemu-img: error while
> reading sector 135168: Transport endpoint is not connected\nqemu-img: error
> while reading sector 139264: Transport endpoint is not connected\nqemu-img:
> error while reading sector 143360: Transport endpoint is not
> connected\nqemu-img: error while reading sector 147456: Transport endpoint
> is not connected\nqemu-img: error while reading sector 151552: Transport
> endpoint is not connected\n')
> 2020-08-12 00:03:15,428+0200 ERROR (tasks/7) [storage.Image] Unexpected
> error (image:849)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 837,
> in copyCollapsed
> raise se.CopyImageError(str(e))
> CopyImageError: low level Image copy failed: ("Command
> ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
> 'raw', 
> u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14',
> '-O', 'raw', 
> u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5']
> failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
> sector 131072: Transport endpoint is not connected\\nqemu-img: error while
> reading sector 135168: Transport endpoint is not connected\\nqemu-img:
> error while reading sector 139264: Transport endpoint is not
> connected\\nqemu-img: error while reading sector 143360: Transport endpoint
> is not connected\\nqemu-img: error while reading sector 147456: Transport
> endpoint is not connected\\nqemu-img: error while reading sector 151552: T
>  ransport endpoint is not 

[ovirt-users] Re: Support for Shared SAS storage

2020-08-12 Thread Nir Soffer
On Wed, Aug 12, 2020 at 9:46 AM Lao Dh  wrote:

> Update for all. The deployment finally detects the SAS connected storage,
> after manually compiling the driver for the RAID controller PCI card.
> Nir is correct: FC detects anything that is not NFS or iSCSI. This time
> it also detected a locally connected SATA SSD.
>

You should blacklist this local device, see the instructions in
/etc/multipath.conf.
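
A minimal sketch of such a blacklist section (the WWID is a placeholder - take
the real value from the multipath -ll output for the local SSD), added
following the instructions at the top of /etc/multipath.conf:

blacklist {
    wwid "3600508b1001c1234567890abcdef0000"
}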


> On Monday, August 10, 2020 at 1:55:37 GMT+8, Nir Soffer wrote:
>
>
> On Sat, Aug 8, 2020 at 6:56 AM Jeff Bailey  wrote:
> >
> > I haven't tried with 4.4 but shared SAS works just fine with 4.3 (and
> has for many, many years).  You simply treat it as Fibre Channel.  If your
> LUNs aren't showing up I'd make sure they're being claimed as multipath
> devices.  You want them to be.  After that, just make sure they're
> sufficiently wiped so they don't look like they're in use.
>
> Correct. FC in oVirt actually means "not iSCSI". Any device - including local
> devices - that multipath exposes, and is not iSCSI, is considered FC.
>
> This is not officially supported and not documented anywhere. I learned about
> it only when we tried to deploy a strict multipath blacklist allowing only
> "iscsi" or "fc" transport, and found that it broke users' setups with SAS and
> other esoteric storage.
>
> Since 4.4 is the last feature release, this will never change even if it
> is not
> documented.
>
> Nir
>
> > On 8/7/2020 10:49 PM, Lao Dh via Users wrote:
> >
> > Wow. That's sound bad. Then what storage type you choose at last (with
> your SAS connected storage)? VMware vSphere support DAS. Red Hat should do
> something.
> >
> > On Saturday, August 8, 2020 at 4:06:34 GMT+8, Vinícius Ferrão via Users wrote:
> >
> >
> > No, there’s no support for direct attached shared SAS storage on
> oVirt/RHV.
> >
> > Fibre Channel is a different thing that oVirt/RHV supports.
> >
> > > On 7 Aug 2020, at 08:52, hkexdong--- via Users 
> wrote:
> > >
> > > Hello Vinícius,
> > > Do you able to connect the SAS external storage?
> > > Now I've the problem during host engine setup. Select Fibre Channel
> and end up show "No LUNS found".
> > > ___
> > > Users mailing list -- users@ovirt.org
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RDPLKGIRN5ZGIEPWGOKMGNFZNMCEN5RC/
> >
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2CLI3YSYU7BPI62YANJXZV7RIQFOXXED/
> >
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOBHQDCBZZK5WKRAUNHP5CGFYY3HQYYU/
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UY52JRY5EMQJTMKG3POE2YXSFGL7P55S/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYWUVULGUNLTU6QOY6WOUKBCA5IAO5QR/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DW4XQ7SFMFVS6G7WDMN6SHJEHPXBOHTT/


[ovirt-users] Re: Thin Provisioned to Preallocated

2020-08-09 Thread Nir Soffer
On Mon, Aug 10, 2020 at 2:26 AM Nir Soffer  wrote:
>
> On Thu, Aug 6, 2020 at 5:35 PM Jorge Visentini  
> wrote:
> >
> > Hi oVirt land.
> >
> > Can I convert the disks of Thin Provision to Preallocated?
>
> We don't have a way to convert the format for the same disk.
>
> We should have this in a future version, since it is important for
> incremental backup.
See https://bugzilla.redhat.com/98


>
> The most flexible way now is to download and upload the disk. During
> upload you can
> convert the disk format.
>
> Here is an example:
>
> Download qcow2 disks to raw-sparse format:
>
> $ ./download_disk.py \
> --engine-url https://engine3 \
> --username admin@internal \
> --password-file engine3-password \
> --cafile engine3.pem \
> --format raw \
> 58daea80-1229-4c6b-b33c-1a4e568c8ad7 \
> /var/tmp/download.raw
> Connecting...
> Creating image transfer...
> Transfer ID: fe4c3409-7e82-4d68-a68b-8b61b9d70647
> Transfer host name: host4
> Downloading image...
> Formatting '/var/tmp/download.raw', fmt=raw size=6442450944
> [ 100.00% ] 6.00 GiB, 9.45 seconds, 650.28 MiB/s
> Finalizing image transfer...
>
> $ qemu-img info /var/tmp/download.raw
> image: /var/tmp/download.raw
> file format: raw
> virtual size: 6 GiB (6442450944 bytes)
> disk size: 2.18 GiB
>
> We could download also to qcow2 format, this was just an example how
> you can convert the format during download.
>
> Now upload back to new disk in raw preallocated format.
>
> $ ./upload_disk.py \
> --engine-url https://engine3 \
> --username admin@internal \
> --password-file engine3-password \
> --cafile engine3.pem \
> --sd-name nfs1 \
> --disk-format raw \
> /var/tmp/download.raw
> Checking image...
> Image format: raw
> Disk format: raw
> Disk content type: data
> Disk provisioned size: 6442450944
> Disk initial size: 6442450944
> Disk name: download.raw
> Disk backup: False
> Connecting...
> Creating disk...
> Disk ID: beb2435e-9607-4a09-b6a6-b5e5f4ae9fd8
> Creating image transfer...
> Transfer ID: b9fd2ad9-be85-4db8-998b-c5f981f8103c
> Transfer host name: host4
> Uploading image...
> [ 100.00% ] 6.00 GiB, 5.30 seconds, 1.13 GiB/s
> Finalizing image transfer...
> Upload completed successfully
>
> You can replace now the old thin disk (58daea80-1229-4c6b-b33c-1a4e568c8ad7)
> with the new preallocated disk (beb2435e-9607-4a09-b6a6-b5e5f4ae9fd8).
>
> Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TUIMTUWKX3YOCGMKUCFFRSX57HCJCNJ2/


[ovirt-users] Re: Thin Provisioned to Preallocated

2020-08-09 Thread Nir Soffer
On Thu, Aug 6, 2020 at 5:35 PM Jorge Visentini  wrote:
>
> Hi oVirt land.
>
> Can I convert the disks of Thin Provision to Preallocated?

We don't have a way to convert the format for the same disk.

We should have this in a future version, since it is important for
incremental backup.

The most flexible way now is to download and upload the disk. During
upload you can
convert the disk format.

Here is an example:

Download qcow2 disks to raw-sparse format:

$ ./download_disk.py \
--engine-url https://engine3 \
--username admin@internal \
--password-file engine3-password \
--cafile engine3.pem \
--format raw \
58daea80-1229-4c6b-b33c-1a4e568c8ad7 \
/var/tmp/download.raw
Connecting...
Creating image transfer...
Transfer ID: fe4c3409-7e82-4d68-a68b-8b61b9d70647
Transfer host name: host4
Downloading image...
Formatting '/var/tmp/download.raw', fmt=raw size=6442450944
[ 100.00% ] 6.00 GiB, 9.45 seconds, 650.28 MiB/s
Finalizing image transfer...

$ qemu-img info /var/tmp/download.raw
image: /var/tmp/download.raw
file format: raw
virtual size: 6 GiB (6442450944 bytes)
disk size: 2.18 GiB

We could also download to qcow2 format; this was just an example of how you
can convert the format during download.

Now upload back to new disk in raw preallocated format.

$ ./upload_disk.py \
--engine-url https://engine3 \
--username admin@internal \
--password-file engine3-password \
--cafile engine3.pem \
--sd-name nfs1 \
--disk-format raw \
/var/tmp/download.raw
Checking image...
Image format: raw
Disk format: raw
Disk content type: data
Disk provisioned size: 6442450944
Disk initial size: 6442450944
Disk name: download.raw
Disk backup: False
Connecting...
Creating disk...
Disk ID: beb2435e-9607-4a09-b6a6-b5e5f4ae9fd8
Creating image transfer...
Transfer ID: b9fd2ad9-be85-4db8-998b-c5f981f8103c
Transfer host name: host4
Uploading image...
[ 100.00% ] 6.00 GiB, 5.30 seconds, 1.13 GiB/s
Finalizing image transfer...
Upload completed successfully

You can now replace the old thin disk (58daea80-1229-4c6b-b33c-1a4e568c8ad7)
with the new preallocated disk (beb2435e-9607-4a09-b6a6-b5e5f4ae9fd8).

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TKO5DVRPURQ66QVUQZBHEX73MOC6VPH7/


[ovirt-users] Re: Do distinct export domains share a name space? Can't export a VM, because it already exists in an unattached export domain...

2020-08-09 Thread Nir Soffer
On Mon, Aug 10, 2020 at 1:01 AM Strahil Nikolov  wrote:
>
> Thanks Nir, flr the detailed explanation.
> Can you tell me with export/import data domains, what happens to VMs with 
> snapshots.

Snapshots are not affected by moving disks to different storage domains. It
copies the entire chain from one domain to the other.

If you want collapsed snapshots you need to use export VM (4.4+), clone VM, or
make a template.

> Recently it was mentioned that snapshots are not visible after such migration.

Can you point to the thread about this?

>
> Best Regards,
> Strahil Nikolov
>
> На 10 август 2020 г. 0:00:36 GMT+03:00, Nir Soffer  
> написа:
> >On Wed, Aug 5, 2020 at 7:00 PM  wrote:
> >>
> >> After OVA export/import was a) recommended against b) not working
> >with the current 4.3 on CentOS 7, I am trying to make sure I keep
> >working copies of critical VMs before I test if the OVA export now
> >works properly, with the Redhat fix from 4.4. applied to 4.3.
> >>
> >> Long story short, I have an export domain "export", primarily
> >attached to a 3 node HCI gluster-cluster and another domain
> >"exportMono", primarily attached to a single node HCI gluster-cluster.
> >>
> >> Yes I use an export domain for backup, because, ...well there is no
> >easy and working alternative out of the box, or did I overlook
> >something?
> >> But of course, I also use an export domain for shipping between
> >farms, so evidently I swap export domains like good old PDP-11 disk
> >cartridges or at least I'd like to.
> >>
> >>
> >> I started by exporting VM "tdc" from the 1nHCI to exportMono,
> >reattached that to 3nHCI for am import. Import worked everything fine,
> >transfer succeed. So I detach exportMono, which belongs to the 1nHCI
> >cluster.
> >>
> >> Next I do the OVA export on 1nHCI, but I need to get the working and
> >reconfigured VM "tdc" out of the way on 3nHCI, so I dump it into the
> >"export" export domain belonging to 3nHCI, because I understand I can't
> >run two copies of the same VM on a single cluster.
> >>
> >> Turns out I can' export it, because even if the export domain is now
> >a different one and definitely doesn't contain "tdc" at all, oVirt
> >complains that the volume ID that belongs to "tdc" already exists in
> >the export domain
> >>
> >> So what's the theory here behind export domains? And what's the state
> >of their support in oVirt 4.4?
> >
> >Export domains have been deprecated for several releases, but they were not
> >removed in 4.4, mainly to make it easier to upgrade to 4.4.
> >
> >They are deprecated because we have better solutions, mainly export/import of
> >data domains (since 3.6), and also upload/download of disks (since 4.0), and
> >export/import of OVA.
> >
> >To move a VM to another environment using a data domain in 4.3:
> >- Create or attach a data storage domain of any type to use as an
> >"export" domain
> >- Move the VM disks to the data domain. This can be done while the VM
> >is running.
> >- Stop the exported VM.
> >- Detach the data domain from the system
> >- Attach the data domain to another system
> >- Import the VM from the data domain
> >- Start the imported VM
> >- If you want to continue using this data domain as an export domain,
> >move the VM disks to another domain. This can be done while the VM is
> >running.
> >
> >This minimizes the downtime to the time it takes to detach the domain
> >from one system
> >and attach to another system. If you don't care about downtime, you
> >can stop the VM
> >before the export and start it after copying the imported VM disks.
> >This is much simpler
> >and more reliable.
> >
> >In the best case, when you have a data domain that you want to move to
> >another system,
> >no data copy is needed. In the worst case when you want to move VM
> >from one storage
> >on another storage on another system, you need to move the disks twice
> >- just like export
> >domain.
> >
> >If you want to move the VM to another hypervisor, OVA export is not
> >helpful since OVA is
> >not a real standard and hypervisors do not support other hypervisor
> >OVAs without some
> >conversion tool such as virt-v2v. oVirt uses virt-v2v to import VMWare
> >OVA to oVirt. I don't
> >know if this tool supports converting oVirt OVA to other hypervisors.
> >
> >To move VM disks out of the system in 4

[ovirt-users] Re: Fail to upload image file after Replace oVirt Engine SSL/TLS Certificate

2020-08-09 Thread Nir Soffer
On Mon, Aug 3, 2020 at 8:55 AM zhou...@vip.friendtimes.net
 wrote:
>
> I replaced my  Certificate follow 
> https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html
> Then I cant upload image files

We need basic details on this issue, like the oVirt version used, but this
smells like this bug:
https://bugzilla.redhat.com/1862107

Note that this issue is only about the proxy; you can upload images directly
to the host using the SDK:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

Here is an example upload on my test environment:

$ ./upload_disk.py --engine-url https://engine3 \
--username admin@internal \
--password-file engine3-password \
--cafile engine3.pem \
--sd-name nfs1 \
--disk-format raw \
--disk-sparse \
/var/tmp/fedora-32.raw
Checking image...
Image format: raw
Disk format: raw
Disk content type: data
Disk provisioned size: 6442450944
Disk initial size: 6442450944
Disk name: fedora-32.raw
Disk backup: False
Connecting...
Creating disk...
Disk ID: 139c29a3-b9a8-4501-836d-92417c3d2eaf
Creating image transfer...
Transfer ID: d50a1b50-5bd8-417e-a950-d3a19a262daa
Transfer host name: host4
Uploading image...
[ 100.00% ] 6.00 GiB, 3.01 seconds, 1.99 GiB/s
Finalizing image transfer...
Upload completed successfully

Nir

> THE LOGS BELOW
> 
> 2020-08-03 13:37:42,276+08 INFO  
> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
> (default task-28) [272b83fa-1d1a-473e-b72a-19886433801e] Running command: 
> TransferImageStatusCommand internal: false. Entities affected :  ID: 
> aaa0----123456789aaa Type: SystemAction group CREATE_DISK 
> with role type USER
> 2020-08-03 13:37:43,679+08 INFO  
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-27) [] 
> User yuanq...@ft.com successfully logged in with scopes: ovirt-app-admin 
> ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ 
> ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search 
> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate 
> ovirt-ext=token:password-access
> 2020-08-03 13:37:43,773+08 INFO  
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-27) 
> [1e6c5da4] Running command: CreateUserSessionCommand internal: false.
> 2020-08-03 13:37:43,814+08 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-27) [1e6c5da4] EVENT_ID: USER_VDC_LOGIN(30), User 
> yuanq...@ft.com@ft.com connecting from '192.168.16.199' using session 
> 'AkdaHSmXHYlF3v53VoJRWVCIp0VYjFJPcR/vbRs0tfT20Qq9zylnacmSQKJJ8kwkWmj392Lq8j6EFcz22BKdTg=='
>  logged in.
> 2020-08-03 13:37:44,428+08 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) 
> [2a5567e7-25bc-4d58-ab31-8f88f5a4a5fa] Command 'AddDisk' id: 
> '0f22fdf1-6016-40f3-9768-b436b5c83972' child commands 
> '[abaa6f3d-82a1-4317-857e-0c32e3ffeca1]' executions were completed, status 
> 'SUCCEEDED'
> 2020-08-03 13:37:44,428+08 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) 
> [2a5567e7-25bc-4d58-ab31-8f88f5a4a5fa] Command 'AddDisk' id: 
> '0f22fdf1-6016-40f3-9768-b436b5c83972' Updating status to 'SUCCEEDED', The 
> command end method logic will be executed by one of its parent commands.
> 2020-08-03 13:37:44,488+08 INFO  
> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) 
> [2a5567e7-25bc-4d58-ab31-8f88f5a4a5fa] Successfully added Upload disk 
> 'vyos-1.1.8-amd64.iso' (disk id: '2720658b-c1cb-4021-810c-8333e80858eb', 
> image id: '6a61dee5-fc07-4afb-af81-3672b9077a3a') for image transfer command 
> '900b725e-d409-4192-bb52-c52cb29b37ee'
> 2020-08-03 13:37:44,524+08 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) 
> [2a5567e7-25bc-4d58-ab31-8f88f5a4a5fa] START, PrepareImageVDSCommand(HostName 
> = 192.168.4.23, 
> PrepareImageVDSCommandParameters:{hostId='b6142941-cc9e-4da3-b66d-5132f359edb5'}),
>  log id: 4bfe1a2c
> 2020-08-03 13:37:44,640+08 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) 
> [2a5567e7-25bc-4d58-ab31-8f88f5a4a5fa] FINISH, PrepareImageVDSCommand, 
> return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 
> 4bfe1a2c
> 2020-08-03 13:37:44,641+08 INFO  
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeLegalityVDSCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) 
> [2a5567e7-25bc-4d58-ab31-8f88f5a4a5fa] START, SetVolumeLegalityVDSCommand( 
> SetVolumeLegalityVDSCommandParameters:{storagePoolId='17fae97b-4a94-4de5-88c0-0743ab2b9ab8',
>  ignoreFailoverLimit='false', 
> storageDomainId='0d707e2b-42ed-48d4-9675-2021a3840f40', 
> 

[ovirt-users] Re: OVA export creates empty and unusable images

2020-08-09 Thread Nir Soffer
On Tue, Aug 4, 2020 at 7:37 PM  wrote:
>
> Nir, first of all, thanks a lot for the detailed description and the quick 
> fix in 4.4!
>
> I guess I'll be able to paste that single line fix into the 4.3. variant 
> myself, but I'd rather see that included in the next 4.3 release, too: How 
> would that work?
>
> ---
> "OVA is not a backup solution":
>
> From time to time, try to put youself into a user's shoes.
>
> The fist thing you read about Export Domains, is that they are deprecated: 
> That doesn't give you the warm fuzzy feeling that you are learning something 
> useful when you start using them, especially in the context of a migration to 
> the next release.

This is good; you should not use an export domain at this point. We have a
better replacement (see my other mail).

> OVA on the other hand, stands for a maximum of interoperability and when 
> given a choice between something proprietary and deprecated and a file format 
> that will port pretty much everywhere, any normal user (who doesn't have the 
> code behind the scene in his mind), will jump for OVA export/import.

I think you already found that OVAs are not what you think they are. They work
for exporting VMs from the same hypervisor and back, unless you have a tool
that knows how to convert OVA from one hypervisor to another, like virt-v2v.

> Also it's just two buttons, no hassle, while it took me a while to get an 
> Export domain defined, filled, detached, re-attached, and tested.

True, ease of use is important. But if you are going to do this a lot,
scripting the operation is important, and oVirt has a very powerful API/SDK.

> Again from a user's perspective: HCI gluster storage in oVirt is black magic, 
> especially since disk images are all chunked up. For a user it will probably 
> take many years of continous oVirt operation until he's confident that he'l 
> recover VM storage in the case of a serious hickup and that whatever might 
> have gone wrong or bitrot might have occured, won't carry over to an export 
> domain. OVA files seem like a nice bet to recover your VM on whatever 
> platform you can get back running in a minor disaster.

What you say is basically that having a backup is useful :-)

> In many cases, it doesn't even matter you have to shut down the machine to do 
> the export, because the machines are application level redundant or simply 
> it's ok to have them down for a couple of minutes, if you know you can get 
> them back up no matter what in a comparable time frame, oVirt farm dead or 
> alive, e.g. on a bare metal machine.

How are you going to use the OVA on a bare metal machine?

> And then my case, many of the images are just meant to move between an oVirt 
> farm and a desktop hypervisor.

Why do you need the desktop hypervisor? I would like to hear more
about this use case.

And if you need one, why not use something based on KVM (like virt-manager) so
disks from oVirt can work without any change? This will make it easy to move
from oVirt to the desktop hypervisor and back with relatively little effort.

> tl;dr
>
> (working) OVA export and import IMHO are elemental and crucial functionality, 
> without which oVirt can't be labelled a product.
>
> I completely appreciate the new backup API, especially with the consistency 
> enabled for running VMs; perhaps a little less that I'd have to purchase an 
> extra product to do a fundamental operation with a similar ease as the OVA 
> export/import buttons, but at least it's there.
>
> That doesn't mean OVA in/ex isn't important or that in fact a shared 
> import/export domain would be nice, too.
>
> Thanks for your time!

Thanks Thomas, this is very useful feedback!

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4RQBNMRDMDEBLVLVVBVPTOSIHCYSEO5I/


[ovirt-users] Re: Do distinct export domains share a name space? Can't export a VM, because it already exists in an unattached export domain...

2020-08-09 Thread Nir Soffer
On Wed, Aug 5, 2020 at 7:00 PM  wrote:
>
> After OVA export/import was a) recommended against b) not working with the 
> current 4.3 on CentOS 7, I am trying to make sure I keep working copies of 
> critical VMs before I test if the OVA export now works properly, with the 
> Redhat fix from 4.4. applied to 4.3.
>
> Long story short, I have an export domain "export", primarily attached to a 3 
> node HCI gluster-cluster and another domain "exportMono", primarily attached 
> to a single node HCI gluster-cluster.
>
> Yes I use an export domain for backup, because, ...well there is no easy and 
> working alternative out of the box, or did I overlook something?
> But of course, I also use an export domain for shipping between farms, so 
> evidently I swap export domains like good old PDP-11 disk cartridges or 
> at least I'd like to.
>
>
> I started by exporting VM "tdc" from the 1nHCI to exportMono, reattached that 
> to 3nHCI for am import. Import worked everything fine, transfer succeed. So I 
> detach exportMono, which belongs to the 1nHCI cluster.
>
> Next I do the OVA export on 1nHCI, but I need to get the working and 
> reconfigured VM "tdc" out of the way on 3nHCI, so I dump it into the "export" 
> export domain belonging to 3nHCI, because I understand I can't run two copies 
> of the same VM on a single cluster.
>
> Turns out I can' export it, because even if the export domain is now a 
> different one and definitely doesn't contain "tdc" at all, oVirt complains 
> that the volume ID that belongs to "tdc" already exists in the export 
> domain
>
> So what's the theory here behind export domains? And what's the state of 
> their support in oVirt 4.4?

Export domains have been deprecated for several releases, but they were not
removed in 4.4, mainly to make it easier to upgrade to 4.4.

They are deprecated because we have better solutions, mainly export/import of
data domains (since 3.6), and also upload/download of disks (since 4.0), and
export/import of OVA.

To move a VM to another environment using a data domain in 4.3:
- Create or attach a data storage domain of any type to use as an
"export" domain
- Move the VM disks to the data domain. This can be done while the VM
is running.
- Stop the exported VM.
- Detach the data domain from the system
- Attach the data domain to another system
- Import the VM from the data domain
- Start the imported VM
- If you want to continue using this data domain as an export domain,
  move the VM disks to another domain. This can be done while the VM is running.

This minimizes the downtime to the time it takes to detach the domain
from one system
and attach to another system. If you don't care about downtime, you
can stop the VM
before the export and start it after copying the imported VM disks.
This is much simpler
and more reliable.

In the best case, when you have a data domain that you want to move to
another system,
no data copy is needed. In the worst case, when you want to move a VM
from one storage
to another storage on another system, you need to move the disks twice
- just like with an export
domain.

If you want to move the VM to another hypervisor, OVA export is not
helpful since OVA is
not a real standard and hypervisors do not support other hypervisor
OVAs without some
conversion tool such as virt-v2v. oVirt uses virt-v2v to import VMWare
OVA to oVirt. I don't
know if this tool supports converting oVirt OVA to other hypervisors.

To move VM disks out of the system in 4.3:
- Stop the VM, or create a snapshot.
- Create a template from the VM or the snapshot, selecting qcow2 format
- Start the VM if needed, or delete the snapshot.
- Download the template disks, from the UI or using the API/SDK if you
want a faster
  download that can be scripted.
- Delete the template if not needed
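
For the download step, recent versions of the ovirt-engine-sdk examples
include a download_disk.py script that is invoked much like upload_disk.py.
A hedged invocation (the disk UUID and output file name are placeholders, and
the exact flag names may differ between SDK versions, so check
./download_disk.py --help) could look like:

$ ./download_disk.py --engine-url https://engine3 \
    --username admin@internal \
    --password-file engine3-password \
    --cafile engine3.pem \
    3649c647-54d1-41e7-8601-ebe40ba4f6b8 \
    template-disk.qcow2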

Why create a template instead of cloning the VM? Because cloning may create raw
preallocated disks (depending on the storage type and the original VM)
and downloading
a preallocated disk is not efficient and creates fully preallocated
images which are much
larger than needed.

To move the disk back to oVirt, upload the disk to oVirt and create a new VM.
This can be done from the UI or using the API/SDK.

The entire process can be scripted. In 4.4, the backup_vm.py example does
all this without
creating snapshots or templates (see my previous mail about backup).

> I understand that distinct farms can't share an export domain, because they 
> have no way of coordinating properly. Of course I tried to use one single NFS 
> mount for both farms but the second farm properly detected the presence of 
> another and required a distinct path.
>
> But from the evidence before me, oVirt doesn't support or like the existence 
> of more than one export domain, either: Something that deserves a note or 
> explanation.

Yes, export domains have many limitations.

> I understand they are deprecated in 4.3 already, but since they are also the 
> only way to manage valuable VM images moving around, that currently 

[ovirt-users] Re: Support for Shared SAS storage

2020-08-09 Thread Nir Soffer
On Sat, Aug 8, 2020 at 6:56 AM Jeff Bailey  wrote:
>
> I haven't tried with 4.4 but shared SAS works just fine with 4.3 (and has for 
> many, many years).  You simply treat it as Fibre Channel.  If your LUNs 
> aren't showing up I'd make sure they're being claimed as multipath devices.  
> You want them to be.  After that, just make sure they're sufficiently wiped 
> so they don't look like they're in use.

Correct. In oVirt, "FC" actually means "not iSCSI". Any device that multipath
exposes - including local devices - and that is not iSCSI is considered FC.

This is not officially supported and not documented anywhere. I learned
about it only when we tried to deploy a strict multipath blacklist allowing
only "iscsi"
or "fc" transports, and found that it broke users' setups with SAS and other esoteric
storage.

Since 4.4 is the last feature release, this behavior will not change, even
though it is not documented.

Nir

> On 8/7/2020 10:49 PM, Lao Dh via Users wrote:
>
> Wow. That sounds bad. Then what storage type did you choose in the end (with your 
> SAS-connected storage)? VMware vSphere supports DAS. Red Hat should do 
> something.
>
> On Saturday, August 8, 2020 at 4:06:34 GMT+8, Vinícius Ferrão via Users wrote:
>
>
> No, there’s no support for direct attached shared SAS storage on oVirt/RHV.
>
> Fibre Channel is a different thing that oVirt/RHV supports.
>
> > On 7 Aug 2020, at 08:52, hkexdong--- via Users  wrote:
> >
> > Hello Vinícius,
> > Are you able to connect the SAS external storage?
> > Now I have a problem during hosted engine setup. Selecting Fibre Channel ends 
> > up showing "No LUNs found".
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/RDPLKGIRN5ZGIEPWGOKMGNFZNMCEN5RC/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2CLI3YSYU7BPI62YANJXZV7RIQFOXXED/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOBHQDCBZZK5WKRAUNHP5CGFYY3HQYYU/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UY52JRY5EMQJTMKG3POE2YXSFGL7P55S/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYWUVULGUNLTU6QOY6WOUKBCA5IAO5QR/


[ovirt-users] Re: [rhev-tech] ovirt-imageio-proxy not working after updating SSL certificates with a wildcard cert issued by AlphaSSL (intermediate)

2020-08-08 Thread Nir Soffer
On Mon, Jul 27, 2020 at 6:40 PM Nir Soffer  wrote:

> On Sat, Jul 25, 2020 at 5:24 AM Lynn Dixon  wrote:
>
>> All,
>> I recently bought a wildcard certificate for my lab domain (shadowman.dev)
>> and I replaced all the certs on my RHV4.3 machine per our documentation.
>> The WebUI presents the certs successfully and without any issues, and
>> everything seemed to be fine, until I tried to upload a disk image (or an
>> ISO) to my storage domain.  I get this error in the events tab:
>>
>> https://share.getcloudapp.com/p9uPvegx
>> [image: image.png]
>>
>> I also see that the disk is showing up in my storage domain, but its
>> showing "Paused by System" and I can't do anything with it.  I cant even
>> delete it!
>>
>> I have tried following this document to fix the issue, but it didn't
>> work: https://access.redhat.com/solutions/4148361
>>
>> I am seeing this error pop into my engine.log:
>> https://pastebin.com/kDLSEq1A
>>
>> And I see this error in my image-proxy.log:
>> WARNING 2020-07-24 15:26:34,802 web:137:web:(log_error) ERROR
>> [172.17.0.30] PUT /tickets/ [403] Error verifying signed ticket: Invalid
>> ovirt ticket (data='--my_ticket_data-', reason=Untrusted
>> certificate) [request=0.002946/1]
>>
>
> This means the ssl_* configuration is broken.
>
> We have 2 groups:
>
> Client ssl configuration:
>
> # Key file for SSL connections
> ssl_key_file = /etc/pki/ovirt-engine/keys/image-proxy.key.nopass
>
> # Certificate file for SSL connections
> ssl_cert_file = /etc/pki/ovirt-engine/certs/image-proxy.cer
>
> And engine SSL configuration:
>
> # Certificate file used when decoding signed token
> engine_cert_file = /etc/pki/ovirt-engine/certs/engine.cer
>
> # CA certificate file used to verify signed token
> engine_ca_cert_file = /etc/pki/ovirt-engine/ca.pem
>
> engine configuration is used to verify signed ticket used by engine when
> adding tickets to the proxy. This is internal flow that clients should not
> care
> about. You should not replace these unless you are using also custom
> certificate
> for engine itself - very unlikely and maybe unsupported.
> (Didi please correct me on this).
>
> SSL client configuration is used when communicating with clients, and does
> not depend on engine ssl configuration. You can replace these with your
> certificates.
>
> Can you share your /etc/ovirt-imageio/ovirt-imageio-proxy.conf?
>
> The main issue with the current configuration is that we don't have
> ssl_ca_cert configuration,
> assuming that ssl_cert_file is a self signed certificate that includes the
> CA certificate, since
> this is what engine is creating.
>
> In 4.4, we have more flexible configuration that should work for your case:
>
> $ cat /etc/ovirt-imageio/conf.d/50-engine.conf
> ...
> [tls]
> enable = true
> key_file = /etc/pki/ovirt-engine/keys/apache.key.nopass
> cert_file = /etc/pki/ovirt-engine/certs/apache.cer
> ca_file = /etc/pki/ovirt-engine/apache-ca.pem
>
> Adding ssl_ca_cert to imageio 1.5.3 looks simple enough, so I posted this
> completely untested patch:
> https://gerrit.ovirt.org/c/110498/
>
> You can try to upgrade your proxy to using this build:
>
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/3384/artifact/build-artifacts.el7.x86_64/
>
> Add a yum repo file with this baseurl=.
>
> Again this is untested, but you seem to be in the best place to test it,
> since I don't have any real certificates for testing.
>
> It would also be useful if you file a bug for this issue.
>

Lynn, did you resolve this issue?


>
> Nir
>
> Now, when I bought my wildcard, I was given a root certificate for the CA,
>> as well as a separate intermediate CA certificate from the provider.
>> Likewise, they gave me a certificate and a private key of course. The root
>> and intermediate CA's certificates have been added
>> to /etc/pki/ca-trust/source/anchors/ and I did an update-ca-trust.
>>
>> I also started experiencing issues with the ovpn network provider at the
>> same time I replaced the SSL certs, but I disregarded it at the time, but
>> now I am thinking its related.  Any advice on what to look for to fix the
>> ovirt-imageio-proxy?
>>
>> Thanks!
>>
>>
>> *Lynn Dixon* | Red Hat Certified Architect #100-006-188
>> *Solutions Architect* | NA Commercial
>> Google Voice: 423-618-1414
>> Cell/Text: 423-774-3188
>> Click here to view my Certification Portfolio <http://red.ht/1XMX2Mi>
>>
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RITYEGP7J3BO2IMIQ7YEXZWV3STKEXLF/


[ovirt-users] Re: PATCH method not allowed in imageio

2020-08-07 Thread Nir Soffer
On Fri, Aug 7, 2020, 15:52 Łukasz Kołaciński 
wrote:

> Hello,
> Thank you for previous answers. I don't have problems with checkpoints
> anymore.
>
> I am trying to send PATCH request to imageio but it seems like I don't
> have write access. In the documentation I saw that it must be a RAW format.
> I think I am missing something else.
>
> OPTIONS Request:
> {
> "features": [
> "extents"
> ],
> "max_readers": 8,
> "max_writers": 8
> }
> Allow: OPTIONS,GET
>
> PATCH Request:
> *You are not allowed to access this resource: Ticket
> 485493df-b07a-495c-8aa3-824aad45b4ab forbids write*
>
> I created transfer using java sdk:
> ImageTransfer imageTransfer =
> connection.getImageTransfersSvc().addForDisk().imageTransfer(
> imageTransfer()
> .direction(direction)
>

Is this a backup? You cannot write to disks during backup.

.disk(disk)
> .backup(backup)
> .inactivityTimeout(120)
> .format(DiskFormat.RAW))
> .send().imageTransfer();
>

Can you explain what are you trying to do?


> It's similar to python examples.
>
> Best Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacin...@storware.eu
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LJOZQKHMSBLVRNTJKOJMWU5ADF6YURYZ/


[ovirt-users] Re: Semi off-topic: invalid SPF record for ovirt.org

2020-08-05 Thread Nir Soffer
On Wed, Aug 5, 2020 at 3:21 PM Chris Adams  wrote:
>
> Not sure where to send this (so sending to the list); the ovirt.org SPF
> record is invalid:
>
> $ dig ovirt.org txt +short
> "v=spf1 a:mail.ovirt.org a:gerrit.ovirt.org 66.187.233.88 ~all"
>
> The bare IP address needs to be "ip4:66.187.233.88" instead.

I think infra-support is the right place for this; it will open a
new bug for the infra team.
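
For reference, the corrected record in zone-file form (only the bare IP gains
the ip4: prefix, as noted above) would be:

ovirt.org.  IN  TXT  "v=spf1 a:mail.ovirt.org a:gerrit.ovirt.org ip4:66.187.233.88 ~all"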
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWS7I4L7MHSSSVQC2VSZGLH5GV2Q3WKY/


[ovirt-users] Re: Setting Up oVirt + NFS Storage Issues

2020-08-04 Thread Nir Soffer
On Tue, Aug 4, 2020 at 10:02 PM Arden Shackelford  wrote:
>
>
> Hey Amit,
>
> Thanks for the response. Here's what I've got:
>
> >Are your NFS exports permissions set correctly (probably yes if you can see
> something created on your share)?
>
> Here's the perms on the folder (/mnt/ovirt on NFS server):
>
> File: ovirt
> Size: 2   Blocks: 1  IO Block: 512    directory
> Device: 33h/51d Inode: 34  Links: 2
> Access: (0755/drwxr-xr-x)  Uid: (   36/vdsm)   Gid: (   36/kvmovirt)
> Access: 2020-08-04 17:54:06.971018988 +
> Modify: 2020-08-04 17:54:09.410982092 +
> Change: 2020-08-04 17:54:09.410982092 +
> Birth: -
>
> >Can you list your share contents with ls -lhZ?
>
> Root of share:
>
> drwxrwxr-x 4 vdsm kvmovirt ? 4 Aug  4 18:52 
> c6244268-aaeb-4b67-b12b-8a0e81d7c205

I think selinux is not supported on your file system. Here is a working setup:

On the host:

# ls -lhZ /rhev/data-center/mnt/nfs1\:_export_1/
total 0
drwxr-xr-x. 5 vdsm kvm system_u:object_r:nfs_t:s0 48 Jun 20 03:56
56ecc03c-4bb5-4792-8971-3c51ea924d2e

On the server:

# ls -lhZ /export/1
total 0
drwxr-xr-x. 5 vdsm kvm system_u:object_r:unlabeled_t:s0 48 Jun 20
03:56 56ecc03c-4bb5-4792-8971-3c51ea924d2e

> Inside of that:
>
> drwxrwxr-x 2 vdsm kvmovirt ? 8 Aug  4 18:52 dom_md
> drwxrwxr-x 2 vdsm kvmovirt ? 2 Aug  4 18:52 images
>
> > Can you share the full error trace from vdsm.log?
>
> Here's what I see:
...
> 2020-08-04 13:52:58,819-0500 ERROR (jsonrpc/1) [storage.initSANLock] Cannot 
> initialize SANLock for domain c6244268-aaeb-4b67-b12b-8a0e81d7c205 
> (clusterlock:259)
> Traceback (most recent call last):
> File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> 250, in initSANLock
> lockspace_name, idsPath, align=alignment, sector=block_size)
> sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> device')

I think this is this bug:
https://bugzilla.redhat.com/1778485

Sanlock gets EACCES - probably because selinux is disabled on the server,
or not supported with ZFS, and then it returns ENODEV.

Unfortunately this was not fixed yet in sanlock.

To fix this, try to get selinux working on your server, or use another
filesystem
instead of ZFS. ZFS is not tested with oVirt, so while it may have
great features,
you may have even more trouble later even if you can get it working.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IU7V5GCOVAQDIPFKUOVCW4SH7I3T4UHM/


[ovirt-users] Re: Snapshot and disk size allocation

2020-08-04 Thread Nir Soffer
On Tue, Aug 4, 2020 at 4:50 PM  wrote:
>
> Yes, I understand, but my question is whether I can reclaim the allocated 
> space after deleting the snapshot. Because oVirt is not returning space, it 
> is only increasing, even though you have not done anything in the snapshot. 
> That is, with each snapshot I create, it increases 1GB, and even after 
> deleting it, it does not reclaim this space.

This is a known issue when you remove the last snapshot with a running VM.
If you stop the VM before deleting the snapshot, this will not happen.

The only way to reclaim the space now is:
1. Stop the VM
2. Create snapshot
3. Delete the created snapshot

This will merge the new empty snapshot into the old top volume, which
has 1G extra
space for every snapshot you created in the past. Since the VM is not
running, we
can safely shrink the top volume to the optimal size after the merge.

A little better way is to call the Disk.reduce() API - this can be scripted:

1. Stop the VM
2. Reduce the disk to optimal size
3. Start the VM

Use the API:
http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/disk/methods/reduce

Or the SDK:
http://ovirt.github.io/ovirt-engine-sdk/4.4/services.m.html#ovirtsdk4.services.DiskService
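
A minimal sketch of the scripted path with the Python SDK (the disk ID, URL
and credentials are placeholders; the VM using the disk must be down, and
waiting/error handling is omitted):

# Hedged sketch using ovirt-engine-sdk-python.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Locate the disk and ask engine to shrink it back to its optimal size.
disk_service = connection.system_service().disks_service().disk_service(
    '11111111-2222-3333-4444-555555555555')
disk_service.reduce()

connection.close()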

It is possible to fix this with libvirt 6.0, and that may also solve other problems
with snapshots, but this is a very delicate change.

If you think this is important to fix, please file a vdsm bug:
https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6LBI2PT6K62VEQQCBQHHJC3DQSF4N2WX/


[ovirt-users] Re: OVA export creates empty and unusable images

2020-08-01 Thread Nir Soffer
On Thu, Jul 30, 2020 at 5:10 PM  wrote:
>
> Export/Import/Transfer via export domain worked just fine. Just moved my last 
> surviving Univention domain controller from the single node HCI 
> disaster-recovery farm to the three node HCI primary, where I can now try to 
> make it primary and start building new backups...
>
> Backups that do not work are food for nightmares!

OVA export/import[1] is not a backup solution. This is a way to move
a VM from one
environment to another. For this we have a much better way: detach the
data domain and import it
in another environment.

In 4.4 we have a real backup API (tech preview). This will be used by
backup vendors to provide
complete backup solutions, but we have basic backup capabilities
included in the SDK examples.
See the devconf 2020 talk[4] for more info.

The examples deal only with backing up and restoring disks. Getting
the VM configuration and
creating a VM from the configuration is not included yet.

Here is how you can do a full backup of a running VM's disks using the
backup_vm.py[2] example:

$ ./backup_vm.py full --engine-url https://engine3 \
--username admin@internal \
--password-file engine3-password \
--cafile engine3.pem \
--backup-dir backups \
ed2e2c59-36d3-41e2-ac7e-f4d33eb69ad4
[   0.0 ] Starting full backup for VM ed2e2c59-36d3-41e2-ac7e-f4d33eb69ad4
[   1.7 ] Waiting until backup 4ac0ae37-b1dd-4a8d-ad13-1ddb51e70d3e is ready
[   3.8 ] Created checkpoint '3648484c-61dc-4680-a7b4-0256544fa21d'
(to use in --from-checkpoint-uuid for the next incremental backup)
[   3.8 ] Creating image transfer for disk 419b83e7-a6b5-4445-9324-a85e27b134d2
[   5.1 ] Image transfer c659437e-6242-42b9-9293-8ccf10321261 is ready
Formatting 
'backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652.full.qcow2',
fmt=qcow2 size=107374182400 cluster_size=65536 lazy_refcounts=off
refcount_bits=16
[ 100.00% ] 100.00 GiB, 0.39 seconds, 258.06 GiB/s
[   5.5 ] Finalizing image transfer
[   7.5 ] Creating image transfer for disk 58daea80-1229-4c6b-b33c-1a4e568c8ad7
[   8.7 ] Image transfer 78254cef-5f22-4161-8dbd-010420730a7f is ready
Formatting 
'backups/58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008011652.full.qcow2',
fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off
refcount_bits=16
[ 100.00% ] 6.00 GiB, 10.33 seconds, 594.54 MiB/s
[  19.0 ] Finalizing image transfer
[  23.3 ] Full backup completed successfully

Note that we did not create any snapshot; the backup provides a consistent
view of all disks
at the time of the backup, but you must have qemu-guest-agent
installed in the VM. This is
the same consistency you get from a snapshot without memory.

$ ls -lh backups/
total 2.3G
-rw-r--r--. 1 nsoffer nsoffer  52M Aug  1 16:52
419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652.full.qcow2
-rw-r--r--. 1 nsoffer nsoffer 2.2G Aug  1 16:52
58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008011652.full.qcow2

In this example we detected that the 100 GiB disk
419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652
is mostly empty (mounted file system, no data written yet) so we could
download only the allocated blocks.

$ qemu-img info
backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652.full.qcow2
image: backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652.full.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 51 MiB
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

The second disk is a Fedora 32 OS disk, so we downloaded only the
allocated blocks, about 2.2 GiB.

$ qemu-img info
backups/58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008011652.full.qcow2
image: backups/58daea80-1229-4c6b-b33c-1a4e568c8ad7.202008011652.full.qcow2
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 2.17 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

To restore the disks you can use upload_disk.py[3] example:

$ ./upload_disk.py --engine-url https://engine3 \
--username admin@internal \
--password-file engine3-password \
--cafile engine3.pem \
--sd-name nfs1 \
--disk-format raw \
--disk-sparse \
backups/419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652.full.qcow2
Checking image...
Image format: qcow2
Disk format: raw
Disk content type: data
Disk provisioned size: 107374182400
Disk initial size: 107374182400
Disk name: 419b83e7-a6b5-4445-9324-a85e27b134d2.202008011652.full.raw
Disk backup: False
Connecting...
Creating disk...
Disk ID: 5c9465d6-6437-4ece-bcaa-eec6b2d00a9d
Creating image transfer...
Transfer ID: dfcce09b-1f65-45e6-8ea5-41dff9368b15
Transfer host name: host4
Uploading image...
[ 100.00% ] 100.00 GiB, 0.75 seconds, 133.31 GiB/s
Finalizing image transfer...
Upload completed successfully

Again we upload only the allocated blocks, so the upload is very quick.

Note that we upload the qcow2 disk to raw sparse disk. We support
on-the-fly 

[ovirt-users] Re: testing ovirt 4.4 as single node VM inside 4.3

2020-07-31 Thread Nir Soffer
On Fri, Jul 31, 2020 at 8:09 PM Philip Brown  wrote:
>
> well, this was related to my thread here a day or two ago, about "where are 
> the error logs for import?" I give more details there.
>
> The resulting logs are of no use.
> ovirt 4.3 fails on direct import from Vsphere. and it fails on importing a 
> manually exported OVA file from vsphere.
> With no feedback as to where the failure is caused.

Steven, do we know about any issue in 4.3 with importing OVAs?

> Therefore, I have concluded my only option is to try out 4.4 somehow.

This is a good idea, you will not get any fixes in 4.3 at this point. We usually
support only the latest version.

> Which is difficult since I dont have centos 8 running on bare metal anywhere.

Nested VM should work, we use this internally for development. But you will
not get the best performance this way.

Nir

> - Original Message -
> From: "Gianluca Cecchi" 
> To: "Philip Brown" 
> Cc: "users" 
> Sent: Friday, July 31, 2020 9:13:12 AM
> Subject: Re: [ovirt-users] testing ovirt 4.4 as single node VM inside 4.3
>
> On Fri, Jul 31, 2020 at 5:37 PM Philip Brown  wrote:
>
> >
> >
> > I'm having problems importing OVA exports from vmware 4, into ovirt 4.3
> >
>
> So do you mean an OVA created in vSphere and to be used then in oVirt?
> What kind of problems? Too generic...
> ...
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UK73PEGJ3TK2TANJ7NDIEL37BI4KF6RW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33VSVTIWIK52NK4KNXJ2JT6JBMTAZ2RE/


[ovirt-users] Re: PKI Problem

2020-07-30 Thread Nir Soffer
On Thu, Jul 30, 2020 at 12:53 PM Nir Soffer  wrote:
>
>
>
> On Sun, Jul 19, 2020, 17:22  wrote:
>>
>> Hi
>>
>> I did a fresh installation of version 4.4.0.3. After the engine setup I 
>> replaced the apache certificate with a custom certificate. I used this 
>> article to do it: 
>> https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html
>>
>> To summarize, I replaced those files with my own authority and the signed 
>> custom certificate
>>
>> /etc/pki/ovirt-engine/keys/apache.key.nopass
>> /etc/pki/ovirt-engine/certs/apache.cer
>> /etc/pki/ovirt-engine/apache-ca.pem
>>
>> That worked so far, apache uses now my certificate, login is possible. To 
>> setup a new machine, I need to upload an iso image, which failed. I found 
>> this error in /var/log/ovirt-imageio/daemon.log
>>
>> 2020-07-08 20:43:23,750 INFO(Thread-10) [http] OPEN client=192.168.1.228
>> 2020-07-08 20:43:23,767 INFO(Thread-10) [backends.http] Open backend 
>> netloc='the_secret_hostname:54322' 
>> path='/images/ef60404c-dc69-4a3d-bfaa-8571f675f3e1' 
>> cafile='/etc/pki/ovirt-engine/apache-ca.pem' secure=True
>> 2020-07-08 20:43:23,770 ERROR   (Thread-10) [http] Server error
>> Traceback (most recent call last):
>>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", 
>> line 699, in __call__
>> self.dispatch(req, resp)
>>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", 
>> line 744, in dispatch
>> return method(req, resp, *match.groups())
>>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", 
>> line 84, in wrapper
>> return func(self, req, resp, *args)
>>   File 
>> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", line 
>> 66, in put
>> backends.get(req, ticket, self.config),
>>   File 
>> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py",
>>  line 53, in get
>> cafile=config.tls.ca_file)
>>   File 
>> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
>>  line 48, in open
>> secure=options.get("secure", True))
>>   File 
>> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
>>  line 63, in __init__
>> options = self._options()
>>   File 
>> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
>>  line 364, in _options
>> self._con.request("OPTIONS", self.url.path)
>>   File "/usr/lib64/python3.6/http/client.py", line 1254, in request
>> self._send_request(method, url, body, headers, encode_chunked)
>>   File "/usr/lib64/python3.6/http/client.py", line 1300, in _send_request
>> self.endheaders(body, encode_chunked=encode_chunked)
>>   File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
>> self._send_output(message_body, encode_chunked=encode_chunked)
>>   File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
>> self.send(msg)
>>   File "/usr/lib64/python3.6/http/client.py", line 974, in send
>> self.connect()
>>   File "/usr/lib64/python3.6/http/client.py", line 1422, in connect
>> server_hostname=server_hostname)
>>   File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
>> _context=self, _session=session)
>>   File "/usr/lib64/python3.6/ssl.py", line 776, in __init__
>> self.do_handshake()
>>   File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
>> self._sslobj.do_handshake()
>>   File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
>> self._sslobj.do_handshake()
>> ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
>> (_ssl.c:897)
>> 2020-07-08 20:43:23,770 INFO(Thread-10) [http] CLOSE 
>> client=192.168.1.228 [connection 1 ops, 0.019775 s] [dispatch 1 ops, 
>> 0.003114 s]
>>
>> I'm a python developer so I had no problem reading the traceback.
>>
>> The SSL handshake fails when image-io tries to connect to what I think is 
>> called an ovn-provider. But it is using my new authority certificate 
>> cafile='/etc/pki/ovirt-engine/apache-ca.pem' which does not validate the 
>> certificate generated by the ovirt engine setup, which the ovn-provider 
>> probably uses.
>>
>> I didn't exactly know where the parame

[ovirt-users] Re: PKI Problem

2020-07-30 Thread Nir Soffer
On Sun, Jul 19, 2020, 17:22  wrote:

> Hi
>
> I did a fresh installation of version 4.4.0.3. After the engine setup I
> replaced the apache certificate with a custom certificate. I used this
> article to do it:
> https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html
>
> To summarize, I replaced those files with my own authority and the signed
> custom certificate
>
> /etc/pki/ovirt-engine/keys/apache.key.nopass
> /etc/pki/ovirt-engine/certs/apache.cer
> /etc/pki/ovirt-engine/apache-ca.pem
>
> That worked so far, apache uses now my certificate, login is possible. To
> setup a new machine, I need to upload an iso image, which failed. I found
> this error in /var/log/ovirt-imageio/daemon.log
>
> 2020-07-08 20:43:23,750 INFO(Thread-10) [http] OPEN
> client=192.168.1.228
> 2020-07-08 20:43:23,767 INFO(Thread-10) [backends.http] Open backend
> netloc='the_secret_hostname:54322'
> path='/images/ef60404c-dc69-4a3d-bfaa-8571f675f3e1'
> cafile='/etc/pki/ovirt-engine/apache-ca.pem' secure=True
> 2020-07-08 20:43:23,770 ERROR   (Thread-10) [http] Server error
> Traceback (most recent call last):
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line
> 699, in __call__
> self.dispatch(req, resp)
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line
> 744, in dispatch
> return method(req, resp, *match.groups())
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", line
> 84, in wrapper
> return func(self, req, resp, *args)
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py",
> line 66, in put
> backends.get(req, ticket, self.config),
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py",
> line 53, in get
> cafile=config.tls.ca_file)
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
> line 48, in open
> secure=options.get("secure", True))
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
> line 63, in __init__
> options = self._options()
>   File
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
> line 364, in _options
> self._con.request("OPTIONS", self.url.path)
>   File "/usr/lib64/python3.6/http/client.py", line 1254, in request
> self._send_request(method, url, body, headers, encode_chunked)
>   File "/usr/lib64/python3.6/http/client.py", line 1300, in _send_request
> self.endheaders(body, encode_chunked=encode_chunked)
>   File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
> self._send_output(message_body, encode_chunked=encode_chunked)
>   File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
> self.send(msg)
>   File "/usr/lib64/python3.6/http/client.py", line 974, in send
> self.connect()
>   File "/usr/lib64/python3.6/http/client.py", line 1422, in connect
> server_hostname=server_hostname)
>   File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
> _context=self, _session=session)
>   File "/usr/lib64/python3.6/ssl.py", line 776, in __init__
> self.do_handshake()
>   File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
> self._sslobj.do_handshake()
>   File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
> self._sslobj.do_handshake()
> ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
> (_ssl.c:897)
> 2020-07-08 20:43:23,770 INFO(Thread-10) [http] CLOSE
> client=192.168.1.228 [connection 1 ops, 0.019775 s] [dispatch 1 ops,
> 0.003114 s]
>
> I'm a python developer so I had no problem reading the traceback.
>
> The SSL handshake fails when image-io tries to connect to what I think is
> called an ovn-provider. But it is using my new authority certificate
> cafile='/etc/pki/ovirt-engine/apache-ca.pem' which does not validate the
> certificate generated by the ovirt engine setup, which the ovn-provider
> probably uses.
>
> I didn't exactly know where the parameter for the validation ca file is.
> Probably it is the ca_file parameter in
> /etc/ovirt-imageio/conf.d/50-engine.conf. But that needs to be set to my
> own authority ca file.
>
> I modified the python file to set the ca_file parameter to the engine
> setups ca_file directly
>
>
> /usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py
>
> So the function call around line 50 looks like this:
>
> backend = module.open(
> ticket.url,
> mode,
> sparse=ticket.sparse,
> dirty=ticket.dirty,
> cafile='/etc/pki/ovirt-engine/ca.pem' #config.tls.ca_file
> )
>

Reading this again, the problem is clear now.

The imageio proxy is trying to use your CA to verify the host imageio
daemon certificate. This cannot work because the host certificate is signed
by engine CA, and the imageio daemon on the host is using vdsm 

[ovirt-users] Re: PKI Problem

2020-07-30 Thread Nir Soffer
On Thu, Jul 30, 2020, 09:31 Ramon Clematide  wrote:

> Hi Nir
>
> I did not modify /etc/ovirt-imageio/conf.d/50-engine.conf
>
> I only replaced those files:
>
> /etc/pki/ovirt-engine/keys/apache.key.nopass
> /etc/pki/ovirt-engine/certs/apache.cer
> /etc/pki/ovirt-engine/apache-ca.pem
>
> ovirt-imageio has the apache certificates configured by default.
>

So why did you change the code if you were using the default configuration?


>
> I found certificates generated by the engine setup for imageio (but not
> used?)
>
> So I switched to those certificates:
>
> cat /etc/ovirt-imageio/conf.d/99-locl.conf
> [tls]
> key_file = /etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass
> cert_file = /etc/pki/ovirt-engine/certs/imageio-proxy.cer
> ca_file = /etc/pki/ovirt-engine/ca.pem
>
>
> When I test the connection in the image upload screen, my browser now does
> not validate the imageio certificate. When I import the CA generated by the
> engine setup, the upload works. But I don't want to import the CA generated by
> the engine setup.
>

Why did you switch to engine ca if you don't want to use it?

When you change certificates, you need to restart the ovirt-imageio service
since it loads the certificates during startup.

Did you restart it?


___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRKFPQKHKODCJUV3YAL7M5ZJP2PSZCCU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WXQXUDOSUVFFUV2ANM5FHCM4GYDCSJ35/


[ovirt-users] Re: [rhev-tech] ovirt-imageio-proxy not working after updating SSL certificates with a wildcard cert issued by AlphaSSL (intermediate)

2020-07-27 Thread Nir Soffer
On Sat, Jul 25, 2020 at 5:24 AM Lynn Dixon  wrote:

> All,
> I recently bought a wildcard certificate for my lab domain (shadowman.dev)
> and I replaced all the certs on my RHV4.3 machine per our documentation.
> The WebUI presents the certs successfully and without any issues, and
> everything seemed to be fine, until I tried to upload a disk image (or an
> ISO) to my storage domain.  I get this error in the events tab:
>
> https://share.getcloudapp.com/p9uPvegx
> [image: image.png]
>
> I also see that the disk is showing up in my storage domain, but its
> showing "Paused by System" and I can't do anything with it.  I cant even
> delete it!
>
> I have tried following this document to fix the issue, but it didn't work:
> https://access.redhat.com/solutions/4148361
>
> I am seeing this error pop into my engine.log:
> https://pastebin.com/kDLSEq1A
>
> And I see this error in my image-proxy.log:
> WARNING 2020-07-24 15:26:34,802 web:137:web:(log_error) ERROR
> [172.17.0.30] PUT /tickets/ [403] Error verifying signed ticket: Invalid
> ovirt ticket (data='--my_ticket_data-', reason=Untrusted
> certificate) [request=0.002946/1]
>

This means the ssl_* configuration is broken.

We have 2 groups:

Client ssl configuration:

# Key file for SSL connections
ssl_key_file = /etc/pki/ovirt-engine/keys/image-proxy.key.nopass

# Certificate file for SSL connections
ssl_cert_file = /etc/pki/ovirt-engine/certs/image-proxy.cer

And engine SSL configuration:

# Certificate file used when decoding signed token
engine_cert_file = /etc/pki/ovirt-engine/certs/engine.cer

# CA certificate file used to verify signed token
engine_ca_cert_file = /etc/pki/ovirt-engine/ca.pem

The engine configuration is used to verify the signed ticket created by engine when
adding tickets to the proxy. This is an internal flow that clients should not
care
about. You should not replace these unless you are also using a custom
certificate
for engine itself - very unlikely and maybe unsupported.
(Didi, please correct me on this).

SSL client configuration is used when communicating with clients, and does
not depend on engine ssl configuration. You can replace these with your
certificates.

Can you share your /etc/ovirt-imageio/ovirt-imageio-proxy.conf?

The main issue with the current configuration is that there is no
ssl_ca_cert option;
the proxy assumes that ssl_cert_file is a self-signed certificate that includes the
CA certificate, since
this is what engine creates.

In 4.4, we have more flexible configuration that should work for your case:

$ cat /etc/ovirt-imageio/conf.d/50-engine.conf
...
[tls]
enable = true
key_file = /etc/pki/ovirt-engine/keys/apache.key.nopass
cert_file = /etc/pki/ovirt-engine/certs/apache.cer
ca_file = /etc/pki/ovirt-engine/apache-ca.pem

Adding ssl_ca_cert to imageio 1.5.3 looks simple enough, so I posted this
completely untested patch:
https://gerrit.ovirt.org/c/110498/

You can try to upgrade your proxy to use this build:
https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/3384/artifact/build-artifacts.el7.x86_64/

Add a yum repo file with this baseurl=.
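
A hedged example of such a repo file (the repo id and file name are arbitrary;
gpgcheck is disabled because this is an unsigned CI build):

# /etc/yum.repos.d/ovirt-imageio-test.repo
[ovirt-imageio-test]
name=ovirt-imageio test build
baseurl=https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/3384/artifact/build-artifacts.el7.x86_64/
enabled=1
gpgcheck=0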

Again this is untested, but you seem to be in the best place to test it,
since I don't have any real certificates for testing.

It would also be useful if you file a bug for this issue.

Nir

Now, when I bought my wildcard, I was given a root certificate for the CA,
> as well as a separate intermediate CA certificate from the provider.
> Likewise, they gave me a certificate and a private key of course. The root
> and intermediate CA's certificates have been added
> to /etc/pki/ca-trust/source/anchors/ and I did an update-ca-trust.
>
> I also started experiencing issues with the ovpn network provider at the
> same time I replaced the SSL certs, but I disregarded it at the time, but
> now I am thinking its related.  Any advice on what to look for to fix the
> ovirt-imageio-proxy?
>
> Thanks!
>
>
> *Lynn Dixon* | Red Hat Certified Architect #100-006-188
> *Solutions Architect* | NA Commercial
> Google Voice: 423-618-1414
> Cell/Text: 423-774-3188
> Click here to view my Certification Portfolio 
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IT7OWF7WZ6LTLLLP4TSSPBNKMTCDNG2H/


[ovirt-users] Re: CentOS 8.2, oVirt 4.4.1.8, SHE -- kernel: device-mapper: core: ovirt-ha-broker: sending ioctl 5401 to DM device without required privilege

2020-07-24 Thread Nir Soffer
On Thu, Jul 23, 2020 at 1:16 PM Dmitry Kharlamov  wrote:
>
> Good day!
> CentOS 8.2.2004 (Core), oVirt 4.4.1.8-1.el8, storage type FC with 
> multipathing.
>
> Please tell me what could be the problem.
>
> Messages are constantly poured into the console and / var / log / messages:
> kernel: device-mapper: core: ovirt-ha-broker: sending ioctl 5401 to DM device 
> without required privilege
>
> Messages appeared during the installation process (SHE via CLI), during the 
> LUN partitioning phase, and after that they do not stop.
> At the same time, the installation was successful and now everything is 
> working fine.
>
> A similar problem is described here, but no solution has been proposed:
> https://bugzilla.redhat.com/show_bug.cgi?id=1838453

See https://access.redhat.com/solutions/2189851:

Resolution
The messages are benign and can be ignored.


Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CXHKRXJPDKB54AGKWRL2J3JLGFVYZ6IO/


[ovirt-users] Re: PKI Problem

2020-07-23 Thread Nir Soffer
On Thu, Jul 23, 2020 at 5:14 PM Yedidyah Bar David  wrote:
>
> On Sun, Jul 19, 2020 at 5:23 PM  wrote:
> >
> > Hi
> >
> > I did a fresh installation of version 4.4.0.3. After the engine setup I 
> > replaced the apache certificate with a custom certificate. I used this 
> > article to do it: 
> > https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html
> >
> > To summarize, I replaced those files with my own authority and the signed 
> > custom certificate
> >
> > /etc/pki/ovirt-engine/keys/apache.key.nopass
> > /etc/pki/ovirt-engine/certs/apache.cer
> > /etc/pki/ovirt-engine/apache-ca.pem
> >
> > That worked so far, apache uses now my certificate, login is possible. To 
> > setup a new machine, I need to upload an iso image, which failed. I found 
> > this error in /var/log/ovirt-imageio/daemon.log
> >
> > 2020-07-08 20:43:23,750 INFO(Thread-10) [http] OPEN client=192.168.1.228
> > 2020-07-08 20:43:23,767 INFO(Thread-10) [backends.http] Open backend 
> > netloc='the_secret_hostname:54322' 
> > path='/images/ef60404c-dc69-4a3d-bfaa-8571f675f3e1' 
> > cafile='/etc/pki/ovirt-engine/apache-ca.pem' secure=True
> > 2020-07-08 20:43:23,770 ERROR   (Thread-10) [http] Server error
> > Traceback (most recent call last):
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 
> > 699, in __call__
> > self.dispatch(req, resp)
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 
> > 744, in dispatch
> > return method(req, resp, *match.groups())
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", line 
> > 84, in wrapper
> > return func(self, req, resp, *args)
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", 
> > line 66, in put
> > backends.get(req, ticket, self.config),
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py",
> >  line 53, in get
> > cafile=config.tls.ca_file)
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
> >  line 48, in open
> > secure=options.get("secure", True))
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
> >  line 63, in __init__
> > options = self._options()
> >   File 
> > "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
> >  line 364, in _options
> > self._con.request("OPTIONS", self.url.path)
> >   File "/usr/lib64/python3.6/http/client.py", line 1254, in request
> > self._send_request(method, url, body, headers, encode_chunked)
> >   File "/usr/lib64/python3.6/http/client.py", line 1300, in _send_request
> > self.endheaders(body, encode_chunked=encode_chunked)
> >   File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
> > self._send_output(message_body, encode_chunked=encode_chunked)
> >   File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
> > self.send(msg)
> >   File "/usr/lib64/python3.6/http/client.py", line 974, in send
> > self.connect()
> >   File "/usr/lib64/python3.6/http/client.py", line 1422, in connect
> > server_hostname=server_hostname)
> >   File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
> > _context=self, _session=session)
> >   File "/usr/lib64/python3.6/ssl.py", line 776, in __init__
> > self.do_handshake()
> >   File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
> > self._sslobj.do_handshake()
> >   File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
> > self._sslobj.do_handshake()
> > ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
> > (_ssl.c:897)
> > 2020-07-08 20:43:23,770 INFO(Thread-10) [http] CLOSE 
> > client=192.168.1.228 [connection 1 ops, 0.019775 s] [dispatch 1 ops, 
> > 0.003114 s]
> >
> > I'm a python developer so I had no problem reading the traceback.
> >
> > The SSL handshake fails when image-io tries to connect to what I think is 
> > called an ovn-provider. But it is using my new authority certificate 
> > cafile='/etc/pki/ovirt-engine/apache-ca.pem' which does not validate the 
> > certificate generated by the ovirt engine setup, which the ovn-provider 
> > probably uses.
> >
> > I didn't exactly know where the parameter for the validation ca file is. 
> > Probably it is the ca_file parameter in 
> > /etc/ovirt-imageio/conf.d/50-engine.conf. But that needs to be set to my 
> > own authority ca file.
> >
> > I modified the python file to set the ca_file parameter to the engine 
> > setups ca_file directly
> >
> > /usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py
> >
> > So the function call around line 50 looks like this:
> >
> > backend = module.open(
> > ticket.url,
> > mode,
> > sparse=ticket.sparse,
> > dirty=ticket.dirty,
> > 

[ovirt-users] Re: PKI Problem

2020-07-23 Thread Nir Soffer
On Sun, Jul 19, 2020 at 5:22 PM  wrote:
>
> Hi
>
> I did a fresh installation of version 4.4.0.3. After the engine setup I 
> replaced the apache certificate with a custom certificate. I used this 
> article to do it: 
> https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html
>
> To summarize, I replaced those files with my own authority and the signed 
> custom certificate
>
> /etc/pki/ovirt-engine/keys/apache.key.nopass
> /etc/pki/ovirt-engine/certs/apache.cer
> /etc/pki/ovirt-engine/apache-ca.pem
>
> That worked so far, apache uses now my certificate, login is possible. To 
> setup a new machine, I need to upload an iso image, which failed. I found 
> this error in /var/log/ovirt-imageio/daemon.log
>
> 2020-07-08 20:43:23,750 INFO(Thread-10) [http] OPEN client=192.168.1.228
> 2020-07-08 20:43:23,767 INFO(Thread-10) [backends.http] Open backend 
> netloc='the_secret_hostname:54322' 
> path='/images/ef60404c-dc69-4a3d-bfaa-8571f675f3e1' 
> cafile='/etc/pki/ovirt-engine/apache-ca.pem' secure=True
> 2020-07-08 20:43:23,770 ERROR   (Thread-10) [http] Server error
> Traceback (most recent call last):
>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", 
> line 699, in __call__
> self.dispatch(req, resp)
>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", 
> line 744, in dispatch
> return method(req, resp, *match.groups())
>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", 
> line 84, in wrapper
> return func(self, req, resp, *args)
>   File 
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", line 
> 66, in put
> backends.get(req, ticket, self.config),
>   File 
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py",
>  line 53, in get
> cafile=config.tls.ca_file)
>   File 
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
>  line 48, in open
> secure=options.get("secure", True))
>   File 
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
>  line 63, in __init__
> options = self._options()
>   File 
> "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
>  line 364, in _options
> self._con.request("OPTIONS", self.url.path)
>   File "/usr/lib64/python3.6/http/client.py", line 1254, in request
> self._send_request(method, url, body, headers, encode_chunked)
>   File "/usr/lib64/python3.6/http/client.py", line 1300, in _send_request
> self.endheaders(body, encode_chunked=encode_chunked)
>   File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
> self._send_output(message_body, encode_chunked=encode_chunked)
>   File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
> self.send(msg)
>   File "/usr/lib64/python3.6/http/client.py", line 974, in send
> self.connect()
>   File "/usr/lib64/python3.6/http/client.py", line 1422, in connect
> server_hostname=server_hostname)
>   File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
> _context=self, _session=session)
>   File "/usr/lib64/python3.6/ssl.py", line 776, in __init__
> self.do_handshake()
>   File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
> self._sslobj.do_handshake()
>   File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
> self._sslobj.do_handshake()
> ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
> (_ssl.c:897)
> 2020-07-08 20:43:23,770 INFO(Thread-10) [http] CLOSE client=192.168.1.228 
> [connection 1 ops, 0.019775 s] [dispatch 1 ops, 0.003114 s]
>
> I'm a python developer so I had no problem reading the traceback.
>
> The SSL handshake fails when image-io tries to connect to what I think is 
> called an ovn-provider. But it is using my new authority certificate 
> cafile='/etc/pki/ovirt-engine/apache-ca.pem' which does not validate the 
> certificate generated by the ovirt engine setup, which the ovn-provider 
> probably uses.
>
> I didn't exactly know where the parameter for the validation ca file is. 
> Probably it is the ca_file parameter in 
> /etc/ovirt-imageio/conf.d/50-engine.conf.

Right

>  But that needs to be set to my own authority ca file.

Right, but you should not modify this file; it is owned by engine and
your changes will be lost
on the next upgrade.

As documented at the top of the file, you need to create a drop-in file:

$ cat /etc/ovirt-imageio/conf.d/99-local.conf
[tls]
ca_file = ...

I think you also need to change the key_file and cert_file, otherwise
clients connecting
to the imageio server may fail to verify the server certificate.

And restart the ovirt-imageio service.
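
Putting that together, a hedged example of the drop-in (the certificate paths
are placeholders for wherever your custom key, certificate and CA bundle are
installed):

$ cat /etc/ovirt-imageio/conf.d/99-local.conf
[tls]
key_file = /etc/pki/custom/imageio.key.nopass
cert_file = /etc/pki/custom/imageio.cer
ca_file = /etc/pki/custom/ca.pem

$ systemctl restart ovirt-imageio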

> I modified the python file to set the ca_file parameter to the engine setups 
> ca_file directly
>
> /usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py
>
> So the function call around line 50 looks like this:
>
> backend = module.open(
> 

[ovirt-users] Re: very very bad iscsi performance

2020-07-21 Thread Nir Soffer
On Tue, Jul 21, 2020 at 2:20 AM Philip Brown  wrote:
>
> yes I am testing small writes. "oltp workload" means, simulation of OLTP 
> database access.
>
> You asked me to test the speed of iscsi from another host, which is very 
> reasonable. So here are the results,
> run from another node in the ovirt cluster.
> Setup is using:
>
>  - exact same vg device, exported via iscsi
>  - mounted directly into another physical host running centos 7, rather than 
> a VM running on it
>  -   literaly the same filesystem, again, mounted noatime
>
> I ran the same oltp workload. this setup gives the following results over 2 
> runs.
>
>  grep Summary oltp.iscsimount.?
> oltp.iscsimount.1:35906: 63.433: IO Summary: 648762 ops, 10811.365 ops/s, 
> (5375/5381 r/w),  21.4mb/s,475us cpu/op,   1.3ms latency
> oltp.iscsimount.2:36830: 61.072: IO Summary: 824557 ops, 13741.050 ops/s, 
> (6844/6826 r/w),  27.2mb/s,429us cpu/op,   1.1ms latency
>
>
> As requested, I attach virsh output, and qemu log

What we see in your logs:

You are using:
- thin disk - a qcow2 image on a logical volume
- virtio-scsi

(the libvirt <disk> XML was stripped by the list archive; it showed a qcow2
image on a logical volume with serial 47af0207-8b51-4a59-a93e-fddf9ed56d44,
attached through virtio-scsi with a dedicated iothread)

-object iothread,id=iothread1 \
-device 
virtio-scsi-pci,iothread=iothread1,id=ua-a50f193d-fa74-419d-bf03-f5a2677acd2a,bus=pci.0,addr=0x5
\
-drive 
file=/rhev/data-center/mnt/blockSD/87cecd83-d6c8-4313-9fad-12ea32768703/images/47af0207-8b51-4a59-a93e-fddf9ed56d44/743550ef-7670-4556-8d7f-4d6fcfd5eb70,format=qcow2,if=none,id=drive-ua-47af0207-8b51-4a59-a93e-fddf9ed56d44,serial=47af0207-8b51-4a59-a93e-fddf9ed56d44,werror=stop,rerror=stop,cache=none,aio=native
\

This is the most flexible option oVirt has, but not the default.

A known issue with such a disk is that the VM may pause when the
disk becomes full,
if oVirt cannot extend the underlying logical volume fast enough. This
can be mitigated by
using larger extension chunks in vdsm.

We recommend these settings if you are going to use VMs with heavy I/O
with thin disks:

# cat /etc/vdsm/vdsm.conf.d/99-local.conf
[irs]

# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
# default value:
# volume_utilization_percent = 50
volume_utilization_percent = 25

# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
# default value:
# volume_utilization_chunk_mb = 1024
volume_utilization_chunk_mb = 4096


With this configuration, when free space on the disk is 1 GiB, oVirt will extend
the disk by 4 GiB. So your disk may be up to 5 GiB larger than the used space,
but if the VM is writing data very fast, the chance of pausing is reduced.

If you want to reduce the chance of pausing your database in the most busy times
to zero, using a preallocated disk is the way.

In oVirt 4.4 you can check this option when creating a disk:

[x] Enable Incremental Backup

With:

Allocation Policy: [Preallocated]

You will get a preallocated disk of the specified size, using qcow2
format. This gives
you the option to use incremental backup and faster disk operations
in oVirt (since
qemu-img does not need to read the entire disk), and it avoids the
pausing issue. It may
defeat thin provisioning, but if your backend storage supports thin
provisioning anyway,
it does not matter.

For a database use case, a preallocated volume
should give the best performance.

Please try to benchmark:
- a raw preallocated disk
- using virtio instead of virtio-scsi

If your database can use multiple disks, you may get better performance by
adding multiple disks and using one iothread per disk.

See also interesting talk about storage performance from 2017:
https://events19.lfasiallc.com/wp-content/uploads/2017/11/Storage-Performance-Tuning-for-FAST-Virtual-Machines_Fam-Zheng.pdf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5F7CMTKPCVLV4WICTBBN63YXSW6ADRYO/


[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Nir Soffer
On Mon, Jul 20, 2020 at 8:51 PM Philip Brown  wrote:
>
> I'm trying to get optimal iscsi performance. We're a heavy iscsi shop, with 
> 10g net.
>
> I'm experimenting with SSDs, and the performance in ovirt is way, way less 
> than I would have hoped.
> More than an order of magnitude slower.
>
> here's a datapoint.
> I'm running filebench, with the OLTP workload.

Did you try fio?
https://fio.readthedocs.io/en/latest/fio_doc.html

I think this is the most common and advanced tool for such tests.
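
For an OLTP-like test (small random reads and writes with O_DIRECT), a minimal
fio sketch could look like the following; the file name, size, block size and
queue depth are illustrative assumptions, not the exact profile filebench uses:

fio --name=oltp-like --filename=/mnt/test/fio.dat --size=4g \
    --rw=randrw --rwmixread=50 --bs=8k --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Running the same command inside the guest and directly on the host against the
same storage makes the two results comparable.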

> First, i run it on one of the hosts, that has an SSD directly attached.
> create an xfs filesystem (created on a vg "device" on top of the SSD), mount 
> it with noatime, and run the benchmark.
>
>
> 37166: 74.084: IO Summary: 3746362 ops, 62421.629 ops/s, (31053/31049 r/w), 
> 123.6mb/s,161us cpu/op,   1.1ms latency

What do you get if you log in to the target on the host and access the LUN
directly on the host?

If you create a file system on the LUN and mount it on the host?

> I then unmount it, and make the exact same device an iscsi target, and create 
> a storage domain with it.
> I then create a disk for a VM running *on the same host*, and run the 
> benchmark.

What kind of disk? thin? preallocated?

> The same thing: filebench, oltp workload, xfs filesystem, noatime.
>
>
> 13329: 91.728: IO Summary: 153548 ops, 2520.561 ops/s, (1265/1243 r/w),   
> 4.9mb/s,289us cpu/op,  88.4ms latency

4.9mb/s looks very low. Are you testing very small random writes?

> 62,000 ops/s vs 2500 ops/s.
>
> what
>
>
> Someone might be tempted to say, "try making the device directly available, 
> AS a device, to the VM".
> Unfortunately,this is not an option.
> My goal is specifically to put together a new, high performing storage 
> domain, that I can use as database devices in VMs.

This is something to discuss with qemu folks. oVirt is just an easy
way to manage VMs.

Please attach the VM XML using:
virsh -r dumpxml vm-name-or-id

And the qemu command line from:
/var/log/libvirt/qemu/vm-name.log

I think you will get the best performance using a direct LUN. A storage domain
is best if you want to use the features provided by storage domains. If your
most important requirement is performance, you want to connect the storage to
your VM in the most direct way.

Mordechai, did we do any similar performance tests in our lab?
Do you have example results?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SGQ7K6K4URDSS54B2IC7GHD5NZYPG66N/


[ovirt-users] Re: Slow ova export performance

2020-07-20 Thread Nir Soffer
On Wed, Jul 15, 2020 at 5:36 PM francesco--- via Users  wrote:
>
> Hi All,
>
> I'm facing a really slow export of VMs hosted on a single node cluster, in a 
> local storage. The Vm disk is 600 GB and the effective usage is around 300 
> GB. I estimated that the following process would take up about 15 hours to 
> end:
>
> vdsm 25338 25332 99 04:14 pts/007:40:09 qemu-img measure -O qcow2 
> /rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697
>
> A strace -p of the pid shows a slow progression to reach the effective size.
>
> lseek(11, 3056795648, SEEK_DATA)= 3056795648
> lseek(11, 3056795648, SEEK_HOLE)= 13407092736
> lseek(14, 128637468672, SEEK_DATA)  = 128637468672
> lseek(14, 128637468672, SEEK_HOLE)  = 317708828672
> lseek(14, 128646250496, SEEK_DATA)  = 128646250496
> lseek(14, 128646250496, SEEK_HOLE)  = 317708828672
> lseek(14, 128637730816, SEEK_DATA)  = 128637730816
> lseek(14, 128637730816, SEEK_HOLE)  = 317708828672
> lseek(14, 128646774784, SEEK_DATA)  = 128646774784
> lseek(14, 128646774784, SEEK_HOLE)  = 317708828672
> lseek(14, 128646709248, SEEK_DATA)  = 128646709248
>
> The process takes a single full core, but I don't think this is the problem. 
> The I/O is almost nothing.
>
> Any idea/suggestion?

I think you are hitting this bug:
https://bugzilla.redhat.com/1850660

It looks like you should be able to defragment the file using xfs_fsr, and
after that you may be able to set an extent size hint using:

xfs_io -c "extsize 1m" 

to prevent future fragmentation.
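
To see how fragmented the image actually is before and after defragmenting, you
can use filefrag; a minimal sketch, assuming the image path from the qemu-img
measure command above:

filefrag -v /rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697
xfs_fsr -v /rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697

A large drop in the number of extents should also make the qemu-img measure
step noticeably faster.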

I think a future qemu-img version will set an extent size hint to avoid this issue.

Nir

>
> Thank you for your time
> Regards
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QF2QIA4ZRQIE6HQNJSNRBCPM25I3O5D3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WJZK574UYBQA7TQAITX2OBDXR4S5CSQA/


[ovirt-users] Re: Parent checkpoint ID does not match the actual leaf checkpoint

2020-07-19 Thread Nir Soffer
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński 
wrote:

> Hello,
> Thanks to previous answers, I was able to make backups. Unfortunately, we
> had some infrastructure issues and after the host reboots new problems
> appeared. I am not able to do any backup using the commands that worked
> yesterday. I looked through the logs and there is something like this:
>
> 2020-07-17 15:06:30,644+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
> [944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM backup
> operation 'StartVmBackup': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to StartVmBackupVDS, error =
> Checkpoint Error: {'parent_checkpoint_id': None, 'leaf_checkpoint_id':
> 'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
> '116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent checkpoint ID
> does not match the actual leaf checkpoint*'}, code = 1610 (Failed with
> error unexpected and code 16)
>
>
It looks like engine sent:

parent_checkpoint_id: None

This issue was fixed in the engine a few weeks ago.

Which engine and vdsm versions are you testing?
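
A quick way to report the exact versions is to query the rpm database; a
minimal sketch, assuming the standard package names:

rpm -q ovirt-engine   # on the engine machine
rpm -q vdsm           # on each host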


> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
>
>
> And the last error is:
>
> 2020-07-17 15:13:45,835+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
> [f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM backup
> operation 'GetVmBackupInfo': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to GetVmBackupInfoVDS, error
> = No such backup Error: {'vm_id': '116aa6eb-31a1-43db-9b1e-ad6e32fb9260',
> 'backup_id': 'bf1c26f7-c3e5-437c-bb5a-255b8c1b3b73', 'reason': '*VM
> backup not exists: Domain backup job id not found: no domain backup job
> present'*}, code = 1601 (Failed with error unexpected and code 16)
>
>
This is likely a result of the first error. If starting the backup failed, the
backup entity is deleted.


> (these errors are from full backup)
>
> Like I said this is very strange because everything was working correctly.
>
>
> Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacin...@storware.eu
> 
>
>
>
>
> ul. Leszno 8/44 01-192 Warszawa www.storware.eu

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-17 Thread Nir Soffer
On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
 wrote:

> It looks like the Pivot completed successfully, see attached vdsm.log.
> Is there a way to recover that VM?
> Or would it be better to recover the VM from Backup?

This is what we see in the log:

1. Merge request received

2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
u'0002-0002-0002-0002-0289'},
baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
from=:::10.34.38.31,39226,
flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)

To track this job, we can use the jobUUID: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
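
To follow the same job in a vdsm log yourself, grepping for the job UUID and
the top volume UUID is usually enough; a minimal sketch using the IDs from this
log and the default vdsm log path:

grep -E '720410c3-f1a0-4b25-bf26-cf40aa6b1f97|6197b30d-0732-4cc7-aef0-12f9f6e9565b' /var/log/vdsm/vdsm.log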

2. Starting the merge

2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
top=None, bandwidth=0, flags=12 (vm:5945)

We see the original chain:
8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)

3. The merge was completed, ready for pivot

2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
for drive 
/rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
is ready (vm:5847)

At this point parent volume contains all the data in top volume and we can pivot
to the parent volume.

4. Vdsm detects that the merge is ready and starts the cleanup thread
that will complete the merge

2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)

5. Requesting pivot to parent volume:

2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
complete active layer commit (job
720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)

6. Pivot was successful

2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
for drive 
/rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
has completed (vm:5838)

7. Vdsm waits until libvirt updates the XML:

2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)

8. Synchronizing vdsm metadata

2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
imgUUID='d7bd480d-2c51-4141-a386-113abf75219e',
volUUID='6197b30d-0732-4cc7-aef0-12f9f6e9565b',
newChain=['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']) from=internal,
task_id=b8f605bd-8549-4983-8fc5-f2ebbe6c4666 (api:48)

We can see the new chain:
['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']

2020-07-13 11:19:07,005+0200 INFO  (merge/720410c3) [storage.Image]
Current chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)  (image:1221)

The old chain:
8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)

2020-07-13 11:19:07,006+0200 INFO  (merge/720410c3) [storage.Image]
Unlinking subchain: ['6197b30d-0732-4cc7-aef0-12f9f6e9565b']
(image:1231)
2020-07-13 11:19:07,017+0200 INFO  (merge/720410c3) [storage.Image]
Leaf volume 6197b30d-0732-4cc7-aef0-12f9f6e9565b is being removed from
the chain. Marking it ILLEGAL to prevent data corruption (image:1239)

This matches what we see on storage.

9. Merge job is untracked

2020-07-13 11:19:21,134+0200 INFO  (periodic/1) [virt.vm]
(vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Cleanup thread

successfully completed, untracking job
720410c3-f1a0-4b25-bf26-cf40aa6b1f97
(base=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8,
top=6197b30d-0732-4cc7-aef0-12f9f6e9565b) (vm:5752)

This was a successful merge on vdsm side.

We don't see any more requests for the top volume in this log. The next step to
complete the merge is to delete the volume 6197b30d-0732-4cc7-aef0-12f9f6e9565b,
but this can be done only on the SPM.

To understand why this did not happen, we need the engine log showing this
interaction, and logs from the SPM host from the same time.

Please file a bug about this and attach these logs (and the vdsm log you sent
here). Fixing this vm is important but preventing this bug for 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-17 Thread Nir Soffer
On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
 wrote:
>
> On Wed, 2020-07-15 at 22:54 +0300, Nir Soffer wrote:
>
> On Wed, Jul 15, 2020 at 7:54 PM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
> > wrote:
>
>
> On Wed, 2020-07-15 at 17:46 +0300, Nir Soffer wrote:
>
>
> What we see in the data you sent:
>
>
>
> Qemu chain:
>
>
>
> $ qemu-img info --backing-chain
>
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>
> image: 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>
> file format: qcow2
>
>
> virtual size: 150G (161061273600 bytes)
>
>
> disk size: 0
>
>
> cluster_size: 65536
>
>
> backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path:
>
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)
>
>
> backing file format: qcow2
>
>
> Format specific information:
>
>
> compat: 1.1
>
>
> lazy refcounts: false
>
>
> refcount bits: 16
>
>
> corrupt: false
>
>
>
> image: 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
>
> file format: qcow2
>
>
> virtual size: 150G (161061273600 bytes)
>
>
> disk size: 0
>
>
> cluster_size: 65536
>
>
> Format specific information:
>
>
> compat: 1.1
>
>
> lazy refcounts: false
>
>
> refcount bits: 16
>
>
> corrupt: false
>
>
>
> Vdsm chain:
>
>
>
> $ cat 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta
>
>
> CAP=161061273600
>
>
> CTIME=1594060718
>
>
> DESCRIPTION=
>
>
> DISKTYPE=DATA
>
>
> DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
>
>
> FORMAT=COW
>
>
> GEN=0
>
>
> IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
>
>
> LEGALITY=ILLEGAL
>
>
>
> ^^
>
>
> This is the issue, the top volume is illegal.
>
>
>
> PUUID=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
>
> TYPE=SPARSE
>
>
> VOLTYPE=LEAF
>
>
>
> $ cat 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta
>
>
> CAP=161061273600
>
>
> CTIME=1587646763
>
>
> DESCRIPTION={"DiskAlias":"cpslpd01_Disk1","DiskDescription":"SAP SLCM
>
>
> H11 HDB D13"}
>
>
> DISKTYPE=DATA
>
>
> DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
>
>
> FORMAT=COW
>
>
> GEN=0
>
>
> IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
>
>
> LEGALITY=LEGAL
>
>
> PUUID=----
>
>
> TYPE=SPARSE
>
>
> VOLTYPE=INTERNAL
>
>
>
> We set volume to ILLEGAL when we merge the top volume into the parent volume,
>
>
> and both volumes contain the same data.
>
>
>
> After we mark the volume as ILLEGAL, we pivot to the parent volume
>
>
> (8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8).
>
>
>
> If the pivot was successful, the parent volume may have new data, and starting
>
>
> the vm using the top volume may corrupt the vm filesystem. The ILLEGAL state
>
>
> prevent this.
>
>
>
> If the pivot was not successful, the vm must be started using the top
>
>
> volume, but it
>
>
> will always fail if the volume is ILLEGAL.
>
>
>
> If the volume is ILLEGAL, trying to merge again when the VM is not running 
> will
>
>
> always fail, since vdsm does not if the pivot succeeded or not, and cannot 
> merge
>
>
> the volume in a safe way.
>
>
>
> Do you have the vdsm from all merge attempts on this disk?
>
>
> This is an extract of the vdsm logs, i may provide the complete log if it 
> would help.
>
>
> Yes, this is only the start of the merge. We see the success message
>
> but this only means the merge
>
> job was started.
>
>
> Please share the complete log, and if needed the next log. The
>
> important messages we look for are:
>
>
> Requesting pivot to complete active layer commit ...
>
>
> Follow by:
>
>
> Pivot completed ...
>
>
> If pivot failed, we expect to see this message:
>
>
> Pivot failed: ...
>
>
> After these messages we may find very important logs that explain why
>
> your disk was left
>
> in an inconsistent state.
>
> It looks like the Pivot completed successfully, see attached vdsm.log.

That's good, I'm looking in your log.

> Is there a way to recover that VM?

If the pivot was successful, qemu started to use the parent volume instead
of the top volume. In this case you can delete the top vol

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
On Wed, Jul 15, 2020 at 7:54 PM Arsène Gschwind
 wrote:
>
> On Wed, 2020-07-15 at 17:46 +0300, Nir Soffer wrote:
>
> What we see in the data you sent:
>
>
> Qemu chain:
>
>
> $ qemu-img info --backing-chain
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> image: 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> file format: qcow2
>
> virtual size: 150G (161061273600 bytes)
>
> disk size: 0
>
> cluster_size: 65536
>
> backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path:
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)
>
> backing file format: qcow2
>
> Format specific information:
>
> compat: 1.1
>
> lazy refcounts: false
>
> refcount bits: 16
>
> corrupt: false
>
>
> image: 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> file format: qcow2
>
> virtual size: 150G (161061273600 bytes)
>
> disk size: 0
>
> cluster_size: 65536
>
> Format specific information:
>
> compat: 1.1
>
> lazy refcounts: false
>
> refcount bits: 16
>
> corrupt: false
>
>
> Vdsm chain:
>
>
> $ cat 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta
>
> CAP=161061273600
>
> CTIME=1594060718
>
> DESCRIPTION=
>
> DISKTYPE=DATA
>
> DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
>
> FORMAT=COW
>
> GEN=0
>
> IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
>
> LEGALITY=ILLEGAL
>
>
> ^^
>
> This is the issue, the top volume is illegal.
>
>
> PUUID=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> TYPE=SPARSE
>
> VOLTYPE=LEAF
>
>
> $ cat 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta
>
> CAP=161061273600
>
> CTIME=1587646763
>
> DESCRIPTION={"DiskAlias":"cpslpd01_Disk1","DiskDescription":"SAP SLCM
>
> H11 HDB D13"}
>
> DISKTYPE=DATA
>
> DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
>
> FORMAT=COW
>
> GEN=0
>
> IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
>
> LEGALITY=LEGAL
>
> PUUID=----
>
> TYPE=SPARSE
>
> VOLTYPE=INTERNAL
>
>
> We set volume to ILLEGAL when we merge the top volume into the parent volume,
>
> and both volumes contain the same data.
>
>
> After we mark the volume as ILLEGAL, we pivot to the parent volume
>
> (8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8).
>
>
> If the pivot was successful, the parent volume may have new data, and starting
>
> the vm using the top volume may corrupt the vm filesystem. The ILLEGAL state
>
> prevent this.
>
>
> If the pivot was not successful, the vm must be started using the top
>
> volume, but it
>
> will always fail if the volume is ILLEGAL.
>
>
> If the volume is ILLEGAL, trying to merge again when the VM is not running 
> will
>
> always fail, since vdsm does not if the pivot succeeded or not, and cannot 
> merge
>
> the volume in a safe way.
>
>
> Do you have the vdsm from all merge attempts on this disk?
>
> This is an extract of the vdsm logs, i may provide the complete log if it 
> would help.

Yes, this is only the start of the merge. We see the success message, but this
only means that the merge job was started.

Please share the complete log and, if needed, the next log. The important
messages we look for are:

Requesting pivot to complete active layer commit ...

Followed by:

Pivot completed ...

If pivot failed, we expect to see this message:

Pivot failed: ...

After these messages we may find very important logs that explain why
your disk was left
in an inconsistent state.

Since this looks like a bug and may be useful to others, I think it is
time to file a vdsm bug,
and attach the logs to the bug.
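
The logs that are usually needed, assuming default installation paths, are:

/var/log/vdsm/vdsm.log*            from the host that ran the VM and from the SPM host
/var/log/ovirt-engine/engine.log*  from the engine machine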

> 2020-07-13 11:18:30,257+0200 INFO  (jsonrpc/5) [api.virt] START 
> merge(drive={u'imageID': u'6c1445b3-33ac-4ec4-8e43-483d4a6da4e3', 
> u'volumeID': u'6172a270-5f73-464d-bebd-8bf0658c1de0', u'domainID': 
> u'a6f2625d-0f21-4d81-b98c-f545d5f86f8e', u'poolID': 
> u'0002-0002-0002-0002-0289'}, ba
> seVolUUID=u'a9d5fe18-f1bd-462e-95f7-42a50e81eb11', 
> topVolUUID=u'6172a270-5f73-464d-bebd-8bf0658c1de0', bandwidth=u'0', 
> jobUUID=u'5059c2ce-e2a0-482d-be93-2b79e8536667') 
> from=:::10.34.38.31,39226, flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
> vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:4
> 8)
> 2020-07-13 11:18:30,271+0200 INFO  (jsonrpc/5) [vdsm.api] START 
> getVolumeInfo(sdUUID='a6f2625d-0f21-4d81-b98c-f545d5f86f8e', 
> spUUID='0002-0002-0002-0002-0289', 
> imgUUID='6c1445b3-33ac-4ec4-8

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
What we see in the data you sent:

Qemu chain:

$ qemu-img info --backing-chain
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
file format: qcow2
virtual size: 150G (161061273600 bytes)
disk size: 0
cluster_size: 65536
backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path:
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
file format: qcow2
virtual size: 150G (161061273600 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

Vdsm chain:

$ cat 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta
CAP=161061273600
CTIME=1594060718
DESCRIPTION=
DISKTYPE=DATA
DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
FORMAT=COW
GEN=0
IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
LEGALITY=ILLEGAL

^^
This is the issue, the top volume is illegal.

PUUID=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
TYPE=SPARSE
VOLTYPE=LEAF

$ cat 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta
CAP=161061273600
CTIME=1587646763
DESCRIPTION={"DiskAlias":"cpslpd01_Disk1","DiskDescription":"SAP SLCM
H11 HDB D13"}
DISKTYPE=DATA
DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
FORMAT=COW
GEN=0
IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
LEGALITY=LEGAL
PUUID=----
TYPE=SPARSE
VOLTYPE=INTERNAL

We set the volume to ILLEGAL when we merge the top volume into the parent volume,
and both volumes contain the same data.

After we mark the volume as ILLEGAL, we pivot to the parent volume
(8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8).

If the pivot was successful, the parent volume may have new data, and starting
the vm using the top volume may corrupt the vm filesystem. The ILLEGAL state
prevents this.

If the pivot was not successful, the vm must be started using the top volume,
but starting it will always fail if the volume is ILLEGAL.

If the volume is ILLEGAL, trying to merge again when the VM is not running will
always fail, since vdsm does not know if the pivot succeeded or not, and cannot merge
the volume in a safe way.

Do you have the vdsm logs from all merge attempts on this disk?

The most important log is the one showing the original merge. If the merge
succeeded, we should see a log showing the new libvirt chain, which
should contain
only the parent volume.
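
One way to locate these messages is to grep for the chain lines vdsm logs for
the image; a minimal sketch, assuming the default vdsm log path ('Current
chain' is the phrase used in the vdsm log excerpts in this thread):

grep 'Current chain' /var/log/vdsm/vdsm.log*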

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LPUSJIQMK2JG2IC5EIODOR3S2JPNLIKS/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
On Wed, Jul 15, 2020 at 4:00 PM Arsène Gschwind
 wrote:
>
> On Wed, 2020-07-15 at 15:42 +0300, Nir Soffer wrote:
>
> On Wed, Jul 15, 2020 at 3:12 PM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
> > wrote:
>
>
> Hi Nir,
>
>
> I've followed your guide, please find attached the informations.
>
> Thanks a lot for your help.
>
>
> Thanks, looking at the data.
>
>
> Quick look in the pdf show that one qemu-img info command failed:
>
>
> ---
>
> lvchange -ay 
> 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>
> lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> 33777993-a3a5-4aad-a24c-dfe5e473faca -wi-a- 5.00g
>
>
> qemu-img info --backing-chain
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> qemu-img: Could not open
>
> '/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':

It is clear now - qemu could not open the backing file:

lv=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8

You must activate all the volumes in this image. I think my instructions were
not clear enough; see the consolidated sketch after the steps below.

1. Find all lvs related to this image

2. Activate all of them

for lv_name in lv-name-1 lv-name-2 lv-name-3; do
lvchange -ay vg-name/$lv_name
done

3. Run qemu-img info on the LEAF volume

4. Deactivate the lvs activated in step 2.
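
Putting the four steps together, a minimal sketch; the vg name, lv names and
the disk uuid are placeholders that must be replaced with the values from your
system:

# 1. find all lvs belonging to this disk image
lvs -o vg_name,lv_name,tags | grep disk-uuid

# 2. activate every volume in the chain
for lv_name in lv-name-1 lv-name-2 lv-name-3; do
    lvchange -ay vg-name/$lv_name
done

# 3. inspect the full chain starting from the LEAF volume
qemu-img info --backing-chain /dev/vg-name/leaf-lv-name

# 4. deactivate the volumes again
for lv_name in lv-name-1 lv-name-2 lv-name-3; do
    lvchange -an vg-name/$lv_name
done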

>
> ---
>
>
> Maybe this lv was deactivated by vdsm after you activate it? Please
>
> try to activate it again and
>
> run the command again.
>
>
> Sending all the info in text format in the mail message would make it
>
> easier to respond.
>
> I did it again with the same result, and the LV was still activated.
>
> lvchange -ay 
> 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>   LV   VG   
> Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>
>   6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
> -wi-a- 5.00g
>
> qemu-img info --backing-chain 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> qemu-img: Could not open 
> '/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
>  Could not open 
> '/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
>  No such file or directory
>
> lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>   LV   VG   
> Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>
>   6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
> -wi-a- 5.00g
>
>
> Sorry for the PDF, it was easier for me, but I will post everything in the 
> mail from now on.
>
>
>
> Arsene
>
>
> On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:
>
>
> On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
>
>
> <
>
>
> arsene.gschw...@unibas.ch
>
>
>
> wrote:
>
>
>
> On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:
>
>
>
> On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
>
>
>
> <
>
>
>
> arsene.gschw...@unibas.ch
>
>
>
>
>
> wrote:
>
>
>
>
> Hi,
>
>
>
>
> I running oVirt 4.3.9 with FC based storage.
>
>
>
> I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
> VM Snapshot and that task failed after a while and since then the Snapshot is 
> inconsistent.
>
>
>
> disk1 : Snapshot still visible in DB and on Storage using LVM commands
>
>
>
> disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
> merge did run correctly)
>
>
>
> disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
> merge did run correctly)
>
>
>
>
> When I try to delete the snapshot again it runs forever and nothing happens.
>
>
>
>
> Did you try also when the vm is not running?
>
>
>
> Yes I've tried that without success
>
>
>
>
> In general the system is designed so trying again a failed merge will complete
>
>
>
> the merge.
>
>
>
>
> If the merge does complete, there may be some bug that the system cannot
>
>
>
> handle.
>
>
>
>
> Is there a way to suppres

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
On Wed, Jul 15, 2020 at 3:12 PM Arsène Gschwind
 wrote:
>
> Hi Nir,
>
> I've followed your guide, please find attached the informations.
> Thanks a lot for your help.

Thanks, looking at the data.

Quick look in the pdf show that one qemu-img info command failed:

---
lvchange -ay 
33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
6197b30d-0732-4cc7-aef0-12f9f6e9565b
33777993-a3a5-4aad-a24c-dfe5e473faca -wi-a- 5.00g

qemu-img info --backing-chain
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
qemu-img: Could not open
'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
Could not open '/dev/33777993-
a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8': No
such file or directory
---

Maybe this lv was deactivated by vdsm after you activated it? Please try to
activate it again and run the command again.

Sending all the info in text format in the mail message would make it
easier to respond.

>
> Arsene
>
> On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:
>
> On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
> > wrote:
>
>
> On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:
>
>
> On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
>
>
> <
>
>
> arsene.gschw...@unibas.ch
>
>
>
> wrote:
>
>
>
> Hi,
>
>
>
> I running oVirt 4.3.9 with FC based storage.
>
>
> I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
> VM Snapshot and that task failed after a while and since then the Snapshot is 
> inconsistent.
>
>
> disk1 : Snapshot still visible in DB and on Storage using LVM commands
>
>
> disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
> merge did run correctly)
>
>
> disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
> merge did run correctly)
>
>
>
> When I try to delete the snapshot again it runs forever and nothing happens.
>
>
>
> Did you try also when the vm is not running?
>
>
> Yes I've tried that without success
>
>
>
> In general the system is designed so trying again a failed merge will complete
>
>
> the merge.
>
>
>
> If the merge does complete, there may be some bug that the system cannot
>
>
> handle.
>
>
>
> Is there a way to suppress that snapshot?
>
>
> Is it possible to merge disk1 with its snapshot using LVM commands and then 
> cleanup the Engine DB?
>
>
>
> Yes but it is complicated. You need to understand the qcow2 chain
>
>
> on storage, complete the merge manually using qemu-img commit,
>
>
> update the metadata manually (even harder), then update engine db.
>
>
>
> The best way - if the system cannot recover, is to fix the bad metadata
>
>
> that cause the system to fail, and the let the system recover itself.
>
>
>
> Which storage domain format are you using? V5? V4?
>
>
> I'm using storage format V5 on FC.
>
>
> Fixing the metadata is not easy.
>
>
> First you have to find the volumes related to this disk. You can find
>
> the disk uuid and storage
>
> domain uuid in engine ui, and then you can find the volumes like this:
>
>
> lvs -o vg_name,lv_name,tags | grep disk-uuid
>
>
> For every lv, you will have a tag MD_N where n is a number. This is
>
> the slot number
>
> in the metadata volume.
>
>
> You need to calculate the offset of the metadata area for every volume using:
>
>
> offset = 1024*1024 + 8192 * N
>
>
> Then you can copy the metadata block using:
>
>
> dd if=/dev/vg-name/metadata bs=512 count=1 skip=$offset
>
> conv=skip_bytes > lv-name.meta
>
>
> Please share these files.
>
>
> This part is not needed in 4.4, we have a new StorageDomain dump API,
>
> that can find the same
>
> info in one command:
>
>
> vdsm-client StorageDomain dump sd_id=storage-domain-uuid | \
>
> jq '.volumes | .[] | select(.image=="disk-uuid")'
>
>
> The second step is to see what is the actual qcow2 chain. Find the
>
> volume which is the LEAF
>
> by grepping the metadata files. In some cases you may have more than
>
> one LEAF (which may
>
> be the problem).
>
>
> Then activate all volumes using:
>
>
> lvchange -ay vg-name/lv-name
>
>
> Now you can get the backing chain using qemu-img and the LEAF volume.
>
>
> qemu-img i

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-14 Thread Nir Soffer
On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
 wrote:
>
> On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:
>
> On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
> > wrote:
>
>
> Hi,
>
>
> I running oVirt 4.3.9 with FC based storage.
>
> I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
> VM Snapshot and that task failed after a while and since then the Snapshot is 
> inconsistent.
>
> disk1 : Snapshot still visible in DB and on Storage using LVM commands
>
> disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
> merge did run correctly)
>
> disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
> merge did run correctly)
>
>
> When I try to delete the snapshot again it runs forever and nothing happens.
>
>
> Did you try also when the vm is not running?
>
> Yes I've tried that without success
>
>
> In general the system is designed so trying again a failed merge will complete
>
> the merge.
>
>
> If the merge does complete, there may be some bug that the system cannot
>
> handle.
>
>
> Is there a way to suppress that snapshot?
>
> Is it possible to merge disk1 with its snapshot using LVM commands and then 
> cleanup the Engine DB?
>
>
> Yes but it is complicated. You need to understand the qcow2 chain
>
> on storage, complete the merge manually using qemu-img commit,
>
> update the metadata manually (even harder), then update engine db.
>
>
> The best way - if the system cannot recover, is to fix the bad metadata
>
> that cause the system to fail, and the let the system recover itself.
>
>
> Which storage domain format are you using? V5? V4?
>
> I'm using storage format V5 on FC.

Fixing the metadata is not easy.

First you have to find the volumes related to this disk. You can find the disk
uuid and storage domain uuid in the engine UI, and then you can find the
volumes like this:

lvs -o vg_name,lv_name,tags | grep disk-uuid

For every lv, you will have a tag MD_N, where N is a number. This is the slot
number in the metadata volume.

You need to calculate the offset of the metadata area for every volume using:

offset = 1024*1024 + 8192 * N

Then you can copy the metadata block using:

dd if=/dev/vg-name/metadata bs=512 count=1 skip=$offset conv=skip_bytes > lv-name.meta
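
For example, for a volume tagged MD_42 (the slot number here is only
illustrative), this becomes:

N=42                               # slot number from the MD_N tag on the lv
offset=$((1024*1024 + 8192 * N))   # metadata area offset in bytes
dd if=/dev/vg-name/metadata bs=512 count=1 skip=$offset conv=skip_bytes > lv-name.meta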

Please share these files.

This part is not needed in 4.4, where we have a new StorageDomain dump API that
can find the same info in one command:

vdsm-client StorageDomain dump sd_id=storage-domain-uuid | \
jq '.volumes | .[] | select(.image=="disk-uuid")'

The second step is to see what the actual qcow2 chain is. Find the volume which
is the LEAF by grepping the metadata files. In some cases you may have more
than one LEAF (which may be the problem).
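
A minimal sketch for this step, assuming the metadata files were dumped into
the current directory as *.meta files:

grep -l 'VOLTYPE=LEAF' *.meta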

Then activate all volumes using:

lvchange -ay vg-name/lv-name

Now you can get the backing chain using qemu-img and the LEAF volume.

qemu-img info --backing-chain /dev/vg-name/lv-name

If you have more than one LEAF, run this on all LEAFs. Only one of them will be
correct.

Please share also output of qemu-img.

Once we are finished with the volumes, deactivate them:

lvchange -an vg-name/lv-name

Based on the output, we can tell what the real chain is, what chain the vdsm
metadata sees, and what fix is required.

Nir

>
> Thanks.
>
>
> Thanks for any hint or help.
>
> rgds , arsene
>
>
> --
>
>
> Arsène Gschwind <
>
> arsene.gschw...@unibas.ch
>
> >
>
> Universitaet Basel
>
> ___
>
> Users mailing list --
>
> users@ovirt.org
>
>
> To unsubscribe send an email to
>
> users-le...@ovirt.org
>
>
> Privacy Statement:
>
> https://www.ovirt.org/privacy-policy.html
>
>
> oVirt Code of Conduct:
>
> https://www.ovirt.org/community/about/community-guidelines/
>
>
> List Archives:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WZ6KO2LVD3ZA2JNNIHJRCXG65HO4LMZ/
>
>
>
> --
>
> Arsène Gschwind
> Fa. Sapify AG im Auftrag der universitaet Basel
> IT Services
> Klinelbergstr. 70 | CH-4056 Basel | Switzerland
> Tel: +41 79 449 25 63 | http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5DZ2TEEFGFGTACLMPVZXSQO7AXJGIB37/

