[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-09-22 Thread Vojtech Juranek
On Wednesday, 22 September 2021 18:09:28 CEST Shantur Rathore wrote:
> I have actually tried many types of storage now and all have this issue.

This is weird. Could you please use file-based storage (e.g. NFS) and post here 
the full exceptions from the vdsm log (/var/log/vdsm/vdsm.log) and the qemu log 
(/var/log/libvirt/qemu/vm_name.log) from the host which runs the VM? Hopefully 
this will give us some hint what the real issue is.
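For example, something like this should pull out the relevant parts on the host 
(a minimal sketch; the grep pattern is only a suggestion):

  # run on the host which ran the VM; adjust the VM name
  grep -B 5 -A 30 'Traceback\|ERROR' /var/log/vdsm/vdsm.log > vdsm-errors.txt
  cp /var/log/libvirt/qemu/vm_name.log qemu-vm.log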
Thanks
Vojta

> 
> I am out of ideas what to do
> 
> On Wed, Sep 22, 2021 at 4:39 PM Shantur Rathore
>  wrote:
> >
> > Hi Nir,
> >
> > Just to report.
> > As suggested, I created a Posix compliant storage domain with CephFS
> > and copied my templates to CephFS.
> > Now I created VMs from CephFS templates and the storage error happens
> > again.
> > As I understand, the storage growth issue is only on iSCSI.
> >
> > Am I doing something wrong?
> >
> > Kind regards,
> > Shantur
> >
> > On Wed, Aug 11, 2021 at 2:42 PM Nir Soffer  wrote:
> > >
> > > On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas  wrote:
> > > >
> > > > On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik 
> > > > wrote:
> > > >>
> > > >> > If your vm is temporary and you like to drop the data written
> > > >> > while the vm is running, you
> > > >> > could use a temporary disk based on the template. This is called a
> > > >> > "transient disk" in vdsm.
> > > >> >
> > > >> > Arik, maybe you remember how transient disks are used in engine?
> > > >> > Do we have an API to run a VM once, dropping the changes to the
> > > >> > disk done while the VM was running?
> > > >>
> > > >> I think that's how stateless VMs work
> > > >
> > > > +1
> > > > It doesn't work exactly like Nir wrote above - stateless VMs that are
> > > > thin-provisioned would have a qcow volume on top of each template's
> > > > volume and when they start, their active volume would be a qcow
> > > > volume on top of the aforementioned qcow volume and that active
> > > > volume will be removed when the VM goes down
> > > > But yeah, stateless VMs are intended for such use case
> > >
> > > I was referring to transient disks - created in vdsm:
> > > https://github.com/oVirt/vdsm/blob/45903d01e142047093bf844628b5d90df12b6ffb/lib/vdsm/virt/vm.py#L3789
> > >
> > > This creates a *local* temporary file using qcow2 format, using the
> > > disk on shared storage as a backing file.
> > >
> > > Maybe this is not used by engine?
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UEXYH2IGNDWWYEHEHKLAREJS74LMXUI/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EWFNVMHJFES5CICXVUIRDAYAOQSB4Y57/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Tommy Sway
OK, thank you!




-Original Message-
From: Vojtech Juranek  
Sent: Thursday, September 23, 2021 1:22 PM
To: Tommy Sway 
Cc: 'users' 
Subject: Re: [ovirt-users] Re: about the vm disk type

> According to what you said, the qemu-img info tool was used to query the
> result.
> On block storage, these are the results:
> if preallocated, the storage format is RAW;

correct

> If Thin mode, is QCOW2;

correct

> If the VM is cloned from a template in preallocated mode, the disk 
> consists of two files: the bottom back-end file is RAW, and the front 
> end file is QCOW2.

correct, the second one is a snapshot (so that the template image is not
modified), and snapshots are always qcow2

> If the vm is cloned from a Thin template, I haven't had time to test this.

both disks should be qcow2

> According to the above classification, the performance of QCOW2 is 
> worse than that of RAW format.
> Therefore, using preallocation mode for Block storage can not only 
> remove interference at the file system level, but also achieve 
> performance advantages in file types, thus achieving the best I/O effect.
> 
> The last sentence was the result I really wanted, as I was running IO 
> heavy systems on OLVM.

if you want to use only the raw file format, you should use preallocated disks
on block storage (on file storage it doesn't matter) and avoid snapshots, as
snapshots are always in qcow2 format
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X6QK5FNWU2ZLQHNCHWZDFZTXYJYUVKOI/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Vojtech Juranek
> According to what you said, the qemu-img info tool was used to query the result.
> On block storage, these are the results:
> if preallocated, the storage format is RAW;

correct

> If Thin mode, is QCOW2;

correct

> If the VM is cloned from a template in preallocated mode, the disk consists
> of two files: the bottom back-end file is RAW, and the front end file is
> QCOW2.

correct, the second one is a snapshot (so that the template image is not 
modified), and snapshots are always qcow2

> If the vm is cloned from a Thin template, I haven't had time to test this.

both disks should be qcow2

> According to the above classification, the performance of QCOW2 is worse
> than that of RAW format.
> Therefore, using preallocation mode for Block storage can not only remove
> interference at the file system level, but also achieve performance
> advantages in file types, thus achieving the best I/O effect.
> 
> The last sentence was the result I really wanted, as I was running IO heavy
> systems on OLVM.

if you want to use only the raw file format, you should use preallocated disks on 
block storage (on file storage it doesn't matter) and avoid snapshots, as 
snapshots are always in qcow2 format
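You can verify a chain yourself with qemu-img; a minimal sketch with a 
hypothetical volume path (--backing-chain prints every layer, so a snapshotted 
disk shows a qcow2 top layer over the raw base):

  # placeholder path for the active volume of a disk on a block domain
  qemu-img info --backing-chain /dev/SD_UUID/VOL_UUID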

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SUDW42FEFBDFUUOYUOX6QC2ZRPEWPMMW/


[ovirt-users] Re: about the OVF_STORE and the xleases volume

2021-09-22 Thread Tommy Sway
Thank you!

lrwxrwxrwx. 1 vdsm kvm 45 Jun 12 23:42 ids ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/ids
lrwxrwxrwx. 1 vdsm kvm 47 Jun 12 23:42 inbox ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/inbox
lrwxrwxrwx. 1 vdsm kvm 48 Jun 12 23:42 leases ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/leases
lrwxrwxrwx. 1 vdsm kvm 48 Jun 12 23:42 master ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/master
lrwxrwxrwx. 1 vdsm kvm 50 Jun 12 23:42 metadata ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/metadata
lrwxrwxrwx. 1 vdsm kvm 48 Jun 12 23:42 outbox ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/outbox
lrwxrwxrwx. 1 vdsm kvm 49 Jun 12 23:42 xleases ->
/dev/41bc1316-5c1d-4836-a103-5acbbf0c47a1/xleases



And what is the difference between the xleases and leases volumes in the list?






-Original Message-
From: Vojtech Juranek  
Sent: Wednesday, September 22, 2021 6:12 PM
To: users@ovirt.org
Cc: Tommy Sway 
Subject: Re: [ovirt-users] about the OVF_STORE and the xleases volume

On Wednesday, 22 September 2021 10:39:34 CEST Tommy Sway wrote:
> I wonder if the xleases volume mentioned here refers to ovf_store ?

No, xleases is part of the disk space used internally by oVirt (to manage
concurrent access to the resources, e.g. disk image) and shouldn't be
touched by the user.

OVF store is Open Virtualization Format [1] and it's used for storing these
files, see [2] for more details.

[1] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[2] https://www.ovirt.org/develop/release-management/features/storage/importstoragedomain.html

> 
> 
> 
> 
> * A new xleases volume to support VM leases - this feature adds the
> ability to acquire a lease per virtual machine on shared storage 
> without attaching the lease to a virtual machine disk.
> 
> A VM lease offers two important capabilities:
> 
> * Avoiding split-brain.
> * Starting a VM on another host if the original host becomes
> non-responsive, which improves the availability of HA VMs.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KP2YXJWQ4TH2ZHEWTPNRD6I7EQMTVQI7/


[ovirt-users] Re: about the power management of the hosts

2021-09-22 Thread Tommy Sway
My test scenario is as follows: two physical machines in a cluster are not 
configured with power management, and the VMs are enabled for high availability. 

After one host was stopped, the VMs restarted on the other physical server. The 
physical machine that had been stopped was then started back up, and I found no 
abnormality.

I was wondering under what conditions you said there might be problems with the 
storage layer?


From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Thursday, September 23, 2021 12:55 AM
To: Tommy Sway ; 'Klaas Demter' ; 
users@ovirt.org
Subject: [ovirt-users] Re: about the power management of the hosts

 

It is possible, but without the SPM host being fenced you won't be able to do 
any storage-related tasks. Even snapshot management will be impossible 
without manual intervention (reboot the host from the remote management and then 
mark the host as restarted).

Best Regards,

Strahil Nikolov

On Wed, Sep 22, 2021 at 5:42, Tommy Sway wrote:

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXDDVK3QMNLSIFBO4I6MNKN53VUM4E7Q/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X4G7IW5ZWYNTUW3NQZWMWNBV3ZI7ST5V/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Tommy Sway
I just want to know how to check the type.


According to what you said, the qemu-img info tool was used to query the result.
On block storage, these are the results:
if preallocated, the storage format is RAW;
If Thin mode, is QCOW2;
If the VM is cloned from a template in preallocated mode, the disk consists
of two files: the bottom back-end file is RAW, and the front end file is
QCOW2.
If the vm is cloned from a Thin template, I haven't had time to test this.

According to the above classification, the performance of QCOW2 is worse
than that of RAW format. 
Therefore, using preallocation mode for Block storage can not only remove
interference at the file system level, but also achieve performance
advantages in file types, thus achieving the best I/O effect.

The last sentence was the result I really wanted, as I was running IO heavy
systems on OLVM.

Thank you!





-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Vojtech
Juranek
Sent: Wednesday, September 22, 2021 8:00 PM
To: Tommy Sway 
Cc: 'users' 
Subject: [ovirt-users] Re: about the vm disk type

Sorry for my misleading previous answer, I thought you were asking how to
create a pre/thin-allocated disk. As for the format, you cannot choose it. As
for the defaults, Benny already answered this.

> I also have block storage on my environment. How do I observe the type 
> of a vm image (LV)?

What exactly do you mean? Do you ask about the image format (raw vs. cow)? If
so, you can deduce it from Benny's answer, based on which domain the disk is
stored on and whether it's pre-allocated or not. If you are asking how to check
it on the storage, you can check the disk metadata or use the qemu-img utility
directly (qemu-img info /path/to/the/disk)

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4QZVJ46HFIXMBEVPQBQIK5F6PH7SEYKS/


[ovirt-users] Re: Hosted Engine cluster version compatib.

2021-09-22 Thread Diggy Mc
Hosted Engine properties cannot be edited since the upgrade.  I also restarted 
the HE and it made no difference.  Even just opening the HE properties and 
clicking OKAY without changing anything yields the following error:

There was an attempt to change Hosted Engine VM values that are locked.


Will the forthcoming 4.4.8.6 HE update (mentioned in a separate thread) fix the 
cluster compatibility upgrade problem?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WPMKJ7VQG5GCANK6LSG4K774747NCPOM/


[ovirt-users] Re: about the power management of the hosts

2021-09-22 Thread Strahil Nikolov via Users
It is possible, but without the SPM host being fenced you won't be able to do 
any storage-related tasks. Even snapshot management will be impossible 
without manual intervention (reboot the host from the remote management and then 
mark the host as restarted).
Best Regards,
Strahil Nikolov

On Wed, Sep 22, 2021 at 5:42, Tommy Sway wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXDDVK3QMNLSIFBO4I6MNKN53VUM4E7Q/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J5R6TNYANNA5LLTDUBETDI4H7KJPK3X7/


[ovirt-users] Re: [ANN] oVirt 4.4.8 Async update #1

2021-09-22 Thread Michal Skrivanek
Hi all,
please be aware of bug https://bugzilla.redhat.com/show_bug.cgi?id=2005221 that 
unfortunately removes the timezone info (Hardware Clock Time Offset) in VM 
properties. It matters mostly to Windows VMs since they use “localtime”, so 
after a reboot the guest time will probably be wrong. It also breaks the Cluster 
Level update with HE as described in the bug.
Unfortunately there’s no simple way to restore it, since the information is 
lost on the 4.4.8 upgrade; if the time matters to you, you have to set it again 
for each VM.
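Besides editing each VM in the UI, this can be scripted over the REST API; a 
rough sketch, assuming your engine accepts the time_zone element on a VM update 
(URL, credentials, VM id and zone name below are placeholders; Windows VMs 
expect a Windows zone name):

  curl -k -u admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' \
       -X PUT https://engine.example.com/ovirt-engine/api/vms/VM_ID \
       -d '<vm><time_zone><name>GMT Standard Time</name></time_zone></vm>'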

Please refrain from upgrading engine to 4.4.8.5 and wait for 4.4.8.6
Nodes/hosts are not affected in any way.

Thanks,
michal


> On 27. 8. 2021, at 8:25, Sandro Bonazzola  wrote:
> 
> oVirt 4.4.8 Async update #1
> On August 26th 2021 the oVirt project released an async update to the 
> following packages:
> ovirt-ansible-collection 1.6.2
> ovirt-engine 4.4.8.5
> ovirt-release44 4.4.8.1
> oVirt Node 4.4.8.1
> oVirt Appliance 4.4-20210826
> 
> Fixing the following bugs:
> Bug 1947709 - [IPv6] HostedEngineLocal is an isolated libvirt network, 
> breaking upgrades from 4.3
> Bug 1966873 - [RFE] Create Ansible role for remove stale LUNs example 
> remove_mpath_device.yml
> Bug 1997663 - Keep cinderlib dependencies optional for 4.4.8
> Bug 1996816 - Cluster upgrade fails with: 'OAuthException invalid_grant: The 
> provided authorization grant for the auth code has expired.'
> 
> oVirt Node Changes:
> - Consume above oVirt updates
> - GlusterFS 8.6: https://docs.gluster.org/en/latest/release-notes/8.6/
> - Fixes for:
> CVE-2021-22923 curl: Metalink download sends credentials
> CVE-2021-22922 curl: Content not matching hash in Metalink is not being discarded
> 
> 
> Full diff list:
> --- ovirt-node-ng-image-4.4.8.manifest-rpm    2021-08-19 07:57:44.081590739 +0200
> +++ ovirt-node-ng-image-4.4.8.1.manifest-rpm  2021-08-27 08:11:54.863736688 +0200
> @@ -2,7 +2,7 @@
> -ModemManager-glib-1.10.8-3.el8.x86_64
> -NetworkManager-1.32.6-1.el8.x86_64
> -NetworkManager-config-server-1.32.6-1.el8.noarch
> -NetworkManager-libnm-1.32.6-1.el8.x86_64
> -NetworkManager-ovs-1.32.6-1.el8.x86_64
> -NetworkManager-team-1.32.6-1.el8.x86_64
> -NetworkManager-tui-1.32.6-1.el8.x86_64
> +ModemManager-glib-1.10.8-4.el8.x86_64
> +NetworkManager-1.32.8-1.el8.x86_64
> +NetworkManager-config-server-1.32.8-1.el8.noarch
> +NetworkManager-libnm-1.32.8-1.el8.x86_64
> +NetworkManager-ovs-1.32.8-1.el8.x86_64
> +NetworkManager-team-1.32.8-1.el8.x86_64
> +NetworkManager-tui-1.32.8-1.el8.x86_64
> @@ -94 +94 @@
> -curl-7.61.1-18.el8.x86_64
> +curl-7.61.1-18.el8_4.1.x86_64
> @@ -106,4 +106,4 @@
> -device-mapper-1.02.177-5.el8.x86_64
> -device-mapper-event-1.02.177-5.el8.x86_64
> -device-mapper-event-libs-1.02.177-5.el8.x86_64
> -device-mapper-libs-1.02.177-5.el8.x86_64
> +device-mapper-1.02.177-6.el8.x86_64
> +device-mapper-event-1.02.177-6.el8.x86_64
> +device-mapper-event-libs-1.02.177-6.el8.x86_64
> +device-mapper-libs-1.02.177-6.el8.x86_64
> @@ -140,36 +140,36 @@
> -fence-agents-all-4.2.1-74.el8.x86_64
> -fence-agents-amt-ws-4.2.1-74.el8.noarch
> -fence-agents-apc-4.2.1-74.el8.noarch
> -fence-agents-apc-snmp-4.2.1-74.el8.noarch
> -fence-agents-bladecenter-4.2.1-74.el8.noarch
> -fence-agents-brocade-4.2.1-74.el8.noarch
> -fence-agents-cisco-mds-4.2.1-74.el8.noarch
> -fence-agents-cisco-ucs-4.2.1-74.el8.noarch
> -fence-agents-common-4.2.1-74.el8.noarch
> -fence-agents-compute-4.2.1-74.el8.noarch
> -fence-agents-drac5-4.2.1-74.el8.noarch
> -fence-agents-eaton-snmp-4.2.1-74.el8.noarch
> -fence-agents-emerson-4.2.1-74.el8.noarch
> -fence-agents-eps-4.2.1-74.el8.noarch
> -fence-agents-heuristics-ping-4.2.1-74.el8.noarch
> -fence-agents-hpblade-4.2.1-74.el8.noarch
> -fence-agents-ibmblade-4.2.1-74.el8.noarch
> -fence-agents-ifmib-4.2.1-74.el8.noarch
> -fence-agents-ilo-moonshot-4.2.1-74.el8.noarch
> -fence-agents-ilo-mp-4.2.1-74.el8.noarch
> -fence-agents-ilo-ssh-4.2.1-74.el8.noarch
> -fence-agents-ilo2-4.2.1-74.el8.noarch
> -fence-agents-intelmodular-4.2.1-74.el8.noarch
> -fence-agents-ipdu-4.2.1-74.el8.noarch
> -fence-agents-ipmilan-4.2.1-74.el8.noarch
> -fence-agents-kdump-4.2.1-74.el8.x86_64
> -fence-agents-mpath-4.2.1-74.el8.noarch
> -fence-agents-redfish-4.2.1-74.el8.x86_64
> -fence-agents-rhevm-4.2.1-74.el8.noarch
> -fence-agents-rsa-4.2.1-74.el8.noarch
> -fence-agents-rsb-4.2.1-74.el8.noarch
> -fence-agents-sbd-4.2.1-74.el8.noarch
> -fence-agents-scsi-4.2.1-74.el8.noarch
> -fence-agents-vmware-rest-4.2.1-74.el8.noarch
> -fence-agents-vmware-soap-4.2.1-74.el8.noarch
> -fence-agents-wti-4.2.1-74.el8.noarch
> 

[ovirt-users] Re: Hosted Engine cluster version compatib.

2021-09-22 Thread Michal Skrivanek
please check bug https://bugzilla.redhat.com/show_bug.cgi?id=2005221

It seems we ended up with a nasty bug in 4.4.8. We will be releasing 4.4.8.6 
shortly, but it won’t fix the problem for those who already upgraded.

If you have a Windows VM, can you please check "Hardware Clock Time Offset" in 
Edit VM/System, whether it has the default GMT or a different value that you 
have set before - assuming that you did. For Windows, people usually set it to 
their local time zone.

For the HE problem, can you try to actually edit that VM, change something 
(something that’s changeable, e.g. the description) and save it? 
Regardless of whether that works, can you try to restart the HE (by moving to 
global maintenance, or, if you don’t care too much, just shut it down from 
within the HE VM and let it be restarted automatically)?
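Roughly, on one of the hosted-engine hosts (only a sketch; check 
hosted-engine --vm-status between steps):

  # option 1: controlled restart under global maintenance
  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown
  hosted-engine --vm-start
  hosted-engine --set-maintenance --mode=none
  # option 2: skip maintenance, shut the engine VM down from inside it
  # and let the HA agent restart it automatically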

If you have any additional logs/observation please add that to the bug

Thanks,
michal

> On 17. 9. 2021, at 15:33, Andrea Chierici  
> wrote:
> 
> Really no suggestions at all?
> 
> Andrea
> 
> On 15/09/2021 12:33, Andrea Chierici wrote:
>> Dear all,
>> I have just updated my ovirt installation, with self hosted engine, from 
>> 4.4.5 to 4.4.8.5-1.
>> Everything went smoothly and in a few minutes the system was back up and 
>> running.
>> A little issue is still puzzling me.
>> I am asked to update from 4.5 to 4.6 the cluster and data center 
>> compatibility level. When I try to issue the command from the cluster config 
>> I get this error:
>> 
>>> Error while executing action: Cannot update cluster because the update 
>>> triggered update of the VMs/Templates and it failed for the following: 
>>> HostedEngine. To fix the issue, please go to each of them, edit, change the 
>>> Custom Compatibility Version (or other fields changed previously in the 
>>> cluster dialog) and press OK. If the save does not pass, fix the dialog 
>>> validation. After successful cluster update, you can revert your Custom 
>>> Compatibility Version change (or other changes). If the problem still 
>>> persists, you may refer to the engine.log file for further details.
>> 
>> It's very strange because the config of the hostedengine is "plain" and 
>> there are no constraints on the compatibility version, as you can see in this 
>> picture:
>> [inline screenshot omitted]
>> 
>> In any case, if I try to force compatibility with 4.6 I get this error:
>> 
>>> Error while executing action:
>>> 
>>> HostedEngine:
>>> 
>>> There was an attempt to change Hosted Engine VM values that are locked.
>> 
>> So I am stuck. Not a big deal at the moment, but sooner or later I will have 
>> to do this upgrade and I don't know where I am wrong.
>> 
>> Can anybody give a clue?
>> Thanks in advance,
>> 
>> 
>> Andrea
>> 
>> -- 
>> Andrea Chierici - INFN-CNAF  
>> Viale Berti Pichat 6/2, 40127 BOLOGNA
>> Office Tel: +39 051 2095463  
>> SkypeID ataruz
>> --
>> 
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BQMHMKNLPANYOIWDSLSPBB3UUY4FXRNR/
> 
> 
> -- 
> Andrea Chierici - INFN-CNAF   
> Viale Berti Pichat 6/2, 40127 BOLOGNA
> Office Tel: +39 051 2095463   
> SkypeID ataruz
> --
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2WDJEQYIDZNNIKIVBFAFDX5DZHTLOWST/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2XV7REDJ2ACAGLZHFCER4DAUBNYBYWGG/


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-09-22 Thread Shantur Rathore
I have actually tried many types of storage now and all have this issue.

I am out of ideas what to do

On Wed, Sep 22, 2021 at 4:39 PM Shantur Rathore
 wrote:
>
> Hi Nir,
>
> Just to report.
> As suggested, I created a Posix compliant storage domain with CephFS
> and copied my templates to CephFS.
> Now I created VMs from CephFS templates and the storage error happens again.
> As I understand, the storage growth issue is only on iSCSI.
>
> Am I doing something wrong?
>
> Kind regards,
> Shantur
>
> On Wed, Aug 11, 2021 at 2:42 PM Nir Soffer  wrote:
> >
> > On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas  wrote:
> > >
> > >
> > >
> > > On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik  wrote:
> > >>
> > >> > If your vm is temporary and you like to drop the data written while
> > >> > the vm is running, you
> > >> > could use a temporary disk based on the template. This is called a
> > >> > "transient disk" in vdsm.
> > >> >
> > >> > Arik, maybe you remember how transient disks are used in engine?
> > >> > Do we have an API to run a VM once, dropping the changes to the disk
> > >> > done while the VM was running?
> > >>
> > >> I think that's how stateless VMs work
> > >
> > >
> > > +1
> > > It doesn't work exactly like Nir wrote above - stateless VMs that are 
> > > thin-provisioned would have a qcow volume on top of each template's 
> > > volume and when they start, their active volume would be a qcow volume 
> > > on top of the aforementioned qcow volume and that active volume will be 
> > > removed when the VM goes down
> > > But yeah, stateless VMs are intended for such use case
> >
> > I was referring to transient disks - created in vdsm:
> > https://github.com/oVirt/vdsm/blob/45903d01e142047093bf844628b5d90df12b6ffb/lib/vdsm/virt/vm.py#L3789
> >
> > This creates a *local* temporary file using qcow2 format, using the
> > disk on shared
> > storage as a backing file.
> >
> > Maybe this is not used by engine?
> >
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UEXYH2IGNDWWYEHEHKLAREJS74LMXUI/


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-09-22 Thread Shantur Rathore
Hi Nir,

Just to report.
As suggested, I created a Posix compliant storage domain with CephFS
and copied my templates to CephFS.
Now I created VMs from CephFS templates and the storage error happens again.
As I understand, the storage growth issue is only on iSCSI.

Am I doing something wrong?

Kind regards,
Shantur

On Wed, Aug 11, 2021 at 2:42 PM Nir Soffer  wrote:
>
> On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas  wrote:
> >
> >
> >
> > On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik  wrote:
> >>
> >> > If your vm is temporary and you like to drop the data written while
> >> > the vm is running, you
> >> > could use a temporary disk based on the template. This is called a
> >> > "transient disk" in vdsm.
> >> >
> >> > Arik, maybe you remember how transient disks are used in engine?
> >> > Do we have an API to run a VM once, dropping the changes to the disk
> >> > done while the VM was running?
> >>
> >> I think that's how stateless VMs work
> >
> >
> > +1
> > It doesn't work exactly like Nir wrote above - stateless VMs that are 
> > thin-provisioned would have a qcow volume on top of each template's volume 
> > and when they start, their active volume would be a qcow volume on top of 
> > the aforementioned qcow volume and that active volume will be removed when 
> > the VM goes down
> > But yeah, stateless VMs are intended for such use case
>
> I was referring to transient disks - created in vdsm:
> https://github.com/oVirt/vdsm/blob/45903d01e142047093bf844628b5d90df12b6ffb/lib/vdsm/virt/vm.py#L3789
>
> This creates a *local* temporary file using qcow2 format, using the
> disk on shared
> storage as a backing file.
>
> Maybe this is not used by engine?
>
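For reference, the transient disk Nir describes above is just a local qcow2 
overlay whose backing file is the shared disk; a minimal sketch with 
hypothetical paths (writes land in the overlay and are dropped when the file 
is deleted):

  qemu-img create -f qcow2 \
      -b /rhev/data-center/mnt/SERVER:_export/SD_UUID/images/IMG_UUID/VOL_UUID \
      -F raw /var/tmp/transient-overlay.qcow2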
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZEMCITVILEFHZ2R4QIVUJ26TL6LYMDRY/


[ovirt-users] Managed Block Storage and Templates

2021-09-22 Thread Shantur Rathore
Hi all,

Anyone tried using Templates with Managed Block Storage?
I created a VM on MBS and then took a snapshot.
This worked but as soon as I created a Template from snapshot, the
template got created but there is no disk attached to the template.

Anyone seeing something similar?

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6SPHZ3XOSXRYE72SWRANTXZCA27RKDY/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Vojtech Juranek
Sorry for my misleading previous answer, I thought you were asking how to 
create a pre/thin-allocated disk. As for the format, you cannot choose it. As 
for the defaults, Benny already answered this.

> I also have block storage on my environment. How do I observe the type of a
> vm image (LV)?

What exactly do you mean? Do you ask about the image format (raw vs. cow)? If 
so, you can deduce it from Benny's answer, based on which domain the disk is 
stored on and whether it's pre-allocated or not. If you are asking how to check 
it on the storage, you can check the disk metadata or use the qemu-img utility 
directly (qemu-img info /path/to/the/disk)
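For example (paths are placeholders; on a block domain the volume is an LV 
under the storage domain VG):

  # thin-provisioned disk on a block domain reports "file format: qcow2"
  qemu-img info /dev/SD_UUID/VOL_UUID
  # preallocated disk, or a base disk on a file domain, reports "file format: raw"
  qemu-img info /rhev/data-center/mnt/SERVER:_path/SD_UUID/images/IMG_UUID/VOL_UUID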

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEIAVO4SPBEFHTEP7DK5ZIRDB77EGAMA/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Tommy Sway
I also have block storage on my environment. How do I observe the type of a vm 
image (LV)?

From: Benny Zlotnik 
Sent: Wednesday, September 22, 2021 7:30 PM
To: Tommy Sway 
Cc: Vojtech Juranek ; users 
Subject: Re: [ovirt-users] Re: about the vm disk type

file-based domains use RAW for both settings, thin-provisioned on block domain 
will use qcow2, otherwise RAW will be used

On Wed, Sep 22, 2021 at 1:22 PM Tommy Sway wrote:

For example:

[inline screenshot omitted]

And I check the file on the storage:

[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# cat 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
CAP=10737418240
CTIME=1632305740
DESCRIPTION={"DiskAlias":"test09222_Disk1","DiskDescription":""}
DISKTYPE=DATA
DOMAIN=f77091d9-aabc-42db-87b1-b8299765482e
FORMAT=RAW
GEN=0
IMAGE=51dcbfae-1100-4e43-9e0a-bb8c578623d7
LEGALITY=LEGAL
PUUID=----
TYPE=SPARSE
VOLTYPE=LEAF
EOF
[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# ll
total 1025
-rw-rw. 1 vdsm kvm 10737418240 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0
-rw-rw. 1 vdsm kvm 1048576 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0.lease
-rw-r--r--. 1 vdsm kvm 303 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# du -h ./9e4dc022-c450-4f85-89f5-233fa41c07d0
0   ./9e4dc022-c450-4f85-89f5-233fa41c07d0
[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#

-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Tommy Sway
Sent: Wednesday, September 22, 2021 6:07 PM
To: 'Vojtech Juranek' ; users@ovirt.org
Subject: [ovirt-users] Re: about the vm disk type

You mean if it's pre-allocated, it must be RAW, not qcow2?

The documentation only states that RAW must be pre-allocated, but it does not 
say that qcow2 cannot use pre-allocation.

-Original Message-
From: Vojtech Juranek 
Sent: Wednesday, September 22, 2021 6:04 PM
To: users@ovirt.org
Cc: Tommy Sway 
Subject: Re: [ovirt-users] about the vm disk type

On Wednesday, 22 September 2021 09:55:26 CEST Tommy Sway wrote:
> When I create the VM's image disk, I am not asked to select the 
> following type of disk.

Actually you are, it's "Allocation Policy" drop down menu.
Thin provisioned == qcow format
Preallocated == raw

> What is the default value ?

Thin provisioned, i.e. qcow.

> Thanks.
> 
> QCOW2 Formatted Virtual Machine Storage
> 
> QCOW2 is a storage format for virtual disks. QCOW stands for QEMU 
> copy-on-write. The QCOW2 format decouples the physical storage layer 
> from the virtual layer by adding a mapping between logical and physical blocks.
> Each logical block is mapped to its physical offset, which enables 
> storage over-commitment and virtual machine snapshots, where each QCOW 
> volume only represents changes made to an underlying virtual disk.
> 
> The initial mapping points all logical blocks to the offsets in the 
> backing file or volume. When a virtual machine writes data to a QCOW2 
> volume after a snapshot, the relevant block is read from the backing 
> volume, modified with the new information and written into a new 
> snapshot QCOW2 volume. Then the map is updated to point to the new place.
> 
> Raw
> 
> The raw storage format has a performance advantage over QCOW2 in that 
> no formatting is applied to virtual disks stored in the raw format.
> Virtual machine data operations on virtual disks stored in raw format 
> require no additional work from hosts. When a virtual machine writes 
> data to a given offset in its virtual disk, the I/O is written to the 
> same offset on the backing file or logical volume.
> 
> Raw format requires that the entire space of the defined image be 
> preallocated unless using externally managed thin provisioned LUNs 
> from a storage array.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Benny Zlotnik
file-based domains use RAW for both settings, thin-provisioned on block
domain will use qcow2, otherwise RAW will be used

On Wed, Sep 22, 2021 at 1:22 PM Tommy Sway  wrote:

> For example :
>
> [inline screenshot omitted]
>
> And I check the file on the storage:
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# cat
> 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
> CAP=10737418240
> CTIME=1632305740
> DESCRIPTION={"DiskAlias":"test09222_Disk1","DiskDescription":""}
> DISKTYPE=DATA
> DOMAIN=f77091d9-aabc-42db-87b1-b8299765482e
> *FORMAT=RAW*
> GEN=0
> IMAGE=51dcbfae-1100-4e43-9e0a-bb8c578623d7
> LEGALITY=LEGAL
> PUUID=----
> TYPE=SPARSE
> VOLTYPE=LEAF
> EOF
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# ll
> total 1025
> -rw-rw. 1 vdsm kvm 10737418240 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0
> -rw-rw. 1 vdsm kvm 1048576 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0.lease
> -rw-r--r--. 1 vdsm kvm 303 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# du -h ./9e4dc022-c450-4f85-89f5-233fa41c07d0
> 0   ./9e4dc022-c450-4f85-89f5-233fa41c07d0
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of Tommy Sway
> Sent: Wednesday, September 22, 2021 6:07 PM
> To: 'Vojtech Juranek' ; users@ovirt.org
> Subject: [ovirt-users] Re: about the vm disk type
>
> You mean if it's pre-allocated, it must be RAW, not Qcow2?
>
> The documentation only states that RAW must be pre-allocated, but it does
> not say that qCOW2 cannot use pre-allocation.
>
> -Original Message-
> From: Vojtech Juranek 
> Sent: Wednesday, September 22, 2021 6:04 PM
> To: users@ovirt.org
> Cc: Tommy Sway 
> Subject: Re: [ovirt-users] about the vm disk type
>
> On Wednesday, 22 September 2021 09:55:26 CEST Tommy Sway wrote:
> > When I create the VM's image disk, I am not asked to select the
> > following type of disk.
>
> Actually you are, it's "Allocation Policy" drop down menu.
> Thin provisioned == qcow format
> Preallocated == raw
>
> > What is the default value ?
>
> Thin provisioned, i.e. qcow.
>
> > Thanks.
> >
> > QCOW2 Formatted Virtual Machine Storage
> >
> > QCOW2 is a storage format for virtual disks. QCOW stands for QEMU
> > copy-on-write. The QCOW2 format decouples the physical storage layer
> > from the virtual layer by adding a mapping between logical and physical blocks.
> > Each logical block is mapped to its physical offset, which enables
> > storage over-commitment and virtual machine snapshots, where each QCOW
> > volume only represents changes made to an underlying virtual disk.
> >
> > The initial mapping points all logical blocks to the offsets in the
> > backing file or volume. When a virtual machine writes data to a QCOW2
> > volume after a snapshot, the relevant block is read from the backing
> > volume, modified with the new information and written into a new
> > snapshot QCOW2 volume. Then the map is updated to point to the new place.
> >
> > Raw
> >
> > The raw storage format has a performance advantage over QCOW2 in that
> > no formatting is applied to virtual disks stored in the raw format.
> > Virtual machine data operations on virtual disks stored in raw format
> > require no additional work from hosts. When a virtual machine writes
> > data to a given offset in its virtual disk, the I/O is written to the
> > same offset on the backing file or logical volume.
> >
> > Raw format requires that the entire space of the defined image be
> > preallocated unless using externally managed thin provisioned LUNs
> > from a storage array.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
> https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGJX4VUOYVBG6AWPKWVMILXINNOFFO2V/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 

[ovirt-users] Re: about the OVF_STORE and the xleases volume

2021-09-22 Thread Tommy Sway
*   The OVF_STORE disk will contain all the entities' configuration which
are candidates to be registered. The candidates are VMs and Templates which
have at least one disk in the Storage Domain, with the OVF contained in the
unregistered_ovf_of_entities table.

In my mind, OVF_STORE is also a disk that holds VM or Template configuration
information in XML format, which is used when importing data domains into a
new data center.

But why do I find more than one OVF_STORE disk in the same storage domain?


From: users-boun...@ovirt.org  On Behalf Of Tommy
Sway
Sent: Wednesday, September 22, 2021 6:58 PM
To: 'Vojtech Juranek' ; users@ovirt.org
Subject: [ovirt-users] Re: about the OVF_STORE and the xleases volume

 

I got it.

Today, oVirt supports importing ISO and Export Storage Domains, however,
there is no support for importing an existing Data Storage Domain. A Data
Storage Domain contains disk volumes and VMs/Templates OVF files. The OVF
file is an XML standard representing the VM/Template configuration including
disks, memory, CPU and more. Based on this information stored in the Storage
Domain we can revive entities such as disks, VMs and Templates in the setup
of any Data Center the Storage Domain will be attached to. The usability of
the feature might be useful for various use cases, here are some of them:

*   Recover after the loss of the oVirt Engine's database.
*   Transfer VMs between setups without the need to copy the data into
and out of the export domain.
*   Support migrating Storage Domains between different oVirt
installations.

Storage Domains that can be restored for VMs/Templates must contain
OVF_STORE disks. Since OVF_STORE disk is only supported from a 3.5v Data
Center, the Storage Domains that can be restored have to be managed in a
3.5v Data Center before the disaster. As long as the setup contains 3.5v
Data Centers, the Import Storage Domain feature will automatically be
supported for those Data Centers.


From: users-boun...@ovirt.org  On Behalf Of Tommy Sway
Sent: Wednesday, September 22, 2021 6:48 PM
To: 'Vojtech Juranek' ; users@ovirt.org
Subject: [ovirt-users] Re: about the OVF_STORE and the xleases volume

 

Are you referring to the description of this passage?

Detailed Description

VM/Template configurations (including disks info) are stored on the master
storage domain only for backup purposes and in order to provide the ability
to run VMs without having a running engine/db. This feature aims to change
the current place in which the OVFs are stored while using the existing
OvfAutoUpdater feature (asynchronous incremental OVF updates).
The expected benefits are:

1.  Having "self contained" Storage Domains which will enable to recover
in case of data loss (oVirt supports registration of unknown disks stored on
storage domain in the engine and adding VM from OVF configuration - so
having the VM OVF stored on the same Storage Domain of it's disks will allow
to recover the vm "completeness" from that Storage Domain to the oVirt
engine).
2.  Moving out from using the master_fs on the storage domain, as part
of this change the OVFs will be stored on a designated volume located on
each Storage Domain.
3.  Adding support for streaming files from the engine to vdsm (will be
discussed later on).

-Original Message-
From: Vojtech Juranek 
Sent: Wednesday, September 22, 2021 6:12 PM
To: users@ovirt.org
Cc: Tommy Sway 
Subject: Re: [ovirt-users] about the OVF_STORE and the xleases volume

 

On Wednesday, 22 September 2021 10:39:34 CEST Tommy Sway wrote:
> I wonder if the xleases volume mentioned here refers to ovf_store ?

No, xleases is part of the disk space used internally by oVirt (to manage
concurrent access to the resources, e.g. disk image) and shouldn't be
touched by the user.

OVF store is Open Virtualization Format [1] and it's used for storing these
files, see [2] for more details.

[1] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[2] https://www.ovirt.org/develop/release-management/features/storage/importstoragedomain.html

> * A new xleases volume to support VM leases - this feature adds the
> ability to acquire a lease per virtual machine on shared storage 
> without attaching the lease to a virtual machine disk.
> 
> A VM lease offers two important capabilities:
> 
> * Avoiding split-brain.
> * Starting a VM on another host if the original host becomes
> non-responsive, which improves the 

[ovirt-users] Re: Managed Block Storage issues

2021-09-22 Thread Benny Zlotnik
I see the rule is created in the logs:

MainProcess|jsonrpc/5::DEBUG::2021-09-22
10:39:37,504::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper)
call add_managed_udev_rule with
('ed1a0e9f-4d30-4896-b965-534861cc0c02',
'/dev/mapper/360014054b727813d1bc4d4cefdade7db') {}
MainProcess|jsonrpc/5::DEBUG::2021-09-22
10:39:37,505::udev::124::SuperVdsm.ServerCallback::(add_managed_udev_rule)
Creating rule 
/etc/udev/rules.d/99-vdsm-managed_ed1a0e9f-4d30-4896-b965-534861cc0c02.rules:
'SYMLINK=="mapper/360014054b727813d1bc4d4cefdade7db",
RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}"\n'

While we no longer test backends other than ceph, this used to work
back when we started and it worked for NetApp. Perhaps this rule is
incorrect, can you check this manually?
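A minimal manual check could look like this (device name taken from the log
above; the udevadm invocation is only a sketch):

  udevadm control --reload     # pick up the new rules file
  udevadm trigger --action=change --sysname-match='dm-*'
  ls -lL /dev/mapper/360014054b727813d1bc4d4cefdade7db   # expect vdsm:qemu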

regarding 2, can you please submit a bug?

On Wed, Sep 22, 2021 at 1:03 PM Shantur Rathore
 wrote:
>
> Hi all,
>
> I am trying to set up Managed block storage and have the following issues.
>
> My setup:
> Latest oVirt Node NG : 4.4.8
> Latest oVirt Engine : 4.4.8
>
> 1. Unable to copy to iSCSI based block storage
>
> I created a MBS with Synology UC3200 as a backend ( supported by
> Cinderlib ). It was created fine but when I try to copy disks to it,
> it fails.
> Upon looking at the logs from SPM, I found "qemu-img" failed with an
> error that it cannot open "/dev/mapper/xx" : Permission Error.
> Had a look through the code and digging out more, I saw that
> a. Sometimes /dev/mapper/ symlink isn't created ( log attached )
> b. The ownership to /dev/mapper/xx and /dev/dm-xx for the new
> device always stays at root:root
>
> I added a udev rule
> ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
> OWNER="vdsm", MODE="0660"
>
> and the disk copied correctly when /dev/mapper/x got created.
>
> 2. Copy progress finishes in UI very early than the actual qemu-img process.
> The UI shows the Copy process is completed successfully but it's
> actually still copying the image.
> This happens both for ceph and iscsi based mbs.
>
> Is there any known workaround to get iSCSI MBS working?
>
> Kind regards,
> Shantur
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6TMTW23SUAKR4UOXVSZKXHJY3PVMIDD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFELPIEEW2J4DVEBUNJPMQGMAR5JBKL4/


[ovirt-users] Re: about the OVF_STORE and the xleases volume

2021-09-22 Thread Tommy Sway
I got it.

 

Today, oVirt supports importing ISO and Export Storage Domains, however,
there is no support for importing an existing Data Storage Domain. A Data
Storage Domain contains disk volumes and VMs/Templates OVF files. The OVF
file is an XML standard representing the VM/Template configuration including
disks, memory, CPU and more. Based on this information stored in the Storage
Domain we can revive entities such as disks, VMs and Templates in the setup
of any Data Center the Storage Domain will be attached to. The usability of
the feature might be useful for various use cases, here are some of them:

*   Recover after the loss of the oVirt Engine's database.
*   Transfer VMs between setups without the need to copy the data into
and out of the export domain.
*   Support migrating Storage Domains between different oVirt
installations.

Storage Domains that can be restored for VMs/Templates must contain
OVF_STORE disks. Since OVF_STORE disk is only supported from a 3.5v Data
Center, the Storage Domains that can be restored have to be managed in a
3.5v Data Center before the disaster. As long as the setup contains 3.5v
Data Centers, the Import Storage Domain feature will automatically be
supported for those Data Centers.

From: users-boun...@ovirt.org  On Behalf Of Tommy
Sway
Sent: Wednesday, September 22, 2021 6:48 PM
To: 'Vojtech Juranek' ; users@ovirt.org
Subject: [ovirt-users] Re: about the OVF_STORE and the xleases volume

 

Are you referring to the description of this passage?


Detailed Description


VM/Template configurations (including disks info) are stored on the master
storage domain only for backup purposes and in order to provide the ability
to run VMs without having a running engine/db. This feature aims to change
the current place in which the OVFs are stored while using the existing
 OvfAutoUpdater feature (asynchronous incremental OVF updates).
The expected benefits are:

1.  Having "self contained" Storage Domains which will enable to recover
in case of data loss (oVirt supports registration of unknown disks stored on
storage domain in the engine and adding VM from OVF configuration - so
having the VM OVF stored on the same Storage Domain of it's disks will allow
to recover the vm "completeness" from that Storage Domain to the oVirt
engine).
2.  Moving out from using the master_fs on the storage domain, as part
of this change the OVFs will be stored on a designated volume located on
each Storage Domain.
3.  Adding support for streaming files from the engine to vdsm (will be
discussed later on).

-Original Message-
From: Vojtech Juranek 
Sent: Wednesday, September 22, 2021 6:12 PM
To: users@ovirt.org
Cc: Tommy Sway 
Subject: Re: [ovirt-users] about the OVF_STORE and the xleases volume

 

On Wednesday, 22 September 2021 10:39:34 CEST Tommy Sway wrote:
> I wonder if the xleases volume mentioned here refers to ovf_store ?

No, xleases is part of the disk space used internally by oVirt (to manage
concurrent access to the resources, e.g. disk image) and shouldn't be
touched by the user.

OVF store is Open Virtualization Format [1] and it's used for storing these
files, see [2] for more details.

[1] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[2] https://www.ovirt.org/develop/release-management/features/storage/importstoragedomain.html

> * A new xleases volume to support VM leases - this feature adds the
> ability to acquire a lease per virtual machine on shared storage 
> without attaching the lease to a virtual machine disk.
> 
> A VM lease offers two important capabilities:
> 
> * Avoiding split-brain.
> * Starting a VM on another host if the original host becomes
> non-responsive, which improves the availability of HA VMs.

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGRSZNF5CYJRNPX5N3T5KVHNXGBAWEBR/


[ovirt-users] Re: about the OVF_STORE and the xleases volume

2021-09-22 Thread Tommy Sway
Are you referring to the description of this passage?

Detailed Description

VM/Template configurations (including disks info) are stored on the master
storage domain only for backup purposes and in order to provide the ability
to run VMs without having a running engine/db. This feature aims to change
the current place in which the OVFs are stored while using the existing
OvfAutoUpdater feature (asynchronous incremental OVF updates).
The expected benefits are:

1.  Having "self contained" Storage Domains which will enable to recover
in case of data loss (oVirt supports registration of unknown disks stored on
storage domain in the engine and adding VM from OVF configuration - so
having the VM OVF stored on the same Storage Domain of it's disks will allow
to recover the vm "completeness" from that Storage Domain to the oVirt
engine).
2.  Moving out from using the master_fs on the storage domain, as part
of this change the OVFs will be stored on a designated volume located on
each Storage Domain.
3.  Adding support for streaming files from the engine to vdsm (will be
discussed later on).

-Original Message-
From: Vojtech Juranek  
Sent: Wednesday, September 22, 2021 6:12 PM
To: users@ovirt.org
Cc: Tommy Sway 
Subject: Re: [ovirt-users] about the OVF_STORE and the xleases volume

 

On Wednesday, 22 September 2021 10:39:34 CEST Tommy Sway wrote:
> I wonder if the xleases volume mentioned here refers to ovf_store ?

No, xleases is part of the disk space used internally by oVirt (to manage
concurrent access to the resources, e.g. disk image) and shouldn't be
touched by the user.

OVF store is Open Virtualization Format [1] and it's used for storing these
files, see [2] for more details.

[1] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[2] https://www.ovirt.org/develop/release-management/features/storage/importstoragedomain.html

> * A new xleases volume to support VM leases - this feature adds the
> ability to acquire a lease per virtual machine on shared storage 
> without attaching the lease to a virtual machine disk.
> 
> A VM lease offers two important capabilities:
> 
> * Avoiding split-brain.
> * Starting a VM on another host if the original host becomes
> non-responsive, which improves the availability of HA VMs.

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XS3GMASJ5TW6Q4S2MZHT72E4FLH5GSBZ/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Tommy Sway
For example:

[inline screenshot omitted]

And I check the file on the storage:

[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# cat 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
CAP=10737418240
CTIME=1632305740
DESCRIPTION={"DiskAlias":"test09222_Disk1","DiskDescription":""}
DISKTYPE=DATA
DOMAIN=f77091d9-aabc-42db-87b1-b8299765482e
FORMAT=RAW
GEN=0
IMAGE=51dcbfae-1100-4e43-9e0a-bb8c578623d7
LEGALITY=LEGAL
PUUID=----
TYPE=SPARSE
VOLTYPE=LEAF
EOF
[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# ll
total 1025
-rw-rw. 1 vdsm kvm 10737418240 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0
-rw-rw. 1 vdsm kvm 1048576 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0.lease
-rw-r--r--. 1 vdsm kvm 303 Sep 22 18:15 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# du -h ./9e4dc022-c450-4f85-89f5-233fa41c07d0
0   ./9e4dc022-c450-4f85-89f5-233fa41c07d0
[root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#

-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Tommy Sway
Sent: Wednesday, September 22, 2021 6:07 PM
To: 'Vojtech Juranek' ; users@ovirt.org
Subject: [ovirt-users] Re: about the vm disk type

You mean if it's pre-allocated, it must be RAW, not qcow2?

The documentation only states that RAW must be pre-allocated, but it does not 
say that qcow2 cannot use pre-allocation.

-Original Message-
From: Vojtech Juranek <vjura...@redhat.com>
Sent: Wednesday, September 22, 2021 6:04 PM
To: users@ovirt.org
Cc: Tommy Sway <sz_cui...@163.com>
Subject: Re: [ovirt-users] about the vm disk type

On Wednesday, 22 September 2021 09:55:26 CEST Tommy Sway wrote:
> When I create the VM's image disk, I am not asked to select the
> following type of disk.

Actually you are, it's the "Allocation Policy" drop-down menu:
Thin provisioned == qcow format
Preallocated == raw

> What is the default value ?

Thin provisioned, i.e. qcow.

> Thanks.
>
> QCOW2 Formatted Virtual Machine Storage
>
> QCOW2 is a storage format for virtual disks. QCOW stands for QEMU
> copy-on-write. The QCOW2 format decouples the physical storage layer
> from the virtual layer by adding a mapping between logical and physical
> blocks. Each logical block is mapped to its physical offset, which
> enables storage over-commitment and virtual machine snapshots, where
> each QCOW volume only represents changes made to an underlying virtual
> disk.
>
> The initial mapping points all logical blocks to the offsets in the
> backing file or volume. When a virtual machine writes data to a QCOW2
> volume after a snapshot, the relevant block is read from the backing
> volume, modified with the new information and written into a new
> snapshot QCOW2 volume. Then the map is updated to point to the new place.
>
> Raw
>
> The raw storage format has a performance advantage over QCOW2 in that
> no formatting is applied to virtual disks stored in the raw format.
> Virtual machine data operations on virtual disks stored in raw format
> require no additional work from hosts. When a virtual machine writes
> data to a given offset in its virtual disk, the I/O is written to the
> same offset on the backing file or logical volume.
>
> Raw format requires that the entire space of the defined image be
> preallocated unless using externally managed thin provisioned LUNs
> from a storage array.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGJX4VUOYVBG6AWPKWVMILXINNOFFO2V/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Re: about the OVF_STORE and the xleases volume

2021-09-22 Thread Vojtech Juranek
On Wednesday, 22 September 2021 10:39:34 CEST Tommy Sway wrote:
> I wonder if the xleases volume mentioned here refers to ovf_store ?

No, xleases is part of the disk space used internally by oVirt (to manage
concurrent access to resources, e.g. disk images) and shouldn't be touched
by the user.

The OVF store holds the Open Virtualization Format [1] files that describe
the VMs; see [2] for more details.

[1] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[2] https://www.ovirt.org/develop/release-management/features/storage/
importstoragedomain.html

> 
> 
> 
> 
> * A new xleases volume to support VM leases - this feature adds the
> ability to acquire a lease per virtual machine on shared storage without
> attaching the lease to a virtual machine disk.
> 
> A VM lease offers two important capabilities:
> 
> * Avoiding split-brain.
> * Starting a VM on another host if the original host becomes
> non-responsive, which improves the availability of HA VMs.
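
For the curious: these leases are handled by sanlock, and a read-only way to
see what a given host currently holds (run on the host as root; output format
may vary between versions) is:

sanlock client status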



signature.asc
Description: This is a digitally signed message part.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7AEPIXWSX2ZJ5BRW6S3ATUEOGA3V65J7/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Tommy Sway
You mean if it's preallocated, it must be RAW, not QCOW2?
The documentation only states that RAW must be preallocated, but it does
not say that QCOW2 cannot use preallocation.





-Original Message-
From: Vojtech Juranek  
Sent: Wednesday, September 22, 2021 6:04 PM
To: users@ovirt.org
Cc: Tommy Sway 
Subject: Re: [ovirt-users] about the vm disk type

On Wednesday, 22 September 2021 09:55:26 CEST Tommy Sway wrote:
> When I create the VM's image disk, I am not asked to select the 
> following type of disk.

Actually you are, it's the "Allocation Policy" drop-down menu:
Thin provisioned == qcow format
Preallocated == raw

> 
> 
> What is the default value ?

Thin provisioned, i.e. qcow.

> 
> 
> Thanks.
> 
> 
> 
> 
> 
> QCOW2 Formatted Virtual Machine Storage
> 
> QCOW2 is a storage format for virtual disks. QCOW stands for QEMU 
> copy-on-write. The QCOW2 format decouples the physical storage layer 
> from the virtual layer by adding a mapping between logical and physical
blocks.
> Each logical block is mapped to its physical offset, which enables 
> storage over-commitment and virtual machine snapshots, where each QCOW 
> volume only represents changes made to an underlying virtual disk.
> 
> The initial mapping points all logical blocks to the offsets in the 
> backing file or volume. When a virtual machine writes data to a QCOW2 
> volume after a snapshot, the relevant block is read from the backing 
> volume, modified with the new information and written into a new 
> snapshot QCOW2 volume. Then the map is updated to point to the new place.
> 
> Raw
> 
> The raw storage format has a performance advantage over QCOW2 in that 
> no formatting is applied to virtual disks stored in the raw format. 
> Virtual machine data operations on virtual disks stored in raw format 
> require no additional work from hosts. When a virtual machine writes 
> data to a given offset in its virtual disk, the I/O is written to the 
> same offset on the backing file or logical volume.
> 
> Raw format requires that the entire space of the defined image be 
> preallocated unless using externally managed thin provisioned LUNs 
> from a storage array.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGJX4VUOYVBG6AWPKWVMILXINNOFFO2V/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Vojtech Juranek
On Wednesday, 22 September 2021 09:55:26 CEST Tommy Sway wrote:
> When I create the VM's image disk, I am not asked to select the following
> type of disk.

Actually you are, it's the "Allocation Policy" drop-down menu:
Thin provisioned == qcow format
Preallocated == raw

> 
> 
> What is the default value ?

Thin provisioned, i.e. qcow.

> 
> 
> Thanks.
> 
> 
> 
> 
> 
> QCOW2 Formatted Virtual Machine Storage
> 
> QCOW2 is a storage format for virtual disks. QCOW stands for QEMU
> copy-on-write. The QCOW2 format decouples the physical storage layer from
> the virtual layer by adding a mapping between logical and physical blocks.
> Each logical block is mapped to its physical offset, which enables storage
> over-commitment and virtual machine snapshots, where each QCOW volume only
> represents changes made to an underlying virtual disk.
> 
> The initial mapping points all logical blocks to the offsets in the backing
> file or volume. When a virtual machine writes data to a QCOW2 volume after a
> snapshot, the relevant block is read from the backing volume, modified with
> the new information and written into a new snapshot QCOW2 volume. Then the
> map is updated to point to the new place.
> 
> Raw
> 
> The raw storage format has a performance advantage over QCOW2 in that no
> formatting is applied to virtual disks stored in the raw format. Virtual
> machine data operations on virtual disks stored in raw format require no
> additional work from hosts. When a virtual machine writes data to a given
> offset in its virtual disk, the I/O is written to the same offset on the
> backing file or logical volume.
> 
> Raw format requires that the entire space of the defined image be
> preallocated unless using externally managed thin provisioned LUNs from a
> storage array.
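
For reference, the same thin vs. preallocated distinction can be reproduced
with qemu-img outside oVirt (a sketch with illustrative file names, not
oVirt-managed volumes):

qemu-img create -f qcow2 thin.qcow2 10G                    # thin: grows on demand
qemu-img create -f raw -o preallocation=full pre.raw 10G   # raw: fully allocated up front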



signature.asc
Description: This is a digitally signed message part.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTSUQ3FSTNQGIB6EZ7U6R7ZIKFLAEVK3/


[ovirt-users] Managed Block Storage issues

2021-09-22 Thread Shantur Rathore
Hi all,

I am trying to set up Managed block storage and have the following issues.

My setup:
Latest oVirt Node NG : 4.4.8
Latest oVirt Engine : 4.4.8

1. Unable to copy to iSCSI-based block storage

I created an MBS domain with a Synology UC3200 as the backend (supported by
Cinderlib). It was created fine, but when I try to copy disks to it, it
fails.
Looking at the logs on the SPM, I found that "qemu-img" failed with an error
that it cannot open "/dev/mapper/xx": Permission Error.
Digging through the code and the logs, I saw that
a. Sometimes the /dev/mapper/ symlink isn't created (log attached)
b. The ownership of /dev/mapper/xx and /dev/dm-xx for the new device always
stays root:root

I added a udev rule
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
OWNER="vdsm", MODE="0660"

and the disk copied correctly when /dev/mapper/x got created.
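
For reference, this is roughly how the rule above can be persisted and
re-applied without a reboot (a sketch; the rule file name is arbitrary):

cat > /etc/udev/rules.d/99-vdsm-mpath-owner.rules <<'EOF'
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", OWNER="vdsm", GROUP="qemu", MODE="0660"
EOF
udevadm control --reload-rules
udevadm trigger --subsystem-match=block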

2. Copy progress finishes in the UI much earlier than the actual qemu-img
process.
The UI shows the copy completed successfully while the image is actually
still being copied.
This happens for both Ceph- and iSCSI-based MBS.

Is there any known workaround to get iSCSI MBS working?

Kind regards,
Shantur
2021-09-22 10:39:13,880+0100 INFO  (jsonrpc/0) [vdsm.api] START 
repoStats(domains=['a7ed0992-91f6-4236-8e18-29c5def2c845']) 
from=127.0.0.1,40072, task_id=16cf6f3d-a87f-4165-a07f-d41f6f9b9175 (api:48)
2021-09-22 10:39:13,881+0100 INFO  (jsonrpc/0) [vdsm.api] FINISH repoStats 
return={'a7ed0992-91f6-4236-8e18-29c5def2c845': {'code': 0, 'lastCheck': '4.9', 
'delay': '0.000471624', 'valid': True, 'version': 5, 'acquired': True, 
'actual': True}} from=127.0.0.1,40072, 
task_id=16cf6f3d-a87f-4165-a07f-d41f6f9b9175 (api:54)
2021-09-22 10:39:14,507+0100 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from 127.0.0.1:33470 
(protocoldetector:61)
2021-09-22 10:39:14,514+0100 WARN  (Reactor thread) [vds.dispatcher] unhandled 
write event (betterAsyncore:184)
2021-09-22 10:39:14,514+0100 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from 127.0.0.1:33470 (protocoldetector:125)
2021-09-22 10:39:14,515+0100 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT request (stompserver:95)
2021-09-22 10:39:14,516+0100 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompserver:124)
2021-09-22 10:39:14,993+0100 INFO  (jsonrpc/4) [vdsm.api] START 
getSpmStatus(spUUID='0eed07c4-782d-11eb-9ca3-00163e7a233b') 
from=10.187.21.239,38252, task_id=30be11cd-e901-4db7-8bd5-cb71e8ff39ac (api:48)
2021-09-22 10:39:14,996+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH getSpmStatus 
return={'spm_st': {'spmStatus': 'SPM', 'spmLver': 28, 'spmId': 8}} 
from=10.187.21.239,38252, task_id=30be11cd-e901-4db7-8bd5-cb71e8ff39ac (api:54)
2021-09-22 10:39:15,001+0100 INFO  (jsonrpc/5) [vdsm.api] START 
getStoragePoolInfo(spUUID='0eed07c4-782d-11eb-9ca3-00163e7a233b') 
from=10.187.21.239,38276, task_id=83c9e302-b093-4016-b1bd-63076e6ac5ac (api:48)
2021-09-22 10:39:15,004+0100 INFO  (jsonrpc/5) [vdsm.api] FINISH 
getStoragePoolInfo return={'info': {'domains': 
'e2627376-2254-4e90-9478-0223ef873214:Active,176ed26c-5e20-4b2d-ab10-855f519e0b0f:Active,a7ed0992-91f6-4236-8e18-29c5def2c845:Active',
 'isoprefix': '', 'lver': 28, 'master_uuid': 
'a7ed0992-91f6-4236-8e18-29c5def2c845', 'master_ver': 1, 'name': 'No 
Description', 'pool_status': 'connected', 'spm_id': 8, 'type': 'ISCSI', 
'version': '5'}, 'dominfo': {'e2627376-2254-4e90-9478-0223ef873214': {'status': 
'Active', 'alerts': [], 'isoprefix': '', 'version': 5, 'disktotal': 
'778107871232', 'diskfree': '740668747776'}, 
'176ed26c-5e20-4b2d-ab10-855f519e0b0f': {'status': 'Active', 'alerts': [], 
'isoprefix': '', 'version': 5, 'disktotal': '1610210082816', 'diskfree': 
'528817848320'}, 'a7ed0992-91f6-4236-8e18-29c5def2c845': {'status': 'Active', 
'alerts': [], 'isoprefix': '', 'version': 5, 'disktotal': '106971529216', 
'diskfree': '18119393280'}}} from=10.187.21.239,38276, 
task_id=83c9e302-b093-4016-b1bd-63076e6ac5ac (api:54)

==> /var/log/vdsm/supervdsm.log <==
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,525::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper) call 
dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,525::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-23 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,539::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,540::supervdsm_server::102::SuperVdsm.ServerCallback::(wrapper) return 
dmsetup_run_status with b'36001405793765d9df7abd4849da87bdc: 0 4123000832 
multipath 2 0 1 0 2 1 A 0 1 2 8:32 A 0 0 1 E 0 1 2 8:16 A 0 0 1 
\n36001405534d4874d3b27d4d1cd86a0d0: 0 3145728000 multipath 2 0 1 0 2 1 A 0 1 2 
8:96 A 0 0 1 E 0 1 2 8:80 A 0 0 1 \n36001405534d48dbda440d4f9bd8c15df: 0 

[ovirt-users] Re: Frequent events seen like VM "vmname" is not responding

2021-09-22 Thread manoj . sharma99765
Thanks Vojtech Juranek for the quick response.

Could you please suggest a few steps to determine whether the host is really
overloaded?

Can you share any documentation link that can help us troubleshoot
further?
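
For reference, a generic, read-only starting point on the host (a sketch, not
oVirt-specific; iostat comes from the sysstat package):

uptime             # load average; compare against the CPU count (nproc)
vmstat 5 3         # run queue, memory pressure, swapping, I/O wait
iostat -x 5 3      # per-device utilization and latency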
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LA6TOTKWBS4NGWNUQGMZLNGQCJIFZPS6/


[ovirt-users] about the OVF_STORE and the xleases volume

2021-09-22 Thread Tommy Sway
I wonder if the xleases volume mentioned here refers to ovf_store ?

 

 

*   A new xleases volume to support VM leases - this feature adds the
ability to acquire a lease per virtual machine on shared storage without
attaching the lease to a virtual machine disk.

A VM lease offers two important capabilities:

*   Avoiding split-brain.
*   Starting a VM on another host if the original host becomes
non-responsive, which improves the availability of HA VMs.

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4YOZPDC4ZUZ2PSSPKPSIL6YLLTUCTV5/


[ovirt-users] about the vm disk type

2021-09-22 Thread Tommy Sway
When I create the VM's image disk, I am not asked to select the following
type of disk. 

 

What is the default value ? 

 

Thanks.

 

 

QCOW2 Formatted Virtual Machine Storage

QCOW2 is a storage format for virtual disks. QCOW stands for QEMU
copy-on-write. The QCOW2 format decouples the physical storage layer from
the virtual layer by adding a mapping between logical and physical blocks.
Each logical block is mapped to its physical offset, which enables storage
over-commitment and virtual machine snapshots, where each QCOW volume only
represents changes made to an underlying virtual disk.

The initial mapping points all logical blocks to the offsets in the backing
file or volume. When a virtual machine writes data to a QCOW2 volume after a
snapshot, the relevant block is read from the backing volume, modified with
the new information and written into a new snapshot QCOW2 volume. Then the
map is updated to point to the new place.
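
For reference, the copy-on-write chain described above can be reproduced with
a standalone qemu-img experiment (illustrative file names, not oVirt-managed
volumes; -F needs a reasonably recent qemu-img):

qemu-img create -f raw base.raw 1G
qemu-img create -f qcow2 -b base.raw -F raw snap.qcow2   # overlay records only changes
qemu-img info --backing-chain snap.qcow2

The overlay starts nearly empty; new writes land in snap.qcow2 while
unmodified blocks are still read from base.raw.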

Raw

The raw storage format has a performance advantage over QCOW2 in that no
formatting is applied to virtual disks stored in the raw format. Virtual
machine data operations on virtual disks stored in raw format require no
additional work from hosts. When a virtual machine writes data to a given
offset in its virtual disk, the I/O is written to the same offset on the
backing file or logical volume.

Raw format requires that the entire space of the defined image be
preallocated unless using externally managed thin provisioned LUNs from a
storage array.

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZZBMXL2DMJKHS4YLBOIGPMJ4D35TZH5/