[ovirt-users] Re: posix storage migration issue on 4.4 cluster

2021-08-03 Thread Sketch

On Tue, 3 Aug 2021, Nir Soffer wrote:


> On Tue, Aug 3, 2021 at 5:51 PM Sketch  wrote:
>
>> On the 4.3 cluster, migration works fine with any storage backend.  On
>> 4.4, migration works against gluster or NFS, but fails when the VM is
>> hosted on POSIX cephfs.
>
> What do you mean by "fails"?
>
> What is the failing operation (move disk when vm is running or not?)
> and how does it fail?


Sorry, I guess I didn't explain the issue well enough.  Moving disks 
between ceph and gluster works fine, even while the VM is running.



>> It appears that the VM fails to start on the new host, but it's not
>> obvious why from the logs.  Can anyone shed some light or suggest further
>> debugging?


What doesn't work is live migration of running VMs between hosts running 
4.4.7 (or 4.4.6 before I updated) when their disks are on ceph.  It 
appears that vdsm attempts to launch the VM on the destination host, and 
it either fails to start or dies right after starting (not entirely clear 
from the logs).  Then the running VM gets paused due to a storage error.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AR3TX4PWIXCKFXO6A3SYOPADNUI5EYP6/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks; follow the links
> in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
To perform this, which modules/packages need to be installed on the rhevm
host node?  Does the rhevm host come with python3 installed by default, or
do I need to install python3 on the rhevm node?  Then, to install
download_disk.py using pip3, what is the module name for this sdk?  Are
there any dependencies to install before this sdk (does java need to be
installed on the rhevm node, for example)?

One doubt: I came across virt-v2v during a google search. Can virt-v2v be
used on a rhevm node to export VMs to images, or does virt-v2v only support
importing from other hypervisors into rhevm?

This requires ovirt.conf file:   // Does the ovirt.conf file need to be
created, or is it already there on any rhevm node?

>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
> >>
> >> RHV supports download of disks to one image per disk, which you can move
> >> to another system.
> >>
> >> We also have export to ova, which creates one tar file with all
> exported disks,
> >> if this helps.
> >>
> >> Nir
> >>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Re: Host not becoming active due to VDSM failure

2021-08-03 Thread Vinícius Ferrão via Users
As a followup to the mailing list.

Updating the machine solved this issue. But the bugzilla still applies since it 
was blocking the upgrade.

Thank you all.


On 2 Aug 2021, at 13:22, Vinícius Ferrão via Users <users@ovirt.org> wrote:

Hi Ales, Nir.

Sorry for the delayed answer. I didn't have the opportunity to answer it before.

I'm running RHV (not RHVH) on RHEL 8.4 and on top of ppc64le. So it's not 
vanilla oVirt.

Right now it's based on:
ovirt-host-4.4.1-4.el8ev.ppc64le

I'm already with nmstate >= 0.3 as I can see:
nmstate-1.0.2-11.el8_4.noarch

VDSM in fact is old, I tried upgrading VDSM but there's a failed dependency on 
openvswitch:
[root@rhvpower ~]# dnf update vdsm
Updating Subscription Management repositories.
Last metadata expiration check: 0:11:15 ago on Mon 02 Aug 2021 12:06:44 PM EDT.
Error:
 Problem 1: package vdsm-python-4.40.70.6-1.el8ev.noarch requires vdsm-network 
= 4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-4.40.70.6-1.el8ev.ppc64le requires vdsm-python = 
4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-network-4.40.70.6-1.el8ev.ppc64le requires openvswitch >= 
2.11, but none of the providers can be installed
  - cannot install the best update candidate for package 
vdsm-4.40.35.1-1.el8ev.ppc64le
  - nothing provides openvswitch2.11 needed by 
rhv-openvswitch-1:2.11-7.el8ev.noarch
  - nothing provides openvswitch2.11 needed by 
ovirt-openvswitch-2.11-1.el8ev.noarch
 Problem 2: package vdsm-python-4.40.70.6-1.el8ev.noarch requires vdsm-network 
= 4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-4.40.70.6-1.el8ev.ppc64le requires vdsm-python = 
4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-network-4.40.70.6-1.el8ev.ppc64le requires openvswitch >= 
2.11, but none of the providers can be installed
  - cannot install the best update candidate for package 
vdsm-hook-vmfex-dev-4.40.35.1-1.el8ev.noarch
  - nothing provides openvswitch2.11 needed by 
rhv-openvswitch-1:2.11-7.el8ev.noarch
  - nothing provides openvswitch2.11 needed by 
ovirt-openvswitch-2.11-1.el8ev.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use 
not only best candidate packages)

Nothing seems to provide an openvswitch release that satisfies VDSM. There's no 
openvswitch package installed right now, nor available on the repositories:

[root@rhvpower ~]# dnf install openvswitch
Updating Subscription Management repositories.
Last metadata expiration check: 0:15:28 ago on Mon 02 Aug 2021 12:06:44 PM EDT.
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides openvswitch2.11 needed by 
ovirt-openvswitch-2.11-1.el8ev.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use 
not only best candidate packages)

Any ideas on how to get past this issue? This is probably only related to 
ppc64le.
I already opened a bugzilla about the openvswitch issue here: 
https://bugzilla.redhat.com/show_bug.cgi?id=1988507

Thank you all.

On 2 Aug 2021, at 02:09, Ales Musil <amu...@redhat.com> wrote:



On Fri, Jul 30, 2021 at 8:54 PM Nir Soffer <nsof...@redhat.com> wrote:
On Fri, Jul 30, 2021 at 7:41 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
...
> restore-net::ERROR::2021-07-30 
> 12:34:56,167::restore_net_config::462::root::(restore) restoration failed.
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", 
> line 460, in restore
> unified_restoration()
>   File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", 
> line 112, in unified_restoration
> classified_conf = _classify_nets_bonds_config(available_config)
>   File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", 
> line 237, in _classify_nets_bonds_config
> net_info = NetInfo(netswitch.configurator.netinfo())
>   File 
> "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", 
> line 323, in netinfo
> _netinfo = netinfo_get(vdsmnets, compatibility)
>   File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 
> 268, in get
> return _get(vdsmnets)
>   File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 
> 76, in _get
> extra_info.update(_get_devices_info_from_nmstate(state, devices))
>   File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 
> 165, in _get_devices_info_from_nmstate
> nmstate.get_interfaces(state, filter=devices)
>   File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 
> 164, in 
> for ifname, ifstate in six.viewitems(
>   File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/api.py", line 
> 228, in is_dhcp_enabled
> return util_is_dhcp_enabled(family_info)
>   File 
> "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/bridge_util.py", line 
> 137

[ovirt-users] Re: Restored engine backup: The provided authorization grant for the auth code has expired.

2021-08-03 Thread Nicolás

Hi,

As this seems to be an issue that is hard to get help on, I'll ask it another way:

As an alternative to backup and restore, is there a way to migrate an 
oVirt manager installation to another machine? We're trying to move the 
manager machine since the current physical machine is getting short of 
resources, and we already have a prepared physical host to migrate it to.


If there's an alternative way to migrate it, I'd be very grateful if 
someone could shed some light on it.


Thanks.

On 2/8/21 at 13:02, Nicolás wrote:

Hi Didi,

On 7/4/21 at 9:27, Yedidyah Bar David wrote:

On Wed, Mar 24, 2021 at 12:07 PM Nicolás  wrote:

Hi,

I'm restoring a full ovirt engine backup, having used the --scope=all
option, for oVirt 4.3.

I restored the backup on a fresh CentOS7 machine. The process went well,
but when trying to log into the restored authentication system I get the
following message which won't allow me to log in:

    The provided authorization grant for the auth code has expired.

What does that mean and how can it be fixed?

Can you please check also this thread:

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/YH4J7GG7WLOLUFIADZPL6JOPDETJ23CZ/ 



What version was used for backup, and what version for restore?


For backup, version 4.3.8.2-1.el7 of ovirt-engine-tools-backup was used.

For restore, version 4.3.10.4-1.el7 of ovirt-engine-tools-backup was 
used.




Did you have a 3rd-party CA cert installed?


I am using a custom LetsEncrypt certificate in apache. I have this 
certificate configured in httpd and ovirt-websocket-proxy, but it's 
exactly the same certificate I have configured in the oVirt 
installation that was backed up (as I understand it, it's not the same 
case as the one described in the link - I might be wrong). So I 
copied the same certificate on the other side too.



Please verify that it was backed up and restored correctly, or manually
reinstalled after restore.


As per the logs, both processes ended correctly, no errors showed up. 
I also ran the 'engine-setup' command on the restored machine, and it 
ended with no errors/warnings.


I'm attaching an engine.log of the restored node in case it helps, 
from the moment I restart the engine and try to log in.


Thanks for any help regarding this, as I can't figure out what else 
could be happening.


Nicolás


Good luck and best regards,
--
Didi




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LV4X5KDDOST3VPJ5GDTHYOQTBAWLIJR/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/54CDGSH5DOHZJLCKLB625LC2FCFEH47H/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread Nir Soffer
On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
>
> I have asked our VM maintainer to run the  command
>
> # virsh -r dumpxml vm-name_blah//as Super user
>
> But no output :   No matching domains found that was the TTY  output on  that 
> rhevm node when I executed the command.
>
> Then I tried to execute #  virsh list //  it doesn't list any VMs  !!!   
> ( How come this ? Does the Rhevm node need to enable any CLI  with License 
> key or something to list Vms or  to dumpxml   with   virsh ? or its CLI 
> commands ?

RHV undefines the VMs when they are not running.

> Any way I want to know what I have to ask the   maintainerto provide a 
> working a working  CLI   or ? which do the tasks expected to do with command 
> line utilities in rhevm.
>
If the vm is not running you can get the vm configuration from ovirt
using the API:

GET /api/vms/{vm-id}

You may need more API calls to get info about the disks; follow the links
in the returned xml.
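
For example, something like this against the REST API (a sketch on my part,
adjust the engine address, credentials and vm id; -k skips CA verification,
or point --cacert at the engine CA instead):

$ curl -s -k -u 'admin@internal:mypassword' \
    -H 'Accept: application/xml' \
    https://engine-dev/ovirt-engine/api/vms/<vm-id>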

> I have one more question :Which command can I execute on an rhevm node  
> to manually export ( not through GUI portal) a   VMs to   required format  ?
>
> For example;   1.  I need to get  one  VM and disks attached to it  as raw 
> images.  Is this possible how?
>
> and another2. VM and disk attached to it as  Ova or( what other good 
> format) which suitable to upload to glance ?

Arik can add more info on exporting.

>   Each VMs are around 200 to 300 GB with disk volumes ( so where should be 
> the images exported to which path to specify ? to the host node(if the host 
> doesn't have space  or NFS mount ? how to specify the target location where 
> the VM image get stored in case of NFS mount ( available ?)

You have 2 options:
- Download the disks using the SDK
- Export the VM to OVA

When exporting to OVA, you will always get qcow2 images, which you can later
convert to raw using "qemu-img convert"
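
For example (file names are placeholders):

$ qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw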

When downloading the disks, you control the image format; for example, this
will download the disk, collapsing all snapshots, to the raw format:

 $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py \
     -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw

This requires ovirt.conf file:

$ cat ~/.config/ovirt.conf
[engine-dev]
engine_url = https://engine-dev
username = admin@internal
password = mypassword
cafile = /etc/pki/vdsm/certs/cacert.pem
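
The download_disk.py script above comes from the SDK examples package, as
the path shows, so installing that package should be all that is needed to
run it, for example:

# dnf install python3-ovirt-engine-sdk4

No java is needed for the python SDK.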

Nir

> Thanks in advance
>
>
> On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
>>
>> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
>> >
>> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using 
>> > Rhevm4.1 ) managed by a third party
>> >
>> > Now I am in the process of migrating  those VMs to  my cloud setup with  
>> > OpenStack ussuri  version  with KVM hypervisor and Glance storage.
>> >
>> > The third party is making down each VM and giving the each VM image  with 
>> > their attached volume disks along with it.
>> >
>> > There are three folders  which contain images for each VM .
>> > These folders contain the base OS image, and attached LVM disk images ( 
>> > from time to time they added hard disks  and used LVM for storing data ) 
>> > where data is stored.
>> >
>> > Is there a way to  get all these images to be exported as  Single image 
>> > file Instead of  multiple image files from Rhevm it self.  Is this 
>> > possible ?
>> >
>> > If possible how to combine e all these disk images to a single image and 
>> > that image  can upload to our  cloud  glance storage as a single image ?
>>
>> It is not clear what is the vm you are trying to export. If you share
>> the libvirt xml
>> of this vm it will be more clear. You can use "sudo virsh -r dumpxml 
>> vm-name".
>>
>> RHV supports download of disks to one image per disk, which you can move
>> to another system.
>>
>> We also have export to ova, which creates one tar file with all exported 
>> disks,
>> if this helps.
>>
>> Nir
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PSHISFT33TFMDOX4V42LNHNWIM3IKOPA/


[ovirt-users] Re: posix storage migration issue on 4.4 cluster

2021-08-03 Thread Nir Soffer
On Tue, Aug 3, 2021 at 5:51 PM Sketch  wrote:
>
> I currently have two clusters up and running under one engine.  An old
> cluster on 4.3, and a new cluster on 4.4.  In addition to migrating from
> 4.3 to 4.4, we are also migrating from glusterfs to cephfs mounted as
> POSIX storage (not cinderlib, though we may make that conversion after
> moving to 4.4).  I have run into a strange issue, though.
>
> On the 4.3 cluster, migration works fine with any storage backend.  On
> 4.4, migration works against gluster or NFS, but fails when the VM is
> hosted on POSIX cephfs.

What do you mean by "fails"?

What is the failing operation (move disk when vm is running or not?)
and how does it fail?

...
> It appears that the VM fails to start on the new host, but it's not
> obvious why from the logs.  Can anyone shed some light or suggest further
> debugging?

You move the disk when the vm is not running, and after the move the vm will
not start?

If this is the issue, you can check if the disk was copied correctly by creating
a checksum of the disk before the move and after the move.

Here is an example run from my system:

$ cat ~/.config/ovirt.conf
[myengine]
engine_url = https://engine-dev
username = admin@internal
password = mypassword
cafile = /etc/pki/vdsm/certs/cacert.pem

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_disk.py \
    -c myengine 3649d84b-6f35-4314-900a-5e8024e3905c
{
    "algorithm": "blake2b",
    "block_size": 4194304,
    "checksum": "d92a2491f797c148e9a6c90830ed7bd2f471099a70e931f7dd9d86853d650ece"
}

See 
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/checksum_disk.py

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KLD5NMFNJGBXEXUPVQCD4BHQXZXO4RHF/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
I have asked our VM maintainer to run the command

# virsh -r dumpxml vm-name_blah    // as super user

But there was no output: "No matching domains found" was the TTY output on
that rhevm node when I executed the command.

Then I tried to execute "# virsh list", and it doesn't list any VMs either!
How come? Does the rhevm node need to enable any CLI with a license key or
something, to list VMs or to dumpxml with virsh or its other CLI commands?

Anyway, I want to know what I have to ask the maintainer for, in order to
get a working CLI which can do the expected tasks with command line
utilities in rhevm.

I have one more question: which command can I execute on a rhevm node to
manually export (not through the GUI portal) VMs to a required format?

For example: 1. I need to get one VM and the disks attached to it as raw
images. Is this possible, and how?

And another: 2. The VM and the disks attached to it as an OVA (or whatever
other format is suitable to upload to glance)?


Each VM is around 200 to 300 GB with its disk volumes, so where should the
images be exported to, and which path should I specify? To the host node
(if the host doesn't have space), or to an NFS mount? And how do I specify
the target location where the VM image gets stored in case an NFS mount is
available?

Thanks in advance


On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:

> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >
> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >
> > Now I am in the process of migrating  those VMs to  my cloud setup with
> OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >
> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >
> > There are three folders  which contain images for each VM .
> > These folders contain the base OS image, and attached LVM disk images (
> from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >
> > Is there a way to  get all these images to be exported as  Single image
> file Instead of  multiple image files from Rhevm it self.  Is this possible
> ?
> >
> > If possible how to combine e all these disk images to a single image and
> that image  can upload to our  cloud  glance storage as a single image ?
>
> It is not clear what is the vm you are trying to export. If you share
> the libvirt xml
> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
>
> RHV supports download of disks to one image per disk, which you can move
> to another system.
>
> We also have export to ova, which creates one tar file with all exported
> disks,
> if this helps.
>
> Nir
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGBJTVT6EME4TXQ3OHY7L6YXOGZXCRC6/


[ovirt-users] Re: live merge of snapshots failed

2021-08-03 Thread Benny Zlotnik
2021-08-03 15:51:34,917+03 ERROR
[org.ovirt.engine.core.bll.MergeStatusCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-2)
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Failed to live merge. Top
volume b43b7c33-5b53-4332-a2e0-f950debb919b is still in qemu chain
[b43b7c33-5b53-4332-a2e0-f950debb919b,
84c005da-cbec-4ace-8619-5a8e2ae5ea75]

Can you attach vdsm logs (from SPM and the host running the VM) so we
can understand why it failed?
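
As a quick read-only check on the host, something like "virsh -r dumpxml
<vm-name>" should also show the current chain (the <backingStore> elements
of the disk), so you can see whether the top volume is still part of it.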

On Tue, Aug 3, 2021 at 6:07 PM  wrote:
>
> Hello
> I have a situation with a vm in which I cannot delete the snapshot.
> The whole thing is quite strange because I can delete the snapshot when I 
> create and delete it from the web interface but when I do it with a python 
> script through the API it failes.
> The script does create snapshot-> download snapshot-> delete snapshot and I 
> used the examples from ovirt python sdk on githab to create it ,in general it 
> works prety well.
>
> But on a specific machine (so far) it cannot delete the live snapshot
> Ovirt is 4.3.10 and the guest is a windows 10 pc. Windows 10 guest has 2 
> disks attached both on different fc domains one on an ssd  emc and the other 
> on an hdd emc. Both disks are prealocated.
> I cannot figure out what the problem is so far
> the related engine log:
>
> 2021-08-03 15:51:00,385+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Comma
> nd 'RemoveSnapshotSingleDiskLive' (id: 
> '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 
> '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
> 2021-08-03 15:51:00,385+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (
> jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
> 2021-08-03 15:51:00,387+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 
> 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
> 2021-08-03 15:51:01,388+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
> (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
> 2021-08-03 15:51:07,491+03 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-38) 
> [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'MoveImageGroup' (id: 
> '1de1b800-873f-405f-805b-f44397740909') waiting on child command id: 
> 'd1136344-2888-4d63-8fe1-b506426bc8aa' type:'CopyImageGroupWithData' to 
> complete
> 2021-08-03 15:51:11,513+03 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-41) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: 
> '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: 
> '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to 
> complete
> 2021-08-03 15:51:12,522+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 
> '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
> 2021-08-03 15:51:12,523+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
> (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
> 2021-08-03 15:51:12,527+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 
> 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
> 2021-08-03 15:51:13,528+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
> (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
> 2021-08-03 15:51:21,635+03 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-58) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id:

[ovirt-users] live merge of snapshots failed

2021-08-03 Thread g . vasilopoulos
Hello 
I have a situation with a vm in which I cannot delete the snapshot.
The whole thing is quite strange, because I can delete the snapshot when I
create and delete it from the web interface, but when I do it with a python
script through the API it fails.
The script does create snapshot -> download snapshot -> delete snapshot, and
I used the examples from the ovirt python sdk on github to create it; in
general it works pretty well.

But on a specific machine (so far) it cannot delete the live snapshot.
Ovirt is 4.3.10 and the guest is a windows 10 pc. The windows 10 guest has 2
disks attached, both on different fc domains, one on an ssd emc and the
other on an hdd emc. Both disks are preallocated.
I cannot figure out what the problem is so far.
The related engine log:

2021-08-03 15:51:00,385+03 INFO  
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') 
waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' 
type:'Merge' to complete
2021-08-03 15:51:00,385+03 INFO  
[org.ovirt.engine.core.bll.MergeCommandCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-61) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (
jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
2021-08-03 15:51:00,387+03 INFO  
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
(id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 
'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:01,388+03 INFO  
[org.ovirt.engine.core.bll.MergeCommandCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
(jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:07,491+03 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-38) 
[b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'MoveImageGroup' (id: 
'1de1b800-873f-405f-805b-f44397740909') waiting on child command id: 
'd1136344-2888-4d63-8fe1-b506426bc8aa' type:'CopyImageGroupWithData' to complete
2021-08-03 15:51:11,513+03 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-41) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: 
'04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: 
'87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to 
complete
2021-08-03 15:51:12,522+03 INFO  
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
(id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 
'74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:12,523+03 INFO  
[org.ovirt.engine.core.bll.MergeCommandCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-76) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
(jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
2021-08-03 15:51:12,527+03 INFO  
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
(id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 
'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:13,528+03 INFO  
[org.ovirt.engine.core.bll.MergeCommandCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-37) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
(jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:21,635+03 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-58) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: 
'04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: 
'87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to 
complete
2021-08-03 15:51:22,655+03 INFO  
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-31) 
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
(id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 
'74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:22,661+03 INFO  
[org.ovirt.engine.core.bll.MergeCommandCallback] 
(EE-Mana

[ovirt-users] posix storage migration issue on 4.4 cluster

2021-08-03 Thread Sketch
I currently have two clusters up and running under one engine.  An old 
cluster on 4.3, and a new cluster on 4.4.  In addition to migrating from 
4.3 to 4.4, we are also migrating from glusterfs to cephfs mounted as 
POSIX storage (not cinderlib, though we may make that conversion after 
moving to 4.4).  I have run into a strange issue, though.


On the 4.3 cluster, migration works fine with any storage backend.  On 
4.4, migration works against gluster or NFS, but fails when the VM is 
hosted on POSIX cephfs.  Both hosts are running CentOS 8.4 and were fully 
updated to oVirt 4.4.7 today, as well as fully updating the engine (all 
rebooted before this test, as well).


It appears that the VM fails to start on the new host, but it's not 
obvious why from the logs.  Can anyone shed some light or suggest further 
debugging?


Related engine log:

2021-08-03 07:11:51,609-07 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[1fd47e75-d708-43e4-ac0f-67bd28dceefd=VM]', 
sharedLocks=''}'
2021-08-03 07:11:51,679-07 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] Running command: 
MigrateVmToServerCommand internal: false. Entities affected :  ID: 
1fd47e75-d708-43e4-ac0f-67bd28dceefd Type: VMAction group MIGRATE_VM with role 
type USER
2021-08-03 07:11:51,738-07 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateVDSCommand( 
MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', 
vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', 
dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', migrateEncrypted='false', 
consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', 
maxIncomingMigrations='2', maxOutgoingMigrations='2', 
convergenceSchedule='[init=[{name=setDowntime, params=[100]}], 
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, 
action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, 
params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, 
action={name=setDowntime, params=[500]}}, {li
mit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 
67f63342
2021-08-03 07:11:51,739-07 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateBrokerVDSCommand(HostName = ovirt_host1, MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, 
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 37ab0828

2021-08-03 07:11:51,741-07 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default 
task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, 
MigrateBrokerVDSCommand, return: , log id: 37ab0828
2021-08-03 07:11:51,743-07 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, MigrateVDSCommand, return: 
MigratingFrom, log id: 67f63342
2021-08-03 07:11:51,750-07 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: my_vm_hostname, Source: ovirt_host1, Destination: ovirt_host2, User: ebyrne@FreeIPA). 
2021-08-03 07:11:55,736-07 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd' was reported as Down on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2)

2021-08-03 07:11:55,736-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-21) [28d98b26] VM 
'1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) was unexpectedly 
detected as 'Down' on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2) 
(expected on '6ec548e6-9a2a-4885-81da-74d0935b7ba5')
2021-

[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-03 Thread regloff
When I created the VM this time, I did pre-allocate all of the space. The first 
time, I was just curious whether Windows would even run ok under oVirt, and it 
does appear it will. 

Next.. I need to get a couple Solaris VMs actually working. I've seen some 
'hacks' out there that others have reported success with. Just haven't had much 
time to mess with that also.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7KPG76ZNWHHVUUK7KDT2EP4DEJ7FOAIU/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-03 Thread regloff
Yes - local as in 5400 RPM SATA - standard desktop, slow storage.. :)

It's still 'slow' being 5400 RPM SATA, but after setting the new VM to 
'VirtIO-SCSI' and loading the driver, the performance is 'as expected'. I don't 
notice it with the Linux VMs because they don't do anything that requires a 
lot of disk I/O. Mostly Ansible/Python education and such. 

https://i.postimg.cc/28f764yb/Untitled.png

I actually have some super fast Serial SCSI SSD drives I am going to use in the 
future. A storage vendor where I worked at ordered a bunch by mistake to 
upgrade our storage array and then left them sitting on-site for like 9 months. 
I contacted them to remind them we still had them in our data center and asked 
if they wanted to come and get them. I joked with our field engineer and told 
him if they didn't want them, I could find a use for them! He actually 
contacted his manager who gave us approval to just 'dispose' of them. So I 
thought why not recycle them? :)

I'm in the process of moving soon for a new job. Once I get settled, I'm going 
to upgrade the storage I use for VMs. Either to those SSDs or maybe a small NAS 
device. Ideally.. a NAS device that can support Serial SCSI. I'll need to get a 
controller and a cable for them, but considering the performance... it should 
be well worth it. And no - I didn't get fired for swiping the drives! Too many 
years invested in IT for something that stupid and I'm just not that kind of 
person anyway. I took a position that's a bit more 'administrative' and less 
technical; but with better pay, so I want to keep my tech skills sharp, just 
because I enjoy it. 

This is just a 'home lab' - nothing that supports anything even remotely 
important. I'm so used to SSD now.. my desktop OS is on SSD, my CentOS machine 
is on SSD.. putting Windows on spinning platters is just painful anymore!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5EVF6FR7Z46A2PI26EYJGBJBFF7LUGVX/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-03 Thread Tony Pearce
I believe "local" in this context is using the local ovirt Host OS disk as
VM storage ie "local storage". The disk info mentioned "WDC WD40EZRZ-00G" =
a single 4TB disk, at 5400RPM.

OP the seek time on that disk will be high. How many VMs are running off
it?

Are you able to try other storage? If you could run some monitoring on the
host, I'd expect to see low throughput and high delay on that disk.
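For example, something like "iostat -xm 5" from the sysstat package,
watching the await and %util columns for that disk, should make the latency
visible.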

Regards,

Tony Pearce


On Tue, 3 Aug 2021 at 16:54, Gilboa Davara  wrote:

>
> On Fri, Jul 30, 2021 at 5:17 PM  wrote:
>
>> This is a simple one desktop setup I use at home for being a nerd :)
>>
>> So it's a single host 'cluster' using local storage.
>>
>
> Sorry for the late reply.
> Define: local.
> NFS, Gluster or ISCSI?
>
> - Gilboa
>
>
>>
>> Host Info:
>> CentOS Linux 8 - 4.18.0-305.10.2.el8_4.x86_64 (I keep fairly well updated)
>> Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz [Kaby Lake] {Skylake}, 14nm
>> The disk it's on is: WDC WD40EZRZ-00G (5400 RPM platter disk) - it's not
>> the fastest thing in the world, but it should be sufficient.
>>
>> VM info:
>> Windows 10 Professional (Desktop not server)
>> 6144 MB of RAM
>> 2 Virtual CPUS
>>  - Some settings I ran across for 'Performance' mode and a couple I had
>> seen on some similar issues (the similar issues were quite dated)
>> Running in headless mode
>> I/O Threads enabled = 1
>> Multi-Queues enabled
>> Virt-IO-SCSI enabled
>> Random Number generator enabled
>> Added a custom property of 'viodiskcache' = writeback  (Didn't seem to
>> make any significant improvement)
>>
>> As I type this though - I was going to add this link as it's what I
>> followed to install the storage driver during the Windows install and then
>> in the OS after that:
>>
>> https://access.redhat.com/solutions/17463
>>
>> I did notice something.. it says to create a new VM with the 'VirtIO disk
>> interface' and I just noted my VM is setup as 'SATA'.
>>
>> Perhaps that is it. This is just my first attempt at running something
>> other than a Linux Distro under oVirt. When I first installed the Windows
>> guest, I didn't have the Virt-IO package downloaded initially. When Windows
>> couldn't find a storage driver, I found this info out.
>>
>> I think I'll deploy a new Windows guest and try the 'VirtIO-SCSI'
>> interface and see if my performance is any better. It's just a default
>> install of Windows at this point, so that'll be easy. :)
>>
>> Will update this thread either way!
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YC5E3MRPKJPFAAQDCTH5CWGPTTN77SU/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIZOXVW2N5ND4AW4DASH445WSUMVJ745/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DYUWJ4V4E3HTSOG4KJSPW4DREQQAK5TO/


[ovirt-users] Re: Data recovery from (now unused, but still mounted) Gluster Volume for a single VM

2021-08-03 Thread David White via Users
Hi Patrick,
This would be amazing, if possible.

Checking /gluster_bricks/data/data on the host where I've removed (but not 
replaced) the bricks, I see a single directory.
When I go into that directory, I see two directories:

dom_md
images

If I go into the images directory, I think I see the hash folders that you're 
referring to, and inside each of those, I see the 3 files you referenced.

Unfortunately, those files clearly don't have all of the data.
The parent folder for all of the hash folders is only 687M.

[root@cha1-storage data]# du -skh *
687M    31366488-d845-445b-b371-e059bf71f34f

And the "iso" files are small. The one I'm looking at now is only 19M.
It appears that most of the actual data is located in 
/gluster_bricks/data/data/.glusterfs, and all of those folders are totally 
random, incomprehensible directories that I'm not sure how to understand.

Perhaps you were on an older version of Gluster, and the actual data hierarchy 
is different?
I don't know. But I do see the 3 files you referenced, so that's a start, even 
if they are nowhere near the correct size.
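
(One guess on my side: if the volume has sharding enabled, which I believe
the oVirt-deployed Gluster volumes do by default, then only the first shard
of each image lives under images/ and the rest of the data is split into
<GFID>.1, <GFID>.2, ... files under the hidden .shard/ directory at the
brick root. The GFID of a base file can be read from its trusted.gfid
xattr, e.g. "getfattr -d -m. -e hex <file>". I may be wrong about this,
though.)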

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐

On Tuesday, August 3rd, 2021 at 1:49 AM, Patrick Lomakin 
 wrote:

> Greetings, I once wondered how data is stored between replicated bricks. 
> Specifically, how disks are stored on the storage domain in Gluster. I 
> checked a mounted brick via the standard path (path may be different) 
> /gluster/data/data and saw many directories there. Maybe the hierarchy is 
> different, can't check now. But in the end I got a list of directories. Each 
> directory name is a disk image hash. After going to a directory such as /HASH 
> there were 3 files. The first is a disk in raw/iso/qcow2 format (but the file 
> has no extension, I looked at the size) the other two files are the 
> configuration and metadata. I downloaded the disk image file (.iso) to my 
> computer via the curl command and service www.station307.com (no ads). And I 
> got the original .iso which uploaded to the storage domain through the hosted 
> engine interface. Maybe this way you can download the disk image to your 
> computer and then load it via the GUI and connect it to a virtual machine. 
> Good luck!
> 

> Users mailing list -- users@ovirt.org
> 

> To unsubscribe send an email to users-le...@ovirt.org
> 

> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> 

> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> 

> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6XITCEX5RNQB37YKDCR4EUKTV6W4HIR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QFCDE36MUQFFNQQULUYWWI7DBU3GG2KF/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-03 Thread Gilboa Davara
On Fri, Jul 30, 2021 at 5:17 PM  wrote:

> This is a simple one desktop setup I use at home for being a nerd :)
>
> So it's a single host 'cluster' using local storage.
>

Sorry for the late reply.
Define: local.
NFS, Gluster or ISCSI?

- Gilboa


>
> Host Info:
> CentOS Linux 8 - 4.18.0-305.10.2.el8_4.x86_64 (I keep fairly well updated)
> Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz [Kaby Lake] {Skylake}, 14nm
> The disk it's on is: WDC WD40EZRZ-00G (5400 RPM platter disk) - it's not
> the fastest thing in the world, but it should be sufficient.
>
> VM info:
> Windows 10 Professional (Desktop not server)
> 6144 MB of RAM
> 2 Virtual CPUS
>  - Some settings I ran across for 'Performance' mode and a couple I had
> seen on some similar issues (the similar issues were quite dated)
> Running in headless mode
> I/O Threads enabled = 1
> Multi-Queues enabled
> Virt-IO-SCSI enabled
> Random Number generator enabled
> Added a custom property of 'viodiskcache' = writeback  (Didn't seem to
> make any significant improvement)
>
> As I type this though - I was going to add this link as it's what I
> followed to install the storage driver during the Windows install and then
> in the OS after that:
>
> https://access.redhat.com/solutions/17463
>
> I did notice something.. it says to create a new VM with the 'VirtIO disk
> interface' and I just noted my VM is setup as 'SATA'.
>
> Perhaps that is it. This is just my first attempt at running something
> other than a Linux Distro under oVirt. When I first installed the Windows
> guest, I didn't have the Virt-IO package downloaded initially. When Windows
> couldn't find a storage driver, I found this info out.
>
> I think I'll deploy a new Windows guest and try the 'VirtIO-SCSI'
> interface and see if my performance is any better. It's just a default
> install of Windows at this point, so that'll be easy. :)
>
> Will update this thread either way!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YC5E3MRPKJPFAAQDCTH5CWGPTTN77SU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIZOXVW2N5ND4AW4DASH445WSUMVJ745/