[ovirt-users] Re: oVirt and the future

2021-08-11 Thread Yedidyah Bar David
On Wed, Aug 11, 2021 at 10:03 PM wrote:
>
> Hi all,
> I'm looking for some information about the future of oVirt. With CentOS
> going away have they talked about what they will be doing or moving to? I'd
> like to see Ubuntu support.

I suggest searching the archives of this list - there were

[ovirt-users] oVirt and the future

2021-08-11 Thread thilburn
Hi all, I'm looking for some information about the future of oVirt. With CentOS going away, have they talked about what they will be doing or moving to? I'd like to see Ubuntu support.

[ovirt-users] Automigration of VMs from other hypervisors

2021-08-11 Thread KK CHN
Hi list, I am in the process of migrating 150+ VMs running on RHV 4.1 to a KVM-based OpenStack installation (Ussuri, with KVM and Glance as image storage). What I am doing now: manually shut down each VM through the RHV-M GUI, export it to the export domain, and scp the image files of each VM to our
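
[Editor's note: the shutdown-and-export step described above can be scripted rather than done per VM in the GUI. Below is a minimal sketch using the Python oVirt SDK (ovirtsdk4); the engine URL, credentials, VM name and export-domain name are placeholders, and the export-domain workflow itself is deprecated in oVirt 4.x, so treat this as illustrative only.]

# Sketch: stop a VM and export it to an export storage domain via the
# oVirt SDK. Assumes ovirtsdk4 is installed; all names and credentials
# below are placeholders, not values from the thread.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Shut the VM down and wait until it is actually down.
vm_service.stop()
while vms_service.vm_service(vm.id).get().status != types.VmStatus.DOWN:
    time.sleep(5)

# Export to the export storage domain (name assumed here).
vm_service.export(
    exclusive=True,
    discard_snapshots=True,
    storage_domain=types.StorageDomain(name='export'),
)

connection.close()

[The exported OVF and disk images under the export domain can then be converted with qemu-img and uploaded to Glance, which matches the manual flow described above.]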

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Nir Soffer
On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas wrote:
>
> On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik wrote:
>>
>> > If your vm is temporary and you like to drop the data written while
>> > the vm is running, you
>> > could use a temporary disk based on the template. This is called a
>> >

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Nir Soffer
On Wed, Aug 11, 2021 at 3:13 PM Shantur Rathore wrote:
>
>> Yes, on file based storage a snapshot is a file, and it grows as
>> needed. On block based
>> storage, a snapshot is a logical volume, and oVirt needs to extend it
>> when needed.
>
> Forgive my ignorance, coming from vSphere

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Arik Hadas
On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik wrote:
> > If your vm is temporary and you like to drop the data written while
> > the vm is running, you
> > could use a temporary disk based on the template. This is called a
> > "transient disk" in vdsm.
> >
> > Arik, maybe you remember how

[ovirt-users] Re: Ubuntu 20.04 cloud-init

2021-08-11 Thread Pavel Šipoš
Thank you for your answer. I dug further and figured out that cleaning the previous cloud-init configuration files on the template VM and reinstalling the cloud-init package helped. So it's working now. Pavel

On 10/08/2021 13:20, Florian Schmid via Users wrote:
Hello Pavel, we are also using 4.3 and for
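
[Editor's note: for reference, the cleanup Pavel describes can be done with cloud-init's own "clean" subcommand. A minimal sketch, to be run as root inside the Ubuntu template VM; the apt reinstall step is an assumption about how the package was reinstalled, not taken from the thread.]

# Sketch: reset cloud-init state on a template VM so the next boot
# re-runs the oVirt-provided configuration. Run as root in the guest.
import subprocess

# "cloud-init clean" removes /var/lib/cloud so the instance is treated
# as new on next boot; --logs also removes old cloud-init logs.
subprocess.run(['cloud-init', 'clean', '--logs'], check=True)

# Reinstall the package to restore pristine config files (apt assumed).
subprocess.run(['apt-get', 'install', '--reinstall', '-y', 'cloud-init'],
               check=True)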

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Shantur Rathore
> Yes, on file based storage a snapshot is a file, and it grows as
> needed. On block based
> storage, a snapshot is a logical volume, and oVirt needs to extend it
> when needed.

Forgive my ignorance, coming from a vSphere background where a filesystem was created on the iSCSI LUN. I take it that this
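
[Editor's note: the file-based behaviour quoted above is easy to observe locally: a qcow2 overlay starts almost empty and its on-disk allocation grows as data is written. A small sketch using qemu-img and qemu-io; file names and sizes are made up for illustration.]

# Sketch: show that a qcow2 overlay is a sparse file whose on-disk
# allocation grows with writes, matching the file-storage behaviour
# described above. Requires qemu-img and qemu-io in PATH.
import os
import subprocess

def allocated_bytes(path):
    # st_blocks is in 512-byte units: actual on-disk allocation,
    # as opposed to the apparent (virtual) file size.
    return os.stat(path).st_blocks * 512

# Base image standing in for the template, 10G virtual size.
subprocess.run(['qemu-img', 'create', '-f', 'qcow2', 'base.qcow2', '10G'],
               check=True)

# Overlay standing in for the snapshot: tiny at creation time.
subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                '-b', 'base.qcow2', '-F', 'qcow2', 'overlay.qcow2'],
               check=True)
print('overlay allocation at creation:', allocated_bytes('overlay.qcow2'))

# Simulate guest writes going into the overlay; allocation grows.
subprocess.run(['qemu-io', '-f', 'qcow2', '-c', 'write 0 64M',
                'overlay.qcow2'], check=True)
print('overlay allocation after writes:', allocated_bytes('overlay.qcow2'))

[On block storage there is no filesystem to absorb this growth, which is why, as quoted above, oVirt has to actively extend the logical volume holding the qcow2 data as it fills.]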

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Benny Zlotnik
> If your vm is temporary and you like to drop the data written while
> the vm is running, you
> could use a temporary disk based on the template. This is called a
> "transient disk" in vdsm.
>
> Arik, maybe you remember how transient disks are used in engine?
> Do we have an API to run a VM once,
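
[Editor's note: mechanically, a "transient disk" of the kind described above amounts to a throwaway qcow2 overlay on top of the read-only template image, deleted when the VM stops. The sketch below is a conceptual illustration of that lifecycle, not vdsm's actual implementation; the template path and VM-launch stub are hypothetical.]

# Sketch of the "transient disk" idea: a throwaway qcow2 overlay over a
# read-only template image; deleting the overlay after the VM stops
# drops everything the guest wrote. Not vdsm's code.
import os
import subprocess
import tempfile

def run_vm(overlay_path):
    """Stub standing in for starting the VM on the overlay and waiting
    until it shuts down."""
    print('would run VM on', overlay_path)

template = 'template-disk.qcow2'  # hypothetical read-only base image

with tempfile.TemporaryDirectory() as tmp:
    overlay = os.path.join(tmp, 'transient.qcow2')
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-b', template, '-F', 'qcow2', overlay], check=True)
    run_vm(overlay)
# The temporary directory is removed here, discarding all writes the
# guest made while running - the template itself is never modified.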

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Nir Soffer
On Wed, Aug 11, 2021 at 12:43 AM Shantur Rathore wrote:
>
> Thanks for the detailed response Nir.
>
> In my use case, we keep creating VMs from templates and deleting them so we
> need the VMs to be created quickly and cloning it will use a lot of time and
> storage.

That's a good reason to
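
[Editor's note: for this create-and-delete pattern, the relevant knob in the Python SDK is the clone flag when adding a VM from a template: clone=False creates a thin VM quickly, with its disks as qcow2 overlays over the template, at the cost of the snapshot-chain behaviour discussed in this thread. A minimal sketch; engine URL, credentials, cluster, template and VM names are all placeholders.]

# Sketch: create a VM from a template without cloning the disks, so it
# comes up quickly as a thin copy. All names below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vms_service.add(
    types.Vm(
        name='scratch-vm-01',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='my-template'),
    ),
    clone=False,  # thin: disks remain qcow2 overlays over the template
)
connection.close()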

[ovirt-users] Re: Cannot restart ovirt after massive failure.

2021-08-11 Thread Yedidyah Bar David
On Tue, Aug 10, 2021 at 9:20 PM Gilboa Davara wrote:
>
> Hello,
>
> Many thanks again for taking the time to try and help me recover this machine
> (even though it would have been far easier to simply redeploy it...)
>
>> >
>> >
>> > Sadly enough, it seems that --clean-metadata requires an