[ovirt-users] Re: Important changes to the oVirt Terraform Provider

2022-01-07 Thread Janos Bonic
Hello Marek, hello everyone,

I'm sorry I didn't update you earlier. Unfortunately, a key member left our
team, which pushed back our release by some time. We are still pursuing the
matter according to the original plan and will release the TF provider, but
we will need some more time to work on it.

We'll keep the GitHub repository updated as development progresses.

Once again, I'm sorry for the delay.

Janos


On Wed, Jan 5, 2022, 10:03 PM marek  wrote:

> Hi,
>
> any plan for release?
>
> Marek
> On 06/10/2021 at 12:53, Janos Bonic wrote:
>
> Dear oVirt community,
>
> We are making sweeping and backwards-incompatible changes to the oVirt
> Terraform provider. *We want your feedback before we make these changes.*
>
> Here’s the short list of what we would like to change; please read the
> details below.
>
>    1. The current master branch will be renamed to legacy. Usage of this
>    provider will be phased out within Red Hat around the end of this year /
>    the beginning of next. If you want to create a fork, we are happy to add
>    a link to your fork to the readme.
>    2. A new main branch will be created and a *new Terraform provider*
>    written from scratch on the basis of go-ovirt-client (a preview is
>    available). This provider will only have limited functionality in its
>    first release.
>    3. This new provider will be released to the Terraform registry, and
>    will have full test coverage and documentation. It will be released as
>    version v2.0.0 when ready, to signal that it is built on the Terraform
>    SDK v2.
>    4. A copy of this new Terraform provider will be kept in the v1 branch
>    and backported to the Terraform SDK v1 for the benefit of the OpenShift
>    Installer. We will not tag any releases, and we will not release this
>    backported version in binary form.
>    5. We are hosting a *community call* on the 14th of October at 13:00
>    UTC; please join to provide feedback and suggest changes to this plan.
>
> Why are we doing this?
>
> The original Terraform provider for oVirt was written four years ago by
> @Maigard at EMSL-MSC. The oVirt fork of this provider is about two years
> old and went through rapid expansion, adding a large number of features.
>
> Unfortunately, this continuous rapid growth came at a price: the original
> test infrastructure deteriorated, and certain resources, especially
> virtual machine creation, ballooned to a size we feel has become
> unmaintainable.
>
> If you tried to contribute to the Terraform provider recently, you may
> have noticed that our review process has become extremely slow. We can no
> longer run the original tests, and our end-to-end test suite is not
> integrated outside of the OpenShift CI system. Every change to the
> provider requires one of only three people to review the code and also run
> a manual test suite that is currently only runnable on one computer.
>
> We also noticed an increasing number of bugs related to the Terraform
> provider being reported against OpenShift on oVirt/RHV.
>
> Our original plan was to fix the test infrastructure and then slowly
> transition API calls to go-ovirt-client, but that resulted in a PR of over
> 5000 lines of code that cannot in good conscience be merged in a single
> piece. Splitting it up is difficult, and would likely result in broken
> functionality where test coverage is not present.
> What are we changing for you, the users?
>
> First of all, documentation. You can already preview the documentation.
> You will notice that the provider currently only supports a small set of
> features. You can find the full list of features we are planning for the
> first release on GitHub. However, if you are using resources like cluster
> creation, these will currently not work, and we recommend sticking to the
> old provider for the time being.
>
> The second big change will be how resources are treated. Instead of large
> resources that need several oVirt API calls to create, we will provide
> resources that each call only one API. This will lead to fewer bugs. For
> example (a configuration sketch follows this list):
>
>    - ovirt_vm will create the VM, but not attach any disks or network
>    interfaces to it.
>    - ovirt_disk_attachment or ovirt_disk_attachments will attach a disk
>    to the VM.
>    - ovirt_nic will create a network interface.
>    - ovirt_vm_start will start the VM.
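>
> To make the new model concrete, below is a minimal sketch of what such a
> configuration might look like. The resource names come from the list
> above, but every attribute name is an assumption, not the final schema:
>
> cat > main.tf <<'EOF'
> # Hypothetical sketch of the granular resource model; attribute names
> # are assumptions, not the final schema.
> resource "ovirt_vm" "example" {
>   name        = "example-vm"
>   cluster_id  = "cluster-uuid-here"   # assumed attribute name
>   template_id = "template-uuid-here"  # assumed attribute name
> }
>
> # Disks and NICs are attached by separate, single-API-call resources.
> resource "ovirt_disk_attachment" "example" {
>   vm_id          = ovirt_vm.example.id
>   disk_id        = "disk-uuid-here"
>   disk_interface = "virtio_scsi"
> }
>
> resource "ovirt_nic" "example" {
>   vm_id           = ovirt_vm.example.id
>   vnic_profile_id = "profile-uuid-here"
>   name            = "nic1"
> }
>
> # Starting the VM is itself a resource, ordered after the attachments.
> resource "ovirt_vm_start" "example" {
>   vm_id      = ovirt_vm.example.id
>   depends_on = [ovirt_disk_attachment.example, ovirt_nic.example]
> }
> EOF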

[ovirt-users] Re: VM pause and snapshot is in illegal state

2022-01-07 Thread Strahil Nikolov via Users
What kind of storage domain do you use?

Best Regards,
Strahil Nikolov
 
 
Hi,

One of our VMs is paused due to lack of storage space while the VM is in a
snapshot deletion task. Now we can't restart or shut down the VM, and we
can't delete the snapshot.
How can I fix this problem? Any help is much appreciated!

Thank you in advance!

Regards,

Victor


[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-07 Thread Andy Kress

Apologies for the delay.

Yes sir, all folders and the uid/gid of the gluster vol are 36.

Thanks


[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-07 Thread Andy Kress

Yes sir, all hosts, volumes, and bricks have this setting.


[ovirt-users] VM pause and snapshot is in illegal state

2022-01-07 Thread vtse
Hi,

One of our VMs is paused due to lack of storage space while the VM is in a
snapshot deletion task. Now we can't restart or shut down the VM, and we
can't delete the snapshot.
How can I fix this problem? Any help is much appreciated!

Thank you in advance!

Regards,

Victor


[ovirt-users] mdadm vs. JBOD

2022-01-07 Thread jonas
Hi,

We are currently building a three-node hyper-converged cluster based on oVirt
Node and Gluster. While discussing the different storage layouts, we couldn't
reach a final decision.

Currently our servers are equipped as follows:
- servers 1 & 2:
  - Two 800GB disks for OS
    - 100GB RAID 1 used as LVM PV for OS
  - Nine 7.68TB disks for Gluster
    - 60TB RAID 5 used as LVM PV for Gluster
- server 3:
  - Two 800GB disks for OS & Gluster
    - 100GB RAID 1 used as LVM PV for OS
    - 700GB RAID 1 used as LVM PV for Gluster

Unfortunately, I couldn't find much information about using mdadm for this.
The hyper-convergence guides ([1], [2]) seem to assume that there is either a
hardware RAID in place or that JBOD is used. Is there some documentation
available on what to consider when using mdadm? Or would it be more sensible
to just use JBOD and add redundancy at the LVM or Gluster level?

If we choose to go with mdadm, which option should I select on the bricks
wizard screen (RAID 5 or JBOD)?
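
To make the question concrete, this is roughly what I have in mind for
servers 1 & 2 (a sketch only; device names and the volume group name are
made up):

# Build a RAID 5 array from the nine Gluster disks (device names assumed)
mdadm --create /dev/md/gluster --level=5 --raid-devices=9 /dev/sd[b-j]
# Hand the array to LVM, as in the layout described above
pvcreate /dev/md/gluster
vgcreate gluster_vg /dev/md/gluster
# Persist the array so it is assembled on boot
mdadm --detail --scan >> /etc/mdadm.conf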

[1]: 
https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
[2]: 
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-storage


[ovirt-users] Re: Linux VMs cannot boot from software raid0

2022-01-07 Thread Strahil Nikolov via Users
Hi Vojta,

The LVM packages on the hypervisor are:
udisks2-lvm2-2.9.0-7.el8.x86_64
llvm-compat-libs-12.0.1-4.module_el8.6.0+1041+0c503ac4.x86_64
lvm2-libs-2.03.14-2.el8.x86_64
lvm2-2.03.14-2.el8.x86_64
libblockdev-lvm-2.24-8.el8.x86_64

LVM on VM:
[root@nextcloud ~]# rpm -qa | grep lvm
lvm2-libs-2.03.12-10.el8.x86_64
lvm2-2.03.12-10.el8.x86_64


VM disk layout is:

[root@nextcloud ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 5G 0 disk 
├─sda1 8:1 0 500M 0 part 
│ └─md127 9:127 0 2G 0 raid0 /boot
└─sda2 8:2 0 4,5G 0 part 
 └─nextcloud--system-root 253:0 0 10G 0 lvm /
sdb 8:16 0 5G 0 disk 
├─sdb1 8:17 0 500M 0 part 
│ └─md127 9:127 0 2G 0 raid0 /boot
└─sdb2 8:18 0 4,5G 0 part 
 └─nextcloud--system-root 253:0 0 10G 0 lvm /
sdc 8:32 0 5G 0 disk 
├─sdc1 8:33 0 500M 0 part 
│ └─md127 9:127 0 2G 0 raid0 /boot
└─sdc2 8:34 0 4,5G 0 part 
 └─nextcloud--system-root 253:0 0 10G 0 lvm /
sdd 8:48 0 5G 0 disk 
├─sdd1 8:49 0 500M 0 part 
│ └─md127 9:127 0 2G 0 raid0 /boot
└─sdd2 8:50 0 4,5G 0 part 
 └─nextcloud--system-root 253:0 0 10G 0 lvm /
sde 8:64 0 1G 0 disk 
└─nextcloud--db-db 253:2 0 4G 0 lvm /var/lib/mysql
sdf 8:80 0 1G 0 disk 
└─nextcloud--db-db 253:2 0 4G 0 lvm /var/lib/mysql
sdg 8:96 0 1G 0 disk 
└─nextcloud--db-db 253:2 0 4G 0 lvm /var/lib/mysql
sdh 8:112 0 1G 0 disk 
└─nextcloud--db-db 253:2 0 4G 0 lvm /var/lib/mysql
sdi 8:128 0 300G 0 disk 
└─data-slow 253:1 0 600G 0 lvm /var/www/html/nextcloud/data
sdj 8:144 0 300G 0 disk 
└─data-slow 253:1 0 600G 0 lvm /var/www/html/nextcloud/data
sr0




I have managed to start my VM as follows:
1. Start the VM
2. Dump the VM XML
3. Destroy the VM
4. Add a boot entry (a <boot order='N'/> element) to every disk entry in the dump file
5. virsh define VM.xml
6. virsh start VM
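
Spelled out as commands, that is roughly the following (VM and file names
are placeholders; on an oVirt host virsh may additionally ask for the vdsm
SASL credentials):

virsh dumpxml MyVM > VM.xml    # steps 1-2: dump the running VM's XML
virsh destroy MyVM             # step 3: force the VM off
# step 4: edit VM.xml, adding a <boot order='N'/> element to each <disk>
virsh define VM.xml            # step 5: register the edited definition
virsh start MyVM               # step 6: start it outside of oVirt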


Sadly I can't set more than 1 disk as bootable in oVirt.


Best Regards,
Strahil Nikolov

On Friday, 7 January 2022, 13:38:59 GMT+2, Vojtech Juranek wrote:
 
 Hi,

> Hi All,
> I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
> software raid0 (I have multiple gluster volumes) is not possible with
> cluster compatibility 4.6. I've tested creating a fresh VM and it also
> suffers from the problem. Changing various options (virtio-scsi to virtio,
> chipset, VM type) did not help. Booting from rescue media shows that the
> data is still there, but grub always drops to rescue. Any hints are
> welcome.
> Host: CentOS Stream 8 with qemu-6.0.0, oVirt 4.4.9 (latest). VM OS:
> RHEL 7.9 / RHEL 8.5.
> Best Regards, Strahil Nikolov

What is the LVM version? There was recently an issue [1] with a specific LVM
version (lvm2-2.03.14-1.el8.x86_64) which could cause boot failures.

Vojta

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2026370



[ovirt-users] Re: Linux VMs cannot boot from software raid0

2022-01-07 Thread Strahil Nikolov via Users
 
Hi Vojta,

My storage domains are GlusterFS. I will boot into rescue to check the LVM
version on the VMs themselves.
Yet the two VMs I tested with have different LVM versions (EL7 vs EL8).

When I run "ls" from the grub rescue shell, I see only:
(hd0) (hd0,msdos2) (hd0,msdos1) (md/boot)

As only hd0 is visible (I have 10 disks and at least 4 in the device.map),
this indicates to me that it is the same problem the colleagues from Proxmox
have hit. Now I'm wondering how to bypass it, so I can power up the RHEL8 VM.

Best Regards,
Strahil Nikolov

On Friday, 7 January 2022, 13:33:01 GMT+2, Vojtech Juranek wrote:
 
 Hi,

> Hi All,
> I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
> software raid0 (I have multiple gluster volumes) is not possible with
> cluster compatibility 4.6. I've tested creating a fresh VM and it also
> suffers from the problem. Changing various options (virtio-scsi to virtio,
> chipset, VM type) did not help. Booting from rescue media shows that the
> data is still there, but grub always drops to rescue. Any hints are
> welcome.
> Host: CentOS Stream 8 with qemu-6.0.0, oVirt 4.4.9 (latest). VM OS:
> RHEL 7.9 / RHEL 8.5.
> Best Regards, Strahil Nikolov

What is the LVM version? There was recently an issue [1] with a specific LVM
version (lvm2-2.03.14-1.el8.x86_64) which could cause boot failures.

Vojta

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2026370



[ovirt-users] Re: Linux VMs cannot boot from software raid0

2022-01-07 Thread Vojtech Juranek
Hi,

> Hi All,
> I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
> software raid0 (I have multiple gluster volumes) is not possible with
> cluster compatibility 4.6. I've tested creating a fresh VM and it also
> suffers from the problem. Changing various options (virtio-scsi to virtio,
> chipset, VM type) did not help. Booting from rescue media shows that the
> data is still there, but grub always drops to rescue. Any hints are
> welcome.
> Host: CentOS Stream 8 with qemu-6.0.0, oVirt 4.4.9 (latest). VM OS:
> RHEL 7.9 / RHEL 8.5.
> Best Regards, Strahil Nikolov

What is the LVM version? There was recently an issue [1] with a specific LVM
version (lvm2-2.03.14-1.el8.x86_64) which could cause boot failures.

Vojta

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2026370
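
In case it helps, a quick way to check whether a host or VM is on the
affected build (version string taken from [1]):

rpm -q lvm2    # affected build: lvm2-2.03.14-1.el8
# if it matches, downgrading is one possible workaround:
# dnf downgrade lvm2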





[ovirt-users] Re: Instability after update

2022-01-07 Thread Andrea Chierici

On 07/01/2022 07:57, Ritesh Chikatwar wrote:

try downgrading on all hosts and give it a try

I completed the downgrade and the system seems to have recovered. Thanks
Ritesh, you saved my weekend!
What about future upgrades? Any clue as to what is going wrong in the recent
qemu packages?


Thanks again,

Andrea

--
Andrea Chierici - INFN-CNAF 
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463 
SkypeID ataruz
--

