[ovirt-users] Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-12 Thread Angus Clarke
Hello again,



Uploaded the QCOW2 again and created a new VM from scratch with the expected 
parameters -> boots fine.



The failure scenario comes about when the original VM was set to Q35/UEFI 
with the Virtio-SCSI disk type (it completely fails to boot - expected) and then 
changed to i440FX with BIOS and the disk type changed to IDE. I'm not sure which 
change (or whether both) triggers the issue.

This failure scenario persists if I upload a replacement QCOW2 image and attach 
it to the modified VM, including images from previous versions of HPE Oneview. 
This explains why I experienced repeated failures.



Overall this is probably not very interesting, as it seems to revolve around 
the IDE disk type - I'll feed back to HPE the Virtio-SCSI notes that Gianluca and 
Simon have mentioned.

Thanks a lot
Angus









[ovirt-users] Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-11 Thread Angus Clarke
Hi Gianluca



Thank you for the detailed instructions - these were excellent. I wasn't aware 
of the "lsinitrd" command before now - thanks!



My VM still sticks at the same point when booting with the virtio-scsi 
configuration. Meh!



I'm encouraged that the image booted ok in your environment => points to 
something specific to my environment.



I've raised a case with Oracle, as we are using OLVM. I don't think they'll take 
an interest; let's see. If I get anywhere I'll report back here for the record.



Thanks again

Angus







[ovirt-users] Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-10 Thread Gianluca Cecchi
On Wed, Apr 10, 2024 at 12:29 PM Angus Clarke  wrote:

> Hi Gianluca
>
> The software is free from HPE but requires a login, I've shared a link
> separately.
>
> Thanks for taking an interest
>
> Regards
> Angus
>

Apart from other considerations we are privately sharing: in my env, which is
based on a Cascade Lake CPU on the host with a local storage domain on a
filesystem, the appliance is able to boot and complete the initial
configuration phase using your settings: chipset i440FX with BIOS and the IDE
disk type, OS: RHEL7 x86_64. In my env the graphics protocol is VNC and the video type is VGA.
The constraint behind your tweaks comes from the appliance's operating
system: all the virtio drivers are compiled as modules and are not
included in the initramfs.
So the system doesn't find the boot disk if you set it as virtio or
virtio-scsi.
The disk layout is BIOS-style, with one partition for /boot and the other
filesystems, / included, on LVM.
To modify the qcow2 image you can use one of the available tools, or follow
manual steps this way:
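
For the tool route, here is a hedged sketch using libguestfs' virt-customize; the tool's availability, the image filename and the kernel version are my assumptions, so adjust them to your download:

```shell
# Hedged sketch: rebuild the appliance initramfs offline with virt-customize,
# skipping the helper-VM steps below. Image name and kernel version are
# assumptions; adjust them to your environment.
img=HPE_OneView.qcow2
kver=3.10.0-1062.1.2.el7.x86_64
if command -v virt-customize >/dev/null 2>&1 && [ -f "$img" ]; then
    virt-customize -a "$img" \
        --write '/etc/dracut.conf.d/virtio.conf:add_drivers+="virtio virtio_blk virtio_scsi"' \
        --run-command "dracut -f /boot/initramfs-${kver}.img ${kver}"
    result=customized
else
    result="skipped (virt-customize or image not present)"
fi
echo "$result"
```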

. connect the disk to an existing RHEL 7 / CentOS 7 helper VM that has the
lvm2 package installed
In my case the helper VM has one disk named /dev/sda, and the HPE qcow2 disk,
when added, is seen as /dev/sdb with its partitions as /dev/sdb1, ...
IMPORTANT: change the disk names below to match how the appliance disk appears
in your env, otherwise you risk compromising your existing data!!!
IMPORTANT: inside the appliance disk there is a volume group named vg01.
Verify there is no vg01 volume group already defined in your helper VM,
otherwise you will get into trouble
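
The vg01 clash can be checked before attaching the disk with a quick sketch like this (safe to run; vgs comes from the lvm2 package):

```shell
# Sketch: warn if the helper VM already has a vg01 volume group,
# which would clash with the one inside the appliance disk.
if command -v vgs >/dev/null 2>&1; then
    if vgs --noheadings -o vg_name 2>/dev/null | grep -qw vg01; then
        vg_check="CONFLICT: vg01 already exists on this helper VM"
    else
        vg_check="OK: no vg01 volume group found"
    fi
else
    vg_check="lvm2 not installed - install it before attaching the disk"
fi
echo "$vg_check"
```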

. connect to the helper VM as the root user

. the LVM structure of the added disk (PV/VG/LV) should be automatically
detected
run the command "vgs" and you should see the vg01 volume group listed
run the command "lvs vg01" and you should see some logical volumes listed
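
If vg01 does not show up automatically, a rescan/activate sketch like the following usually brings it up (requires root; the "|| true" guards keep it harmless on a machine without the appliance disk attached):

```shell
# Sketch: rescan block devices for LVM metadata and activate vg01 so the
# /dev/vg01/* device nodes appear. "|| true" keeps the sketch from aborting
# on a machine where the appliance disk is not attached.
lvm_present=no
if command -v vgscan >/dev/null 2>&1; then
    lvm_present=yes
    vgscan || true
    vgchange -ay vg01 || true   # activate all LVs in vg01
    lvs vg01 || true
fi
echo "lvm tools present: $lvm_present"
```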

. mount the root filesystem of the appliance disk on a directory in your
helper VM (the /media directory in my case)
# mount /dev/vg01/lv_root /media/

. mount the /boot filesystem of the appliance disk under /media/boot
# mount /dev/sdb1 /media/boot/

. mount the /var filesystem of the appliance disk under /media/var
# mount /dev/vg01/lv_var /media/var/

. chroot into the appliance disk env
# chroot /media

. create a file with the new kernel driver modules you want to include in the
new initramfs
# vi /etc/dracut.conf.d/virtio.conf

its contents have to be the one line below (similar to the already present
platform.conf):
# cat /etc/dracut.conf.d/virtio.conf
add_drivers+="virtio virtio_blk virtio_scsi"

. back up the original initramfs
# cp -p /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img
/boot/initramfs-3.10.0-1062.1.2.el7.x86_64.bak

. replace the initramfs
# dracut -fv /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img
3.10.0-1062.1.2.el7.x86_64
...
*** Creating image file done ***
*** Creating initramfs image file
'/boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img' done ***
#

. verify the new contents include virtio modules

# lsinitrd /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img | grep virtio
-rw-r--r--   1 root root 7876 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/block/virtio_blk.ko.xz
-rw-r--r--   1 root root12972 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/char/virtio_console.ko.xz
-rw-r--r--   1 root root14304 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/net/virtio_net.ko.xz
-rw-r--r--   1 root root 8188 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/scsi/virtio_scsi.ko.xz
drwxr-xr-x   2 root root0 Apr 10 21:14
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio
-rw-r--r--   1 root root 4552 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio.ko.xz
-rw-r--r--   1 root root 9904 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio_pci.ko.xz
-rw-r--r--   1 root root 8332 Sep 30  2019
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio_ring.ko.xz

. exit the chroot environment
# exit

. now that you have exited the chroot env, umount the appliance disk filesystems
# umount /media/var /media/boot
# umount /media
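
For repeat runs, the mount / chroot / dracut / umount steps above can be condensed into a function. This is a hedged sketch of my own (the function name is made up; it assumes the same vg01/lv_root/lv_var layout and the 3.10.0-1062.1.2 kernel, and writes the dracut drop-in from outside the chroot, which is equivalent to the vi step):

```shell
# Hedged sketch: one function wrapping the manual steps. Run as root with the
# appliance disk attached, e.g.: rebuild_appliance_initramfs /dev/sdb
rebuild_appliance_initramfs() {
    local disk=${1:?usage: rebuild_appliance_initramfs /dev/sdX}
    local kver=3.10.0-1062.1.2.el7.x86_64
    mount /dev/vg01/lv_root /media        || return 1
    mount "${disk}1" /media/boot          || return 1
    mount /dev/vg01/lv_var /media/var     || return 1
    # same content as the vi-edited /etc/dracut.conf.d/virtio.conf above
    echo 'add_drivers+="virtio virtio_blk virtio_scsi"' \
        > /media/etc/dracut.conf.d/virtio.conf
    # keep a backup, then rebuild and verify the initramfs inside the chroot
    cp -p "/media/boot/initramfs-${kver}.img" "/media/boot/initramfs-${kver}.bak"
    chroot /media dracut -fv "/boot/initramfs-${kver}.img" "$kver"
    chroot /media lsinitrd "/boot/initramfs-${kver}.img" | grep -q virtio_scsi \
        && echo "virtio modules present in new initramfs"
    umount /media/var /media/boot /media
}
```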

. disconnect the disk from the helper VM

. create a Red Hat 7.x VM in your oVirt/OLVM env as a Q35 / BIOS VM with the
appliance disk configured as a virtio or virtio-scsi disk

. boot the VM and it should work, apart from the current display problem in
your env

Eventually, if it boots ok and works, push HPE to add the virtio modules,
which are pretty much the standard for disks in QEMU/KVM-based envs.
The virtio network already starts ok because its driver is activated after
boot as a module; it is not needed in the initrd phase, only after it.
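
Once the reworked VM is up, a quick sketch inside the guest confirms how many virtio drivers actually loaded (on the reworked VM you would expect virtio_scsi or virtio_blk among them):

```shell
# Sketch: count loaded virtio modules from inside the booted appliance.
# "|| true" keeps the count at 0 instead of failing when nothing matches.
virtio_mods=$(grep -ci virtio /proc/modules 2>/dev/null || true)
echo "virtio modules loaded: ${virtio_mods:-unknown}"
```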

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org

[ovirt-users] Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-10 Thread Angus Clarke
Hi Gianluca



The software is free from HPE but requires a login; I've shared a link 
separately.



Thanks for taking an interest



Regards

Angus







 On Wed, 10 Apr 2024 11:56:54 +0200 Gianluca Cecchi 
 wrote ---



On Wed, Apr 10, 2024 at 11:47 AM Angus Clarke  wrote:

Hello folks



I realise this probably isn't the place for this but someone might be 
interested or have some knowledge.



I deployed the KVM version of HPE Oneview 8.8 to oVirt 4.5 (OLVM 4.5). It came 
as a single QCOW2 disk image.
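
As a side note, the downloaded image can be inspected before import with a sketch like this (qemu-img availability and the filename are assumptions):

```shell
# Sketch: show format, virtual size and backing info of the appliance image
# before importing it into oVirt/OLVM. Filename is an assumption.
img=HPE_OneView.qcow2
if command -v qemu-img >/dev/null 2>&1 && [ -f "$img" ]; then
    qemu-img info "$img"
    inspected=yes
else
    inspected="skipped (qemu-img or image not present)"
fi
echo "$inspected"
```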







Is the image download publicly available? Or does it need any form of 
subscription?



Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYCE3ZTGJHGY2IM2FKEAWGBQFLKAHV7Z/


[ovirt-users] Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-10 Thread Gianluca Cecchi
On Wed, Apr 10, 2024 at 11:47 AM Angus Clarke  wrote:

> Hello folks
>
> I realise this probably isn't the place for this but someone might be
> interested or have some knowledge.
>
> I deployed the KVM version of HPE Oneview 8.8 to oVirt 4.5 (OLVM 4.5) It
> came as a single QCOW2 disk image.
>
>
Is the image download publicly available? Or does it need any form of
subscription?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXYGVCRH6QURQPOVTOSZTEY7ZT2AYMGQ/