I've now made installation attempts with the OI-hipster-gui-20210430.iso image on these three virtualization systems:
* CentOS 7.9 virt-manager/QEMU
* Ubuntu 20.04 virt-manager/QEMU
* Ubuntu 20.04 VirtualBox

Each VM is given 4GB DRAM, 4 CPUs, and an 80GB disk; the latter is not partitioned, so ZFS uses the entire disk. The disk image file is newly created, so it should contain nothing but zero blocks.

On all three, the GUI installer worked as expected: I selected a time zone (America/Denver), created one ordinary user account, and supplied a root password. Installation completed normally, and the systems rebooted.

On the CentOS-based VM, the one on which I reported boot problems before, I logged in after the reboot, used the network GUI tool to change to static IPv4 addressing, made one ZFS snapshot, ran "sync" (twice) and then "poweroff", and then took a virt-manager snapshot. On the next reboot, I again got a problem similar to what I reported previously:

    ZFS: i/o error - all block copies unavailable
    ZFS: failed to read pool rpool directory object

On the Ubuntu virt-manager VM, reboots are problem free; the VM is fully configured with a large number of installed packages, and is working nicely as part of our test farm.

The Ubuntu VirtualBox VM built and rebooted normally, so I took a ZFS snapshot, rebooted, and started to install packages: it seems normal so far.

I'm not going to spend time trying to resurrect the VM on CentOS 7, but I'm still willing to build additional VMs on that system from newer ISO releases of OpenIndiana Hipster.

One might be inclined to consider the CentOS-based VM an example of failure, or bugs, inside the host O/S, inside QEMU, or perhaps even in the physical workstation (a 2015-vintage HP Z440 with 128GB DRAM and several TB of disk storage, both EXT4 and ZFS). However, that machine runs 80 to 100 simultaneous VMs with other O/Ses, and has been rock solid in its almost six years of service. That would tend to exonerate the hardware and virt-manager/QEMU, suggesting that something inside OpenIndiana is causing the problem.
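For reference, the sequence I ran inside the CentOS-hosted guest before the failing reboot corresponds roughly to the following (the snapshot name here is illustrative, not the one I actually used; the static-IP change was done through the GUI, not the command line):

```shell
# Run as root inside the OpenIndiana guest.
zfs snapshot rpool@post-install   # one ZFS snapshot of the root pool (name illustrative)
sync                              # flush outstanding writes
sync                              # run twice, as described above
poweroff                          # clean shutdown before taking the virt-manager snapshot
```

After "poweroff" completed, the virt-manager snapshot was taken of the powered-off VM, and the "ZFS: i/o error" messages appeared only on the subsequent boot attempt.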
However, the success of two other VMs from the same ISO image indicates that OpenIndiana itself is solid. My workstation is essentially one-of-a-kind, so there is no way for me to see whether an apparently identical box from the same vendor would also experience failure of an OpenIndiana VM.

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233                   be...@acm.org  be...@computer.org     -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------

_______________________________________________
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss