[ovirt-users] Re: Installing Server 2022 VM
This ended up being a problem with an old ISO from Microsoft. The ISO labeled "Server2022-December2021" doesn't work, but a freshly downloaded one, labeled "Server2022-October2022", does.

Devin

> On Apr 20, 2023, at 10:19 AM, Devin A. Bougie wrote:
>
> We've been struggling to install Windows Server 2022 on oVirt. We recently
> upgraded to the latest oVirt 4.5 on EL9 hosts, but it didn't help.
>
> In the past, we could boot a VM from the install CD, add the mass-storage
> drivers from the virtio-win CD, and proceed from there. However, oVirt 4.3
> didn't have a Server 2022 selection.
>
> After upgrading to oVirt 4.5, we couldn't get it to boot from the CD at all.
> It just gave us Windows boot errors with q35 EFI. If we switch back to
> i440fx, we get the same behavior as in oVirt 4.3: it boots the DVD but
> doesn't find a disk with the virtio-scsi drivers.
>
> We'd greatly appreciate hearing from anyone who has successfully installed
> Server 2022 in oVirt.
>
> Many thanks,
> Devin

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJAM5IDTUSRWPCCO4E25TRIGUBG4BB6P/
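For anyone hitting the same thing: one way to check which build an ISO actually is, without booting it, is to read its volume label. A hedged sketch (assumes `isoinfo` from the genisoimage package is available; the file name is a placeholder):

```
# Dump the ISO9660 primary volume descriptor and pick out the volume
# label, which carries Microsoft's build/date stamp for the image.
isoinfo -d -i Server2022.iso | grep -i 'Volume id'
```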
[ovirt-users] Re: hosted-engine to standalone
On Thu, Apr 20, 2023 at 4:52 PM Gianluca Cecchi wrote:
>
> On Thu, Apr 20, 2023 at 2:15 PM carl langlois wrote:
>>
>> Hi,
>>
>> I plan to transform my hosted-engine setup into a standalone engine.
>> Currently I have 3 hosts that can run the engine. The engine domain is
>> located on GlusterFS. I want to simplify this setup by taking one of the
>> three hosts and setting it up as a standalone engine, and re-installing
>> the other hosts as standard hypervisors. I also want to remove the
>> GlusterFS storage. I am on 4.3 for now, but the plan is to upgrade after
>> this simplification. The steps I plan to do are:
>>
>>    1. enable global maintenance
>>    2. stop the engine
>>    3. back up the engine
>>    4. shut down the engine
>>    5. install a fresh standalone engine and restore from the backup
>>    6. boot the standalone engine
>>    7. afterwards, I'm not sure what the steps are to clean up the old
>>       engine domain
>>
>> Any suggestions?
>>
>> Regards,
>> Carl
>
> There was a post from David 2 years ago regarding this same scenario:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQWLGVB7NUALLGJ47QYWFTXD25UJX57Q/
> Inline in the message you can find the high-level steps and also a link to
> a GitHub repo (I don't know if or how current it is).
> To be verified how much it applies to your current version of oVirt; I
> don't think there is official documentation for it yet (at least for oVirt
> as the upstream project).
>
> HIH,
> Gianluca

Actually the post is from 6 months ago, not 2 years - I forgot to disable the time machine before replying... ;-)

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KR6I5RSRLXPY6EVLAHMEZAY326LJ47KR/
[ovirt-users] Re: hosted-engine to standalone
On Thu, Apr 20, 2023 at 2:15 PM carl langlois wrote:
>
> Hi,
>
> I plan to transform my hosted-engine setup into a standalone engine.
> Currently I have 3 hosts that can run the engine. The engine domain is
> located on GlusterFS. I want to simplify this setup by taking one of the
> three hosts and setting it up as a standalone engine, and re-installing
> the other hosts as standard hypervisors. I also want to remove the
> GlusterFS storage. I am on 4.3 for now, but the plan is to upgrade after
> this simplification. The steps I plan to do are:
>
>    1. enable global maintenance
>    2. stop the engine
>    3. back up the engine
>    4. shut down the engine
>    5. install a fresh standalone engine and restore from the backup
>    6. boot the standalone engine
>    7. afterwards, I'm not sure what the steps are to clean up the old
>       engine domain
>
> Any suggestions?
>
> Regards,
> Carl

There was a post from David 2 years ago regarding this same scenario:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQWLGVB7NUALLGJ47QYWFTXD25UJX57Q/
Inline in the message you can find the high-level steps and also a link to a GitHub repo (I don't know if or how current it is).
To be verified how much it applies to your current version of oVirt; I don't think there is official documentation for it yet (at least for oVirt as the upstream project).

HIH,
Gianluca

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ECFK4SJ234YEIUSJVKYW3QAIAYDDRPT/
[ovirt-users] Installing Server 2022 VM
We've been struggling to install Windows Server 2022 on oVirt. We recently upgraded to the latest oVirt 4.5 on EL9 hosts, but it didn't help.

In the past, we could boot a VM from the install CD, add the mass-storage drivers from the virtio-win CD, and proceed from there. However, oVirt 4.3 didn't have a Server 2022 selection.

After upgrading to oVirt 4.5, we couldn't get it to boot from the CD at all. It just gave us Windows boot errors with q35 EFI. If we switch back to i440fx, we get the same behavior as in oVirt 4.3: it boots the DVD but doesn't find a disk with the virtio-scsi drivers.

We'd greatly appreciate hearing from anyone who has successfully installed Server 2022 in oVirt.

Many thanks,
Devin

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSYQCABC4THLO4RZBO4URCQ6SBDW4UQ6/
[ovirt-users] hosted-engine to standalone
Hi,

I plan to transform my hosted-engine setup into a standalone engine. Currently I have 3 hosts that can run the engine. The engine domain is located on GlusterFS. I want to simplify this setup by taking one of the three hosts and setting it up as a standalone engine, and re-installing the other hosts as standard hypervisors. I also want to remove the GlusterFS storage. I am on 4.3 for now, but the plan is to upgrade after this simplification. The steps I plan to do are:

1. enable global maintenance
2. stop the engine
3. back up the engine
4. shut down the engine
5. install a fresh standalone engine and restore from the backup
6. boot the standalone engine
7. afterwards, I'm not sure what the steps are to clean up the old engine domain

Any suggestions?

Regards,
Carl

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/AUZAMMMVITN37EUXNRRM6ELWHOPAT6QV/
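Steps 1 through 6 above map roughly onto the standard hosted-engine and engine-backup commands. The following is a sketch only, under the assumption that the usual engine-backup backup/restore flow applies to 4.3; file names are placeholders, and the exact restore flags should be checked against `engine-backup --help` on your version:

```
# On a hosted-engine host: enter global maintenance so the HA agents
# stop managing the engine VM while we work on it.
hosted-engine --set-maintenance --mode=global

# On the engine VM: stop the engine service and take a full backup.
systemctl stop ovirt-engine
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

# Copy engine-backup.tar.gz to the new standalone engine machine
# (which must answer to the same engine FQDN), then restore and
# re-run setup there.
engine-backup --mode=restore --file=engine-backup.tar.gz \
    --log=engine-restore.log --provision-db --restore-permissions
engine-setup
```

Cleaning up the old hosted-engine storage domain (step 7) is the part David's post covers; it is not handled by the commands above.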
[ovirt-users] Not able to create a storage domain of POSIX compliant type
Hi team,

We have installed oVirt 4.4. We have a self-hosted engine setup in the environment, with one hosted engine on top of one deployment host.

Goal: we want to create a storage domain of POSIX compliant type for mounting a Ceph-based infrastructure. We have set up SRV-based resolution in our DNS server, but we are unable to create the storage domain.

Issue: we are passing the following information:

path: :/volumes/xyz/conf/00593e1d-b674-4b00-a289-20bec06761c9
vfs-type: ceph
mount options: rw,name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA=

We get the following errors:

vdsm.log:

2023-04-20 11:26:30,318+0530 INFO  (jsonrpc/7) [storage.Mount] mounting :/volumes/xyz/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 at /rhev/data-center/mnt/:_volumes_xyz_conf_2ee9c2d0-873b-4d04-8c46-4c0da02787b8 (mount:207)
2023-04-20 11:26:30,384+0530 ERROR (jsonrpc/7) [storage.HSM] Could not connect to storageServer (hsm:2374)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2371, in connectStorageServer
    conObj.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 180, in connect
    six.reraise(t, v, tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 171, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 210, in mount
    cgroup=cgroup)
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in
    **kwargs)
  File "<string>", line 2, in mount
  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
    raise convert_to_error(kind, result)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'ceph', '-o', 'rw,name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA==', ':/volumes/xyz/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8', '/rhev/data-center/mnt/:_volumes_xyz_conf_2ee9c2d0-873b-4d04-8c46-4c0da02787b8'] failed with rc=32 out=b'' err=b'mount error 3 = No such process\n'
2023-04-20 11:31:05,715+0530 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH connectStorageServer error=:/volumes/xyz/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 is not a valid hosttail address: (dispatcher:87)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/network/address.py", line 42, in hosttail_split
    raise ValueError
ValueError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 110, in wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 1190, in prepare
    raise self.error
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 884, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in connectStorageServer
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2368, in connectStorageServer
    conObj = storageServer.ConnectionFactory.createConnection(conInfo)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 741, in createConnection
    return ctor(**params)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 154, in __init__
    self._remotePath = fileUtils.normalize_path(spec)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileUtils.py", line 98, in normalize_path
    host, tail = address.hosttail_split(path)
  File "/usr/lib/python3.6/site-packages/vdsm/common/network/address.py", line 45, in hosttail_split
    raise HosttailError('%s is not a valid hosttail address:' % hosttail)
vdsm.common.network.address.HosttailError: :/volumes/xyz/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 is not a valid hosttail address:
2023-04-20 11:31:05,715+0530 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer failed (error 451) in 0.00 seconds (__init__:312)

engine.log:

2023-04-20 11:31:03,818+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-29) [6d7913e2-83cf-450d-8746-40f1582d959d] HostName = deployment-host
2023-04-20 11:31:03,818+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-29) [6d7913e2-83cf-450d-8746-40f1582d959d] Command
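The HosttailError in the second traceback is vdsm complaining that the `path` value has no host before the colon: a POSIX domain path must look like `host:tail`, and the path supplied starts with `:`. A minimal shell illustration of that check (an assumption-laden sketch, not vdsm's actual code; `mon1.example.com` below is a placeholder):

```shell
# The path handed to vdsm, with nothing before the colon:
spec=":/volumes/xyz/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8"

# Everything before the first colon is the host part; here it is empty,
# which is what makes vdsm's hosttail_split raise HosttailError.
host="${spec%%:*}"
tail="${spec#*:}"

if [ -z "$host" ]; then
  echo "invalid hosttail address: missing host"
fi

# A valid spec would instead carry the Ceph monitor host in front, e.g.
#   mon1.example.com:/volumes/xyz/conf/2ee9c2d0-...
# so the likely fix is to put the monitor address(es) in the "path"
# field rather than relying on DNS SRV records to fill in the host.
```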
[ovirt-users] Re: Ovirt installation, deployment failed..
> On 19. 4. 2023, at 19:48, John Bnet wrote:
>
> Hi,
>
> I'm new to oVirt. I'm trying to get it up and running, but I'm stuck at
> the deployment point.
>
> Does anyone have an idea of what is going on?
>
> This is what I did to install on CentOS 8
> -->

if you mean CentOS Stream 8...

> hostnamectl set-hostname node1.local.net
> vi /etc/hosts
> dnf install -y centos-release-ovirt45

https://www.ovirt.org/develop/dev-process/install-nightly-snapshot.html could work better. The nightly repo has a few crucial fixes around Ansible; Ansible 2.14 created quite a mess recently.

HTH,
michal

> dnf module enable -y javapackages-tools pki-deps postgresql:12 389-ds mod_auth_openidc
> dnf update
> dnf install ovirt-engine
> engine-setup
>
> dnf install cockpit-ovirt-dashboard vdsm-gluster ovirt-host
> systemctl enable cockpit.socket
> systemctl start cockpit.socket
> firewall-cmd --list-services
> firewall-cmd --permanent --add-service=cockpit
>
> hosted-engine --deploy
>
> and here is a part of the log output:
>
> 2023-04-19 09:32:36,628-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2023-04-19 09:32:36,628-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV CORE/internalPackageTransaction=Transaction:'[DNF Transaction]'
> 2023-04-19 09:32:36,628-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV CORE/mainTransaction=Transaction:'[DNF Transaction]'
> 2023-04-19 09:32:36,629-0400 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
> 2023-04-19 09:32:36,629-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup
> 2023-04-19 09:32:36,629-0400 DEBUG otopi.context context._executeMethod:136 otopi.plugins.otopi.packagers.yumpackager.Plugin._setup condition False
> 2023-04-19 09:32:36,630-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._setup
> 2023-04-19 09:32:36,631-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2023-04-19 09:32:36,631-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/dig=NoneType:'None'
> 2023-04-19 09:32:36,631-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/ip=NoneType:'None'
> 2023-04-19 09:32:36,631-0400 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
> 2023-04-19 09:32:36,632-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._setup
> 2023-04-19 09:32:36,633-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._setup
> 2023-04-19 09:32:36,634-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.gr_he_common.network.network_check.Plugin._setup
> 2023-04-19 09:32:36,634-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2023-04-19 09:32:36,634-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/nc=NoneType:'None'
> 2023-04-19 09:32:36,634-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/ping=NoneType:'None'
> 2023-04-19 09:32:36,635-0400 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
> 2023-04-19 09:32:36,635-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._setup
> 2023-04-19 09:32:36,636-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2023-04-19 09:32:36,636-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/ssh-keygen=NoneType:'None'
> 2023-04-19 09:32:36,636-0400 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
> 2023-04-19 09:32:36,637-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.otopi.network.firewalld.Plugin._setup
> 2023-04-19 09:32:36,637-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2023-04-19 09:32:36,637-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/firewall-cmd=NoneType:'None'
> 2023-04-19 09:32:36,637-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/python3=NoneType:'None'
> 2023-04-19 09:32:36,638-0400 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
> 2023-04-19 09:32:36,638-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.otopi.network.hostname.Plugin._setup
> 2023-04-19 09:32:36,639-0400 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.otopi.services.openrc.Plugin._setup
> 2023-04-19 09:32:36,639-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2023-04-19 09:32:36,639-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV COMMAND/rc=NoneType:'None'
>