Copy the long XML and save it to a file. Then edit it and remove the requirement for the mitigations.
Then set the following alias:

alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'

Then:

virsh undefine HostedEngine
virsh define <file>
virsh start HostedEngine

Maybe it will work :)

Best Regards,
Strahil Nikolov

On Nov 15, 2019 15:30, Christian Reiss <[email protected]> wrote:
>
> On 15/11/2019 13:30, [email protected] wrote:
> > Since there is no guarantee that the oVirt node image and the hosted-engine
> > image are aligned, I'd recommend disabling all mitigations during the
> > host's boot (I only got a list of the Intel flags, sorry: not rich enough for
> > EPYC) and see if that sails through. And if you have no mitigation-risk
> > issues, keep the base CPU definition as low as you can stand (your VMs'
> > applications could miss out on some nice instruction extensions or other
> > features if you go rock-bottom).
>
> Hey,
>
> Ugh, I am at a loss. I added
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> GRUB_CMDLINE_LINUX='crashkernel=auto
> rd.lvm.lv=onn/ovirt-node-ng-4.3.6-0.20190926.0+1 rd.lvm.lv=onn/swap
> mitigations=off rhgb quiet'
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
> to /etc/default/grub, created a new grub.cfg and rebooted.
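[As an aside: the grub change quoted above is typically applied and verified like this. This is a sketch, not from the thread; the grub2-mkconfig path assumes a BIOS-booted EL7 node, and the cmdline string below is a stand-in for the real /proc/cmdline.]

```shell
# Regenerate grub.cfg after editing /etc/default/grub, then reboot
# (BIOS boot path shown; UEFI hosts keep grub.cfg under /boot/efi):
#   grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
#
# After the reboot, check that the flag reached the running kernel.
# A stand-in string is used here instead of $(cat /proc/cmdline):
cmdline='BOOT_IMAGE=/vmlinuz root=/dev/onn ro crashkernel=auto mitigations=off rhgb quiet'
case " $cmdline " in
  *' mitigations=off '*) status='off' ;;
  *)                     status='active' ;;
esac
echo "mitigations: $status"
```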
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> [root@node01 ~]# cat /proc/cmdline
> BOOT_IMAGE=/ovirt-node-ng-4.3.6-0.20190926.0+1/vmlinuz-3.10.0-1062.1.1.el7.x86_64
> root=/dev/onn/ovirt-node-ng-4.3.6-0.20190926.0+1 ro crashkernel=auto
> rd.lvm.lv=onn/swap mitigations=off rhgb quiet
> rd.lvm.lv=onn/ovirt-node-ng-4.3.6-0.20190926.0+1
> img.bootid=ovirt-node-ng-4.3.6-0.20190926.0+1
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
> Even after clearing the libvirt capabilities cache and restarting
> libvirt, the issue is still there:
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> [root@node01 ~]# cat /var/cache/libvirt/qemu/capabilities/3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep ssb
> <property name='ssbd' type='boolean' value='false'/>
> <property name='virt-ssbd' type='boolean' value='false'/>
> <property name='ssbd' type='boolean' value='false'/>
> <property name='virt-ssbd' type='boolean' value='false'/>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
> and the flags are still set (duh):
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> [root@node01 ~]# grep ssbd /proc/cpuinfo | tail -n1
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
> pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
> pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc
> extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16
> sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy
> svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs
> skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb
> cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall
> fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb
> sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total
> cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save
> tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
> avic v_vmsave_vmload vgif umip overflow_recov succor smca
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
> Deploying the oVirt hosted engine still works up to the final point,
> when it stops with the usual
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> 2019-11-15 14:43:54,758+0100 INFO (jsonrpc/6) [api.virt] FINISH
> getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList':
> [{'status': 'Down', 'exitMessage': 'the CPU is incompatible with host
> CPU: Host CPU does not provide required features: virt-ssbd',
> 'statusTime': '4294738860', 'vmId':
> 'd116b296-9ae7-4ff3-80b4-73dc228a7b64', 'exitReason': 1, 'exitCode':
> 1}]} from=::1,46514, vmId=d116b296-9ae7-4ff3-80b4-73dc228a7b64 (api:54)
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
> I can see that during the final stages (up to this point the engine VM
> is up and running) there is a (super long) line in vdsm.log:
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> 2019-11-15 13:36:10,248+0100 INFO (jsonrpc/4) [api.virt] FINISH create
> return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status':
> 'WaitForLaunch', 'maxMemSize': 65536, 'acpiEnable': 'true',
> 'emulatedMachine': 'pc-i440fx-rhel7.6.0', 'numOfIoThreads': '1', 'vmId':
> 'd116b296-9ae7-4ff3-80b4-73dc228a7b64', 'memGuaranteedSize': 1024,
> 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'EPYC',
> 'guestDiskMapping': {}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes':
> [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}], u'xml':
> u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain
> xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0"
> xmlns:ovirt-vm="http://ovirt.org/vm/1.0"
> type="kvm"><name>HostedEngine</name><uuid>d116b296-9ae7-4ff3-80b4-73dc228a7b64</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory
> slots="16">67108864</maxMemory><vcpu current="4">64</vcpu><sysinfo
> type="smbios"><system><entry name="manufacturer">oVirt</entry><entry
> name="product">OS-NAME:</entry><entry
> name="version">OS-VERSION:</entry><entry
> name="serial">HOST-SERIAL:</entry><entry
> name="uuid">d116b296-9ae7-4ff3-80b4-73dc228a7b64</entry></system></sysinfo><clock
> offset="variable" adjustment="0"><timer name="rtc"
> tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer
> name="hpet" present="no"/></clock><features><acpi/></features><cpu
> match="exact"><model>EPYC</model><feature name="ibpb"
> policy="require"/><feature name="virt-ssbd" policy="require"/><topology
> cores="4" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3"
> memory="16777216"/></numa></cpu><cputune/><devices><input type="mouse"
> bus="ps2"/><channel type="unix"><target type="virtio"
> name="ovirt-guest-agent.0"/><source mode="bind"
> path="/var/lib/libvirt/qemu/channels/d116b296-9ae7-4ff3-80b4-73dc228a7b64.ovirt-guest-agent.0"/></channel><channel
> type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source
> mode="bind"
> path="/var/lib/libvirt/qemu/channels/d116b296-9ae7-4ff3-80b4-73dc228a7b64.org.qemu.guest_agent.0"/></channel><sound
> model="ich6"><alias
> name="ua-05ce597b-8e43-4360-81ac-2ca13cb4f9d5"/></sound><graphics
> type="vnc" port="-1" autoport="yes" passwd="*****"
> passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen
> type="network" network="vdsm-ovirtmgmt"/></graphics><controller
> type="scsi" model="virtio-scsi" index="0"><driver iothread="1"/><alias
> name="ua-30edc108-3218-43dc-ad43-129ce392930e"/></controller><video><model
> type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/><alias
> name="ua-31187f25-275b-490d-922b-15712b6fabb6"/></video><console
> type="unix"><source
> path="/var/run/ovirt-vmconsole-console/d116b296-9ae7-4ff3-80b4-73dc228a7b64.sock"
> mode="bind"/><target type="serial" port="0"/><alias
> name="ua-7a643a8e-6871-4d63-a38e-632f03566e63"/></console><graphics
> type="spice" port="-1" autoport="yes" passwd="*****"
> passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main"
> mode="secure"/><channel name="inputs" mode="secure"/><channel
> name="cursor" mode="secure"/><channel name="playback"
> mode="secure"/><channel name="record" mode="secure"/><channel
> name="display" mode="secure"/><channel name="smartcard"
> mode="secure"/><channel name="usbredir" mode="secure"/><listen
> type="network" network="vdsm-ovirtmgmt"/></graphics><controller
> type="virtio-serial" index="0" ports="16"><alias
> name="ua-833fb61c-213a-4871-b99a-3863958ce070"/></controller><rng
> model="virtio"><backend model="random">/dev/urandom</backend><alias
> name="ua-91c1c22f-5d21-458b-b1a1-a700ea8b5e5c"/></rng><memballoon
> model="virtio"><stats period="5"/><alias
> name="ua-943f5866-0165-40bf-a4b6-658072a1d7f5"/></memballoon><controller
> type="usb" model="piix3-uhci" index="0"/><serial type="unix"><source
> path="/var/run/ovirt-vmconsole-console/d116b296-9ae7-4ff3-80b4-73dc228a7b64.sock"
> mode="bind"/><target port="0"/></serial><channel type="spicevmc"><target
> type="virtio" name="com.redhat.spice.0"/></channel><interface
> type="bridge"><model type="virtio"/><link state="up"/><source
> bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><alias
> name="ua-03c1177f-98be-4bed-8dd0-1f1895a0a0c6"/><mac
> address="00:16:3e:3b:5d:da"/><mtu size="1500"/><filterref
> filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file"
> device="cdrom" snapshot="no"><driver name="qemu" type="raw"
> error_policy="report"/><source file=""
> startupPolicy="optional"><seclabel model="dac" type="none"
> relabel="no"/></source><target dev="hdc" bus="ide"/><readonly/><alias
> name="ua-acc9e0f3-ab5b-4637-876a-96242a52a470"/></disk><disk
> snapshot="no" type="file" device="disk"><target dev="vda"
> bus="virtio"/><source
> file="/rhev/data-center/00000000-0000-0000-0000-000000000000/b77c80b7-a2a5-4627-a48e-8b8a49583c5d/images/e070502c-780b-45a5-98d1-6f6db9a48967/e4066d0d-2a83-4802-8976-09f2a18baf23"><seclabel
> model="dac" type="none" relabel="no"/></source><driver name="qemu"
> iothread="1" io="threads" type="raw" error_policy="stop"
> cache="none"/><alias
> name="ua-e070502c-780b-45a5-98d1-6f6db9a48967"/><serial>e070502c-780b-45a5-98d1-6f6db9a48967</serial></disk><lease><key>e4066d0d-2a83-4802-8976-09f2a18baf23</key><lockspace>b77c80b7-a2a5-4627-a48e-8b8a49583c5d</lockspace><target
> offset="LEASE-OFFSET:e4066d0d-2a83-4802-8976-09f2a18baf23:b77c80b7-a2a5-4627-a48e-8b8a49583c5d"
> path="LEASE-PATH:e4066d0d-2a83-4802-8976-09f2a18baf23:b77c80b7-a2a5-4627-a48e-8b8a49583c5d"/></lease></devices><pm><suspend-to-disk
> enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64"
> machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"/><bios
> useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb
> type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device
> mac_address="00:16:3e:3b:5d:da"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device
> devtype="disk"
> name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>e4066d0d-2a83-4802-8976-09f2a18baf23</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>e070502c-780b-45a5-98d1-6f6db9a48967</ovirt-vm:imageID><ovirt-vm:domainID>b77c80b7-a2a5-4627-a48e-8b8a49583c5d</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>',
> 'smpCoresPerSocket': '4', 'kvmEnable': 'true', 'bootMenuEnable':
> 'false', 'devices': [], 'custom': {}, 'maxVCpus': '64', 'statusTime':
> '4357530330', 'vmName':
> 'HostedEngine', 'maxMemSlots': 16}}
> from=::1,40284, vmId=d116b296-9ae7-4ff3-80b4-73dc228a7b64 (api:54)
>
> [...]
>
> <feature name="virt-ssbd" policy="require"/>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
> There it sets "<feature name="virt-ssbd" policy="require"/>"; ever
> since this requirement started being added to the XML, the launch breaks.
>
> Ayee.
>
> --
> Christian Reiss - [email protected]      /"\  ASCII Ribbon
>          [email protected]              \ /  Campaign
>                                             X   against HTML
> WEB alpha-labs.net                         / \  in eMails
>
> GPG Retrieval https://gpg.christian-reiss.de
> GPG ID ABCD43C5, 0x44E29126ABCD43C5
> GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
>
> "It's better to reign in hell than to serve in heaven.",
> John Milton, Paradise lost.
> _______________________________________________
> Users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/[email protected]/message/HW2A4TLFFLKAIDJEOJCYOA6KLKM3HFP3/

_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/KOW6FXJ5WIQACVWGF4IY5KDJ2BO6N6XQ/
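[For completeness: the workaround suggested at the top of the thread can be sketched as a shell session. This is illustrative only; the sed expression and the /tmp path are assumptions, while the authfile path and the virsh steps come from the message itself. The virsh calls are left as comments because they only make sense on the oVirt host.]

```shell
# Sketch of the suggested workaround. On the host you would first dump
# the definition, talking to the system libvirtd with oVirt's credentials:
#   alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
#   virsh dumpxml HostedEngine > /tmp/HostedEngine.xml
#
# Removing the hard CPU-feature requirement from the dumped XML
# (shown here on a sample fragment instead of the real file):
xml='<cpu match="exact"><model>EPYC</model><feature name="virt-ssbd" policy="require"/><topology cores="4" threads="1" sockets="16"/></cpu>'
patched=$(echo "$xml" | sed 's|<feature name="virt-ssbd" policy="require"/>||')
echo "$patched"
#
# Then re-register and start the VM from the edited file:
#   virsh undefine HostedEngine
#   virsh define /tmp/HostedEngine.xml
#   virsh start HostedEngine
```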

