Thanks Gianluca,

So I installed the engine into a separate VM and didn't go down the
hosted-engine path, although if I were to do this with physical hosts,
that seems like a really good approach.

To answer Michal's question from earlier, the nested VM inside the oVirt
hypervisors has been up for 23+ hours and it has not progressed past the
BIOS.
Also, with respect to the vdsm hooks, here's a list.

Dumpxml is attached (hopefully with identifying information removed).

vdsm-hook-nestedvt.noarch
vdsm-hook-vmfex-dev.noarch
vdsm-hook-allocate_net.noarch
vdsm-hook-checkimages.noarch
vdsm-hook-checkips.x86_64
vdsm-hook-diskunmap.noarch
vdsm-hook-ethtool-options.noarch
vdsm-hook-extnet.noarch
vdsm-hook-extra-ipv4-addrs.x86_64
vdsm-hook-fakesriov.x86_64
vdsm-hook-fakevmstats.noarch
vdsm-hook-faqemu.noarch
vdsm-hook-fcoe.noarch
vdsm-hook-fileinject.noarch
vdsm-hook-floppy.noarch
vdsm-hook-hostusb.noarch
vdsm-hook-httpsisoboot.noarch
vdsm-hook-hugepages.noarch
vdsm-hook-ipv6.noarch
vdsm-hook-isolatedprivatevlan.noarch
vdsm-hook-localdisk.noarch
vdsm-hook-macbind.noarch
vdsm-hook-macspoof.noarch
vdsm-hook-noipspoof.noarch
vdsm-hook-numa.noarch
vdsm-hook-openstacknet.noarch
vdsm-hook-pincpu.noarch
vdsm-hook-promisc.noarch
vdsm-hook-qemucmdline.noarch
vdsm-hook-qos.noarch
vdsm-hook-scratchpad.noarch
vdsm-hook-smbios.noarch
vdsm-hook-spiceoptions.noarch
vdsm-hook-vhostmd.noarch
vdsm-hook-vmdisk.noarch
vdsm-hook-vmfex.noarch
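For reference, both the hook list above and the attached dumpxml can be regenerated with commands along these lines (a sketch; it assumes an RPM-based oVirt host with libvirt installed, and mdvm01 is the VM name taken from the attached dump):

```shell
# List the vdsm hook packages installed on the host (RPM-based system assumed)
rpm -qa 'vdsm-hook-*' | sort

# Capture the running VM's libvirt domain XML over a read-only connection
# (mdvm01 is the VM name from the dump attached below)
virsh -r dumpxml mdvm01
```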

I'm running ESXi 5.5. For the hypervisor VMs I've enabled the "Expose
Hardware Assisted Virtualization to the guest OS" option.

The hypervisor VMs are running CentOS 7.3.
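Before digging further, it may be worth double-checking inside each CentOS hypervisor VM that ESXi is actually exposing hardware virtualization (the checkbox above should correspond to vhv.enable = "TRUE" in the VM's .vmx file, if I recall correctly). A quick sanity check, run on each hypervisor VM:

```shell
# accel=kvm only works if the vmx (Intel) or svm (AMD) CPU flag
# is visible inside the guest
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "nested virt flag present"
else
    echo "no vmx/svm flag: check the ESXi VM settings"
fi

# The kvm kernel modules and the /dev/kvm device node should also exist
lsmod | grep kvm || echo "kvm modules not loaded"
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm missing"
```

If the flag is missing, KVM acceleration cannot work in the nested host, which is the first thing to rule out.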


On 12 May 2017 at 09:36, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

>
>
> On Fri, May 12, 2017 at 1:06 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> > On 11 May 2017, at 19:52, Mark Duggan <mdug...@gmail.com> wrote:
>> >
>> > Hi everyone,
>> >
>> > From reading through the mailing list, it does appear that it's
>> possible to have the oVirt nodes/hosts be VMware virtual machines, once I
>> enable the appropriate settings on the VMware side. All seems to have gone
>> well, and I can see the hosts in the oVirt interface, but when I attempt to
>> create and start a VM it never gets past printing the SeaBIOS version and
>> the machine UUID to the screen/console. It doesn't appear to try to boot
>> from the hard disk or an ISO that I've attached.
>> >
>> > Has anyone else encountered similar behaviour?
>>
>> I wouldn’t think you can even get that far.
>> It may work with full emulation (non-KVM), but we more or less enforce KVM
>> in oVirt, so some changes are likely needed.
>> Of course, even if you succeed, it’s going to be hopelessly slow (or maybe
>> it is indeed working and just runs very slowly).
>>
>> Nested on a KVM hypervisor runs ok
>>
>> Thanks,
>> michal
>>
>>
> In the past I was able to get an OpenStack Icehouse environment running
> inside vSphere 5.x for a POC (on powerful physical servers), and the
> performance of nested VMs inside the virtual compute nodes was acceptable.
> More recently I configured a standalone ESXi 6.0 U2 server on a NUC6 with
> 32 GB of RAM and 2 SSD disks, and on it I now have 2 kinds of environments
> running (just verified they are still up after some months of abandoning
> them to their destiny... ;-)
>
> 1) an ESXi VM acting as a single oVirt host (4.1.1 final or pre, I don't
> remember) with a self-hosted engine (which itself becomes an L2 VM) and
> also another VM (CentOS 6.8).
> See here a screenshot of the web admin gui with a spice console open after
> connecting to the engine:
> https://drive.google.com/file/d/0BwoPbcrMv8mvanpTUnFuZ2FURms/view?usp=sharing
>
> 2) a virtual oVirt Gluster environment based on 4.0.5 with 3 virtual hosts
> (one acting as the arbiter node, if I remember correctly)
>
> On this second environment I have the ovirt01, ovirt02 and ovirt03 VMs:
>
> [root@ovirt02 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date                  : True
> Hostname                           : ovirt01.localdomain.local
> Host ID                            : 1
> Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score                              : 3042
> stopped                            : False
> Local maintenance                  : False
> crc32                              : 2041d7b6
> Host timestamp                     : 15340856
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=15340856 (Fri May 12 14:59:17 2017)
> host-id=1
> score=3042
> maintenance=False
> state=EngineDown
> stopped=False
>
>
> --== Host 2 status ==--
>
> Status up-to-date                  : True
> Hostname                           : 192.168.150.103
> Host ID                            : 2
> Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
> Score                              : 3400
> stopped                            : False
> Local maintenance                  : False
> crc32                              : 27a80001
> Host timestamp                     : 15340760
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=15340760 (Fri May 12 14:59:11 2017)
> host-id=2
> score=3400
> maintenance=False
> state=EngineUp
> stopped=False
>
>
> --== Host 3 status ==--
>
> Status up-to-date                  : True
> Hostname                           : ovirt03.localdomain.local
> Host ID                            : 3
> Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score                              : 2986
> stopped                            : False
> Local maintenance                  : False
> crc32                              : 98aed4ec
> Host timestamp                     : 15340475
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=15340475 (Fri May 12 14:59:22 2017)
> host-id=3
> score=2986
> maintenance=False
> state=EngineDown
> stopped=False
> [root@ovirt02 ~]#
>
> The virtual node ovirt02 has the hosted engine VM running on it.
> It has been some months since I last checked, but it seems it is still up... ;-)
>
>
> [root@ovirt02 ~]# uptime
>  15:02:18 up 177 days, 13:26,  1 user,  load average: 2.04, 1.46, 1.22
>
> [root@ovirt02 ~]# free
>               total        used        free      shared  buff/cache   available
> Mem:       12288324     6941068     3977644      595204     1369612     4340808
> Swap:       5242876     2980672     2262204
> [root@ovirt02 ~]#
>
> [root@ovirt02 ~]# ps -ef|grep qemu-kvm
> qemu      18982      1  8  2016 ?        14-20:33:44 /usr/libexec/qemu-kvm
> -name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
>
> the first node (used for the deploy, with hostname ovirt01 and named
> hosted_engine_1 inside the oVirt web admin GUI) has 3 other L2 VMs running:
> [root@ovirt01 ~]# ps -ef|grep qemu-kvm
> qemu     125069      1  1 15:01 ?        00:00:11 /usr/libexec/qemu-kvm
> -name atomic2 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
> qemu     125186      1  2 15:02 ?        00:00:18 /usr/libexec/qemu-kvm
> -name centos6 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
> qemu     125329      1  1 15:02 ?        00:00:06 /usr/libexec/qemu-kvm
> -name cirros3 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
>
> I also tested live migration with success.
>
> Furthermore, all 3 of the ESXi VMs that are the 3 oVirt hypervisors still
> have a VMware snapshot in place, because I was running a test with the idea
> of reverting after preliminary testing, and this adds further load...
> see here some screenshots:
>
> ESXi with its 3 VMs that are the 3 oVirt hypervisors
> https://drive.google.com/file/d/0BwoPbcrMv8mvWEtwM3otLU5uUkU/view?usp=sharing
>
> oVirt Engine web admin portal with one L2 VM console open
> https://drive.google.com/file/d/0BwoPbcrMv8mvS2I1eEREclBqSU0/view?usp=sharing
>
> oVirt Engine web admin Hosts tab
> https://drive.google.com/file/d/0BwoPbcrMv8mvWGcxV0xDUGpINlU/view?usp=sharing
>
> oVirt Engine Gluster data domain
> https://drive.google.com/file/d/0BwoPbcrMv8mvVkxMa1R2eGRfV2s/view?usp=sharing
>
>
> Let me find the configuration settings I set up for it, because some
> months have gone by and I have had little time to follow it since...
>
> In the meantime, what is the version of your ESXi environment? The
> settings to put in place changed from version 5 to version 6.
> What particular settings have you already configured for the ESXi VMs you
> plan to use as oVirt hypervisors?
>
> Gianluca
>
<domain type='kvm' id='6'>
  <name>mdvm01</name>
  <uuid>1a9d308c-89d8-47dd-b1a9-253cde6cbdfe</uuid>
  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
    <ovirt:qos/>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='1'>16</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>7-3.1611.el7.centos</entry>
      <entry name='serial'>42200056-37BB-10CA-AF6F-8214B6A07FA5</entry>
      <entry name='uuid'>1a9d308c-89d8-47dd-b1a9-253cde6cbdfe</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Westmere</model>
    <topology sockets='16' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/rhev/data-center/mnt/<redacted>:_KVM__ISO/f6c3b6e5-ac9a-425e-85f6-1a486a6baf73/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-DVD-1611.iso' startupPolicy='optional'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/00000001-0001-0001-0001-000000000311/418edf3b-1047-4760-b94e-5eda02db5e61/images/57ff2f15-6e3e-434e-8f94-a2305c3bbeb6/13d98362-e5d4-4933-a1be-721c69347617'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>57ff2f15-6e3e-434e-8f94-a2305c3bbeb6</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:16:01:53'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/1a9d308c-89d8-47dd-b1a9-253cde6cbdfe.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/1a9d308c-89d8-47dd-b1a9-253cde6cbdfe.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' tlsPort='5900' autoport='yes' listen='<redacted>' defaultMode='secure' passwdValidTo='2017-05-12T16:56:23'>
      <listen type='network' address='<redacted>' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c130,c816</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c130,c816</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
