[ovirt-users] Ovirt Nested in VMWare (Working approach)

2018-06-13 Thread RabidCicada
I worked through this previously and posted on IRC yesterday about
working around running oVirt nested under VMware. I figured I'd post
it to the mailing list so that it is available in its entirety here
for future reference.

The following gists explain the major adjustments that currently have
to be made to get nested virt under VMware working:

https://gist.github.com/RabidCicada/993349b3a7eaf35637122f0122fdedb3
https://gist.github.com/RabidCicada/bc01eb0b13195faa26520c0fb666ec34
https://gist.github.com/RabidCicada/37ff6f1ed2afd8dd4edefac735536c69
https://gist.github.com/RabidCicada/40655db1582ca5d07c9bbf2c429cdd01

Hopefully it can save others the huge amount of time it took me to
figure it all out.

Specifically, the problem is the --machine type at three major points
of the VM creation for the hosted engine:
1) You have to fix virt-install to use --machine pc-i440fx-rhel7.2.0.
2) You have to fix the guestfish spin-up of the partially configured VM
(where the NIC config, among other things, is edited) to use
--machine pc-i440fx-rhel7.2.0.
3) You have to tell vdsm on the node to use --machine pc-i440fx-rhel7.2.0.
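
All three fixes amount to the same substitution in whatever generates the
domain definition. A minimal illustration (the XML snippet and the sed call
are mine for demonstration, not taken from the gists above):

```shell
# Hypothetical sketch: pin the machine type in a libvirt domain definition.
# The three fixes above force this same value in their respective tools
# (virt-install, the guestfish spin-up, and vdsm).
xml="<type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>"
echo "$xml" | sed "s/machine='[^']*'/machine='pc-i440fx-rhel7.2.0'/"
# -> <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>
```

To check which i440fx machine types a node's qemu actually offers, `-machine
help` lists them (e.g. /usr/libexec/qemu-kvm -machine help | grep pc-i440fx).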

I don't know the underlying reason why a --machine type newer than
pc-i440fx-rhel7.2.0 breaks the system, but it ends up resulting in a
BIOS freeze.

Again, I hope this helps someone. Suggested only for a test/dev system.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDVRS5I64WLFJJF7YXKXZNTQOZRJ6DOJ/


Re: [ovirt-users] Ovirt nested in VMware

2017-05-16 Thread Gianluca Cecchi
On Tue, May 16, 2017 at 3:17 PM, Mark Duggan  wrote:

> Just bumping this in case anyone has any ideas as to what I might be able
> to do to potentially get this to work.
>
>
Hello,
I found some notes about my lab from 2013.
I verified I was using vSphere 5.1, so not exactly 5.5.
For that lab, where I ran OpenStack Grizzly (and then Icehouse in a second
step), I was able to use nested VMs inside the virtual hypervisors (based
on QEMU/KVM).
At that time I followed these two guides from virtuallyghetto, for 5.0 and
5.1, which are still available.
I didn't find a specific update for 5.5, so it could be that the guide
for 5.1 still applies to 5.5.
Can you verify whether you followed the same steps in configuring vSphere?
Note that the guides are for nesting Hyper-V, but I used the same guidelines
to set up a nested QEMU/KVM-based hypervisor:

The guide for 5.0
http://www.virtuallyghetto.com/2011/07/how-to-enable-support-for-nested-64bit.html


The guide for 5.1
http://www.virtuallyghetto.com/2012/08/how-to-enable-nested-esxi-other.html

One important new setup step for the 5.1 version was step 3), quoted here:

Step 3 - You will need to add one additional .vmx parameter which tells the
underlying guest OS (Hyper-V) that it is not running as a virtual guest,
which in fact it really is. The parameter is hypervisor.cpuid.v0 = FALSE
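
For reference, a minimal sketch of the relevant .vmx fragment. The
hypervisor.cpuid.v0 line is the one the guide's step 3 describes; vhv.enable
is, to my knowledge, the standard ESXi 5.1+ per-VM knob for exposing
hardware virtualization to the guest (verify against your ESXi version):

```
vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"
```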

HIH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt nested in VMware

2017-05-16 Thread Mark Duggan
Just bumping this in case anyone has any ideas as to what I might be able
to do to potentially get this to work.

On 12 May 2017 at 13:57, Mark Duggan  wrote:


Re: [ovirt-users] Ovirt nested in VMware

2017-05-12 Thread Mark Duggan
Thanks Gianluca,

So I installed the engine into a separate VM and didn't go down the
hosted-engine path, although if I were to do this with physical hosts,
that seems like a really good approach.

To answer Michal's question from earlier: the nested VM inside the oVirt
hypervisors has been up for 23+ hours and has not progressed past the
BIOS.
Also, with respect to the vdsm hooks, here's a list.

Dumpxml attached (hopefully with identifying information removed):

vdsm-hook-nestedvt.noarch
vdsm-hook-vmfex-dev.noarch
vdsm-hook-allocate_net.noarch
vdsm-hook-checkimages.noarch
vdsm-hook-checkips.x86_64
vdsm-hook-diskunmap.noarch
vdsm-hook-ethtool-options.noarch
vdsm-hook-extnet.noarch
vdsm-hook-extra-ipv4-addrs.x86_64
vdsm-hook-fakesriov.x86_64
vdsm-hook-fakevmstats.noarch
vdsm-hook-faqemu.noarch
vdsm-hook-fcoe.noarch
vdsm-hook-fileinject.noarch
vdsm-hook-floppy.noarch
vdsm-hook-hostusb.noarch
vdsm-hook-httpsisoboot.noarch
vdsm-hook-hugepages.noarch
vdsm-hook-ipv6.noarch
vdsm-hook-isolatedprivatevlan.noarch
vdsm-hook-localdisk.noarch
vdsm-hook-macbind.noarch
vdsm-hook-macspoof.noarch
vdsm-hook-noipspoof.noarch
vdsm-hook-numa.noarch
vdsm-hook-openstacknet.noarch
vdsm-hook-pincpu.noarch
vdsm-hook-promisc.noarch
vdsm-hook-qemucmdline.noarch
vdsm-hook-qos.noarch
vdsm-hook-scratchpad.noarch
vdsm-hook-smbios.noarch
vdsm-hook-spiceoptions.noarch
vdsm-hook-vhostmd.noarch
vdsm-hook-vmdisk.noarch
vdsm-hook-vmfex.noarch

I'm running ESXi 5.5. For the hypervisor VMs I've enabled the "Expose
hardware assisted virtualization to the guest OS" option.

Hypervisor VMs are running CentOS 7.3


On 12 May 2017 at 09:36, Gianluca Cecchi  wrote:


Re: [ovirt-users] Ovirt nested in VMware

2017-05-12 Thread Gianluca Cecchi
On Fri, May 12, 2017 at 1:06 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

In the past I was able to get an OpenStack Icehouse environment running
inside vSphere 5.x for a POC (on powerful physical servers), and performance
of nested VMs inside the virtual compute nodes was acceptable.
More recently I configured a standalone ESXi 6.0 U2 server on a NUC6 with
32 GB of RAM and two SSD disks, and on it I now have two kinds of
environments running (just verified they are still up after some months of
abandoning them to their destiny... ;-)

1) an ESXi VM acting as a single oVirt host (4.1.1 final or pre, I don't
remember) with self-hosted engine (which itself becomes an L2 VM) and also
another VM (CentOS 6.8).
See here a screenshot of the web admin GUI with a SPICE console open after
connecting to the engine:
https://drive.google.com/file/d/0BwoPbcrMv8mvanpTUnFuZ2FURms/view?usp=sharing

2) a virtual oVirt gluster environment based on 4.0.5 with 3 Virtual Hosts
(with one as arbiter node if I remember correctly)

On this second environment I have the ovirt01, ovirt02 and ovirt03 VMs:

[root@ovirt02 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : ovirt01.localdomain.local
Host ID: 1
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3042
stopped: False
Local maintenance  : False
crc32  : 2041d7b6
Host timestamp : 15340856
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=15340856 (Fri May 12 14:59:17 2017)
host-id=1
score=3042
maintenance=False
state=EngineDown
stopped=False


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 192.168.150.103
Host ID: 2
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 27a80001
Host timestamp : 15340760
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=15340760 (Fri May 12 14:59:11 2017)
host-id=2
score=3400
maintenance=False
state=EngineUp
stopped=False


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : ovirt03.localdomain.local
Host ID: 3
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 2986
stopped: False
Local maintenance  : False
crc32  : 98aed4ec
Host timestamp : 15340475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=15340475 (Fri May 12 14:59:22 2017)
host-id=3
score=2986
maintenance=False
state=EngineDown
stopped=False
[root@ovirt02 ~]#

The virtual node ovirt02 has the hosted-engine VM running on it.
I hadn't come back to it for some months, but it seems it is still up... ;-)


[root@ovirt02 ~]# uptime
 15:02:18 up 177 days, 13:26,  1 user,  load average: 2.04, 1.46, 1.22

[root@ovirt02 ~]# free
              total        used        free      shared  buff/cache   available
Mem:       12288324     6941068     3977644      595204     1369612     4340808
Swap:       5242876     2980672     2262204
[root@ovirt02 ~]#

[root@ovirt02 ~]# ps -ef|grep qemu-kvm
qemu  18982  1  8  2016 ?14-20:33:44 /usr/libexec/qemu-kvm
-name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
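
Worth noting: the -machine value visible there (pc-i440fx-rhel7.2.0) is the
same type the nested-under-VMware workaround elsewhere in this thread pins.
A small sketch for pulling it out of a process line (the sample string
mirrors the ps output above):

```shell
# Extract the -machine value from a qemu-kvm command line.
line='/usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off'
echo "$line" | sed -n 's/.*-machine \([^,]*\),.*/\1/p'
# -> pc-i440fx-rhel7.2.0
```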

the first node (used for deploy with hostname ovirt01 and 

Re: [ovirt-users] Ovirt nested in VMware

2017-05-12 Thread Michal Skrivanek

> On 12 May 2017, at 13:16, Mark Duggan  wrote:

Hi Mark,
OK. How long did you wait for anything to happen?
Did you install any vdsm hooks on the host?
What does the VM XML look like? (You can see it dumped in vdsm.log, or via
"virsh -r dumpxml <vm-name>".)

Thanks,
michal




Re: [ovirt-users] Ovirt nested in VMware

2017-05-12 Thread Mark Duggan
Michal,

I certainly seem to be able to get that far; I can provide screen grabs if
you think they'd be useful.

I'm OK with hopelessly slow for now. It's really just to POC the
interface and workflows. I'm hoping to get my hands on a couple of
servers soon so that I can do a more full-blooded test.

Mark

On May 12, 2017 07:06, "Michal Skrivanek" 
wrote:



Re: [ovirt-users] Ovirt nested in VMware

2017-05-12 Thread Michal Skrivanek

> On 11 May 2017, at 19:52, Mark Duggan  wrote:

I wouldn’t think you could even get that far.
It may work with full emulation (non-KVM), but we more or less enforce KVM
in oVirt, so some changes are likely needed.
Of course, even if you succeed it’s going to be hopelessly slow. (Or maybe
it is indeed working and just runs very slowly.)

Nested on a KVM hypervisor runs OK.

Thanks,
michal



[ovirt-users] Ovirt nested in VMware

2017-05-11 Thread Mark Duggan
Hi everyone,

From reading through the mailing list, it does appear that it's possible to
have the oVirt nodes/hosts be VMware virtual machines, once I enable the
appropriate settings on the VMware side. All seems to have gone well, and I
can see the hosts in the oVirt interface, but when I attempt to create and
start a VM it never gets past printing the SeaBIOS version and the machine
UUID to the screen/console. It doesn't appear to try to boot from the hard
disk or from an ISO that I've attached.

Has anyone else encountered similar behaviour?

Are there additional debug logs I can look at or enable to help further
diagnose what is happening?

Thanks

Mark