Hmm, looks like /usr/sbin/virt-what in the system VMs is returning 'qemu' rather than 'kvm', so cloud-early-config fails to do any setup. Forcing virt-what to return 'kvm' in place of 'qemu' gets things going, but perhaps we should change cloud-early-config to do the same setup for kvm|qemu.
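A rough sketch of that change, only for illustration: virt-what can print more than one line, so this just takes the last one, and setup_kvm, setup_xen and log_it are placeholders rather than the script's real function names.

    hyp=$(/usr/sbin/virt-what | tail -1)
    case "$hyp" in
        kvm|qemu)
            # same code path for both: qemu-only system VMs get the kvm setup
            setup_kvm
            ;;
        xen-domU)
            setup_xen
            ;;
        *)
            log_it "Unsupported hypervisor: $hyp"
            exit 1
            ;;
    esac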
So with those three changes, we have an environment that generally seems to work and can be QA'd.

On Fri, Feb 8, 2013 at 11:42 AM, Marcus Sorensen <shadow...@gmail.com> wrote:
> Looks like it would work, but the centos 6.3 libvirt isn't new enough.
> libvirt says 'unknown os type hvm' on centos, even though I've verified
> that an os type of 'hvm' works with qemu and without kvm modules on ubuntu 12.04.
>
> I upgraded libvirt to a fedora version, and it worked (at least the system
> vms started coming up; need to wait and see if functions work). Changes made:
>
> IsHVMEnabled returns true always
> hardcoded _hypervisorType to 'qemu' rather than 'kvm'
>
> Obviously this was just for the test; we would make these changes some other way.
>
> On Fri, Feb 8, 2013 at 11:15 AM, Edison Su <edison...@citrix.com> wrote:
>> Wondering, how do you get devcloud-kvm to work?
>
> By modprobing kvm and kvm_intel in the guest, it can run virtual machines.
> The host system needs to have nested=1 set in its kvm_intel or kvm_amd
> kernel module parameters (the default on many distributions now).
>
>>> -----Original Message-----
>>> From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>> Sent: Friday, February 08, 2013 10:02 AM
>>> To: Alex Huang
>>> Cc: Sebastien Goasguen; Wido den Hollander; cloudstack-d...@incubator.apache.org
>>> Subject: Re: QEMU support in CloudStack
>>>
>>> I'm running a quick sanity test here... just seeing if switching out kvm
>>> with qemu works, all else the same. Looks like there's a setHvsType for
>>> the LibvirtVMDef as well that's hardcoded to 'kvm'. That should be easy
>>> to adjust too, assuming everything just runs with these changes.
>>>
>>> On Fri, Feb 8, 2013 at 10:54 AM, Alex Huang <alex.hu...@citrix.com> wrote:
>>> > In that case, why not create two resources, with the kvm resource
>>> > extending the qemu resource, and do what Marcus suggests here in the
>>> > qemu resource?
>>> >
>>> > Effectively we would then have an agent for qemu and one for kvm, and
>>> > each can carry its own capabilities.
>>> >
>>> > --Alex
>>> >
>>> >> -----Original Message-----
>>> >> From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>> >> Sent: Friday, February 08, 2013 9:41 AM
>>> >> To: Sebastien Goasguen
>>> >> Cc: Wido den Hollander; cloudstack-dev@incubator.apache.org
>>> >> Subject: Re: QEMU support in CloudStack
>>> >>
>>> >> You would in theory have to disable the check in the agent startup
>>> >> code that looks for the kvm kernel modules, and then libvirt should
>>> >> just fall back to qemu for everything automatically.
>>> >>
>>> >> In LibvirtComputingResource.java, comment out the check for
>>> >> IsHVMEnabled, then rmmod any kvm modules, then try to do stuff with
>>> >> that version of cloudstack.
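A rough sketch of the host/guest plumbing behind both of the setups described above (Intel shown; use kvm_amd on AMD, and all of this assumes root):

    # On the outer host: let KVM guests run KVM themselves (kvm-within-kvm).
    # Assumes no VMs are running while the module is reloaded.
    echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
    modprobe -r kvm_intel && modprobe kvm_intel
    cat /sys/module/kvm_intel/parameters/nested   # should now report Y (or 1)

    # Inside the guest acting as the CloudStack host, for the nested-KVM setup:
    modprobe kvm_intel                            # pulls in kvm and exposes /dev/kvm

    # ...or, for the qemu-only test, make sure the modules are NOT loaded:
    rmmod kvm_intel kvm 2>/dev/null

With the modules loaded libvirt can offer kvm domains; without them it falls back to plain qemu (TCG) emulation, which is exactly what the qemu-only test relies on.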
>>> >> On Fri, Feb 8, 2013 at 10:10 AM, Sebastien Goasguen <run...@gmail.com> wrote:
>>> >> >
>>> >> > On Feb 8, 2013, at 3:07 PM, Wido den Hollander <w...@widodh.nl> wrote:
>>> >> >
>>> >> >> Hi,
>>> >> >>
>>> >> >> On 02/08/2013 10:34 AM, Dave Cahill wrote:
>>> >> >>> Hi,
>>> >> >>>
>>> >> >>> Recently I encountered two "nested virtualization" use cases which
>>> >> >>> made me want QEMU hypervisor support in CloudStack. I'm interested
>>> >> >>> to hear if anyone else is interested in this feature, and any notes
>>> >> >>> on how it should be implemented.
>>> >> >>>
>>> >> >>> Here is a good explanation from OpenStack docs [2] on why they support QEMU:
>>> >> >>> "From the perspective of the Compute service, the QEMU hypervisor is
>>> >> >>> very similar to the KVM hypervisor. Both are controlled through libvirt,
>>> >> >>> both support the same feature set, and all virtual machine images that
>>> >> >>> are compatible with KVM are also compatible with QEMU. The main
>>> >> >>> difference is that QEMU does not support native virtualization.
>>> >> >>> Consequently, QEMU has worse performance than KVM and is a poor choice
>>> >> >>> for a production deployment."
>>> >> >>
>>> >> >> So, I've been reading into the code and found this on my Ubuntu systems:
>>> >> >>
>>> >> >> root@stack01:~# ls -l /usr/bin/kvm
>>> >> >> lrwxrwxrwx 1 root root 18 Oct  4 02:44 /usr/bin/kvm -> qemu-system-x86_64
>>> >> >> root@stack01:~#
>>> >> >>
>>> >> >> Imho Qemu is Qemu, and KVM only comes into play when the kernel modules
>>> >> >> 'kvm' and 'kvm_amd' or 'kvm_intel' are loaded.
>>> >> >>
>>> >> >>> Here are the use cases I encountered:
>>> >> >>>
>>> >> >>> [Use case: Dev environment]
>>> >> >>> Wanted to use Vagrant [1] to create a portable multi-node dev
>>> >> >>> environment; however, Vagrant uses VirtualBox, which doesn't support
>>> >> >>> KVM. Also, devcloud uses VirtualBox and devcloud-kvm uses kvm-within-kvm.
>>> >> >>> I imagine maintenance of devcloud and devcloud-kvm would be easier if
>>> >> >>> devcloud-kvm could use VirtualBox too.
>>> >> >>> Note: Of course, I'm aware of devcloud-kvm as an alternative for this
>>> >> >>> use case, and I'll be looking into that next.
>>> >> >>>
>>> >> >>> [Use case: Demo environment]
>>> >> >>> We may want to spin up a multi-node CloudStack install in Amazon AWS
>>> >> >>> for demo purposes. Again, AWS instances don't support KVM, so this is
>>> >> >>> not possible without QEMU support.
>>> >> >>>
>>> >> >>> [Implementation ideas]
>>> >> >>> The management server currently does a check for KVM support ("kvm-ok")
>>> >> >>> on the host, and refuses to add the host if that fails. I think this
>>> >> >>> check could be removed, as the agent setup scripts will fail anyway if
>>> >> >>> the user is trying to set up a certain hypervisor on a machine which
>>> >> >>> doesn't support it.
>>> >> >>
>>> >> >> This way you could do nested virtualization indeed, but it could also
>>> >> >> hurt users who have virtualization disabled in their BIOS, and could
>>> >> >> lead to long debugging sessions.
>>> >> >>
>>> >> >>> Create a new setting in agent.properties like "use_qemu", with a
>>> >> >>> default of "false". If the person deploying the CloudStack agent sets
>>> >> >>> this to "true", cloud-setup-agent and other setup scripts would ignore
>>> >> >>> the lack of KVM support as long as QEMU support was available.
>>> >> >>
>>> >> >> cloud-setup-agent generates an agent.properties, so at that point it
>>> >> >> doesn't know that the user intends to use the system without KVM support.
>>> >> >>
>>> >> >>> Lastly, when creating the libvirt XML file for a VM, set the hypervisor
>>> >> >>> to QEMU rather than KVM in the XML file, depending on the config setting.
>>> >> >>
>>> >> >> That's not hard-coded. The Agent does a getCapabilities() call to
>>> >> >> libvirt, which returns a list of possible emulators.
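For reference, the list Wido mentions comes back as capabilities XML and looks roughly like this (trimmed; emulator paths and the exact entries vary by distribution and libvirt version):

    virsh capabilities      # same data the agent sees via getCapabilities()
    ...
    <guest>
      <os_type>hvm</os_type>
      <arch name='x86_64'>
        <emulator>/usr/bin/kvm</emulator>
        <domain type='qemu'/>
        <domain type='kvm'/>   <!-- only listed when the kvm modules are usable -->
      </arch>
    </guest>
    ...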
>>> >> >> /usr/bin/kvm is just one of them which is returned and matches the
>>> >> >> architecture.
>>> >> >>
>>> >> > Wido,
>>> >> >
>>> >> > So can I add a "KVM" host that would in fact just use qemu?
>>> >> > How would I do that?
>>> >> >
>>> >> > -sebastien
>>> >> >
>>> >> >> Wido
>>> >> >>
>>> >> >>> Thanks for reading,
>>> >> >>> Dave.
>>> >> >>>
>>> >> >>> [1] http://www.vagrantup.com/
>>> >> >>> [2] http://docs.openstack.org/trunk/openstack-compute/install/yum/content/qemu.html
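To make the proposal concrete, a sketch of the two pieces being discussed; note that "use_qemu" is only a key proposed in this thread, not an existing agent.properties setting:

    # agent.properties (proposed, not yet implemented):
    use_qemu=true

    # In the generated domain XML the difference is essentially one attribute:
    #   <domain type='kvm'>    hardware-assisted; needs the kvm modules and /dev/kvm
    #   <domain type='qemu'>   pure emulation (TCG); works inside VirtualBox or EC2 guests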