What platform is the host running? Did you need to change the libvirt XML file at all? It sounds like the VM can't access its disk, or is having a hard time doing so.
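On the "can't access the disk" theory: when SeaBIOS finds no bootable disk it falls straight through to gPXE, which matches the VNC output quoted below. One thing worth comparing in the system VM's libvirt XML is the disk bus. A sketch of the kind of stanza to look at (the source path and bus here are illustrative assumptions, not the actual devcloud-kvm values):

```xml
<!-- Illustrative only: source path and bus are assumptions, not the real
     devcloud-kvm values. If the image expects IDE but is attached with
     bus='virtio' (or the reverse), SeaBIOS sees no bootable disk and
     drops into gPXE. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/systemvm.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```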
On Fri, Feb 15, 2013 at 3:33 AM, Dave Cahill <dcah...@midokura.com> wrote:
> Sounds good, thanks Marcus. I'll try the new image tomorrow.
>
> From investigations today, it looks like the issues I'm having with
> devcloud-kvm are more related to a general nested virtualization issue
> (maybe related to the bare-metal machine I'm using?) rather than anything
> CloudStack-specific.
>
> My devcloud-kvm VM appears to have inherited the properties necessary for
> KVM virtualization:
>
> [root@devcloud-kvm ~]# lsmod | grep kvm
> kvm_intel 52762 3
> kvm 312245 1 kvm_intel
>
> However, when I run VMs with KVM, it either stalls at the SeaBIOS prompt as
> mentioned below, or at best kernel panics with a message like "Kernel
> panic - not syncing: No init found. Try passing init= option to kernel."
>
> Not sure what my next step is, but thought I'd let you know what I found in
> case you have any inspiration!
>
> Thanks,
> Dave.
>
> On Fri, Feb 15, 2013 at 1:56 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>
>> So, I just ran through the install instructions and found the issues.
>> I'll be updating the images and instructions tomorrow.
>>
>> Here are some things I did:
>>
>> yum erase cloud-*
>> rm -rf /usr/share/cloud/
>> rm -rf /var/lib/cloud/
>> rm -rf /var/cache/cloud/
>> rm -rf /tmp/cloud/
>> rm -rf /etc/cloud/
>> rm -rf /var/log/cloud
>> rm -rf /var/run/cloud
>>
>> virsh pool-list
>> virsh pool-destroy <local storage pool id>
>> virsh pool-undefine <local storage pool id>
>>
>> Edit /etc/passwd, /etc/shadow and /etc/group to remove the cloud user/group.
>>
>> This gets rid of all of the old paths and installations.
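On the nested-virtualization question above: the devcloud-kvm guest only sees `vmx` if the host CPU exposes it and the host's kvm_intel module has nesting turned on. A quick sanity-check sketch (the `has_virt_flag` helper is hypothetical, and the `/sys` path assumes an Intel host):

```shell
# Hypothetical helper: does a CPU-flags string contain a hardware-virt flag?
has_virt_flag() {
  printf '%s\n' "$1" | grep -Eq '(^| )(vmx|svm)( |$)'
}

# On a real Intel host (not executed here) the checks would be:
#   has_virt_flag "$(grep ^flags /proc/cpuinfo | head -1)" && echo "virt flags present"
#   cat /sys/module/kvm_intel/parameters/nested   # "Y" or "1" means nested KVM is on
#   # If it reports "N", reload the module with nesting enabled:
#   #   modprobe -r kvm_intel && modprobe kvm_intel nested=1
```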
>> Now I followed the doc:
>>
>> git clone https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git
>>
>> cd incubator-cloudstack/packaging/centos63
>> ./package.sh
>> cd ../../dist/rpmbuild/RPMS/x86_64
>> rpm -Uvh cloudstack*
>> cloudstack-setup-databases cloud:password@localhost --deploy-as root
>> cd /root/incubator-cloudstack
>> ** new command: mysql < tools/devcloud-kvm/devcloud-kvm.sql
>> cloudstack-setup-management
>> ** had to go into the UI and set up the integration port, then 'service
>> cloudstack-management restart'; will move this to devcloud-kvm.sql
>>
>> # set up marvin and auto-deploy test advanced zone (optional)
>> mvn -P developer,systemvm clean install
>> python tools/marvin/marvin/deployDataCenter.py -i tools/devcloud-kvm/devcloud-kvm-advanced.cfg
>>
>> On Thu, Feb 14, 2013 at 7:03 PM, Dave Cahill <dcah...@midokura.com> wrote:
>> > Awesome, thanks Marcus!
>> >
>> > On Fri, Feb 15, 2013 at 10:59 AM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> >
>> >> I'll take a look. I was in the process this afternoon of removing the old
>> >> packaging from devcloud-kvm and replacing it with the new. I'll push the new
>> >> image tomorrow. 4.1 as it stands now works for me on a fresh install though.
>> >>
>> >> On Feb 14, 2013 6:41 PM, "Dave Cahill" <dcah...@midokura.com> wrote:
>> >>
>> >> > Hi,
>> >> >
>> >> > I've been working on getting devcloud-kvm up and running using master, and
>> >> > I've hit a few issues - most were due to recent changes in master and are
>> >> > fixed now thanks to help from Rohit and Marcus. By the way, I should note
>> >> > that the devcloud-kvm docs on the wiki are really great - couldn't have
>> >> > gotten this far without them!
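The fresh-install sequence above can be sketched as a script. Every step comes from Marcus's list; the `DRY_RUN` switch and the `install_cloudstack` wrapper are my additions so the sequence can be previewed without a CentOS build host:

```shell
#!/bin/sh
# Sketch of the fresh-install steps above (4.1-era repo layout assumed).
# With DRY_RUN=1 each step is echoed instead of executed.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi
}

SRC=${SRC:-/root/incubator-cloudstack}

install_cloudstack() {
  run cd "$SRC/packaging/centos63"
  run ./package.sh
  run cd "$SRC/dist/rpmbuild/RPMS/x86_64"
  run rpm -Uvh cloudstack*
  run cloudstack-setup-databases cloud:password@localhost --deploy-as root
  # New step from the thread: load the devcloud-kvm seed data.
  run sh -c "mysql < $SRC/tools/devcloud-kvm/devcloud-kvm.sql"
  run cloudstack-setup-management
}
```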
>> >> >
>> >> > *Remaining issue:*
>> >> > * System VMs don't launch
>> >> > Using the stock devcloud-kvm image and instructions at [1], system VMs get
>> >> > launched, but the agent can't reach them over SSH (the control / link-local
>> >> > network), so they go into a launch-destroy-relaunch cycle.
>> >> >
>> >> > When I connect to the system VMs in VNC, I see:
>> >> >
>> >> > SeaBIOS (version seabios-0.6.1.2-19.el6)
>> >> > gPXE (http://etherboot.org) - 00:03.0 C900 PCI2.10 PnP BBS PMM0620@10C900
>> >> > Press Ctrl-B to configure gPXE (PCI 00:03.0)...
>> >> >
>> >> > I tried making a few tweaks to the libvirt XML for the system VMs and
>> >> > relaunching them using the tweaked XML, but to little effect - as far as I
>> >> > can see, it's as though the system VMs aren't recognizing the attached
>> >> > disks. Anyone have any hints? Could this be related to Rohit's "[BLOCKER]
>> >> > SystemVMs come up but don't have agent running" thread?
>> >> >
>> >> > *Fixed issues:*
>> >> > * No logs from agent
>> >> > Fixed for now with: cp /etc/cloudstack/agent/log4j{-cloud,}.xml
>> >> > * Paused logs on management server when running via jetty
>> >> > Fixed for now with:
>> >> > cp client/target/cloud-client-ui-4.1.0-SNAPSHOT/WEB-INF/classes/log4j{-cloud,}.xml
>> >> > * console-proxy directory moved, caused maven builds to fail
>> >> > Fixed by Rohit in master
>> >> > * console-proxy directory moved, devcloud-kvm's custom
>> >> > /etc/init.d/cloud-agent is now incorrect
>> >> > Changed this line to reflect the new console-proxy dir:
>> >> > cp -rp $CODEHOME/services/console-proxy/server/dist/systemvm.* /usr/lib64/cloud/common/vms/
>> >> > * Launching the stock devcloud-kvm image using the devcloud-kvm.xml
>> >> > definition on an iMac running Ubuntu 12.04 gives:
>> >> > error: Failed to start domain devcloud-kvm
>> >> > error: internal error guest CPU is not compatible with host CPU
>> >> >
>> >> > I removed this section:
>> >> > <cpu match='exact'>
>> >> >   <model>Westmere</model>
>> >> >   <vendor>Intel</vendor>
>> >> >   <feature policy='require' name='vmx'/>
>> >> > </cpu>
>> >> >
>> >> > and the VM launched correctly. Is there any advantage to this exact match,
>> >> > or should we remove it from devcloud-kvm.xml?
>> >> >
>> >> > Thanks again to everyone who worked on devcloud-kvm!
>> >> >
>> >> > Regards,
>> >> > Dave.
>> >> >
>> >> > [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/devcloud-kvm
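On the `<cpu match='exact'>` question at the end of the thread: the pinned Westmere model/vendor can only be satisfied on hosts whose CPU matches it, which is why the domain failed to start on the Ubuntu iMac. If the only hard requirement is exposing `vmx` for nested KVM, libvirt's host-model mode expresses that more portably. A sketch, not a tested devcloud-kvm change:

```xml
<!-- Mirror the host CPU but insist on vmx so nested KVM still works. -->
<cpu mode='host-model'>
  <feature policy='require' name='vmx'/>
</cpu>

<!-- Simplest alternative (least migration-friendly):
<cpu mode='host-passthrough'/>
-->
```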