Looks good, and I was going to suggest waiting like that if it turned out
to be a race condition. The only thing I would suggest is maybe a sleep, so
it doesn't peg the CPU if for some reason the system VM never gets the cmdline.
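Something along these lines is what I had in mind (a rough sketch, not the
actual patch; the one-second interval and 60-try bound are just illustrative):

    # Wait for the virtio channel device before reading the cmdline,
    # sleeping between checks so the loop doesn't spin the CPU, and
    # giving up after a bounded number of tries.
    tries=0
    while [ ! -e /dev/vport0p1 ] && [ "$tries" -lt 60 ]; do
        sleep 1
        tries=$((tries + 1))
    done
    if [ ! -e /dev/vport0p1 ]; then
        log_it "/dev/vport0p1 not loaded, perhaps guest kernel is too old."
    fi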
On Sep 12, 2014 1:28 PM, "David Bierce" wrote:
John —
I’ve submitted our patch to work around the issue to Review Board and tied it
to the original ticket we found. I submitted it against 4.3, but I know
you’ve been testing the patch on 4.2. If someone could give it a sanity
check, please do. It looks like it would be an issu…
Actually, I believe the kernel is the problem. The hosts are running CentOS 6;
the systemvm is the stock Debian 7 template. This does not seem to be an issue on
Ubuntu KVM hypervisors.
The fact that you are rebuilding systemvms on reboot is exactly why you are not
seeing this issue. New system VMs…
You may also want to investigate whether you are seeing a race condition
between /dev/vport0p1 coming online and cloud-early-config running. It will
be indicated by this line in the systemvm's /var/log/cloud.log:
log_it "/dev/vport0p1 not loaded, perhaps guest kernel is too old."
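To check whether a router has already hit this path, grepping that log on the
systemvm should show it:

    grep "vport0p1 not loaded" /var/log/cloud.log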
Actually, if it…
Can you provide more info? Is the host running CentOS 6.x, or is your
systemvm? What is rebooted, the host or the router, and how is it rebooted?
We have what sounds like the same config (CentOS 6.x hosts, stock
community-provided systemvm), and are running thousands of virtual routers,
rebooted r…
I have found that on CloudStack 4.2+ (when we changed to using the
virtio socket to send data to the systemvm), cloud-early-config fails when
running on CentOS 6.x. On new systemvm creation there is a high chance of
success, but still a chance of failure. After the systemvm has been created a
si…
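For anyone reproducing this, the guest side boils down to reading the boot
parameters from the channel device; a minimal sketch of the idea (not the real
cloud-early-config, which does more than this):

    # Read one line of boot parameters from the virtio channel and log it.
    # If the device node has not been created yet -- the race described
    # above -- the read is impossible and we can only log the failure.
    if [ -e /dev/vport0p1 ]; then
        read -r cmdline < /dev/vport0p1
        log_it "boot params: $cmdline"
    else
        log_it "/dev/vport0p1 not loaded, perhaps guest kernel is too old."
    fi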