> The problem is that snapd is configuring cloud-init in a way that
> ensures that cloud-init will detect all subsequent boots as first ones
> if the instance ID is only provided by a configuration ISO.
What if snapd also recorded the same instance_id in the _snapd.cfg
file as it saw on first boot?
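For context, cloud-init's first-boot decision comes down to comparing the datasource's current instance-id with the one cached from the previous boot. A minimal sketch of that comparison, assuming nothing about cloud-init's internals (the function name and signature are mine, not cloud-init's):

```python
def is_first_boot(cached_id, current_id):
    """Treat the boot as a first boot when no instance-id was cached,
    or when the current instance-id differs from the cached one.

    Illustrative only: cloud-init's real check also handles "trust"
    policies; this shows just the ID comparison at the heart of it.
    """
    if cached_id is None:
        return True
    return cached_id.strip() != current_id.strip()
```

Under this model, if the only source of the instance-id is the configuration ISO and snapd's generated config never records it, the comparison can keep reporting a first boot.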
Hi Ian,
I've just launched such a container, and I see a bunch of non-cloud-init
errors in the log; when I examine `systemctl list-jobs`, I see that the
two running jobs are systemd-logind.service and snapd.seeded.service:
root@certain-cod:~# systemctl list-jobs
JOB UNIT
** Changed in: cloud-archive/victoria
Status: Fix Committed => Fix Released
** Changed in: neutron (Ubuntu Groovy)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
This is not a bug, so I'm closing this report as Invalid.
Please join the #openstack-nova IRC channel on freenode if you have
nova development related questions. Also, for such questions it is
always good to have the code in question pushed as a review to
review.opendev.org so we can see what
Public bug reported:
Issue:
The Neutron DHCP agent bootstraps the DHCP leases file for a network
using all associated subnets[1]. In a multi-segment environment,
however, a DHCP agent can only service a single segment/subnet of a
given network.
The DHCP namespace, then, is configured with an
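A minimal sketch of the filtering this implies, assuming each subnet record carries a segment_id field (the helper name and data shape are illustrative, not Neutron's actual API):

```python
def subnets_serviceable_by_agent(subnets, agent_segment_ids):
    """Drop subnets on segments this DHCP agent cannot reach, so the
    leases file is only bootstrapped with addresses it can serve.

    `subnets` is a list of dicts with a 'segment_id' key;
    `agent_segment_ids` is the set of segment IDs local to the agent.
    """
    return [s for s in subnets if s.get('segment_id') in agent_segment_ids]
```
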
This is fix-released in 0.32.
** Changed in: cloud-utils
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1799953
Title:
growpart does not
Public bug reported:
get_binding_levels() can return None if the binding host is None, but
the patch [1] did not handle that case, so the code raises a TypeError.
[1] https://review.opendev.org/c/openstack/neutron/+/606827
The bug was first found in stable/rocky, but the master branch has the same
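The missing guard is simple; a standalone sketch of the pattern (not the actual ml2 code):

```python
def last_binding_level(binding_levels):
    """Return the last binding level, or None when there are none.

    get_binding_levels() can return None when the binding host is None;
    indexing that result directly is what raises the TypeError.
    """
    if not binding_levels:
        return None
    return binding_levels[-1]
```
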
Public bug reported:
A router HA port may be deleted concurrently while the plugin is trying
to update it; a PortNotFound exception is then raised.
The error was found in a Rocky deployment, but the master branch has the same code.
2020-12-01 10:52:46.738 62077 ERROR oslo_messaging.rpc.server
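One common fix for this kind of race is to treat PortNotFound as a benign outcome of the concurrent delete rather than an error. A sketch using a stand-in exception class (the real one is neutron_lib.exceptions.PortNotFound; the update function is hypothetical):

```python
class PortNotFound(Exception):
    """Stand-in for neutron_lib.exceptions.PortNotFound."""

def update_ha_port(fetch_port, port_id):
    # If the HA port was deleted concurrently, there is nothing left to
    # update, so swallow PortNotFound instead of letting it bubble up
    # to the RPC server as an ERROR.
    try:
        return fetch_port(port_id)
    except PortNotFound:
        return None
```
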
Public bug reported:
Example of such failure:
https://5457a4be5df8e2843a26-2385aff7f377e7626fd9afaccc81540d.ssl.cf1.rackcdn.com/764831/1/check/neutron-fullstack-with-uwsgi/0d31acb/testr_results.html
Maybe we should try to limit the number of API workers in the neutron
process and disable services
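Neutron does expose knobs for capping workers in neutron.conf; a sketch for a constrained test host (the values are illustrative, not a production recommendation):

```ini
[DEFAULT]
# Cap worker processes for the fullstack test environment.
api_workers = 2
rpc_workers = 1
```
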