** Changed in: kolla-ansible
Importance: Low => Undecided
** Changed in: kolla-ansible
Status: Triaged => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
** Also affects: devstack
Importance: Undecided
Status: New
** Changed in: devstack
Status: New => In Progress
** Changed in: devstack
Importance: Undecided => High
--
** No longer affects: masakari
--
https://bugs.launchpad.net/bugs/1804062
Title:
test_hacking fails for python 3.6.7 and newer
Status in Ubuntu Cloud
** Changed in: kolla-ansible
Status: Triaged => Invalid
** No longer affects: kolla-ansible/rocky
** No longer affects: kolla-ansible/stein
** Changed in: kolla-ansible
Importance: High => Undecided
--
Cinder team, please respond.
** Changed in: masakari
Assignee: (unassigned) => Radosław Piliszek (yoctozepto)
** Also affects: cinder
Importance: Undecided
Status: New
** Changed in: cinder
Status: New => Confirmed
** Changed in: oslo.db
Status: Confirmed
I have repurposed this bug report to track the progress of the fix.
We need oslo help anyway.
** Also affects: oslo.db
Importance: Undecided
Status: New
** Also affects: masakari
Importance: Undecided
Status: New
** Changed in: oslo.db
Status: New => Confirmed
** Changed in:
Have you built the 'source' images yourself or used the ones published on
DockerHub?
** Tags added: ussuri-ubuntu-source
** Changed in: horizon
Status: New => Invalid
** Changed in: kolla-ansible
Status: New => Incomplete
--
Public bug reported:
(neutron-server)[neutron@os-controller-1 /]$ neutron-db-manage --subproject
neutron-sfc upgrade --contract
argument --subproject: Invalid String(choices=['vmware-nsx', 'networking-sfc',
'neutron-vpnaas', 'networking-l2gw', 'neutron-fwaas', 'neutron',
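For context, the rejection above is standard argparse `choices` validation: `neutron-sfc` is not in the list, while `networking-sfc` is. A minimal sketch (the subproject names are copied from the error message, not from neutron's source):

```python
import argparse

# Minimal sketch of argparse 'choices' validation; the choices below are
# taken from the error output above, not from neutron-db-manage itself.
parser = argparse.ArgumentParser(prog="neutron-db-manage")
parser.add_argument("--subproject",
                    choices=["vmware-nsx", "networking-sfc", "neutron-vpnaas",
                             "networking-l2gw", "neutron-fwaas", "neutron"])

# The valid name is accepted; "neutron-sfc" (as used in the report) would
# trigger "invalid choice" and a non-zero exit (SystemExit).
args = parser.parse_args(["--subproject", "networking-sfc"])
print(args.subproject)  # networking-sfc
```

This suggests the command in the report fails simply because the subproject is registered as `networking-sfc`, not `neutron-sfc`.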
** Also affects: kolla-ansible/ussuri
Importance: Undecided
Status: New
** Also affects: kolla-ansible/train
Importance: Undecided
Status: New
** Also affects: kolla-ansible/victoria
Importance: High
Assignee: Michal Nasiadka (mnasiadka)
Status: In Progress
** Changed in: masakari
Status: In Progress => Fix Released
** Changed in: masakari
Milestone: None => 10.0.0.0rc1
** Changed in: masakari
Assignee: ZHOU LINHUI (zhoulinhui) => Radosław Piliszek (yoctozepto)
** Also affects: masakari/victoria
Importance:
** Also affects: kolla-ansible
Importance: Undecided
Status: New
** Changed in: kolla-ansible
Importance: Undecided => High
** Also affects: kolla-ansible/victoria
Importance: High
Status: New
--
I believe it was fixed already, but let Michał confirm.
** Changed in: keystone
Status: New => Invalid
** Changed in: kolla-ansible
Assignee: (unassigned) => Michal Nasiadka (mnasiadka)
** Changed in: kolla-ansible
Importance: Undecided => High
--
** Also affects: kolla
Importance: Undecided
Status: New
** Also affects: kolla-ansible
Importance: Undecided
Status: New
** Changed in: kolla-ansible
Status: New => Triaged
** Changed in: kolla
Status: New => Triaged
--
** Changed in: kolla
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1277104
Title:
wrong order of assertEquals args
Marked invalid for kolla-ansible, as it does not seem to involve us;
please change it if you find it can or must be fixed in kolla-ansible.
** Changed in: kolla-ansible
Status: New => Invalid
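For background on the title above: the convention in OpenStack test suites is `assertEqual(expected, observed)`, so swapping the arguments produces a misleading failure message. A minimal illustration (not the affected project's code):

```python
import unittest

# Minimal illustration of why argument order matters: unittest renders the
# failure message with the first argument before "!=", so the conventional
# order (expected first, observed second) reads correctly.
class _Case(unittest.TestCase):
    def runTest(self):
        pass

t = _Case()
msg = None
try:
    t.assertEqual(4, 2 + 1)  # expected=4, observed=3
except AssertionError as e:
    msg = str(e)
print(msg)  # -> "4 != 3"
```

With the arguments reversed, the same failure would read "3 != 4", suggesting the expected value was 3, which is what the hacking check flags.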
--
** Also affects: ovsdbapp
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1881424
Title:
Neutron ovs agent fails on rpc_loop iteration:1
Public bug reported:
Hi Neutrinos!
This is from Kolla-Ansible CI; it started happening in Victoria on May 28.
It affects all distros (Debian, Ubuntu, CentOS) and makes the jobs fail (the
OVS agent is dead).
It does *not* affect OVN though.
br-ex exists before iteration:0 and acts fine in iteration:0 but not
** Changed in: neutron
Status: Triaged => Fix Released
--
https://bugs.launchpad.net/bugs/1458890
Title:
[RFE] Add segment support to Neutron
Status in neutron:
I'm notifying nova - maybe they are aware of some issue that could cause
this. To reiterate - this host works as long as cinder volume is not
used? I.e. running an instance from local storage is no problem for it?
(that was what I was thinking with my previous question)
FWIW, it does not look
Public bug reported:
When using LXC with volumes from Cinder, nova-compute logs [1] every
minute.
Also, the instance cannot be started after having been stopped and fails
on [2] (same error, different stack).
Tested using master (Victoria) on Ubuntu 18.04 (bionic) using devstack.
[1]
Hmm, what release is that (e.g. Train)? What distro? I'm notifying
Neutron because it works for me and I have no hints. It should really
say something in the logs.
** Also affects: neutron
Importance: Undecided
Status: New
--
This has been reported to Kolla via IRC.
** Also affects: openvswitch
Importance: Undecided
Status: New
** Also affects: neutron
Importance: Undecided
Status: New
** Also affects: kolla-ansible
Importance: Undecided
Status: New
--
Affects kolla - flaky images.
** Also affects: kolla
Importance: Undecided
Status: New
** Also affects: kolla/stein
Importance: Undecided
Status: New
** Also affects: kolla/train
Importance: Undecided
Status: New
** Changed in: kolla/stein
Status: New =>
Public bug reported:
Comparing:
https://opendev.org/openstack/neutron/src/commit/5e84289f68402b401d9a35d93473d9517a43e300/devstack/lib/ovn_agent#L78
with:
https://opendev.org/openstack/neutron-lib/src/commit/37deb266394a5c633309b3ad66241bb02695bdb9/neutron_lib/constants.py#L325
And we get MTU
Public bug reported:
OS release: Train
When adding new custom IP protocol rule, the IP protocol is neither required
nor has any default.
Submitting the form without filling in this field results in a "Form submission
failed" error with no other meaningful error message.
I suggest making this field required
Public bug reported:
OS release: Train
I have some leftover compute services that are enabled but down.
OSC properly reports them:
$ openstack compute service list
+-+--+-+--+-+---++
| ID | Binary |
** Also affects: devstack
Importance: Undecided
Status: New
** Changed in: devstack
Status: New => Triaged
** Changed in: kolla
Status: Confirmed => Triaged
--
Public bug reported:
Tested on Train release.
As a cloud admin who is not a member of the inspected project, visit that
project's port page, which should have a security group:
http://horizon/admin/networks/ports/$UUID
"Security Groups" displays "No security group is associated"
With the same credentials OSC displays
yj.bai is working on that for kolla; it is not a bug in the current
state of kolla.
Indeed it is weird that the 4th request fails. Could it be the case that WSGI
pool gets poisoned by the first three requests?
I believe this request is about neutron having difficulty being run behind
Bah, I'm bad at seeing "read more". :-)
So kolla-ansible should remove removed endpoints.
Otherwise the behavior is mostly undefined: it may fail or not, depending on
the client interface.
Notifying Horizon in case they want to strengthen this logic for the future.
** Changed in: kolla-ansible
** Changed in: kolla-ansible/train
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1856296
Title:
upgrade to Train might
Public bug reported:
As the subject says, nova-api returns 401 for /v2.1 (when no auth is
provided). This is hardly sensitive information, as it is already revealed
on /, which does not return 401.
I discovered this debugging js-openstack-lib.
This is not a problem for other tested services (neutron,
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: nova
Status: New => Confirmed
** Changed in: cinder
Status: New => Confirmed
** Changed in: kolla-ansible
Status: New => Invalid
** Summary changed:
- Cinder backup failed for restore volume
Public bug reported:
Glance logs these:
Jan 25 10:44:52.170319 ubuntu-bionic-rax-ord-0014151033
devstack@g-api.service[11714]: DEBUG oslo_middleware.cors [-] Request header
'x-auth-token' not in permitted list: ['ACCEPT', 'ACCEPT-LANGUAGE',
'CONTENT-TYPE', 'CACHE-CONTROL', 'CONTENT-LANGUAGE',
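The oslo.middleware debug message above comes from CORS handling: a requested header is checked case-insensitively against the configured allow-list. A hypothetical sketch of that check (not the actual oslo code), using the list from the log:

```python
# Hypothetical sketch of a CORS allowed-headers check, mirroring the
# oslo_middleware.cors debug message above; not the actual oslo code.
PERMITTED = ['ACCEPT', 'ACCEPT-LANGUAGE', 'CONTENT-TYPE',
             'CACHE-CONTROL', 'CONTENT-LANGUAGE']

def header_permitted(header: str) -> bool:
    """Case-insensitive membership test against the allow-list."""
    return header.upper() in PERMITTED

print(header_permitted('content-type'))   # True
print(header_permitted('x-auth-token'))   # False -> the debug log above
```

On this reading, the log is expected whenever a client requests `x-auth-token` without it being added to the configured allowed headers.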
Public bug reported:
Updating neutron-lib to 2.0.0 (py3-only release) in upper constraints on master
[1] killed neutron tempest rocky jobs with:
2020-01-16 19:07:29.088781 | controller | Processing
/opt/stack/neutron-tempest-plugin
2020-01-16 19:07:29.825378 | controller | Requirement already
** Also affects: devstack
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1860021
Title:
nova-live-migration fails 100% with
Well, the API is versioned to keep compatibility constraints. This is not a
Kolla-specific issue; notifying Nova.
** Also affects: nova
Importance: Undecided
Status: New
--
Importance: Undecided => Critical
** Changed in: kolla-ansible/ussuri
Status: Triaged => Invalid
** Changed in: kolla-ansible/ussuri
Importance: Critical => Undecided
** Changed in: kolla-ansible/train
Milestone: None => 9.0.0
** Changed in: kolla-ansible/train
Assignee: (unassign
Public bug reported:
In kolla-ansible CI we started getting on Ubuntu for Stein->Train:
Row size too large. The maximum row size for the used table type, not counting
BLOBs, is 8126. This includes storage overhead, check the manual. You have to
change some columns to TEXT or BLOBs
for "ALTER
Asking nova to maybe decrease severity?
** Summary changed:
- CI: nova-compute-ironic reports errors in the ironic scenario
+ kolla-ansible CI: nova-compute-ironic reports errors in the ironic scenario
** Also affects: nova
Importance: Undecided
Status: New
--
So trying to get auth not scoped to a project but domain instead, I get this:
failed: [primary] (item={u'service_type': u'identity', u'name': u'keystone'}) => {
    "action": "os_keystone_service",
    "attempts": 5,
    "changed": false,
    "invocation": {
        "module_args": {
** Changed in: kolla-ansible/train
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1841907
Title:
Neutron bootstrap failing on Ubuntu bionic
** No longer affects: keystone
--
https://bugs.launchpad.net/bugs/1847727
Title:
CI: failing on CRITICAL may be still too strong
Status in
Chris Dent correctly suggested that the message comes from keystone. Is
it possible to lower the severity of this particular event?
(Originally reported to Placement because only it has been found to log
this message.)
** Also affects: keystone
Importance: Undecided
Status: New
--
** Changed in: kolla-ansible
Assignee: Radosław Piliszek (yoctozepto) => (unassigned)
--
https://bugs.launchpad.net/bugs/1846507
Title:
ovs VXLAN over IPv6 conflict
Public bug reported:
keystone was busy (behind haproxy)
nova-conductor:
2019-10-04 15:39:17.103 6 CRITICAL nova [-] Unhandled error: GatewayTimeout:
Gateway Timeout (HTTP 504)
2019-10-04 15:39:17.103 6 ERROR nova Traceback (most recent call last):
2019-10-04 15:39:17.103 6 ERROR nova File
(changed kolla-ansible params so that launchpad does not hide it from
me)
** Changed in: kolla-ansible
Status: Opinion => Triaged
** Changed in: kolla-ansible
Importance: Undecided => Wishlist
--
*not* arise when ovs is using IPv4
tunnels (kinda counter-intuitively).
Worked around by using a different port. This has no real-life impact
(IMHO) but is undoubtedly an interesting phenomenon.
** Affects: kolla-ansible
Importance: Undecided
Assignee: Radosław Piliszek (yoctozepto
Notifying neutron so that they may shed some light on this observation.
** Changed in: kolla-ansible
Status: Triaged => Opinion
** Changed in: kolla-ansible
Importance: Wishlist => Undecided
** Description changed:
This has been observed while testing CI IPv6.
ovs-agent tries
Public bug reported:
This is re:
http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008210.html
"[keystone] [stein] user_enabled_emulation config problem"
I set:
user_tree_dn = ou=Users,o=UCO
user_objectclass = inetOrgPerson
user_id_attribute = uid
user_name_attribute = uid
reover, enabling the
message driver to allow us to get messages in the queueing system.
After opening the PR, Radosław Piliszek questioned the proposed
changes. More details can be found in the PR's comments
(https://review.opendev.org/#/c/670626/2). In summary, it
was ques
Public bug reported:
Kolla has been hit by
https://opendev.org/openstack/horizon/commit/4e911e2889ebe7f0a577a0323649dceb9cef363c
(Explicitly set LOCALE_PATHS for Horizon apps).
We are compiling messages for all projects in a loop using horizon
manage.py and the mentioned commit caused them to