There are lots of warnings showing up in the ovs-vswitchd log:
2019-01-24T18:40:59.070Z|00058|connmgr|INFO|br-ex<->tcp:127.0.0.1:6633: 2 flow_mods 10 s ago (2 adds)
2019-01-24T18:40:59.184Z|00059|connmgr|INFO|br-tun<->tcp:127.0.0.1:6633: 10 flow_mods 10 s ago (10 adds)
s: privsep
helper command exited non-zero (1)
2018-06-12 10:37:05.961 1038529 ERROR neutron
** Affects: neutron
Importance: Medium
Assignee: Miguel Angel Ajo (mangelajo)
Status: Confirmed
** Changed in: neutron
Status: New => Confirmed
** Changed in: neutron
Importanc
pp.backend.ovs_idl.transaction [-] Running txn command(idx=0):
DelPortCommand(if_exists=True, bridge=br-int, port=qr-567309c3-a8) do_commit
/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:84
** Affects: neutron
Importance: Medium
Assignee: Miguel Angel Ajo (mangela
/bugs/1734320
** Affects: neutron
Importance: High
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Changed in: neutron
Importance: Undecided => High
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)
** Changed in: neutron
Mil
** Changed in: neutron
Status: Fix Released => Confirmed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597461
Title:
L3 HA: 2 masters after reboot of controller
Status in
Public bug reported:
Neutron fails with:
http://logs.openstack.org/84/445884/2/gate/gate-neutron-dsvm-api-ubuntu-
xenial/800e806/logs/screen-q-svc.txt.gz
2017-03-15 21:02:19.418 30853 ERROR neutron.api.v2.resource DBError:
(pymysql.err.InternalError) (1038, u'Out of sort memory, consider
** Also affects: os-log-merger
Importance: Undecided
Status: New
** Changed in: os-log-merger
Status: New => Fix Committed
** Changed in: os-log-merger
Importance: Undecided => Critical
fixed in https://review.openstack.org/433834
** Changed in: openstack-manuals
Status: Incomplete => Fix Released
https://bugs.launchpad.net/bugs/1618762
Title:
This seems to be a Horizon issue; please explain any reasoning to bump
it back to neutron. Cheers.
** Project changed: neutron => horizon
Please involve any FWaaS folks to discuss on this, and revert the
"Opinion" flag.
** Changed in: neutron
Status: In Progress => Opinion
** Also affects: openstack-manuals
Importance: Undecided
Status: New
** Changed in: neutron
Status: New => Invalid
https://bugs.launchpad.net/bugs/1658261
** Project changed: nova => neutron
** Summary changed:
-
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native)
fail randomly
+ ovsdb native interface timeouts sometimes causing random functional failures
** Changed in: neutron
Importance: Undecided =>
Public bug reported:
Functional tests
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native)
fail randomly with:
Traceback (most recent call last):
File "neutron/tests/base.py", line 115, in func
return f(self, *args, **kwargs)
File
: Medium
Assignee: Miguel Angel Ajo (mangelajo)
Status: In Progress
** Tags: qos
** Changed in: neutron
Status: New => In Progress
** Changed in: neutron
Importance: Undecided => Medium
** Changed in: neutron
Milestone: None => ocata-3
or the loaded core plugin.
This is also a preliminary step to make
https://bugs.launchpad.net/neutron/+bug/1586056 ( Improved validation
mechanism for QoS rules with port types) possible.
** Affects: neutron
Importance: Medium
Assignee: Miguel Angel Ajo (mangelajo)
Status
Public bug reported:
With dvr_snat or dvr mode, if you create a port as described and then
attach it to a netns on any of the computes or the dvr_snat node, the
_floatingips key is not set by neutron-server on a sync_routers call
from the l3-agent.
This leads to the FIP namespace not being updated
Public bug reported:
http://logs.openstack.org/00/352200/14/check/gate-tempest-dsvm-neutron-
linuxbridge-ubuntu-xenial/6ba61d3/console.html
2016-11-17 05:06:33.890498 | Captured traceback:
2016-11-17 05:06:33.890507 | ~~~
2016-11-17 05:06:33.890534 | Traceback (most recent
*** This bug is a duplicate of bug 1507761 ***
https://bugs.launchpad.net/bugs/1507761
You're right, this is a duplicate, so I marked it as such; please refer
to the other bug.
** This bug has been marked a duplicate of bug 1507761
qos wrong units in max-burst-kbps option (per-second is
** Changed in: neutron
Status: Incomplete => Won't Fix
https://bugs.launchpad.net/bugs/1580149
Title:
[RFE] Rename API options related to QoS bandwidth limit rule
I'm moving this to Won't Fix for now; let's re-evaluate when we have
API microversioning.
** Changed in: neutron
Status: New => Won't Fix
That looks more like a configuration error; maybe you didn't configure
the qos service plugin or the qos ml2 extension. Otherwise you'd see
qos_policy_id in the net-create output.
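For reference, a sketch of the typical QoS enablement this comment alludes to; the file paths and the co-loaded drivers shown here are assumptions and vary per deployment:

```ini
# /etc/neutron/neutron.conf (server side): load the qos service plugin
[DEFAULT]
service_plugins = router,qos

# /etc/neutron/plugins/ml2/ml2_conf.ini (server side): qos ml2 extension
[ml2]
extension_drivers = port_security,qos

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (agent side)
[agent]
extensions = qos
```

With all three in place, net-create and port-show responses include the qos_policy_id field.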
** Changed in: neutron
Status: New => Invalid
, , rule_obj).
https://github.com/openstack/neutron/blob/17d85e4748f05b9785686f1164b6a4fe2963b8eb/neutron/extensions/qos.py#L314
And btw, those are not rule_obj (objects) but rule_cls (classes).
** Affects: neutron
Importance: Wishlist
Assignee: Miguel Angel Ajo (mangelajo)
Status
What do you mean specifically by self-service? A normal tenant network?
If so, I believe the scenario you're describing is already tested
in the multinode jobs.
I'd recommend seeking help in the Red Hat Bugzilla (picking RDO
and explaining the installer and settings you used), or
@boejern-teipel, the bug description doesn't seem to match what you're
describing in #18 anymore; could you open a separate bug for neutron
with the details?
Thank you.
** Changed in: neutron
Status: New => Invalid
Liberty was tested upstream with pymysql and not the other driver.
Can you change your connection strings to pymysql and use this package:
http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/common/python2-PyMySQL-0.6.7-2.el7.noarch.rpm
It is probably also available via yum install.
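The switch amounts to changing the SQLAlchemy driver prefix in the connection string; a minimal sketch assuming the usual [database] section (host and credentials are placeholders):

```ini
[database]
# old, MySQL-Python (mysqldb) driver:
#connection = mysql://neutron:NEUTRON_PASS@controller/neutron
# pure-Python PyMySQL driver, as tested upstream for Liberty:
connection = mysql+pymysql://neutron:NEUTRON_PASS@controller/neutron
```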
I agree; in some cases there could even be errors, because an l2 agent
extension could eventually be unable to handle a setting, and while
the port would be working, some of its characteristics might not have
been set.
** Changed in: neutron
Status: New => Opinion
** Changed in: neutron
** Project changed: networking-qos => neutron
https://bugs.launchpad.net/bugs/1585373
Title:
qos-policy update without specify --shared causing it change to
default
This doesn't happen in the standard ml2 plugin. It could be an
implementation detail in the nsx plugin, maybe from a change of
behaviour in the internal APIs (sorry for breaking you in that
unexpected way :/ )
** Changed in: neutron
Status: New => Invalid
After thinking about it, they are not the same thing: service providers
were designed to have multiple backends to pick from at resource
creation time.
This is not the case for the qos_plugin, where the notification driver is
just a plug to the backend implementation, while we remove the burden of
DB
As per the review discussion, this can't really be done until we have
API microversioning; deferring until then.
** Changed in: neutron
Status: In Progress => Won't Fix
Moved to Won't Fix until other endpoints use OVOs; then we can
reconsider this.
** Changed in: neutron
Status: In Progress => Won't Fix
Public bug reported:
Somehow, one of my deployments (ubuntu trusty64 based)
root@devstack:~# uname -a
Linux devstack 3.19.0-37-generic #42~14.04.1-Ubuntu SMP Mon Nov 23 15:13:51 UTC
2015 x86_64 x86_64 x86_64 GNU/Linux
results in never-ending output of "ip rule"
# ip rule
0: from all
Public bug reported:
This RFE is a follow-up of [1] and is registered only for completion,
to provide visibility on the high-level plan; we cannot tackle this
until [1] and [2] are in place.
Minimum bandwidth support (as opposed to bandwidth limiting) guarantees
a port a minimum bandwidth when
Public bug reported:
Problem statement
OpenFlow can be really powerful for programming the dataplane, but when
several features are hooked into the same virtual switch they need
knowledge of each other's implementation (entry table, next table, used
registers, conventions,
This one is not valid anymore, since they are now the "qos" plugin
again, with no subclassing.
** Changed in: neutron
Status: In Progress => Invalid
Public bug reported:
neutron security-group-rule-create --direction ingress default
results in:
2016-04-05 15:50:56.772 ERROR neutron.api.v2.resource
[req-67736b7a-6a4c-442c-9536-890ccf5c8d19 admin
3dc1eb0373d34ba9b2edfb41ee98149c] create failed
2016-04-05 15:50:56.772 TRACE
Public bug reported:
Minimum bandwidth support (as opposed to bandwidth limiting) guarantees
a port a minimum bandwidth when its neighbours are consuming egress or
ingress traffic and can be throttled in favor of the guaranteed port.
Strict minimum bandwidth support requires scheduling cooperation,
Public bug reported:
The current implementation of bandwidth limiting rules only supports egress
bandwidth
limiting.
Use cases
=========
There are cases where ingress bandwidth limiting is more important than
egress limiting, for example when the workload of the cloud is mostly a
consumer of
Public bug reported:
Use case
========
Monitoring of the "segmentation resources".
Logging the status of such resources as we go (or when they pass a certain
threshold) would allow monitoring solutions to identify tripping over
certain levels, and warn the administrator to take action: cleaning
Public bug reported:
The notification_driver parameter for QoS is just a service provider that is
then called from the QoS plugin when a policy is created, changed or deleted.
We should look into moving to the standard naming of
"service_providers" and deprecating the other.
Public bug reported:
[root@localhost ~(keystone_admin)]# neutron quota-update --loadbalancer 100
--debug
DEBUG: keystoneauth.session REQ: curl -g -i -X PUT
http://192.168.1.195:9696/v2.0/quotas/100.json -H "User-Agent:
python-neutronclient" -H "Content-Type: application/json" -H "Accept:
I didn't observe that in my deployments, and it's the Linux kernel that
is responsible for that specific behaviour (policing). I suspect it
could be related to your kernel HZ.
What kernel are you using?
** Changed in: neutron
Status: Incomplete => Invalid
Let's not let it expire, there's an almost ready patch by @kevin:
https://review.openstack.org/#/c/218512/
** Changed in: neutron
Status: Expired => Confirmed
Public bug reported:
This happens because connection tracking zones don't work in the
IptablesFirewallDriver (they do for Hybrid).
The subclass for the hybrid driver is the one introducing the zone
rules [1]
I remember it was discussed during this review [2], but I cannot see if
there was any
I'd say http://developer.openstack.org/api-ref-networking-v2-ext.html#qos-ext
and http://docs.openstack.org/liberty/networking-guide/adv-config-qos.html
seem reasonable, so closing this.
** Changed in: neutron
Status: In Progress => Fix Committed
** Changed in: neutron
Status:
I agree with @assaf's comments here.
We used kilobits to be uniform in the API.
We don't necessarily need to match the low-level implementation details;
this is a high-level abstraction for any plugin to implement.
What I would look at is the discrepancy in "per second":
OVS talks about kB/s while
** Changed in: neutron
Status: Fix Released => Confirmed
** Changed in: neutron
Importance: Undecided => Medium
https://bugs.launchpad.net/bugs/1541742
Title:
Importance: Medium
Assignee: Miguel Angel Ajo (mangelajo)
Status: In Progress
** Changed in: neutron
Importance: Undecided => High
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)
** Changed in: neutron
Importance: High => Medium
Public bug reported:
Probably we should provide a reconnection mechanism when something on
the OpenFlow connection goes wrong.
2016-01-06 08:23:45.031 11755 DEBUG OfctlService [-] dpid 231386065181514 ->
datapath None _handle_get_datapath
Public bug reported:
pid  uid  tgid  total_vm  rss  nr_ptes  swapents
oom_score_adj name
18903 0 18903 123420 15890 60 0 0 ovs-vswitchd
1887998 188783584 12030 114 11550 neutron-
4516998 451676571
Public bug reported:
Using HA routers, we've found that taking network node agents down for
T > agent_down_time, and then bringing them back up, fires a race
condition during ovs-agent and l3-agent boot.
Even if you set a constraint on ovs-agent being up before l3-agent, that
won't work, because still
Thanks for the info @rushikesh, moved it to Invalid.
** Changed in: neutron
Status: Incomplete => Invalid
https://bugs.launchpad.net/bugs/1513788
Title:
Exception
Marked as invalid, please check comment #3.
Feel free to reopen if that does not work.
** Changed in: neutron
Status: New => Invalid
Marked as Won't Fix.
A bridge removed from bridge_mappings won't be handled or known by the
neutron agent.
A note was added to the documentation:
https://review.openstack.org/168084
** Changed in: neutron
Status: New => Won't Fix
1-06 08:16:42.425 |
** Affects: neutron
Importance: Low
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Tags: functional-tests
** Changed in: neutron
Importance: Undecided => Low
Public bug reported:
When migrating a VM from one host to another in combination with
neutron, the VM can resume at the destination host while the network is
not ready (race condition).
QEMU has a mechanism to send a few RARPs once migration is done and
before resuming.
Nova needs to coordinate with QEMU
Public bug reported:
https://github.com/openstack/neutron/blob/master/neutron/services/qos/qos_plugin.py#L59
https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/message_queue.py
We should be smarter on the message_queue and not send useless notifications,
This bug is incorrect, closing as Invalid.
** Changed in: neutron
Status: Incomplete => Invalid
** Changed in: neutron
Status: Invalid => Won't Fix
** Changed in: neutron
Status: Won't Fix => Invalid
As per IRC discussion with Rossella, it seems that after migrating the
VM/port to a new host, that field should have been automatically updated.
Could you provide the openvswitch (destination) agent logs, or a
detailed step-by-step description of how to reproduce it?
** Changed in: neutron
Status:
Public bug reported:
One of the telco working group requirements is being able to expose a
whole PF (physical function) on an SR-IOV card.
To indicate that to nova, we need to specify that we want a
"physicalfunction" type of port.
It's different from the ironic baremetal ports in the sense that
50-91db-251c139029b2 admin
85b859134de2428d94f6ee910dc545d8] 172.16.175.128 - - [15/Sep/2015 01:05:26]
"PUT /v2.0/ports/b0885ae1-487b-40bc-8fc0-32432a21e39d.json HTTP/1.1" 500 383
0.084317
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Public bug reported:
A high spike during the last 24h, under investigation.
Public bug reported:
http://logs.openstack.org/48/217048/1/gate/gate-neutron-
python27/dc83518/testr_results.html.gz
ft1.8256:
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_get_policy_bandwidth_limit_rules_for_policy_with_filters_StringException:
Empty attachments:
Public bug reported:
In the fullstack run logs for server and agents you can find this.
2015-08-20 12:36:28.369 1898 WARNING oslo_config.cfg [-] Option
rabbit_virtual_host from group DEFAULT is deprecated. Use option
rabbit_virtual_host from group oslo_messaging_rabbit.
2015-08-20 12:36:28.369
Public bug reported:
If the CSP is manually tagging specific tenant networks to follow
specific qos profiles, a tenant could use
neutron port-update port-id --no-qos-policy
or
neutron net-update net-id --no-qos-policy
to shake it off.
A possible solution is not allowing tenants to
://git.openstack.org/cgit/openstack/neutron/tree/neutron/objects/qos/policy.py?h=feature/qos#n68
to guarantee network/port attachment permissions are properly checked.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: Confirmed
** Tags: qos
Public bug reported:
All agents with ports connected to such a network need to be notified
about the update.
We have several options here:
1) Add handle_network to the extension manager, so it's notified about any
network update
2) force ML2 to update all relevant ports (more RPC messages
Public bug reported:
In configurations where the policy creation is left open to the tenants
by policy.json modification, this is possible:
a) Admin creates policy A, attaches Rule X
b) tenant creates policy B, modifies rule X via API.
AS ADMIN:
[vagrant@devstack ~]$ source
Public bug reported:
For example:
1) You set your private network to a qos policy 'X', limiting egress BW to 1Mbps
2) You create your router, the internal leg gets plugged to the internal network
The internal leg will be limited to egress 1Mbps, which is actually
limiting the network in general
True, this is fixed, Thanks Livnat!
** Changed in: neutron
Assignee: Eugene Nikanorov (enikanorov) => Miguel Angel Ajo (mangelajo)
** Changed in: neutron
Status: In Progress => Fix Released
Yes, this was fixed by Mike Kolesnik or Assaf Muller, I think; I can't
remember now.
** Changed in: neutron
Status: In Progress => Fix Released
for neutron-server in a loop as the agent
tries to sync the tunnel, and fails.
This new behaviour was introduced in Kilo by this patch:
https://github.com/openstack/neutron/commit/3db0e619c83892a7aab61807969205253833ff8d
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo
the agent
configures patch ports and openflow rules in both bridges.
I will propose a patch to do that.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangelajo
Public bug reported:
Unit tests failing non-deterministically:
neutron.tests.unit.test_dhcp_scheduler.TestNetworksFailover.test_reschedule_network_from_down_agent_StringException:
Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout
pythonlogging:'': {{{
2015-03-17
Public bug reported:
Part of a recent ProcessMonitor API refactor broke netns_cleanup.py
https://review.openstack.org/#/c/154464/13/neutron/cmd/netns_cleanup.py
Those bits need to be reverted.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo
** Changed in: neutron
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1362171
Title:
Reuse process management classes from dnsmasq for radvd
would fail, and that's ok enough.
I will submit a patch to remove the message logic.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
always happen before any attempt to
replace_file
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)
'
This should be captured and reported as a WARNING instead.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)
be prevented by either getting the full list of .keys() for
the loop, or adding extra locking.
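The failure mode can be sketched in isolation (the dict of ports here is hypothetical): iterating a live dict view while another path mutates it raises RuntimeError, and snapshotting .keys() for the loop avoids it:

```python
# Mutating a dict while iterating over its live view raises
# "RuntimeError: dictionary changed size during iteration".
ports = {"qr-1": "ACTIVE", "qr-2": "ACTIVE"}

raced = False
try:
    for port_id in ports:      # live view of the keys
        del ports[port_id]     # mutation mid-iteration, as in the race
except RuntimeError:
    raced = True

# Snapshotting the keys up front makes the same loop safe.
ports = {"qr-1": "ACTIVE", "qr-2": "ACTIVE"}
for port_id in list(ports.keys()):
    del ports[port_id]
```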
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangela
) didn't
start, but the port was created, the port will be left there, and no
disable will remove the actual port.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Description changed:
https://github.com/openstack/neutron/blob
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)
** Description changed:
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Traceback (most
recent call last):
+ 2014-11-30 16
is enabled)
will respawn a second neutron-ns-metadata-proxy on each namespace/resource
after upgrade (I-J) and agent restart, due to the inability to find the old
PID file and external process PID.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status
Public bug reported:
It was agreed on line 37 here, as a follow up:
https://review.openstack.org/#/c/112798/26/neutron/agent/linux/external_process.py
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: In Progress
** Changed in: neutron
mismatch_error
2014-09-05 05:23:47.990 | MismatchError: 8 != 4
2014-09-05 05:23:47.990 |
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: Confirmed
Public bug reported:
neutron/agent/l3_agent.py:
LOG.debug("not hosting snat for router: %s", ri.router['id'])
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => Miguel Angel
*** This bug is a duplicate of bug 1364171 ***
https://bugs.launchpad.net/bugs/1364171
Public bug reported:
http://logs.openstack.org/35/115935/7/check/check-neutron-dsvm-
functional/7dd676a/console.html#_2014-09-01_22_27_23_651
I have seen those randomly appearing.
2014-09-01 22:27:08.361 |
Public bug reported:
The network namespace is not mandatory, but that makes
root_wrap non-mandatory too, because you could
want to start non-privileged processes outside a
namespace through the same API, covering both
the namespace and non-namespace needs.
** Affects: neutron
Importance:
Public bug reported:
This is found during functional testing, when .start() is called with
block=True under slightly high load.
This suggests the default timeout needs to be raised to make this module
work in all situations.
Public bug reported:
setup.cfg doesn't include the data_files entries to get the
etc/neutron/plugins/nuage/nuage_plugin.ini
configuration file installed into system's /etc
https://github.com/openstack/neutron/blob/master/setup.cfg#L24
** Affects: neutron
Importance: Undecided
Public bug reported:
A few tests use called_once_with instead of mock's
assert_called_once_with, without checking the result.
That means that we're not asserting that the call happened.
Those tests need to be fixed.
[majopela@f20-devstack neutron]$ grep .called_once_with * -R | grep -v
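The trap is easy to demonstrate: calling a misspelled assertion method on a Mock simply auto-creates a child mock and returns it, so the "check" never fails; a minimal sketch:

```python
from unittest import mock

m = mock.Mock()
m.do_work(1, 2)

# Misspelled helper: Mock auto-creates the attribute and returns a new
# Mock, so nothing is asserted even though the arguments are wrong.
result = m.do_work.called_once_with(99)

# The real helper raises AssertionError when the recorded call differs.
mismatch_detected = False
try:
    m.do_work.assert_called_once_with(99)
except AssertionError:
    mismatch_detected = True
```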
Public bug reported:
Permission elevation via rootwrap has a massive impact on the network nodes,
increasing setup time 2.5 times compared to plain sudo. [2] [3]
A network node with 192 private networks + 192 routers takes:
- 24 minutes to set up with rootwrap
- 10 minutes to set up with
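For context, the choice between the two is made through the agents' root_helper option; a hedged sketch (the rootwrap.conf path is the conventional one, and the daemon variant arrived in later releases as a middle ground):

```ini
[agent]
# rootwrap: spawns a new Python interpreter per privileged command
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
# plain sudo: much faster, but loses rootwrap's command filtering
#root_helper = sudo
# rootwrap daemon mode: one long-lived, filtered helper process
#root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```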
TRACE neutron.agent.netns_cleanup_util
3rd) There was a third problem that I'm trying to reproduce at the moment, I'll
update in a while.
** Affects: neutron
Importance: Undecided
Assignee: Miguel Angel Ajo (mangelajo)
Status: New
** Changed in: neutron
Assignee
Public bug reported:
it dies on timeout while sending a test success
http://logs.openstack.org/01/68601/2/check/gate-neutron-
python26/69a7a57/console.html#_2014-02-10_10_31_41_262
2014-02-10 10:31:41.312 | File "/usr/lib64/python2.6/socket.py", line 303, in
flush
2014-02-10 10:31:41.313 |
Public bug reported:
If you stop a neutron-l3-agent and you want only the qrouter-* namespaces
cleaned up, you can't.
If you stop a neutron-dhcp-agent and you want only the qdhcp-*
namespaces cleaned up, you can't.
I propose adding a switch to this tool, so we can properly select which