On 01/22/2015 10:17 AM, Carl Baldwin wrote:
> I think this warrants a bug report. Could you file one with what you
> know so far?
Carl,

Seems as though a recent change introduced a bug.  This is on a devstack
I just created today, at l3/vpn-agent startup:

2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent Traceback (most recent call last):
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/common/utils.py", line 342, in call
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent     return func(*args, **kwargs)
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in process_router
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent     self._process_external(ri)
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in _process_external
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent     self._update_fip_statuses(ri, existing_floating_ips, fip_statuses)
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent UnboundLocalError: local variable 'existing_floating_ips' referenced before assignment
2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, in _spawn_n_impl
    func(*args, **kwargs)
  File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1093, in _process_router_update
    self._process_router_if_compatible(router)
  File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1047, in _process_router_if_compatible
    self._process_added_router(router)
  File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1056, in _process_added_router
    self.process_router(ri)
  File "/opt/stack/neutron/neutron/common/utils.py", line 345, in call
    self.logger(e)
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/common/utils.py", line 342, in call
    return func(*args, **kwargs)
  File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in process_router
    self._process_external(ri)
  File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in _process_external
    self._update_fip_statuses(ri, existing_floating_ips, fip_statuses)
UnboundLocalError: local variable 'existing_floating_ips' referenced before assignment

Since that's happening while we're holding the iptables lock, I'm
assuming no rules are being applied.  I'm looking into it now and will
file a bug if there isn't already one.
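For reference, an UnboundLocalError like that is the usual Python
symptom of a local name that only gets bound on one code path.  A
minimal, self-contained sketch of that pattern (made-up names, not the
actual agent code):

# Made-up sketch of the failure pattern, not the real Neutron code: the
# local is only bound when the branch runs, but it is referenced
# unconditionally afterwards.
def process_external(router, has_gateway):
    fip_statuses = {}
    try:
        if has_gateway:
            existing_floating_ips = router.get('floating_ips', [])
            fip_statuses = {ip: 'ACTIVE' for ip in existing_floating_ips}
    except Exception:
        # An exception raised before the assignment above also leaves
        # the name unbound.
        pass
    # Raises UnboundLocalError if the assignment above never ran:
    return existing_floating_ips, fip_statuses

if __name__ == '__main__':
    process_external({'floating_ips': ['10.0.0.3']}, True)   # fine
    process_external({}, False)                              # UnboundLocalError

Presumably the fix is either to bind the variable before the
conditional/try, or to only call _update_fip_statuses() when it was
actually set.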
-Brian

> On Wed, Jan 21, 2015 at 2:24 PM, Brian Haley <brian.ha...@hp.com> wrote:
>> On 01/21/2015 02:29 PM, Xavier León wrote:
>>> On Tue, Jan 20, 2015 at 10:32 PM, Brian Haley <brian.ha...@hp.com> wrote:
>>>> On 01/20/2015 09:20 AM, Xavier León wrote:
>>>>> Hi all,
>>>>>
>>>>> we've been doing some tests with OpenStack Kilo and found a
>>>>> problem: iptables rules are not being injected into the router
>>>>> namespace.
>>>>>
>>>>> Scenario:
>>>>> - a private network NOT connected to the outside world.
>>>>> - a router with only one interface connected to the private network.
>>>>> - a vm instance connected to the private network as well.
>> <snip>
>>>> Are you sure the l3-agent is running?  You should have seen wrapped
>>>> rules from it in most of these tables, for example:
>>>>
>>>> # Generated by iptables-save v1.4.21 on Tue Jan 20 16:29:19 2015
>>>> *filter
>>>> :INPUT ACCEPT [34:10882]
>>>> :FORWARD ACCEPT [0:0]
>>>> :OUTPUT ACCEPT [1:84]
>>>> :neutron-filter-top - [0:0]
>>>> :neutron-l3-agent-FORWARD - [0:0]
>>>> :neutron-l3-agent-INPUT - [0:0]
>>>> :neutron-l3-agent-OUTPUT - [0:0]
>>>> :neutron-l3-agent-local - [0:0]
>>>> [...]
>>>
>>> Yes, the l3-agent is up and running.  I see these rules when
>>> executing the same test in Juno but not in Kilo.  FYI, it's an
>>> all-in-one devstack deployment.
>>>
>>>> I would check the log files for any errors.
>>>
>>> There are no errors in the logs.
>>>
>>> After digging a bit more, we have seen that setting the config value
>>> enable_isolated_metadata to True (default: False) in dhcp_agent.ini
>>> solves the problem in our scenario.  However, this change in
>>> configuration was not necessary before (our tests passed in Juno with
>>> that setting at False), so we were wondering if there has been a
>>> change in how the metadata service is accessed in such scenarios, a
>>> new issue caused by the l3-agent refactoring, or some other problem
>>> in our setup we haven't narrowed down yet.
>>
>> There have been some changes recently in the code, perhaps:
>>
>> https://review.openstack.org/#/c/135467/
>>
>> Or just look at some of the other recent changes in the repository?
>>
>> -Brian