[Yahoo-eng-team] [Bug 1625013] [NEW] dhcpv6-stateless cause dhcp _generate_opts_per_subnet fail

2016-09-19 Thread ZongKai LI
Public bug reported:

An IPv6 subnet with the dhcpv6_stateless address mode causes a KeyError at
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L895
when force_metadata is configured as True.

2016-09-19 06:55:44.211 ERROR neutron.agent.dhcp.agent [-] Unable to enable 
dhcp for c0eea6e2-f98d-48b9-aab0-67113a82a70e.
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 114, in call_driver
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 213, in enable
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent self.spawn_process()
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 425, in spawn_process
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 434, in 
_spawn_or_reload_process
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self._output_config_files()
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 467, in 
_output_config_files
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self._output_opts_file()
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 845, in _output_opts_file
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent options, 
subnet_index_map = self._generate_opts_per_subnet()
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 895, in 
_generate_opts_per_subnet
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent subnet_dhcp_ip = 
subnet_to_interface_ip[subnet.id]
2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent KeyError: 
u'7e086056-521a-4d91-b2a7-6d1b3fffb49b'
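The failing lookup assumes every subnet served by the agent has a DHCP port IP, which is not true for a dhcpv6-stateless subnet. A minimal sketch of a defensive fix follows; the helper name is hypothetical and this is not the actual neutron patch:

```python
# Hypothetical guard: a dhcpv6-stateless subnet may have no DHCP port
# address, so use a tolerant lookup instead of an unguarded
# subnet_to_interface_ip[subnet_id], which raises KeyError.
def metadata_ip_for_subnet(subnet_id, subnet_to_interface_ip):
    subnet_dhcp_ip = subnet_to_interface_ip.get(subnet_id)
    if subnet_dhcp_ip is None:
        # Skip emitting the force_metadata option for this subnet.
        return None
    return subnet_dhcp_ip
```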

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
     Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625013

Title:
  dhcpv6-stateless cause dhcp _generate_opts_per_subnet fail

Status in neutron:
  New

Bug description:
  An IPv6 subnet with the dhcpv6_stateless address mode causes a KeyError at
  https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L895
  when force_metadata is configured as True.

  2016-09-19 06:55:44.211 ERROR neutron.agent.dhcp.agent [-] Unable to enable 
dhcp for c0eea6e2-f98d-48b9-aab0-67113a82a70e.
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 114, in call_driver
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 213, in enable
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self.spawn_process()
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 425, in spawn_process
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 434, in 
_spawn_or_reload_process
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self._output_config_files()
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 467, in 
_output_config_files
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent 
self._output_opts_file()
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 845, in _output_opts_file
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent options, 
subnet_index_map = self._generate_opts_per_subnet()
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 895, in 
_generate_opts_per_subnet
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]
  2016-09-19 06:55:44.211 TRACE neutron.agent.dhcp.agent KeyError: u'7e086056-521a-4d91-b2a7-6d1b3fffb49b'

[Yahoo-eng-team] [Bug 1618343] Re: UT failed for PortsV2 case test_update_port_status_notify_port_event_after_update

2016-08-30 Thread ZongKai LI
** Changed in: networking-ovn
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618343

Title:
  UT failed for PortsV2 case
  test_update_port_status_notify_port_event_after_update

Status in networking-ovn:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  
http://logs.openstack.org/09/356409/8/check/gate-networking-ovn-python27-ubuntu-xenial/8eee3a5/testr_results.html.gz
  
http://logs.openstack.org/01/358501/2/check/gate-networking-ovn-python27-ubuntu-xenial/898e607/testr_results.html.gz
  
http://logs.openstack.org/94/362494/1/check/gate-networking-ovn-python27-ubuntu-xenial/658a521/testr_results.html.gz

  The UT runs linked above fail in
networking_ovn.tests.unit.ml2.test_mech_driver.TestOVNMechansimDriverPortsV2.test_update_port_status_notify_port_event_after_update.

  Two AttributeError tracebacks appear, both caused by self._nb_ovn being
None in mech_driver:
     ERROR [neutron.callbacks.manager] Error during notification for 
networking_ovn.ml2.mech_driver.OVNMechanismDriver._process_sg_notification-988068
 security_group, after_create
  Traceback (most recent call last):
    File "/tmp/openstack/neutron/neutron/callbacks/manager.py", line 148, in 
_notify_loop
  callback(resource, event, trigger, **kwargs)
    File "networking_ovn/ml2/mech_driver.py", line 197, in 
_process_sg_notification
  with self._nb_ovn.transaction(check_error=True) as txn:
  AttributeError: 'NoneType' object has no attribute 'transaction'

     ERROR [neutron.plugins.ml2.managers] Mechanism driver 'ovn' failed in 
update_port_postcommit
  Traceback (most recent call last):
    File "/tmp/openstack/neutron/neutron/plugins/ml2/managers.py", line 433, in 
_call_on_drivers
  getattr(driver.obj, method_name)(context)
    File "networking_ovn/ml2/mech_driver.py", line 680, in 
update_port_postcommit
  self.update_port(port, original_port)
    File "networking_ovn/ml2/mech_driver.py", line 684, in update_port
  original_port=original_port)
    File "networking_ovn/ml2/mech_driver.py", line 593, in get_ovn_port_options
  port, original_port=original_port)
    File "networking_ovn/ml2/mech_driver.py", line 820, in 
_get_port_dhcpv4_options
  subnet_dhcp_options = self._nb_ovn.get_subnet_dhcp_options(
  AttributeError: 'NoneType' object has no attribute 'get_subnet_dhcp_options'

  Patches like https://review.openstack.org/#/c/326964
(http://logs.openstack.org/64/326964/7/check/gate-networking-ovn-python27-ubuntu-xenial/a5800ae/testr_results.html.gz)
passed the UT because they did not run
test_update_port_status_notify_port_event_after_update: they ran 125 test
cases in
networking_ovn.tests.unit.ml2.test_mech_driver.TestOVNMechansimDriverPortsV2,
while the three failed runs had 126.
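This failure pattern — self._nb_ovn still None when a callback fires — typically means the NB connection was not mocked before the notification ran. A minimal illustration with a stand-in class (FakeDriver is not the networking-ovn code):

```python
from unittest import mock

class FakeDriver:
    """Stand-in for the mech driver; _nb_ovn starts unset, as in the UT."""
    _nb_ovn = None

    def process_notification(self):
        # Mirrors the failing call in _process_sg_notification.
        with self._nb_ovn.transaction(check_error=True):
            pass

driver = FakeDriver()
driver._nb_ovn = mock.MagicMock()  # mock before any callback can fire
driver.process_notification()      # no AttributeError now
```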

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1618343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582500] [NEW] icmp, icmpv6 and ipv6-icmp should raise duplicated sg rule exception

2016-05-16 Thread ZongKai LI
Public bug reported:

For security group rules with ethertype 'ipv6' that differ only in the
protocol value (icmp, icmpv6, or ipv6-icmp), the rules should be
considered duplicates.

e.g. creating sg rules with the following CLI commands should raise a
SecurityGroupRuleExists exception:
>> neutron security-group-rule-create --ethertype ipv6 --protocol icmp SG_ID
>> neutron security-group-rule-create --ethertype ipv6 --protocol icmpv6 SG_ID
>> neutron security-group-rule-create --ethertype ipv6 --protocol ipv6-icmp SG_ID

Users can understand that these are just different aliases for the same
protocol, so there should be no "duplicated" entries to deal with.
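One way to implement the duplicate check is to normalize the aliases before comparison; a sketch (the helper is illustrative, not the merged neutron change):

```python
# For IPv6 rules, 'icmp', 'icmpv6' and 'ipv6-icmp' all name the same
# protocol, so map them to one canonical value before duplicate checks.
ICMPV6_ALIASES = ('icmp', 'icmpv6', 'ipv6-icmp')

def canonical_protocol(ethertype, protocol):
    if ethertype.lower() == 'ipv6' and protocol in ICMPV6_ALIASES:
        return 'ipv6-icmp'
    return protocol
```

Two rules would then compare equal whenever their canonical protocols match.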

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582500

Title:
  icmp, icmpv6 and ipv6-icmp should raise duplicated sg rule exception

Status in neutron:
  In Progress

Bug description:
  For security group rules with ethertype 'ipv6' that differ only in the
  protocol value (icmp, icmpv6, or ipv6-icmp), the rules should be
  considered duplicates.

  e.g. creating sg rules with the following CLI commands should raise a
  SecurityGroupRuleExists exception:
  >> neutron security-group-rule-create --ethertype ipv6 --protocol icmp SG_ID
  >> neutron security-group-rule-create --ethertype ipv6 --protocol icmpv6 SG_ID
  >> neutron security-group-rule-create --ethertype ipv6 --protocol ipv6-icmp SG_ID

  Users can understand that these are just different aliases for the same
  protocol, so there should be no "duplicated" entries to deal with.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582500/+subscriptions



[Yahoo-eng-team] [Bug 1574472] [NEW] icmpv6 and ipv6-icmp are missed in _validate_port_range

2016-04-25 Thread ZongKai LI
Public bug reported:

For IPv6, _validate_port_range checks the protocol port range for
"icmp", but not for "icmpv6" or "ipv6-icmp".
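For the ICMP family, the "port" fields actually carry the ICMP type and code, which must fall in 0-255. A hedged sketch of validation covering all three aliases (the helper name is hypothetical, not the neutron function):

```python
# All ICMP aliases should share the same range check: their "ports"
# are really ICMP type/code values, limited to 0-255.
ICMP_PROTOCOLS = ('icmp', 'icmpv6', 'ipv6-icmp')

def icmp_range_valid(protocol, port_range_min, port_range_max):
    if protocol not in ICMP_PROTOCOLS:
        return True  # non-ICMP protocols are validated elsewhere
    return all(v is None or 0 <= v <= 255
               for v in (port_range_min, port_range_max))
```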

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1574472

Title:
  icmpv6 and ipv6-icmp are missed in _validate_port_range

Status in neutron:
  New

Bug description:
  For IPv6, _validate_port_range checks the protocol port range for
  "icmp", but not for "icmpv6" or "ipv6-icmp".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1574472/+subscriptions



[Yahoo-eng-team] [Bug 1568718] [NEW] ovsdb.native.connection.Connection should allow schema_helper to register certain tables, instead of all

2016-04-11 Thread ZongKai LI
Public bug reported:

Patch https://review.openstack.org/#/c/302623 is a partial implementation
of the routed-networks blueprint; it adds Chassis table access in
networking-ovn. The Chassis table contains the hostname and external_ids
columns, which are needed for the segment-hostname mapping in bp
routed-networks.

To access the Chassis table, we need to build a connection to the
OVN_Southbound DB, and the schema_helper in that connection object
registers all tables in the DB [1]. But we do not need every table in the
OVN_Southbound DB registered; currently only the Chassis table is needed.

We should add a parameter to connection.Connection that allows it to
register only certain tables; in OVS, the register_table method [2] can
be used for this purpose.

[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/ovsdb/native/connection.py#L68-L82
[2] 
https://github.com/open-switch/ops-openvswitch/blob/master/python/ovs/db/idl.py#L1359-L1366
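A sketch of the proposed interface, using a stub in place of ovs.db.idl.SchemaHelper so the example is self-contained (all names here are illustrative, not the reviewed neutron change):

```python
class SchemaHelperStub:
    """Stands in for ovs.db.idl.SchemaHelper, for illustration only."""
    def __init__(self, schema_tables):
        self._schema_tables = schema_tables
        self.registered = []

    def register_all(self):
        self.registered = list(self._schema_tables)

    def register_table(self, name):
        if name in self._schema_tables:
            self.registered.append(name)

def register_tables(helper, tables=None):
    # tables=None keeps today's behavior (register everything);
    # a list of names registers only that subset, e.g. ['Chassis'].
    if tables is None:
        helper.register_all()
    else:
        for name in tables:
            helper.register_table(name)
```

A connection.Connection constructor could accept the same optional `tables` argument and forward it to this registration step.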

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568718

Title:
  ovsdb.native.connection.Connection should allow schema_helper to
  register certain tables, instead of all

Status in neutron:
  New

Bug description:
  Patch https://review.openstack.org/#/c/302623 is a partial
  implementation of the routed-networks blueprint; it adds Chassis table
  access in networking-ovn. The Chassis table contains the hostname and
  external_ids columns, which are needed for the segment-hostname mapping
  in bp routed-networks.

  To access the Chassis table, we need to build a connection to the
  OVN_Southbound DB, and the schema_helper in that connection object
  registers all tables in the DB [1]. But we do not need every table in
  the OVN_Southbound DB registered; currently only the Chassis table is
  needed.

  We should add a parameter to connection.Connection that allows it to
  register only certain tables; in OVS, the register_table method [2] can
  be used for this purpose.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/ovsdb/native/connection.py#L68-L82
  [2] 
https://github.com/open-switch/ops-openvswitch/blob/master/python/ovs/db/idl.py#L1359-L1366

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1568718/+subscriptions



[Yahoo-eng-team] [Bug 1552070] [NEW] Two functional test method in TestDvrRouter don't work to dual_stack case

2016-03-01 Thread ZongKai LI
Public bug reported:

In the TestDvrRouter class, two methods,
_add_fip_agent_gw_port_info_to_router and _add_snat_port_info_to_router,
do not work for the dual_stack case.

Method generate_dvr_router_info calls prepare_router_data to get a basic
router dict, and then calls the two methods to add additional data to the
router dict.

If we pass dual_stack=True, ex_gw_port in the router dict will have two
subnets and two fixed_ips [1], and each interface will have two subnets
and two fixed_ips [2], one for v4 and one for v6.

But _add_fip_agent_gw_port_info_to_router only processes the first
fixed_ip and subnet of ex_gw_port in the router dict [3], and
_add_snat_port_info_to_router only processes the fixed_ip and subnet of
the first interface [4]. As a result, the dual_stack case fails.

[1] 
https://github.com/openstack/neutron/blob/master/neutron/tests/common/l3_test_common.py#L61-L94
[2] 
https://github.com/openstack/neutron/blob/master/neutron/tests/common/l3_test_common.py#L140-L174
[3] 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_dvr_router.py#L230-L251
[4] 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_dvr_router.py#L258-L280
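The fix direction is to iterate over every fixed_ip rather than indexing [0]; a minimal sketch with a made-up port dict shape (the helper is illustrative, not the neutron test code):

```python
def iter_port_addresses(port):
    # Yield (subnet_id, ip_address) for every fixed_ip, so a dual-stack
    # port contributes both its v4 and v6 entries instead of only the
    # first one.
    for fixed_ip in port.get('fixed_ips', []):
        yield fixed_ip['subnet_id'], fixed_ip['ip_address']
```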

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552070

Title:
  Two functional test method in TestDvrRouter don't work to dual_stack
  case

Status in neutron:
  New

Bug description:
  In the TestDvrRouter class, two methods,
  _add_fip_agent_gw_port_info_to_router and
  _add_snat_port_info_to_router, do not work for the dual_stack case.

  Method generate_dvr_router_info calls prepare_router_data to get a
  basic router dict, and then calls the two methods to add additional
  data to the router dict.

  If we pass dual_stack=True, ex_gw_port in the router dict will have two
  subnets and two fixed_ips [1], and each interface will have two subnets
  and two fixed_ips [2], one for v4 and one for v6.

  But _add_fip_agent_gw_port_info_to_router only processes the first
  fixed_ip and subnet of ex_gw_port in the router dict [3], and
  _add_snat_port_info_to_router only processes the fixed_ip and subnet of
  the first interface [4]. As a result, the dual_stack case fails.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/tests/common/l3_test_common.py#L61-L94
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/tests/common/l3_test_common.py#L140-L174
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_dvr_router.py#L230-L251
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_dvr_router.py#L258-L280

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552070/+subscriptions



[Yahoo-eng-team] [Bug 1551596] [NEW] Default route not get cleaned after external subnet updated with --no-gateway in DVR scenario

2016-03-01 Thread ZongKai LI
Public bug reported:

After running the following command:

neutron subnet-update --no-gateway SUBNET

the default gateway does not get cleaned up in the corresponding snat and
fip namespaces.

The default gateway in the corresponding fip namespace should be updated
right after the "neutron subnet-update" command runs, just as it is
updated by the opposite operation, setting a gateway on a no-gateway
subnet with "neutron subnet-update --gateway GATEWAY_IP SUBNET".

** Affects: neutron
 Importance: Undecided
     Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551596

Title:
  Default route not get cleaned after external subnet updated with --no-
  gateway in DVR scenario

Status in neutron:
  New

Bug description:
  After running the following command:

  neutron subnet-update --no-gateway SUBNET

  the default gateway does not get cleaned up in the corresponding snat
  and fip namespaces.

  The default gateway in the corresponding fip namespace should be
  updated right after the "neutron subnet-update" command runs, just as
  it is updated by the opposite operation, setting a gateway on a
  no-gateway subnet with "neutron subnet-update --gateway GATEWAY_IP
  SUBNET".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551596/+subscriptions



[Yahoo-eng-team] [Bug 1540259] [NEW] uselist should be True to DVRPortBinding orm.relationship

2016-02-01 Thread ZongKai LI
Public bug reported:

In the DVR scenario, after a router interface has been bound to multiple
hosts, removing that interface from the router raises an SQL warning in
the neutron server log:
  SAWarning: Multiple rows returned with uselist=False for eagerly-loaded
attribute 'Port.dvr_port_binding'

It is caused by
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/models.py#L130,
where uselist is set to False. But the ml2_dvr_port_bindings table stores
all bindings for router_interface_distributed ports, and a port of that
kind can have multiple bindings. So it is not a one-to-one relationship,
and we should remove "uselist=False" from the DVRPortBinding
orm.relationship.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540259

Title:
  uselist should be True to DVRPortBinding orm.relationship

Status in neutron:
  In Progress

Bug description:
  In the DVR scenario, after a router interface has been bound to
  multiple hosts, removing that interface from the router raises an SQL
  warning in the neutron server log:
    SAWarning: Multiple rows returned with uselist=False for
  eagerly-loaded attribute 'Port.dvr_port_binding'

  It is caused by
  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/models.py#L130,
  where uselist is set to False. But the ml2_dvr_port_bindings table
  stores all bindings for router_interface_distributed ports, and a port
  of that kind can have multiple bindings. So it is not a one-to-one
  relationship, and we should remove "uselist=False" from the
  DVRPortBinding orm.relationship.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540259/+subscriptions



[Yahoo-eng-team] [Bug 1540779] [NEW] DVR router should not allow manually removed from an agent in 'dvr' mode

2016-02-01 Thread ZongKai LI
Public bug reported:

Per bp/improve-dvr-l3-agent-binding, the command "neutron
l3-agent-list-hosting-router ROUTER" no longer shows bindings for DVR
routers on agents in 'dvr' mode.
It is good to hide the implicit *binding* between a DVR router and an
agent in 'dvr' mode: DVR routers should come and go as DVR-serviced ports
on the host come and go, not be managed manually.
But it is still possible to run "neutron l3-agent-router-remove AGENT
ROUTER" to remove a DVR router from an agent in 'dvr' mode. This deletes
the DVR router namespace and breaks L3 networking on that node.
We should add a check that forbids removing a router from an agent in
'dvr' mode.
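A sketch of the proposed check (function name and exception type are hypothetical; neutron would raise one of its own exception classes):

```python
def check_router_removal_allowed(agent_mode, router_is_distributed):
    # Forbid manual removal of a DVR router from a 'dvr'-mode agent:
    # those bindings are implicit and must follow the serviced ports.
    if router_is_distributed and agent_mode == 'dvr':
        raise ValueError(
            "Cannot manually remove a DVR router from an agent "
            "in 'dvr' mode.")
```

The L3 agent scheduler's remove-router path would call this before unbinding.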

** Affects: neutron
 Importance: Undecided
     Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540779

Title:
  DVR router should not allow manually removed from an agent in 'dvr'
  mode

Status in neutron:
  New

Bug description:
  Per bp/improve-dvr-l3-agent-binding, the command "neutron
  l3-agent-list-hosting-router ROUTER" no longer shows bindings for DVR
  routers on agents in 'dvr' mode.
  It is good to hide the implicit *binding* between a DVR router and an
  agent in 'dvr' mode: DVR routers should come and go as DVR-serviced
  ports on the host come and go, not be managed manually.
  But it is still possible to run "neutron l3-agent-router-remove AGENT
  ROUTER" to remove a DVR router from an agent in 'dvr' mode. This
  deletes the DVR router namespace and breaks L3 networking on that node.
  We should add a check that forbids removing a router from an agent in
  'dvr' mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540779/+subscriptions



[Yahoo-eng-team] [Bug 1536176] [NEW] network owner cannot get all subnets

2016-01-20 Thread ZongKai LI
Public bug reported:

steps:
1. demo tenant creates a network net1
2. demo tenant creates a subnet sn1 in net1
3. admin creates a subnet sn2 in net1
4. demo tenant runs "neutron subnet-list"
expected: the command output should contain sn1 and sn2
observed: only sn1 can be seen.

in policy.json:
[1] "create_subnet": "rule:admin_or_network_owner",
[2] "get_subnet": "rule:admin_or_owner or rule:shared",
From [1], since only the admin and the network owner can create a subnet
on a tenant network, it makes sense to allow the network owner to get all
subnets on her/his network.
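One possible direction, shown only as an illustrative policy.json fragment (not a reviewed fix), is to widen get_subnet to the network owner as well:

```
"get_subnet": "rule:admin_or_owner or rule:shared or rule:admin_or_network_owner",
```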

** Affects: neutron
     Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536176

Title:
  network owner cannot get all subnets

Status in neutron:
  New

Bug description:
  steps:
  1. demo tenant creates a network net1
  2. demo tenant creates a subnet sn1 in net1
  3. admin creates a subnet sn2 in net1
  4. demo tenant runs "neutron subnet-list"
  expected: the command output should contain sn1 and sn2
  observed: only sn1 can be seen.

  in policy.json:
  [1] "create_subnet": "rule:admin_or_network_owner",
  [2] "get_subnet": "rule:admin_or_owner or rule:shared",
  From [1], since only the admin and the network owner can create a
  subnet on a tenant network, it makes sense to allow the network owner
  to get all subnets on her/his network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1536176/+subscriptions



[Yahoo-eng-team] [Bug 1534954] [NEW] policy rule for update_port is inconsistent

2016-01-16 Thread ZongKai LI
Public bug reported:

For a user from a common tenant, per [1]
https://github.com/openstack/neutron/blob/master/etc/policy.json#L77 ,
the network owner should not have the privilege to update a port on
her/his network if she/he is not the port owner.

But per [2]
https://github.com/openstack/neutron/blob/master/etc/policy.json#L78-L85
, the network owner can still update port attributes such as
device_owner, fixed_ips, port_security_enabled, mac_learning_enabled, and
allowed_address_pairs.

This is inconsistent; per [1], the policy rule
"rule:admin_or_network_owner" in [2] should be updated to
"admin_or_owner".
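An illustrative policy.json fragment of the proposed change (one attribute shown; the same edit would apply to the other attributes listed in [2]):

```
"update_port:fixed_ips": "rule:admin_or_owner",
```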

** Affects: neutron
 Importance: Undecided
     Assignee: ZongKai LI (lzklibj)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

** Description changed:

  For user from a common tenant, per [1]
  https://github.com/openstack/neutron/blob/master/etc/policy.json#L77 ,
  seems network owner shouldn't have privilege to update port on her/his
  network if she/he is not port owner.
  
  But per [2]
  https://github.com/openstack/neutron/blob/master/etc/policy.json#L78-L85
  , seems network owner still have chance to update port attributes such
  as device_owner, fixed_ips, port_security_enabled, mac_learning_enabled,
  allowed_address_pairs.
  
  This is inconsistent, per [1], policy rule "rule:admin_or_network_owner"
- should be updated to "admin_or_owner".
+ in [2] should be updated to "admin_or_owner".

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534954

Title:
  policy rule for update_port is inconsistent

Status in neutron:
  In Progress

Bug description:
  For a user from a common tenant, per [1]
  https://github.com/openstack/neutron/blob/master/etc/policy.json#L77 ,
  the network owner should not have the privilege to update a port on
  her/his network if she/he is not the port owner.

  But per [2]
  https://github.com/openstack/neutron/blob/master/etc/policy.json#L78-L85
  , the network owner can still update port attributes such as
  device_owner, fixed_ips, port_security_enabled, mac_learning_enabled,
  and allowed_address_pairs.

  This is inconsistent; per [1], the policy rule
  "rule:admin_or_network_owner" in [2] should be updated to
  "admin_or_owner".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534954/+subscriptions



[Yahoo-eng-team] [Bug 1533313] [NEW] _get_dvr_sync_data should be optimized for 'dvr' mode agent

2016-01-12 Thread ZongKai LI
Public bug reported:

Per patch https://review.openstack.org/#/c/239908 for bug "Function
sync_routers always call _get_dvr_sync_data in ha scenario",
_get_dvr_sync_data is now only called for agents in 'dvr'/'dvr_snat'
mode, avoiding the extra floating IP processing that legacy/HA routers do
not need.

But for the DVR scenario, only floating IPs associated with ports on the
given host need to be queried, so there is no need for additional filter
processing in Python that could be done in SQL.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533313

Title:
  _get_dvr_sync_data should be optimized for 'dvr' mode agent

Status in neutron:
  In Progress

Bug description:
  Per patch https://review.openstack.org/#/c/239908 for bug "Function
  sync_routers always call _get_dvr_sync_data in ha scenario",
  _get_dvr_sync_data is now only called for agents in 'dvr'/'dvr_snat'
  mode, avoiding the extra floating IP processing that legacy/HA routers
  do not need.

  But for the DVR scenario, only floating IPs associated with ports on
  the given host need to be queried, so there is no need for additional
  filter processing in Python that could be done in SQL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533313/+subscriptions



[Yahoo-eng-team] [Bug 1512207] Re: Fix usage of assertions

2016-01-06 Thread ZongKai LI
The targeted cleanup has already been done in neutron.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Swapnil Kulkarni (coolsvap) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1512207

Title:
  Fix usage of assertions

Status in Aodh:
  In Progress
Status in Barbican:
  In Progress
Status in Blazar:
  In Progress
Status in Cinder:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic:
  In Progress
Status in Ironic Inspector:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in neutron:
  Invalid
Status in oslo.utils:
  New
Status in python-novaclient:
  Fix Released
Status in Rally:
  New
Status in refstack:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  In Progress
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in Vitrage:
  In Progress
Status in zaqar:
  Fix Released

Bug description:
  Manila should use the specific assertions:

self.assertTrue/False(observed)

  instead of the generic assertion:

self.assertEqual(True/False, observed)
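The difference between the two styles can be shown with a minimal unittest case; both pass, but the specific form reads better and produces clearer failure messages:

```python
import unittest

class AssertionStyleTest(unittest.TestCase):
    def test_generic_style(self):
        # Generic form: a failure would only report "True != False".
        self.assertEqual(True, 1 in [1, 2, 3])

    def test_specific_style(self):
        # Specific form: intent is explicit in the assertion name.
        self.assertTrue(1 in [1, 2, 3])
        self.assertFalse(4 in [1, 2, 3])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(AssertionStyleTest))
```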

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1512207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531065] [NEW] duplicately fetch subnet_id in get_subnet_for_dvr

2016-01-04 Thread ZongKai LI
Public bug reported:

In
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/db/dvr_mac_db.py#L159-L163,
get_subnet_for_dvr tries to get subnet_id when fixed_ips is not None:
...
def get_subnet_for_dvr(self, context, subnet, fixed_ips=None):
    if fixed_ips:
        subnet_data = fixed_ips[0]['subnet_id']
    else:
        subnet_data = subnet
...

But checking its callers:
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L509-L531
and
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L366-L380
, they both follow the same pattern:
...
fixed_ip = fixed_ips[0]
...
subnet_uuid = fixed_ip['subnet_id']
...
subnet_info = self.plugin_rpc.get_subnet_for_dvr(
    self.context, subnet_uuid, fixed_ips=fixed_ips)

subnet_id has already been fetched and passed into get_subnet_for_dvr, so
there is no need to fetch it again inside the method.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531065

Title:
  duplicately  fetch subnet_id in get_subnet_for_dvr

Status in neutron:
  New

Bug description:
  In
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/db/dvr_mac_db.py#L159-L163,
  get_subnet_for_dvr tries to get subnet_id when fixed_ips is not None:
  ...
  def get_subnet_for_dvr(self, context, subnet, fixed_ips=None):
      if fixed_ips:
          subnet_data = fixed_ips[0]['subnet_id']
      else:
          subnet_data = subnet
  ...

  But checking its callers:
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L509-L531
  and
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L366-L380
  , they both follow the same pattern:
  ...
  fixed_ip = fixed_ips[0]
  ...
  subnet_uuid = fixed_ip['subnet_id']
  ...
  subnet_info = self.plugin_rpc.get_subnet_for_dvr(
      self.context, subnet_uuid, fixed_ips=fixed_ips)

  subnet_id has already been fetched and passed into get_subnet_for_dvr,
  so there is no need to fetch it again inside the method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529439] [NEW] unify validate_agent_router_combination exceptions for dvr agent_mode

2015-12-26 Thread ZongKai LI
Public bug reported:

The method validate_agent_router_combination
(https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L157)
validates whether a router can be correctly assigned to an agent.

It raises two different exceptions for an agent in dvr agent_mode,
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L176-L187
:

is_agent_router_types_incompatible = (
    agent_mode == constants.L3_AGENT_MODE_DVR and not is_distributed
    or agent_mode == constants.L3_AGENT_MODE_LEGACY and is_distributed
)
if is_agent_router_types_incompatible:
    raise l3agentscheduler.RouterL3AgentMismatch(
        router_type=router_type, router_id=router['id'],
        agent_mode=agent_mode, agent_id=agent['id'])
if agent_mode == constants.L3_AGENT_MODE_DVR and is_distributed:
    raise l3agentscheduler.DVRL3CannotAssignToDvrAgent(
        router_type=router_type, router_id=router['id'],
        agent_id=agent['id'])

These should be unified and simplified into a single reason: a DVR router
should only be scheduled to an agent in dvr agent_mode, never manually
assigned.
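A hedged sketch of the unification: the two DVR-mode checks collapse into one "cannot manually assign to a dvr-mode agent" error. The exception class and messages below are simplified stand-ins, not the actual neutron classes:

```python
class RouterSchedulingError(Exception):
    """Simplified stand-in for the l3agentscheduler exceptions."""

def validate_agent_router_combination(agent_mode, is_distributed):
    if agent_mode == "dvr":
        # DVR-mode agents only receive routers via the scheduler, so any
        # manual assignment (distributed router or not) gets one uniform error.
        raise RouterSchedulingError(
            "Routers cannot be manually assigned to an agent in 'dvr' mode; "
            "they are scheduled automatically.")
    if agent_mode == "legacy" and is_distributed:
        raise RouterSchedulingError(
            "A distributed router cannot be hosted by a legacy agent.")

# Manual assignment to a dvr-mode agent is rejected regardless of router type.
try:
    validate_agent_router_combination("dvr", True)
    raised = False
except RouterSchedulingError:
    raised = True
```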

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529439

Title:
  unify validate_agent_router_combination exceptions for dvr agent_mode

Status in neutron:
  New

Bug description:
  The method validate_agent_router_combination
  (https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L157)
  validates whether a router can be correctly assigned to an agent.

  It raises two different exceptions for an agent in dvr agent_mode,
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L176-L187
  :

  is_agent_router_types_incompatible = (
      agent_mode == constants.L3_AGENT_MODE_DVR and not is_distributed
      or agent_mode == constants.L3_AGENT_MODE_LEGACY and is_distributed
  )
  if is_agent_router_types_incompatible:
      raise l3agentscheduler.RouterL3AgentMismatch(
          router_type=router_type, router_id=router['id'],
          agent_mode=agent_mode, agent_id=agent['id'])
  if agent_mode == constants.L3_AGENT_MODE_DVR and is_distributed:
      raise l3agentscheduler.DVRL3CannotAssignToDvrAgent(
          router_type=router_type, router_id=router['id'],
          agent_id=agent['id'])

  These should be unified and simplified into a single reason: a DVR router
  should only be scheduled to an agent in dvr agent_mode, never manually
  assigned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1529439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527049] [NEW] is_dvr_serviced in unbind_router_servicenode is duplicated and unnecessary

2015-12-16 Thread ZongKai LI
Public bug reported:

The method unbind_router_servicenode checks whether any DVR-serviced port
still exists on the node, by getting all ports on the host related to the
given router:
for subnet in subnet_ids:
ports = (
self._core_plugin.get_ports_on_host_by_subnet(
context, host, subnet))
for port in ports:
if (n_utils.is_dvr_serviced(port['device_owner'])):
port_found = True
LOG.debug('One or more ports exist on the snat '
  'enabled l3_agent host %(host)s and '
  'router_id %(id)s',
  {'host': host, 'id': router_id})
break
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L293-L303

But the is_dvr_serviced check in the inner for loop is redundant:
get_ports_on_host_by_subnet already returns only DVR-serviced ports, so there
is no need to check again.
In get_ports_on_host_by_subnet:

for port in ports:
device_owner = port['device_owner']
if (utils.is_dvr_serviced(device_owner)):
if port[portbindings.HOST_ID] == host:
port_dict = self.plugin._make_port_dict(port,
process_extensions=False)
ports_by_host.append(port_dict)

https://github.com/openstack/neutron/blob/master/neutron/db/dvr_mac_db.py#L128-L156
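Since the filtering already happens inside get_ports_on_host_by_subnet, the caller's loop reduces to a plain truthiness check. A self-contained sketch with stubbed port data (device-owner strings and structure here are illustrative):

```python
# Owners treated as DVR-serviced in this sketch (illustrative, not the
# complete list neutron uses).
DVR_SERVICED_OWNERS = ("compute:nova",)

def get_ports_on_host_by_subnet(ports, host):
    # Mirrors the filtering the real method already performs: only
    # DVR-serviced ports bound to the given host are returned.
    return [p for p in ports
            if p["device_owner"] in DVR_SERVICED_OWNERS
            and p["host"] == host]

def host_has_dvr_serviced_port(ports, host):
    # Simplified caller: no inner is_dvr_serviced() re-check needed.
    return bool(get_ports_on_host_by_subnet(ports, host))

ports = [
    {"device_owner": "compute:nova", "host": "cn-1"},
    {"device_owner": "network:dhcp", "host": "cn-2"},
]
```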

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527049

Title:
  is_dvr_serviced in unbind_router_servicenode is duplicated and
  unnecessary

Status in neutron:
  New

Bug description:
  The method unbind_router_servicenode checks whether any DVR-serviced port
  still exists on the node, by getting all ports on the host related to the
  given router:

  for subnet in subnet_ids:
  ports = (
  self._core_plugin.get_ports_on_host_by_subnet(
  context, host, subnet))
  for port in ports:
  if (n_utils.is_dvr_serviced(port['device_owner'])):
  port_found = True
  LOG.debug('One or more ports exist on the snat '
'enabled l3_agent host %(host)s and '
'router_id %(id)s',
{'host': host, 'id': router_id})
  break
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L293-L303

  But the is_dvr_serviced check in the inner for loop is redundant:
  get_ports_on_host_by_subnet already returns only DVR-serviced ports, so
  there is no need to check again.
  In get_ports_on_host_by_subnet:

  for port in ports:
  device_owner = port['device_owner']
  if (utils.is_dvr_serviced(device_owner)):
  if port[portbindings.HOST_ID] == host:
  port_dict = self.plugin._make_port_dict(port,
  process_extensions=False)
  ports_by_host.append(port_dict)

  
https://github.com/openstack/neutron/blob/master/neutron/db/dvr_mac_db.py#L128-L156

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524357] [NEW] get_ports cannot get all port for network owner

2015-12-09 Thread ZongKai LI
Public bug reported:

### env ###
upstream code
two demo tenants: demo1 and demo2
demo1 owns network net1 with a subnet

### steps ###
1, create an rbac rule for demo2 to access net1 as shared;
2, create a port (port-1) on net1 as demo2;
3, run "neutron port-list" to call get_ports as demo1;
expected: the result contains port-1, since demo1 owns the network;
observed: the result does not contain port-1

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524357

Title:
  get_ports cannot get all port for network owner

Status in neutron:
  New

Bug description:
  ### env ###
  upstream code
  two demo tenants: demo1 and demo2
  demo1 owns network net1 with a subnet

  ### steps ###
  1, create an rbac rule for demo2 to access net1 as shared;
  2, create a port (port-1) on net1 as demo2;
  3, run "neutron port-list" to call get_ports as demo1;
  expected: the result contains port-1, since demo1 owns the network;
  observed: the result does not contain port-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522709] [NEW] external network can always be seen by other tenant users even not shared or rbac rules allowed

2015-12-03 Thread ZongKai LI
Public bug reported:

### env ###
upstream code
two tenant users: admin, demo


### steps(default by admin) ###
1, create an external network with shared at beginning;
2, verify the external network can be seen by the non-admin tenant: the demo
user runs "neutron net-list";
3, update the external network from shared to not shared;
4, verify the external network can no longer be seen by the non-admin tenant:
the demo user runs "neutron net-list";
expected: the output of "net-list" doesn't contain the external network;
observed: the output of "net-list" contains the external network;

5, additionally, verify the external network is not shared: the demo user runs
"neutron net-show EXTERNAL-NETWORK";
the network info shows the external network's shared field is False.

** Affects: neutron
     Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522709

Title:
  external network can always be seen by other tenant users even not
  shared or rbac rules allowed

Status in neutron:
  In Progress

Bug description:
  ### env ###
  upstream code
  two tenant users: admin, demo

  
  ### steps(default by admin) ###
  1, create an external network with shared at beginning;
  2, verify the external network can be seen by the non-admin tenant: the demo
user runs "neutron net-list";
  3, update the external network from shared to not shared;
  4, verify the external network can no longer be seen by the non-admin
tenant: the demo user runs "neutron net-list";
  expected: the output of "net-list" doesn't contain the external network;
  observed: the output of "net-list" contains the external network;

  5, additionally, verify the external network is not shared: the demo user
runs "neutron net-show EXTERNAL-NETWORK";
  the network info shows the external network's shared field is False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522482] [NEW] dvr gateway port fails in _validate_shared_update

2015-12-03 Thread ZongKai LI
Public bug reported:

### env ###
upstream code
DVR enabled
2 network nodes(l3 agent running in dvr_snat mode), 1 compute node(l3 agent 
running in dvr mode)
2 tenants: admin and demo

### steps(by admin) ###
1, create external network and internal network, create subnets for the two 
networks, 
2, create a router(dvr default), attach external network as router gateway, 
attach internal network as router interface;
3, verify the router is scheduled successfully via "neutron
l3-agent-list-hosting-router" or "ip netns"
4, create a floatingip, boot a VM on internal subnet, and associate floatingip 
to VM;
5, verify the dvr and floatingip related ports are created by "neutron 
port-list -c device_owner"

6, run "neutron net-update EXTERNAL-NETWORK --shared False"
expected: "Updated network: EXTERNAL-NETWORK"
observed: "Unable to reconfigure sharing settings for network EXTERNAL-NETWORK. 
Multiple tenants are using it."

### analyse ###
1, even though all ports on EXTERNAL-NETWORK belong to the admin tenant,
updating the network's shared attribute still fails.
2, _validate_shared_update does not work as expected because not all ports on
EXTERNAL-NETWORK have a tenant_id.
3, the following DVR and floating IP related ports have no tenant_id;
_validate_shared_update needs to be fixed to account for them:
fg: network:floatingip_agent_gateway
sg: network:router_centralized_snat
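One possible shape of the fix, sketched with stubbed data: when deciding whether other tenants use the network, skip the agent-owned fg/sg ports that carry no tenant_id. The device-owner strings match the ones named above; everything else (function name, port dicts) is hypothetical:

```python
# Agent-owned port types that legitimately have no tenant_id (from the
# analysis above).
AGENT_OWNED = ("network:floatingip_agent_gateway",
               "network:router_centralized_snat")

def other_tenants_using_network(ports, network_tenant_id):
    for port in ports:
        if port["device_owner"] in AGENT_OWNED:
            # No tenant_id on these ports; they must not block the update.
            continue
        if port["tenant_id"] != network_tenant_id:
            return True
    return False

ports = [
    {"device_owner": "network:floatingip_agent_gateway", "tenant_id": ""},
    {"device_owner": "network:router_centralized_snat", "tenant_id": ""},
    {"device_owner": "compute:nova", "tenant_id": "admin"},
]
```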

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522482

Title:
  dvr gateway port fails in _validate_shared_update

Status in neutron:
  New

Bug description:
  ### env ###
  upstream code
  DVR enabled
  2 network nodes(l3 agent running in dvr_snat mode), 1 compute node(l3 agent 
running in dvr mode)
  2 tenants: admin and demo

  ### steps(by admin) ###
  1, create external network and internal network, create subnets for the two 
networks, 
  2, create a router(dvr default), attach external network as router gateway, 
attach internal network as router interface;
  3, verify router schedules success by "neutron l3-agent-list-hosting-router" 
or "ip netns"
  4, create a floatingip, boot a VM on internal subnet, and associate 
floatingip to VM;
  5, verify the dvr and floatingip related ports are created by "neutron 
port-list -c device_owner"

  6, run "neutron net-update EXTERNAL-NETWORK --shared False"
  expected: "Updated network: EXTERNAL-NETWORK"
  observed: "Unable to reconfigure sharing settings for network 
EXTERNAL-NETWORK. Multiple tenants are using it."

  ### analyse ###
  1, even though all ports on EXTERNAL-NETWORK belong to the admin tenant,
updating the network's shared attribute still fails.
  2, _validate_shared_update does not work as expected because not all ports
on EXTERNAL-NETWORK have a tenant_id.
  3, the following DVR and floating IP related ports have no tenant_id;
_validate_shared_update needs to be fixed to account for them:
  fg: network:floatingip_agent_gateway
  sg: network:router_centralized_snat

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514068] [NEW] internal subnet case no need to repeatedly create IPDevice in _update_arp_entry

2015-11-07 Thread ZongKai LI
Public bug reported:

_update_arp_entry creates an IPDevice to perform the ARP task:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L200-L227
def _update_arp_entry(self, ip, mac, subnet_id, operation):
"""Add or delete arp entry into router namespace for the subnet."""
port = self._get_internal_port(subnet_id)
# update arp entry only if the subnet is attached to the router
if not port:
return False

try:
# TODO(mrsmith): optimize the calls below for bulk calls
interface_name = self.get_internal_device_name(port['id'])
device = ip_lib.IPDevice(interface_name, namespace=self.ns_name)

The methods _process_arp_cache_for_internal_port and _set_subnet_arp_info call
_update_arp_entry in a for loop over ARP entries/ports. For every entry/port
processed, a new IPDevice object (for the same device in the same namespace)
is created, which is unnecessary.
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L174-L182
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L229-L241

We can create the IPDevice object once, before entering the for loop, and pass
it to _update_arp_entry.
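A self-contained sketch of the suggested refactor, with ip_lib.IPDevice replaced by a tiny counting stub so the single-construction point is observable:

```python
class IPDevice:
    """Stand-in for neutron.agent.linux.ip_lib.IPDevice."""
    instances = 0

    def __init__(self, name, namespace=None):
        IPDevice.instances += 1  # count constructions for illustration
        self.name, self.namespace = name, namespace

    def add_neigh_entry(self, ip, mac):
        pass  # the real class would run 'ip neigh replace' in the namespace

def update_arp_entries(device, entries):
    # The device is created by the caller, once, outside this loop.
    for ip, mac in entries:
        device.add_neigh_entry(ip, mac)

entries = [("10.0.0.5", "fa:16:3e:00:00:01"),
           ("10.0.0.6", "fa:16:3e:00:00:02")]
device = IPDevice("qr-1234", namespace="qrouter-abc")  # built exactly once
update_arp_entries(device, entries)
```

With the original structure, IPDevice would have been constructed once per entry; here `IPDevice.instances` stays at 1 no matter how many entries are processed.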

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514068

Title:
  internal subnet case no need to repeatedly create IPDevice in
  _update_arp_entry

Status in neutron:
  New

Bug description:
  _update_arp_entry creates an IPDevice to perform the ARP task:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L200-L227
  def _update_arp_entry(self, ip, mac, subnet_id, operation):
  """Add or delete arp entry into router namespace for the subnet."""
  port = self._get_internal_port(subnet_id)
  # update arp entry only if the subnet is attached to the router
  if not port:
  return False

  try:
  # TODO(mrsmith): optimize the calls below for bulk calls
  interface_name = self.get_internal_device_name(port['id'])
  device = ip_lib.IPDevice(interface_name, namespace=self.ns_name)

  The methods _process_arp_cache_for_internal_port and _set_subnet_arp_info
call _update_arp_entry in a for loop over ARP entries/ports. For every
entry/port processed, a new IPDevice object (for the same device in the same
namespace) is created, which is unnecessary.
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L174-L182
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L229-L241

  We can create the IPDevice object once, before entering the for loop, and
  pass it to _update_arp_entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513758] [NEW] dhcp-agent with reserved_dhcp_port raise cannot find tap device error

2015-11-06 Thread ZongKai LI
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 91, in _execute
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error)
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 157, in execute
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent raise RuntimeError(m)
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent RuntimeError: 
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Command: ['ip', 'netns', 
'exec', u'qdhcp-79673257-aa5e-4d19-91b5-225391b2691c', 'ip', 'route', 'list', 
'dev', 'tapbcd64879-be']
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Exit code: 1
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Stdin: 
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Stdout: 
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Stderr: Cannot find 
device "tapbcd64879-be"
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent 
2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513758

Title:
  dhcp-agent with reserved_dhcp_port raise cannot find tap device error

Status in neutron:
  New

Bug description:
  =
  my env
  =
  upstream code.
  2 dhcp-agents, setting dhcp_agents_per_network to 2.
  optional: checkout [1] https://review.openstack.org/#/c/239264/ .

  ===
  steps to reproduce
  ===
  1, create a private net and its subnet, enable_dhcp(True) by default.
  2, verify that both dhcp-agents host the net, by "ip netns"; dhcp-port tapA
is used by dhcp-agent-1 and dhcp-port tapB by dhcp-agent-2.
  3, stop/kill the two dhcp-agents.
  4, update the two dhcp-ports' device_id from the previous value to
"reserved_dhcp_port":
  >>neutron port-update --device_id='reserved_dhcp_port' PORT-ID
  5, start the two dhcp-agents again; when dhcp-agent-1 tries to set up tapB
and dhcp-agent-2 tries to set up tapA, an error like 'Cannot find device
"tapX"' is raised.

  ---
  explanation
  ---
  1, step 4 tries to simulate the remove_networks_from_down_agents case: when
we stop/kill a dhcp-agent, even though "neutron agent-status" shows it is no
longer alive, the dhcp-port it used will not update its device_id to
"reserved_dhcp_port" for a while. Manually modifying it speeds things up.
  2, the patch in [1] is optional; even without it, this issue can still
occur. Sometimes existing stale ports mask the issue, but that is not a good
reason to keep stale dhcp-ports. The patch helps clean up stale ports and
makes this issue easier to reproduce.

  ===
  TRACE log
  ===
  2015-11-06 05:46:41.634 DEBUG neutron.agent.linux.dhcp 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] Reloading allocations for network: 
79673257-aa5e-4d19-91b5-225391b2691c from (pid=20965) reload_allocations 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:466
  2015-11-06 05:46:41.635 DEBUG neutron.agent.linux.utils 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] Running command (rootwrap daemon): ['ip', 
'netns', 'exec', 'qdhcp-79673257-aa5e-4d19-91b5-225391b2691c', 'ip', 'route', 
'list', 'dev', 'tapbcd64879-be'] from (pid=20965) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:99
  2015-11-06 05:46:41.664 ERROR neutron.agent.linux.utils 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] 
  Command: ['ip', 'netns', 'exec', 
u'qdhcp-79673257-aa5e-4d19-91b5-225391b2691c', 'ip', 'route', 'list', 'dev', 
'tapbcd64879-be']
  Exit code: 1
  Stdin: 
  Stdout: 
  Stderr: Cannot find device "tapbcd64879-be"

  2015-11-06 05:46:41.665 ERROR neutron.agent.dhcp.agent 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] Unable to reload_allocations dhcp for 
79673257-aa5e-4d19-91b5-225391b2691c.
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 467, in 
reload_allocations
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent 
self.device_manager.update(self.network, self.interface_name)
  2015-11-

[Yahoo-eng-team] [Bug 1513574] [NEW] firewall rules on DVR FIP fails to work for ingress traffic

2015-11-05 Thread ZongKai LI
Public bug reported:

=
my env
=
controller +network node(dvr_snat) + 2 compute nodes(dvr)
DVR: enable DVR when using devstack to deploy this env
FWaaS: manually git clone neutron-fwaas and configure it, using iptables as
the driver



steps

1) create net, subnet, boot VM-1 on CN-1, VM-2 on CN-2, create router, and 
attach subnet onto router.
2) create external network, set as router gateway net, create 2 floating IPs 
and associate to two VMs.
3) confirm DVR FIP works: fip ns created, iptable rules updated in qrouter ns, 
two VMs are pingable by floating IP.
floating IP like: 192.168.0.4 and 192.168.0.5
4) create firewall rules, firewall policy and create firewall on router. 
firewall rule like: 
fw-r1: ICMP, source: 192.168.0.184/29(none), dest: 192.168.0.0/28(none), 
allow
fw-r2: ICMP, source: 192.168.0.0/28(none), dest: 192.168.0.184/29(none), 
allow
5) confirm firewall rules updated in qrouter ns.
6) on a host with an IP like 192.168.0.190, try to ping the floating IPs
mentioned in step 3.
expected: the floating IPs are pingable (192.168.0.190 is in 192.168.0.184/29,
and the two firewall rules allow it)
observed: no response, "100% packet loss" from the ping command; the floating
IPs fail to ping.



more details


firewall iptable rules:

-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-filter-top -j neutron-l3-agent-local
-A neutron-l3-agent-FORWARD -o rfp-+ -j neutron-l3-agent-iv4322a9b15
-A neutron-l3-agent-FORWARD -i rfp-+ -j neutron-l3-agent-ov4322a9b15
-A neutron-l3-agent-FORWARD -o rfp-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-FORWARD -i rfp-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-INPUT -m mark --mark 0x1/0x -j ACCEPT
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-fwaas-defau -j DROP
-A neutron-l3-agent-iv4322a9b15 -m state --state INVALID -j DROP
-A neutron-l3-agent-iv4322a9b15 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-iv4322a9b15 -s 192.168.0.0/28 -d 192.168.0.184/29 -p icmp 
-j ACCEPT
-A neutron-l3-agent-iv4322a9b15 -s 192.168.0.184/29 -d 192.168.0.0/28 -p icmp 
-j ACCEPT
-A neutron-l3-agent-ov4322a9b15 -m state --state INVALID -j DROP
-A neutron-l3-agent-ov4322a9b15 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-ov4322a9b15 -s 192.168.0.0/28 -d 192.168.0.184/29 -p icmp 
-j ACCEPT
-A neutron-l3-agent-ov4322a9b15 -s 192.168.0.184/29 -d 192.168.0.0/28 -p icmp 
-j ACCEPT

---
DVR FIP nat iptable rules:
---
1) for 192.168.0.4:
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 192.168.0.4/32 -j DNAT --to-destination 20.0.1.7
-A neutron-l3-agent-POSTROUTING ! -i rfp-4bf3186c-d ! -o rfp-4bf3186c-d -m 
conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp 
--dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.168.0.4/32 -j DNAT --to-destination 
20.0.1.7
-A neutron-l3-agent-float-snat -s 20.0.1.7/32 -j SNAT --to-source 192.168.0.4
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on 
outgoing traffic." -j neutron-l3-agent-snat

2) for 192.168.0.5:
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 192.168.0.5/32 -j DNAT --to-destination 20.0.1.6
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp 
--dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.168.0.5/32 ! -i qr-+ -j DNAT 
--to-destination 20.0.1.6
-A neutron-l3-agent-float-snat -s 20.0.1.6/32 -j SNAT --to-source 192.168.0.5
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on 
outgoing traffic." -j neutron-l3-agent-snat

--
tcpdump result: (192.168.0.190 ping 192.168.0.4)
--
1) on fg in fip ns, ingress traffic caught:
fa:16:3e:b3:3e:8c > fa:16:3e:9d:ea:ed, ethertype IPv4 (0x0800), length 98: 
192.168.0.190 > 192.168.0.4: ICMP echo request, id 28356, seq 31, length 64
and fg:
40: fg-59c9ce49-3a:  mtu 1500 qdisc noqueue state 
UNKNOWN group default 
link/ether fa:16:3e:9d:ea:ed brd ff:ff:ff:ff:ff:ff
inet 192.168.0.133/24 brd 192.168.0.255 scope global fg-59c9ce49-3a
   valid_lft forever preferred_lft forever
 

[Yahoo-eng-team] [Bug 1509941] [NEW] case test_kill_process_with_different_signal is not necessary

2015-10-26 Thread ZongKai LI
Public bug reported:

In _test__kill_process, utils.execute has been mocked by the following code:
...
with mock.patch.object(utils, 'execute',
   side_effect=exc) as mock_execute:
...
So no signal is actually sent at all; therefore we don't need the
test_kill_process_with_different_signal case, nor the signal parameter on
_test__kill_process.
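The point can be demonstrated directly: once execute is patched, the kill command never runs, so the signal number cannot affect the outcome. The `utils` object and `_kill_process` below are simplified stand-ins, not neutron's real helpers:

```python
import types
from unittest import mock

def _real_execute(cmd):
    # Never reached under the mock; would actually run the command.
    raise RuntimeError("would run: %s" % cmd)

# Stand-in for the utils module whose execute() gets patched in the test.
utils = types.SimpleNamespace(execute=_real_execute)

def _kill_process(pid, kill_signal):
    utils.execute(["kill", "-%d" % kill_signal, str(pid)])

with mock.patch.object(utils, "execute") as mock_execute:
    _kill_process(42, 9)
    _kill_process(42, 15)  # different signal, identical observable effect

# Both calls went to the mock; no process was ever signalled.
```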

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509941

Title:
  case test_kill_process_with_different_signal is not necessary

Status in neutron:
  New

Bug description:
  In _test__kill_process, utils.execute has been mocked by the following code:
  ...
  with mock.patch.object(utils, 'execute',
 side_effect=exc) as mock_execute:
  ...
  So no signal is actually sent at all; therefore we don't need the
test_kill_process_with_different_signal case, nor the signal parameter on
_test__kill_process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508801] [NEW] fix treat_vif_port logic

2015-10-22 Thread ZongKai LI
Public bug reported:

Now it is:
...
if not vif_port.ofport:
    LOG.warn(_LW("VIF port: %s has no ofport configured, "
                 "and might not be able to transmit"), vif_port.vif_id)
if vif_port:
    if admin_state_up:
        self.port_bound(vif_port, network_id, network_type,
...
Checking ofport only after confirming vif_port exists:
if vif_port:
    if not vif_port.ofport:
        ...
would be better.
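A runnable sketch of the proposed ordering, where the ofport check is nested inside the vif_port check (FakeVifPort and the return values are illustrative; this is not the actual ovs-agent code):

```python
import collections
import logging

LOG = logging.getLogger(__name__)
FakeVifPort = collections.namedtuple('FakeVifPort', ['vif_id', 'ofport'])

def treat_vif_port(vif_port, admin_state_up):
    # Hypothetical simplification of the proposed control flow: only
    # inspect ofport once we know vif_port is actually set.
    if vif_port:
        if not vif_port.ofport:
            LOG.warning("VIF port: %s has no ofport configured, "
                        "and might not be able to transmit",
                        vif_port.vif_id)
        return 'bound' if admin_state_up else 'down'
    return 'skipped'
```

With this shape, a missing vif_port can no longer trigger the ofport warning path at all.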

** Affects: neutron
 Importance: Undecided
     Assignee: ZongKai LI (lzklibj)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508801

Title:
  fix treat_vif_port logic

Status in neutron:
  In Progress

Bug description:
  Now it is:
  ...
  if not vif_port.ofport:
      LOG.warn(_LW("VIF port: %s has no ofport configured, "
                   "and might not be able to transmit"), vif_port.vif_id)
  if vif_port:
      if admin_state_up:
          self.port_bound(vif_port, network_id, network_type,
  ...
  Checking ofport only after confirming vif_port exists:
  if vif_port:
      if not vif_port.ofport:
          ...
  would be better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508530] [NEW] _kill_process return value are not used

2015-10-21 Thread ZongKai LI
Public bug reported:

As a "private" method, AsyncProcess._kill_process has a return value that
serves no real purpose: it is currently used only in unit tests.

We should consider removing the unused return value.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508530

Title:
  _kill_process return value are not used

Status in neutron:
  New

Bug description:
  As a "private" method, AsyncProcess._kill_process has a return value
  that serves no real purpose: it is currently used only in unit tests.

  We should consider removing the unused return value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507489] [NEW] manually reschedule dhcp-agent doesn't update port binding

2015-10-19 Thread ZongKai LI
Public bug reported:

We can use dhcp-agent-network-add/remove to manually reschedule a network
between dhcp-agents, and "neutron dhcp-agent-list-hosting-net" plus the ip or
ps commands can be used to confirm the network has been rescheduled to the new
agent.
But the dhcp port binding does not get updated on the DB side; after the
network is rescheduled, "neutron port-show" shows the stale binding.

Pre-conditions:
2 active dhcp-agents, agent-A and agent-B;
network net-1 is bound to agent-A; "neutron dhcp-agent-list-hosting-net" can
be used to verify this;
port-1 is dhcp port of net-1;
set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

steps:
neutron dhcp-agent-network-remove AGENT-A-ID NET-1-ID ; neutron port-show 
PORT-1-ID
[1]
neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID ; neutron port-show PORT-1-ID
[2]

expected:
[1]:
Field  Value
binding:host_id  EMPTY
binding:profile   {}
binding:vif_details  {}
binding:vif_type unbound
binding:vnic_type   normal
device_id   reserved_dhcp_port


[2]:
Field  Value
binding:host_id  AGENT-B-HOST-ID
binding:profile   {}
binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
binding:vif_type ovs
binding:vnic_type   normal
device_id   dhcpxxx(relate-to-agent-B-host)-NET-1-ID

Actual output:
[1]
Field  Value
binding:host_id  AGENT-A-HOST-ID
binding:profile   {}
binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
binding:vif_type ovs
binding:vnic_type   normal
device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID
[2]

Field  Value
binding:host_id  AGENT-A-HOST-ID
binding:profile   {}
binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
binding:vif_type ovs
binding:vnic_type   normal
device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Description changed:

  We can use dhcp-agent-network-add/remove to manually reschedule a net between 
dhcp-agents. And "neutron dhcp-agent-list-hosting-net" and ip or ps commands 
can be used to confirm network is rescheduled to new agent.
  But dhcp port binding doesn't get updated in db site, we can use "neutron 
port-show" to find that, after network is rescheduled.
  
  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound on agent-A, use "neutron dhcp-agent-list-hosting-net" 
can to verify this;
  port-1 is dhcp port of net-1;
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;
  
  steps:
  neutron dhcp-agent-network-remove AGENT-A-ID NET-1-ID ; neutron port-show 
PORT-1-ID
  [1]
  neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID ; neutron port-show 
PORT-1-ID
  [2]
  
  expected:
- [1]: 
+ [1]:
  Field  Value
  binding:host_id  EMPTY
  binding:profile   {}
  binding:vif_details  {}
  binding:vif_type unbound
  binding:vnic_type   normal
  device_id   reserved_dhcp_port
  
  
  [2]:
  Field  Value
  binding:host_id  AGENT-B-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-B-host)-NET-1-ID
  
  Actual output:
  [1]
  Field  Value
  binding:host_id  AGENT-A-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID
  [2]
  
  Field  Value
  binding:host_id  AGENT-A-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507489

Title:
  manually reschedule dhcp-agent doesn't update port binding

Status in neutron:
  New

Bug description:
  We can use dhcp-agent-network-add/remove to manually reschedule a network
between dhcp-agents, and "neutron dhcp-agent-list-hosting-net" plus the ip or
ps commands can be used to confirm the network has been rescheduled to the new
agent.
  But the dhcp port binding does not get updated on the DB side; after the
network is rescheduled, "neutron port-show" shows the stale binding.

  Pre-conditions:
  2 active dhcp-agents, agent-A and 

[Yahoo-eng-team] [Bug 1507492] [NEW] manually schedule dhcp-agent doesn't check dhcp_agents_per_network

2015-10-19 Thread ZongKai LI
Public bug reported:

We can use dhcp-agent-network-add to manually schedule a network to dhcp-
agents. When we manually schedule in this way, neutron should check the
dhcp_agents_per_network configuration option to verify whether the manual
scheduling can be supported.

Pre-conditions:
2 active dhcp-agents, agent-A and agent-B;
network net-1 is bound to agent-A; "neutron dhcp-agent-list-hosting-net" can
be used to verify this;
set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

steps:
directly run "neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID" without
removing net-1 from agent-A first.

expected:
a warning or error telling us that net-1 could not be scheduled onto agent-B,
because the number of agents hosting net-1 must not exceed the
dhcp_agents_per_network value.

actual result:
running "neutron dhcp-agent-list-hosting-net NET-1-ID" shows that net-1 is now
hosted by both agent-A and agent-B.
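A sketch of the missing validation that dhcp-agent-network-add could perform before binding (the function and exception names are hypothetical, not actual neutron code):

```python
class DhcpAgentLimitExceeded(Exception):
    """Illustrative exception; real neutron would raise an API conflict."""

def validate_manual_add(hosting_agent_ids, new_agent_id,
                        dhcp_agents_per_network):
    """Reject a manual schedule that would push the number of hosting
    agents above the configured dhcp_agents_per_network limit."""
    if new_agent_id in hosting_agent_ids:
        return  # already hosted by this agent; nothing to do
    if len(hosting_agent_ids) >= dhcp_agents_per_network:
        raise DhcpAgentLimitExceeded(
            "network already hosted by %d agent(s), limit is %d"
            % (len(hosting_agent_ids), dhcp_agents_per_network))
```

With dhcp_agents_per_network = 1 and net-1 already on agent-A, adding agent-B would be rejected instead of silently producing two hosting agents.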

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507492

Title:
  manually schedule dhcp-agent doesn't check dhcp_agents_per_network

Status in neutron:
  New

Bug description:
  We can use dhcp-agent-network-add to manually schedule a network to dhcp-
  agents. When we manually schedule in this way, neutron should check the
  dhcp_agents_per_network configuration option to verify whether the manual
  scheduling can be supported.

  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound to agent-A; "neutron dhcp-agent-list-hosting-net" can
be used to verify this;
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

  steps:
  directly run "neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID" without
removing net-1 from agent-A first.

  expected:
  a warning or error telling us that net-1 could not be scheduled onto agent-B,
because the number of agents hosting net-1 must not exceed the
dhcp_agents_per_network value.

  actual result:
  running "neutron dhcp-agent-list-hosting-net NET-1-ID" shows that net-1 is
now hosted by both agent-A and agent-B.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496844] [NEW] restart dhcp-agent shouldn't rebuild all dhcp-driver even nothing changed

2015-09-17 Thread ZongKai LI
Public bug reported:

When the dhcp-agent is restarted, it restarts all dhcp drivers even if no
configuration or networks have changed.
That is not a big deal at small scale.
But at large scale, e.g. a dhcp-agent handling hundreds of networks, rebuilding
all of those dhcp drivers is quite costly.

In our environment, a dhcp-agent with more than 300 networks bound to it takes
more than 2 minutes to fully recover to a working state, even though nothing
changed before we restarted it.

It would be better to work in a "lazy" mode, i.e. only restart dhcp drivers
whose config files actually need to change.
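One way to implement the "lazy" mode is to compare the freshly rendered config against what is already on disk and respawn the driver only on a mismatch. A minimal sketch (illustrative only, not neutron code):

```python
import hashlib

def needs_restart(current_config, new_config):
    """Return True only when the rendered dhcp config actually differs
    from what the running driver was spawned with."""
    def digest(text):
        return hashlib.sha256(text.encode('utf-8')).hexdigest()
    return digest(current_config) != digest(new_config)

# Agent restart loop would then look roughly like:
#   for network in networks:
#       if needs_restart(read_existing_config(network),
#                        render_config(network)):
#           driver.restart(network)
```

For a 300-network agent where nothing changed, every comparison returns False and no driver is respawned, avoiding the multi-minute recovery described above.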

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496844

Title:
  restart dhcp-agent shouldn't rebuild all dhcp-driver even nothing
  changed

Status in neutron:
  New

Bug description:
  When the dhcp-agent is restarted, it restarts all dhcp drivers even if no
configuration or networks have changed.
  That is not a big deal at small scale.
  But at large scale, e.g. a dhcp-agent handling hundreds of networks,
rebuilding all of those dhcp drivers is quite costly.

  In our environment, a dhcp-agent with more than 300 networks bound to it
  takes more than 2 minutes to fully recover to a working state, even
  though nothing changed before we restarted it.

  It would be better to work in a "lazy" mode, i.e. only restart dhcp
  drivers whose config files actually need to change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332450] Re: br-tun lost ports if openvswitch restart when l2pop enabled

2015-09-10 Thread ZongKai LI
Per https://review.openstack.org/#/c/182920/58 , reset_bridge is no longer
used, in order to keep tunnel connectivity. By that change, tunnel ports are no
longer deleted. This issue no longer exists; marking it as invalid.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332450

Title:
  br-tun lost ports if openvswitch restart when l2pop enabled

Status in neutron:
  Invalid

Bug description:
  When openvswitch restarts, the ovs agent resets br-tun, loses all tunnel-
network-related ports/flows, and breaks all tunnel networks.
  If l2 population is used, we could maintain all l2 population fdb entries
locally and recreate the ports/flows; if not, setting tunnel_sync = True works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494570] [NEW] Warning Endpoint with ip already exists should be avoidable

2015-09-10 Thread ZongKai LI
Public bug reported:

In tunnel_type.py, method tunnel_sync queries the endpoint by the passed host
and tunnel ip, and the existing checks in tunnel_sync compare the queried
endpoint against the passed host and tunnel ip in multiple cases, such as
whether local_ip or host has changed, or an upgrade occurred.

But the case where neither local_ip nor host has changed is not checked; this
happens when the ovs-agent and ovs are restarted. In that case, the existing
logic tries to add_endpoint again for the passed tunnel ip and host, and
raises an "endpoint already exists" warning.

This does not make sense: since the local ip and host are unchanged, and we
have already queried the DB twice, we do not need another DB operation nor a
warning.
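The proposed short-circuit can be sketched as follows, with the endpoint table modeled as a plain dict (names are hypothetical, not the actual tunnel_type.py code):

```python
def tunnel_sync(endpoints, host, tunnel_ip):
    """Skip add_endpoint when the stored endpoint for this host already
    has the reported tunnel IP, avoiding the extra DB write and the
    'endpoint already exists' warning on agent restart."""
    existing = endpoints.get(host)
    if existing == tunnel_ip:
        return 'unchanged'          # agent restarted with same ip/host
    endpoints[host] = tunnel_ip     # add a new endpoint or update the ip
    return 'updated'
```

On an ovs-agent restart with an unchanged local_ip and host, the call resolves to 'unchanged' and no further DB operation or warning is needed.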

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494570

Title:
  Warning Endpoint with ip already exists should be avoidable

Status in neutron:
  New

Bug description:
  In tunnel_type.py, method tunnel_sync queries the endpoint by the passed
  host and tunnel ip, and the existing checks in tunnel_sync compare the
  queried endpoint against the passed host and tunnel ip in multiple cases,
  such as whether local_ip or host has changed, or an upgrade occurred.

  But the case where neither local_ip nor host has changed is not checked;
  this happens when the ovs-agent and ovs are restarted. In that case, the
  existing logic tries to add_endpoint again for the passed tunnel ip and
  host, and raises an "endpoint already exists" warning.

  This does not make sense: since the local ip and host are unchanged, and
  we have already queried the DB twice, we do not need another DB operation
  nor a warning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435655] Re: Can't manually assign a distributed router to a l3 agent

2015-09-09 Thread ZongKai LI
Case 1: without restarting the l3-agent and ovs-agent on the compute node, they
will keep running in legacy mode.
Case 2: I don't think manually assigning a DVR router to an l3-agent should be
valid; DVR routers are scheduled based on port bindings. So manually
disassociating a DVR router from an l3-agent could be a potential issue, but
not manual assignment.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435655

Title:
  Can't manually assign a distributed router to a l3 agent

Status in neutron:
  Invalid

Bug description:
  Neutron currently does not allow manually assigning a distributed router to
an l3 agent that is in 'dvr' mode, but in the use cases below it does not work
correctly:
  1 case:
  (1)there are two computeA, B nodes which l3 agent are in legacy mode, l2 
agent 'enable_distributed_routing = False'
  (2)create a 'dvr' router, then add subnetA to this 'dvr' router
  (3)create VMs with subnetA  in computeA or B
  (4)modify  'agent_mode=dvr',  'enable_distributed_routing = True' in computeA
  (5)the VMs in  computeA  can't communicate with their gateway

  2 case:
  (1)there is a computeA,  it's 'agent_mode=dvr',  'enable_distributed_routing 
= True'
  (2)create a 'dvr' router, then add subnetA to this 'dvr' router
  (3)create VMs with subnetA  in computeA
  (4)use 'l3-agent-router-remove' remove l3 agent which in computeA from 'dvr' 
router
  (5)the VMs in computeA  can't communicate with their gateway, and can't 
manually assign it's l3 agent to dvr router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493688] [NEW] port-update shouldn't update lbaas vip port IP

2015-09-09 Thread ZongKai LI
Public bug reported:

lbaas vip port IP cannot be updated by lbaas api like "neutron lb-vip-
update VIP --address NEW_IP", but it can be updated by "neutron port-
update".

This is conflict, if lbaas doesn't support update vip port IP yet,
another API shouldn't make it possible to update.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493688

Title:
  port-update shouldn't update lbaas vip port IP

Status in neutron:
  New

Bug description:
  lbaas vip port IP cannot be updated by lbaas api like "neutron lb-vip-
  update VIP --address NEW_IP", but it can be updated by "neutron port-
  update".

  This is conflict, if lbaas doesn't support update vip port IP yet,
  another API shouldn't make it possible to update.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460491] [NEW] some neutron log should raise up their log level to error

2015-05-31 Thread ZongKai LI
Public bug reported:

neutron logs like:
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/plugins/ml2/drivers/mech_agent.py#L76
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/plugins/ml2/drivers/mech_agent.py#L187
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/plugins/ml2/drivers/mech_agent.py#L200

They are at warning or debug level. That may make sense within neutron, but
for a basic openstack feature such as booting instances, warning or debug
level is not very helpful.
The above log messages are emitted when booting an instance fails, and at the
same time we can find error-level logs in the nova compute log, such as a
virt_binding error. That error message tells us we need to check neutron, but
in the neutron log, a warning- or debug-level message does not help us locate
the issue quickly.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460491

Title:
  some neutron log should raise up their log level to error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron logs like:
  
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/plugins/ml2/drivers/mech_agent.py#L76
  
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/plugins/ml2/drivers/mech_agent.py#L187
  
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/plugins/ml2/drivers/mech_agent.py#L200

  They are at warning or debug level. That may make sense within neutron,
  but for a basic openstack feature such as booting instances, warning or
  debug level is not very helpful.
  The above log messages are emitted when booting an instance fails, and at
  the same time we can find error-level logs in the nova compute log, such
  as a virt_binding error. That error message tells us we need to check
  neutron, but in the neutron log, a warning- or debug-level message does
  not help us locate the issue quickly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453791] Re: Lbaas Pool and Members from Different SubNets

2015-05-28 Thread ZongKai LI
It is simple to add a limitation ensuring the VIP and members come from the
pool's subnet, but I don't think that is a good idea.

I ran a test: at first I created the VIP and some members from the pool's
subnet; later I created a new subnet, connected it to the previous subnet with
a router, and added members from the new subnet to the pool. The client got
responses from both the previous members and the new members.

So I think requiring members to come from the same subnet is not a good idea,
and checking whether members' subnets are reachable from the VIP would make
things complex.

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: ZongKai LI (lzklibj) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453791

Title:
  Lbaas Pool and Members from Different SubNets

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  
  There is no definite mapping between Pool Subnet ID and Its Members.

  It is possible to Assign another Subnet with different IP for Pool and
  its members.

  For E.g

  A pool is created with subnet 135.254.189.0/24, and its members from
  Instances assigned to Another Subnet (172.21.184.0/24).

  Under the following reference,

  https://etherpad.openstack.org/p/neutron-lbaas-api-proposals

  For Create-Pool,

  Request
  POST /pools.json
  {
  'pool': {
  'tenant_id': 'someid',
  'name': 'some name',
'subnet_id': 'id-of-subnet-where-members-reside',  <-- The
Subnet must be defined as per the instances Subnet
  'protocol': 'HTTP',
  'lb_method': 'ROUND_ROBIN'
  'admin_state_up': True,
  }
  }

  
  Validation needs to be done such that the instances ( Members ) are created 
for the Pool of the same Subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453791] Re: Lbaas Pool and Members from Different SubNets

2015-05-27 Thread ZongKai LI
** Package changed: neutron-lbaas (Ubuntu) => neutron

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453791

Title:
  Lbaas Pool and Members from Different SubNets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  There is no definite mapping between Pool Subnet ID and Its Members.

  It is possible to Assign another Subnet with different IP for Pool and
  its members.

  For E.g

  A pool is created with subnet 135.254.189.0/24, and its members from
  Instances assigned to Another Subnet (172.21.184.0/24).

  Under the following reference,

  https://etherpad.openstack.org/p/neutron-lbaas-api-proposals

  For Create-Pool,

  Request
  POST /pools.json
  {
  'pool': {
  'tenant_id': 'someid',
  'name': 'some name',
'subnet_id': 'id-of-subnet-where-members-reside',  <-- The
Subnet must be defined as per the instances Subnet
  'protocol': 'HTTP',
  'lb_method': 'ROUND_ROBIN'
  'admin_state_up': True,
  }
  }

  
  Validation needs to be done such that the instances ( Members ) are created 
for the Pool of the same Subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405528] Re: Migration of legacy router to distributed router should remove the original gateway port

2015-03-23 Thread ZongKai LI
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405528

Title:
  Migration of legacy router to distributed router should remove the
  original gateway port

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In the case of updating a legacy router to a distributed router, if it
  is hosted on the same node, the qg port cannot be added to the snat
  router because it has not been removed from the original centralized
  router; otherwise, the qg port remains in the original centralized
  router. I think the original qg port should be removed when updating
  the legacy router to a distributed router.

  operations defined as follows:
  neutron router-create dvr
  neutron router-gateway-set dvr ext-net

  neutron router-interface-add dvr vxlan-subnet1

  neutron router-update dvr --distributed=true

  we can find the qg port is still in the qrouter after the update operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434824] [NEW] L3 agent failure to setup floating IPs

2015-03-21 Thread ZongKai LI
Public bug reported:

Error log in l3-agent log on compute node in DVR enabled environment:
2015-03-21 10:23:05.206 18174 ERROR neutron.agent.l3.agent [-] L3 agent failure 
to setup floating IPs
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent Traceback (most recent call last):
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 463, in _process_external
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent     if ri.router['distributed']:
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 233, in configure_fip_addresses
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent     raise n_exc.FloatingIpSetupException('L3 agent failure to setup '
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent FloatingIpSetupException: L3 agent failure to setup floating IPs
2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent

Steps to reproduce:
1)associate floating-ip to VM-1 on compute node CN1.
2)restart l3-agent on CN1.
3)associate floating-ip to VM-2 on CN1.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434824

Title:
  L3 agent failure to setup floating IPs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Error log in l3-agent log on compute node in DVR enabled environment:
  2015-03-21 10:23:05.206 18174 ERROR neutron.agent.l3.agent [-] L3 agent 
failure to setup floating IPs
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent Traceback (most recent call last):
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 463, in _process_external
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent     if ri.router['distributed']:
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 233, in configure_fip_addresses
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent     raise n_exc.FloatingIpSetupException('L3 agent failure to setup '
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent FloatingIpSetupException: L3 agent failure to setup floating IPs
  2015-03-21 10:23:05.206 18174 TRACE neutron.agent.l3.agent

  Steps to reproduce:
  1)associate floating-ip to VM-1 on compute node CN1.
  2)restart l3-agent on CN1.
  3)associate floating-ip to VM-2 on CN1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1434824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421042] Re: AttributeError during migration legacy router to DVR router.

2015-03-20 Thread ZongKai LI
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421042

Title:
  AttributeError during migration legacy router to DVR router.

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  After commit https://review.openstack.org/124879 was merged, migrating a
  legacy router to a distributed router reports AttributeError:
  'LegacyRouter' object has no attribute 'dist_fip_count', because the ri
  object in the l3 agent is a LegacyRouter instead of a DvrRouter, and
  LegacyRouter has no dist_fip_count attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423067] Re: migrate non-dvr to dvr case, not all compute nodes created qrouter netns

2015-03-11 Thread ZongKai LI
The commands to update the router during DVR migration should be:
neutron router-update --admin_state_up=False ROUTER
neutron router-update --distributed=True ROUTER
neutron router-update --admin_state_up=True ROUTER
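The same three-step sequence can be sketched against python-neutronclient's update_router call (using the client here is an assumption; the comment above only gives the CLI). The client object is passed in so the sequence can be exercised against a stub:

```python
def migrate_router_to_dvr(client, router_id):
    """Flip a legacy router to DVR: admin-down, set distributed, admin-up.

    client is expected to expose update_router(router_id, body) in the
    style of neutronclient.v2_0.client.Client (an assumption here).
    """
    # The router must be administratively down before the distributed
    # flag may be changed.
    client.update_router(router_id, {'router': {'admin_state_up': False}})
    client.update_router(router_id, {'router': {'distributed': True}})
    client.update_router(router_id, {'router': {'admin_state_up': True}})
```

Passing the client in keeps the ordering testable without a live neutron-server.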

** Changed in: neutron
   Status: In Progress => Invalid

-- 
https://bugs.launchpad.net/bugs/1423067

Title:
  migrate non-dvr to dvr case, not all compute nodes created qrouter
  netns

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  We can use the following steps to migrate a non-dvr env to a dvr env.
  1) modify the related attributes in the config files;
  2) restart the related services (neutron-server, neutron-l3-agent, 
neutron-openvswitch-agent);
  3) update the router: neutron router-update --distributed=True ROUTER.
  You need to do steps 1) and 2) on all compute nodes, and step 3) on the 
controller node.
  You can do the migration in either order:
  a) on the controller steps 1)+2), on the compute nodes steps 1)+2), then on 
the controller step 3); or
  b) on the controller steps 1)+2)+3), on the compute nodes steps 1)+2).
  Order b) may pass through a state where non-dvr and dvr routers co-exist; 
the neutron network will still work.

  I built a 1+2 env: a controller and 2 compute nodes, initially set up as
  a non-dvr env, with a router attached to two subnets. I booted two
  instances, one on each subnet and each compute node.

  When I do the dvr migration on this env, I find that in some cases not
  all compute nodes create the qrouter-* netns. On the compute nodes where
  dvr is enabled but no netns was created during migration, instances fail
  to use the neutron network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423067/+subscriptions



[Yahoo-eng-team] [Bug 1428713] Re: migrate non-dvr to dvr case, snat netns not created

2015-03-11 Thread ZongKai LI
The commands to update the router during DVR migration should be:
neutron router-update --admin_state_up=False ROUTER
neutron router-update --distributed=True ROUTER
neutron router-update --admin_state_up=True ROUTER

** Changed in: neutron
   Status: In Progress => Invalid

-- 
https://bugs.launchpad.net/bugs/1428713

Title:
  migrate non-dvr to dvr case, snat netns not created

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  On a 1+2 env, the router has an external network attached.
  Use the following steps to migrate from non-dvr to dvr:
  1) modify the related config files.
  2) restart the related services.
  3) run the command neutron router-update --distributed=True ROUTER.

  Now there is no snat-* netns created on the controller node.
  As a workaround, restarting the neutron-l3-agent on the controller node works.

  And in l3-agent.log, we can find:
  2015-02-28 01:26:21.377 5283 ERROR neutron.agent.l3.agent [-] 'LegacyRouter' 
object has no attribute 'dist_fip_count'
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 342, in call
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 592, in 
process_router
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent 
self.scan_fip_ports(ri)
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr.py", line 128, in 
scan_fip_ports
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent if not 
ri.router.get('distributed') or ri.dist_fip_count is not None:
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent AttributeError: 
'LegacyRouter' object has no attribute 'dist_fip_count'

  It seems the current code is not ready to migrate a LegacyRouter to a
  DvrRouter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428713/+subscriptions



[Yahoo-eng-team] [Bug 1428713] [NEW] migrate non-dvr to dvr case, snat netns not created

2015-03-05 Thread ZongKai LI
Public bug reported:

On a 1+2 env, the router has an external network attached.
Use the following steps to migrate from non-dvr to dvr:
1) modify the related config files.
2) restart the related services.
3) run the command neutron router-update --distributed=True ROUTER.

Now there is no snat-* netns created on the controller node.
As a workaround, restarting the neutron-l3-agent on the controller node works.

And in l3-agent.log, we can find:
2015-02-28 01:26:21.377 5283 ERROR neutron.agent.l3.agent [-] 'LegacyRouter' 
object has no attribute 'dist_fip_count'
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 342, in call
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 592, in 
process_router
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent 
self.scan_fip_ports(ri)
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr.py", line 128, in 
scan_fip_ports
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent if not 
ri.router.get('distributed') or ri.dist_fip_count is not None:
2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent AttributeError: 
'LegacyRouter' object has no attribute 'dist_fip_count'

It seems the current code is not ready to migrate a LegacyRouter to a DvrRouter.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
https://bugs.launchpad.net/bugs/1428713

Title:
  migrate non-dvr to dvr case, snat netns not created

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On a 1+2 env, the router has an external network attached.
  Use the following steps to migrate from non-dvr to dvr:
  1) modify the related config files.
  2) restart the related services.
  3) run the command neutron router-update --distributed=True ROUTER.

  Now there is no snat-* netns created on the controller node.
  As a workaround, restarting the neutron-l3-agent on the controller node works.

  And in l3-agent.log, we can find:
  2015-02-28 01:26:21.377 5283 ERROR neutron.agent.l3.agent [-] 'LegacyRouter' 
object has no attribute 'dist_fip_count'
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 342, in call
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 592, in 
process_router
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent 
self.scan_fip_ports(ri)
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr.py", line 128, in 
scan_fip_ports
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent if not 
ri.router.get('distributed') or ri.dist_fip_count is not None:
  2015-02-28 01:26:21.377 5283 TRACE neutron.agent.l3.agent AttributeError: 
'LegacyRouter' object has no attribute 'dist_fip_count'

  It seems the current code is not ready to migrate a LegacyRouter to a
  DvrRouter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428713/+subscriptions



[Yahoo-eng-team] [Bug 1427122] [NEW] dvr case with 1 subnet attaches multi routers, fail to create router netns

2015-03-02 Thread ZongKai LI
Public bug reported:

Environment:
1+2 env with DVR enabled; the l3-agent on all nodes is configured with 
router_delete_namespaces = True.
Create router R1 and subnets sn1 and sn2; attach sn1 and sn2 to router R1.
Create router R2 and subnets sn3 and sn4; attach sn3 and sn4 to router R2.
Boot instances vm1 on sn1 on CN1, vm2 on sn2 on CN2, vm3 on sn3 on CN1, vm4 on 
sn4 on CN2.
Create port p1 on sn1's network by running "neutron port-create --name p1 n1", 
and attach p1 to R2 by running "neutron router-interface-add R2 port=p1" (this 
connects sn1 with sn3 and sn4 through router R2).

Steps to raise the issue:
1) delete vm1, and make sure the qrouter netns disappears on CN1.
2) create vm5 on sn1 on CN1; the qrouter netns does not come back.

Workaround:
Restart the l3-agent on CN1.

In the normal case (a subnet attached to only one router), this issue
does not arise.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
https://bugs.launchpad.net/bugs/1427122

Title:
  dvr case with 1 subnet attaches multi routers, fail to create router
  netns

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Environment:
  1+2 env with DVR enabled; the l3-agent on all nodes is configured with 
router_delete_namespaces = True.
  Create router R1 and subnets sn1 and sn2; attach sn1 and sn2 to router R1.
  Create router R2 and subnets sn3 and sn4; attach sn3 and sn4 to router R2.
  Boot instances vm1 on sn1 on CN1, vm2 on sn2 on CN2, vm3 on sn3 on CN1, vm4 
on sn4 on CN2.
  Create port p1 on sn1's network by running "neutron port-create --name p1 
n1", and attach p1 to R2 by running "neutron router-interface-add R2 port=p1" 
(this connects sn1 with sn3 and sn4 through router R2).

  Steps to raise the issue:
  1) delete vm1, and make sure the qrouter netns disappears on CN1.
  2) create vm5 on sn1 on CN1; the qrouter netns does not come back.

  Workaround:
  Restart the l3-agent on CN1.

  In the normal case (a subnet attached to only one router), this issue
  does not arise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427122/+subscriptions



[Yahoo-eng-team] [Bug 1423067] [NEW] migrate non-dvr to dvr case, not all compute nodes created qrouter netns

2015-02-17 Thread ZongKai LI
Public bug reported:

We can use the following steps to migrate a non-dvr env to a dvr env.
1) modify the related attributes in the config files;
2) restart the related services (neutron-server, neutron-l3-agent, 
neutron-openvswitch-agent);
3) update the router: neutron router-update --distributed=True ROUTER.
You need to do steps 1) and 2) on all compute nodes, and step 3) on the 
controller node.
You can do the migration in either order:
a) on the controller steps 1)+2), on the compute nodes steps 1)+2), then on the 
controller step 3); or
b) on the controller steps 1)+2)+3), on the compute nodes steps 1)+2).
Order b) may pass through a state where non-dvr and dvr routers co-exist; the 
neutron network will still work.

I built a 1+2 env: a controller and 2 compute nodes, initially set up as a
non-dvr env, with a router attached to two subnets. I booted two instances,
one on each subnet and each compute node.

When I do the dvr migration on this env, I find that in some cases not all
compute nodes create the qrouter-* netns. On the compute nodes where dvr is
enabled but no netns was created during migration, instances fail to use the
neutron network.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1423067

Title:
  migrate non-dvr to dvr case, not all compute nodes created qrouter
  netns

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We can use the following steps to migrate a non-dvr env to a dvr env.
  1) modify the related attributes in the config files;
  2) restart the related services (neutron-server, neutron-l3-agent, 
neutron-openvswitch-agent);
  3) update the router: neutron router-update --distributed=True ROUTER.
  You need to do steps 1) and 2) on all compute nodes, and step 3) on the 
controller node.
  You can do the migration in either order:
  a) on the controller steps 1)+2), on the compute nodes steps 1)+2), then on 
the controller step 3); or
  b) on the controller steps 1)+2)+3), on the compute nodes steps 1)+2).
  Order b) may pass through a state where non-dvr and dvr routers co-exist; 
the neutron network will still work.

  I built a 1+2 env: a controller and 2 compute nodes, initially set up as
  a non-dvr env, with a router attached to two subnets. I booted two
  instances, one on each subnet and each compute node.

  When I do the dvr migration on this env, I find that in some cases not
  all compute nodes create the qrouter-* netns. On the compute nodes where
  dvr is enabled but no netns was created during migration, instances fail
  to use the neutron network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423067/+subscriptions



[Yahoo-eng-team] [Bug 1381886] Re: nova list show incorrect when neutron re-assign floatingip

2014-10-16 Thread ZongKai LI
This is neutron's problem.
The operation in the bug description is a floating IP re-assignment, which 
splits into two steps: disassociate from the original port, then associate 
with the new port. When the floating IP is re-assigned, network changes happen 
on both the original and the new instance's port, but neutron only sends an 
event for the new instance's port.
Adding code to ensure neutron first sends the event for the original 
(disassociate) port, and then the event for the new (associate) port, will 
fix this.
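The ordering fix described in this comment can be sketched as follows. The notifier object and the event payload are hypothetical stand-ins for Neutron's actual notification plumbing; only the disassociate-before-associate ordering is the point:

```python
def notify_floatingip_reassigned(notifier, fip_id, old_port_id, new_port_id):
    """Emit the disassociate event for the old port before the associate
    event for the new port, so consumers (e.g. nova's network info cache)
    converge on the correct final state.
    """
    if old_port_id is not None:
        # The original instance loses the floating IP first ...
        notifier.send('network-changed', port_id=old_port_id,
                      floatingip_id=fip_id, action='disassociate')
    # ... and only then does the new instance gain it.
    notifier.send('network-changed', port_id=new_port_id,
                  floatingip_id=fip_id, action='associate')
```

If the events were sent in the opposite order (or the disassociate event were skipped, as in the bug), the old instance's cached view would keep the floating IP until some later refresh.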

** Project changed: nova => neutron

** Changed in: neutron
   Status: New => Fix Committed

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
https://bugs.launchpad.net/bugs/1381886

Title:
  nova list show incorrect when neutron re-assign floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Boot several instances and create a floating IP. When the floating IP is 
re-assigned to multiple instances in turn, nova list shows an incorrect result.
  neutron floatingip-associate floatingip-id instance0-port-id
  neutron floatingip-associate floatingip-id instance1-port-id
  neutron floatingip-associate floatingip-id instance2-port-id
  nova list
  (nova list result will be like:)
  --
  instance0  fixedip0,  floatingip
  instance1  fixedip1,  floatingip
  instance2  fixedip2,  floatingip

  instance0, 1 and 2 all appear to have the floating IP, but running neutron 
floatingip-list shows it is bound only to instance2.
  After some time (half a minute, or longer), nova list shows the correct 
result:
  ---
  instance0  fixedip0
  instance1  fixedip1
  instance2  fixedip2,  floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381886/+subscriptions
