[Yahoo-eng-team] [Bug 2020328] [NEW] Concurrent create VM failed because of vif plug timeout

2023-05-22 Thread Dongcan Ye
Public bug reported:

Environment:
OpenStack Zed
OpenvSwitch 2.17
OVN 22.06
ML2/OVN driver


Instances are created with the following command:
# openstack server create test_vm --flavor XX --network XX --image 
XX --min 10 --max 10 --availability-zone nova:XX

At least 2 or 3 of the instances fail to create because of a VIF
plugging timeout error.

VM port UUID: 6d5eccfa-069e-4058-a1c8-87bec9c1c280

Here is some logs info:

1. ovsdb server log:
record 554: 2023-05-19 10:48:42.399
  table Interface insert row "tap6d5eccfa-06" (8d7bdf97):
  table Open_vSwitch row 1a4db534 (1a4db534) diff:
  table Bridge row "br-int" (ee9ab5d4) diff:
  table Port insert row "tap6d5eccfa-06" (a4be79ec):

record 555: 2023-05-19 10:48:42.411
  table Interface row "tap6d5eccfa-06" (8d7bdf97) diff:
  table Open_vSwitch row 1a4db534 (1a4db534) diff:

record 556: 2023-05-19 10:48:42.909
  table Interface row "tap6d5eccfa-06" (8d7bdf97) diff:

record 557: 2023-05-19 10:48:42.941 "ovs-vsctl (invoked by init (pid 0)): 
ovs-vsctl del-port tap6d5eccfa-06"
  table Open_vSwitch row 1a4db534 (1a4db534) diff:
  table Interface row "tap6d5eccfa-06" (8d7bdf97) diff:
delete row
  table Bridge row "br-int" (ee9ab5d4) diff:
  table Port row "tap6d5eccfa-06" (a4be79ec) diff:
delete row

2. ovs-vswitchd log:
2023-05-19T10:48:42.400Z|13247|jsonrpc|DBG|unix:/run/openvswitch/db.sock: 
received notification, method="update3", 
params=[["monid","Open_vSwitch"],"----",{"Open_vSwitch":{"1a4db534-3133-43e0-922e-dd7d5d76a802":{"modify":{"next_cfg":135}}},"Interface":{"8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2":{"insert":{"name":"tap6d5eccfa-06"}}},"Port":{"a4be79ec-df95-495b-87e1-fc373582f647":{"insert":{"name":"tap6d5eccfa-06","interfaces":["uuid","8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2"]}}},"Bridge":{"ee9ab5d4-b72b-479b-87cd-29a19d5540a8":{"modify":{"ports":["uuid","a4be79ec-df95-495b-87e1-fc373582f647"]]
2023-05-19T10:48:42.410Z|13248|bridge|WARN|could not open network device 
tap6d5eccfa-06 (No such device)
2023-05-19T10:48:42.411Z|13249|jsonrpc|DBG|unix:/run/openvswitch/db.sock: send 
request, method="transact", 
params=["Open_vSwitch",{"row":{"cur_cfg":135},"where":[["_uuid","==",["uuid","1a4db534-3133-43e0-922e-dd7d5d76a802"]]],"table":"Open_vSwitch","op":"update"},{"row":{"error":"could
 not open network device tap6d5eccfa-06 (No such 
device)","ofport":-1},"where":[["_uuid","==",["uuid","8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2"]]],"table":"Interface","op":"update"},{"op":"assert","lock":"ovs_vswitchd"}],
 id=4787
2023-05-19T10:48:42.412Z|13250|poll_loop|DBG|wakeup due to [POLLIN] on fd 13 
(<->/run/openvswitch/db.sock) at ../lib/stream-fd.c:157 (4% CPU usage)
2023-05-19T10:48:42.412Z|13251|jsonrpc|DBG|unix:/run/openvswitch/db.sock: 
received notification, method="update3", 
params=[["monid","Open_vSwitch"],"----",{"Interface":{"8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2":{"modify":{"error":"could
 not open network device tap6d5eccfa-06 (No such 
device)","ofport":-1}}},"Open_vSwitch":{"1a4db534-3133-43e0-922e-dd7d5d76a802":{"modify":{"cur_cfg":135]

2023-05-19T10:48:42.907Z|13260|vconn|DBG|unix#50: sent (Success): 
OFPT_PORT_STATUS (OF1.5) (xid=0x0): ADD: 57(tap6d5eccfa-06): 
addr:fe:16:3e:dc:bc:b8
 config: 0
 state: 0
 current: 10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max
2023-05-19T10:48:42.907Z|13261|bridge|INFO|bridge br-int: added interface 
tap6d5eccfa-06 on port 57
2023-05-19T10:48:42.908Z|13262|hmap|DBG|Dropped 74 log messages in last 6 
seconds (most recently, 1 seconds ago) due to excessive rate
2023-05-19T10:48:42.908Z|13263|hmap|DBG|../ofproto/ofproto-dpif-xlate.c:884: 1 
bucket with 6+ nodes, including 1 bucket with 7 nodes (16 nodes total across 16 
buckets)
2023-05-19T10:48:42.908Z|13264|dpif_netlink|DBG|port_changed: 
dpif:system@ovs-system vport:tap6d5eccfa-06 cmd:1
2023-05-19T10:48:42.908Z|13265|vconn|DBG|unix#50: sent (Success): 
OFPT_PORT_STATUS (OF1.5) (xid=0x0): MOD: 57(tap6d5eccfa-06): 
addr:fe:16:3e:dc:bc:b8
 config: 0
 state: LIVE
 current: 10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max

2023-05-19T10:48:42.971Z|13289|bridge|INFO|bridge br-int: deleted interface 
tap6d5eccfa-06 on port 57
2023-05-19T10:48:42.972Z|04671|poll_loop(revalidator62)|DBG|wakeup due to 
[POLLIN] on fd 43 (FIFO pipe:[154647]) at ../lib/ovs-thread.c:378 (0% CPU usage)
2023-05-19T10:48:42.972Z|04651|poll_loop(revalidator63)|DBG|wakeup due to 
[POLLIN] on fd 45 (FIFO pipe:[154648]) at ../lib/ovs-thread.c:378 (0% CPU usage)
2023-05-19T10:48:42.975Z|04672|poll_loop(revalidator62)|DBG|wakeup due to 
[POLLIN] on fd 43 (FIFO pipe:[154647]) at ../lib/ovs-thread.c:378 (0% CPU usage)
2023-05-19T10:48:42.975Z|13290|dpif|DBG|system@ovs-system: failed to query port 
tap6d5eccfa-06: No such device
2023-05-19T10:48:42.982Z|07648|poll_loop(revalidator61)|DBG|wakeup due to 
[POLLIN] on fd 41 (FIFO pipe:[155716]) at ../lib/ovs-thread.c:378 (10% CPU 
usage)

[Yahoo-eng-team] [Bug 1991791] [NEW] [ovn][l3-ha] BFD monitoring not working for network down

2022-10-05 Thread Dongcan Ye
Public bug reported:

OVN version: 21.12.1
OVS version: 2.15.6
OpenStack version: Yoga

In my test environment, two nodes act as HV chassis, and both chassis are
enabled to host gateways.
1. Add a network and a subnet.
2. Add a router, then attach the subnet to the router.
3. Set the router's external gateway.
4. Bind a floating IP to an instance.

If we then use 'ip link set dev *** down' to bring the physical tunnel
interface down, the floating IP becomes unreachable.

# ovn-nbctl list Logical_Router_Port lrp-95c640e8-7249-41ee-a407-98a2bc8741ca
_uuid   : 2b3f9e79-33ee-45aa-9a23-ad9e9e460455
enabled : []
external_ids: 
{"neutron:network_name"=neutron-d1d2dcab-e701-4cc2-8c64-c7d81d5f33e5, 
"neutron:revision_number"="47", 
"neutron:router_name"="801a8e21-e368-4b9f-8119-830d7f743322", 
"neutron:subnet_ids"="a3bbe66c-f963-4337-ae8e-bb5fd3e7bde9"}
gateway_chassis : [11f482cd-18fc-42d4-a3a6-e036ae428371, 
dc861c0a-e8d6-4b98-ae7f-de0e418f3163]
ha_chassis_group: []
ipv6_prefix : []
ipv6_ra_configs : {}
mac : "fa:16:3e:38:5b:50"
name: lrp-95c640e8-7249-41ee-a407-98a2bc8741ca
networks: ["192.168.135.248/20"]
options : {}
peer: []


The gateway_chassis priorities do not change:

# ovn-nbctl list gateway_chassis
_uuid   : 11f482cd-18fc-42d4-a3a6-e036ae428371
chassis_name: "d2692315-6389-4e95-b211-a3dd54ec4582"
external_ids: {}
name: 
lrp-95c640e8-7249-41ee-a407-98a2bc8741ca_d2692315-6389-4e95-b211-a3dd54ec4582
options : {}
priority: 2

_uuid   : dc861c0a-e8d6-4b98-ae7f-de0e418f3163
chassis_name: "00b6dd97-6360-4ae5-b1f4-a68cf2e8c70c"
external_ids: {}
name: 
lrp-95c640e8-7249-41ee-a407-98a2bc8741ca_00b6dd97-6360-4ae5-b1f4-a68cf2e8c70c
options : {}
priority: 1


After reading the OVN HA guide [1], which says the cpath_down option can be
set, we tried the following, but it has no effect:

ovn-nbctl --wait=hv set NB_Global . options:"bfd-cpath-down"=true
ovs-vsctl set interface ovn-d26923-0 bfd:cpath_down=true


Following guide [2], we tested three failover situations, and all of them
work for us:
(1) The gateway chassis is shut down.
(2) ovs-vswitchd is stopped.
(3) ovn-controller is stopped.


[1] https://docs.ovn.org/en/latest/topics/high-availability.html
[2] https://docs.openstack.org/neutron/latest/admin/ovn/routing.html
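
For reference, the failover behavior we expected can be sketched as: the
gateway should move to the highest-priority chassis whose BFD session is
still up, so an interface that goes down should demote its chassis. The
helper below is purely illustrative, not OVN code:

```python
# Hedged sketch of the expected failover decision. gateway_chassis is a list
# of (chassis_name, priority) pairs, as in the Gateway_Chassis table above;
# bfd_up is the set of chassis names whose BFD session is still up.
def active_chassis(gateway_chassis, bfd_up):
    alive = [(name, prio) for name, prio in gateway_chassis if name in bfd_up]
    if not alive:
        return None  # no reachable gateway chassis at all
    # Highest priority among the chassis BFD still considers alive wins.
    return max(alive, key=lambda c: c[1])[0]
```

In the reported scenario, BFD never marks the chassis with the downed tunnel
interface as dead, so the selection never moves off the priority-2 chassis.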

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1991791


[Yahoo-eng-team] [Bug 1878314] [NEW] Upgrade Rocky milestone revisions has no effective

2020-05-12 Thread Dongcan Ye
Public bug reported:

Executing the command "neutron-db-manage upgrade rocky" does not take effect.
It seems the Rocky milestone revision lacks a milestone tag.

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic_migrations/versions/rocky/expand/867d39095bf4_port_forwarding.py
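
An illustrative sketch of why the missing tag matters: neutron-db-manage
resolves a release name like "rocky" to the revision whose migration script
carries a milestone marker, so with no script tagged for that release the
upgrade has nothing to resolve to. The classes and names below are
hypothetical, not neutron's actual implementation:

```python
# Hedged, self-contained model of milestone resolution; not neutron code.
class Script:
    """Stand-in for an alembic migration script."""
    def __init__(self, revision, milestones=()):
        self.revision = revision
        self.milestones = tuple(milestones)

def find_milestone_revision(scripts, release):
    # The CLI walks the scripts looking for one tagged with the requested
    # milestone; if none is tagged, "upgrade <release>" has no target.
    hits = [s.revision for s in scripts if release in s.milestones]
    if not hits:
        raise ValueError('milestone %r is not tagged in any migration'
                         % release)
    return hits[0]
```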

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878314


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871621] [NEW] [VPNaaS]: DeprecationWarning: invalid escape sequence

2020-04-08 Thread Dongcan Ye
Public bug reported:

Warning:
  Comments left for invalid file 
neutron_vpnaas/services/vpn/device_drivers/strongswan_ipsec.py

/root/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py:185: 
DeprecationWarning: invalid escape sequence \d
  STATUS_RE = '\d\d\d "([a-f0-9\-]+).* (unrouted|erouted);'
/root/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py:188: 
DeprecationWarning: invalid escape sequence \d
  '\d{3} #\d+: "([a-f0-9\-]+).*established.*newest IPSEC')
/root/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py:190: 
DeprecationWarning: invalid escape sequence \d
  '\d{3} #\d+: "([a-f0-9\-\/x]+).*established.*newest IPSEC')
:185: DeprecationWarning: invalid escape sequence \d
:188: DeprecationWarning: invalid escape sequence \d
:190: DeprecationWarning: invalid escape sequence \d
/root/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/strongswan_ipsec.py:75:
 DeprecationWarning: invalid escape sequence \-
  STATUS_RE = '([a-f0-9\-]+).* (ROUTED|CONNECTING|INSTALLED)'
:75: DeprecationWarning: invalid escape sequence \-
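
A likely fix, assuming the matching behavior should stay identical, is to
switch these patterns to raw strings, which silences the DeprecationWarning
without changing the compiled regex. The sample input below is made up:

```python
import re

# '\d' and '\-' are not valid Python string escapes, hence the warnings.
# A raw string leaves the backslashes intact for the regex engine.
STATUS_RE = r'\d\d\d "([a-f0-9\-]+).* (unrouted|erouted);'

# Hypothetical sample line in the format the pattern expects.
match = re.search(STATUS_RE, '000 "abcd-1234" ... erouted;')
```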

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871621



[Yahoo-eng-team] [Bug 1870302] [NEW] [VPNaaS]: test_migrations_sync failed with alembic 1.4.2

2020-04-02 Thread Dongcan Ye
Public bug reported:

Neutron vpnaas functional gate failed with alembic 1.4.2.

2020-04-02 02:16:49.526046 | controller | 
neutron_vpnaas.tests.functional.common.test_migrations_sync.TestModelsMigrationsMysql.test_models_sync
2020-04-02 02:16:49.526063 | controller | 
--
2020-04-02 02:16:49.526079 | controller |
2020-04-02 02:16:49.526094 | controller | Captured traceback:
2020-04-02 02:16:49.526110 | controller | ~~~
2020-04-02 02:16:49.526126 | controller | Traceback (most recent call last):
2020-04-02 02:16:49.526142 | controller |
2020-04-02 02:16:49.526157 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron-vpnaas/.tox/dsvm-functional-sswan/lib/python3.6/site-packages/neutron/tests/base.py",
 line 182, in func
2020-04-02 02:16:49.526174 | controller | return f(self, *args, **kwargs)
2020-04-02 02:16:49.526189 | controller |
2020-04-02 02:16:49.526205 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron-vpnaas/.tox/dsvm-functional-sswan/lib/python3.6/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 598, in test_models_sync
2020-04-02 02:16:49.526221 | controller | "Models and migration scripts 
aren't in sync:\n%s" % msg)
2020-04-02 02:16:49.526245 | controller |
2020-04-02 02:16:49.526263 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron-vpnaas/.tox/dsvm-functional-sswan/lib/python3.6/site-packages/unittest2/case.py",
 line 690, in fail
2020-04-02 02:16:49.526279 | controller | raise self.failureException(msg)
2020-04-02 02:16:49.526295 | controller |
2020-04-02 02:16:49.526310 | controller | AssertionError: Models and 
migration scripts aren't in sync:
2020-04-02 02:16:49.526325 | controller | [ [ ( 'modify_type',
2020-04-02 02:16:49.526341 | controller |   None,
2020-04-02 02:16:49.526356 | controller |   'vpn_endpoint_groups',
2020-04-02 02:16:49.526371 | controller |   'endpoint_type',
2020-04-02 02:16:49.526386 | controller |   { 'existing_comment': None,
2020-04-02 02:16:49.526402 | controller | 'existing_nullable': False,
2020-04-02 02:16:49.526417 | controller | 'existing_server_default': 
False},
2020-04-02 02:16:49.526432 | controller |   ENUM('subnet', 'cidr', 'vlan', 
'network', 'router'),
2020-04-02 02:16:49.526447 | controller |   Enum('subnet', 'cidr', 
'network', 'vlan', 'router', name='vpn_endpoint_type'))]]
2020-04-02 02:16:49.526463 | controller |
2020-04-02 02:16:49.526478 | controller |
2020-04-02 02:16:49.526494 | controller | 
neutron_vpnaas.tests.functional.common.test_migrations_sync.TestModelsMigrationsPostgresql.test_models_sync
2020-04-02 02:16:49.526510 | controller | 
---
2020-04-02 02:16:49.526525 | controller |
2020-04-02 02:16:49.526540 | controller | Captured traceback:
2020-04-02 02:16:49.526555 | controller | ~~~
2020-04-02 02:16:49.526570 | controller | Traceback (most recent call last):
2020-04-02 02:16:49.526585 | controller |
2020-04-02 02:16:49.526600 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron-vpnaas/.tox/dsvm-functional-sswan/lib/python3.6/site-packages/neutron/tests/base.py",
 line 182, in func
2020-04-02 02:16:49.526616 | controller | return f(self, *args, **kwargs)
2020-04-02 02:16:49.526631 | controller |
2020-04-02 02:16:49.526647 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron-vpnaas/.tox/dsvm-functional-sswan/lib/python3.6/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 598, in test_models_sync
2020-04-02 02:16:49.526663 | controller | "Models and migration scripts 
aren't in sync:\n%s" % msg)
2020-04-02 02:16:49.526678 | controller |
2020-04-02 02:16:49.526694 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron-vpnaas/.tox/dsvm-functional-sswan/lib/python3.6/site-packages/unittest2/case.py",
 line 690, in fail
2020-04-02 02:16:49.526723 | controller | raise self.failureException(msg)
2020-04-02 02:16:49.526741 | controller |
2020-04-02 02:16:49.526757 | controller | AssertionError: Models and 
migration scripts aren't in sync:
2020-04-02 02:16:49.526772 | controller | [ [ ( 'modify_type',
2020-04-02 02:16:49.526788 | controller |   None,
2020-04-02 02:16:49.526803 | controller |   'vpn_endpoint_groups',
2020-04-02 02:16:49.526818 | controller |   'endpoint_type',
2020-04-02 02:16:49.526833 | controller |   { 'existing_comment': None,
2020-04-02 02:16:49.526848 | controller | 'existing_nullable': False,
2020-04-02 02:16:49.526863 | controller | 'existing_server_default': 
False},
2020-04-02 02:16:49.526879 | controller |   ENUM('subnet', 'cidr', 'vlan', 
'network', 'router', name='endpoint_type'),
2020-04-02 02:16:49.526894 | controller |   
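
The diff above appears to boil down to an element-order mismatch between the
model's Enum and the ENUM in the database: the members are identical, but the
comparison is order-sensitive, so the models and migrations are reported as
out of sync. A minimal illustration:

```python
# The two enum definitions from the failure above: same members, different
# element order ('vlan'/'network' swapped).
model_enum = ('subnet', 'cidr', 'network', 'vlan', 'router')
db_enum = ('subnet', 'cidr', 'vlan', 'network', 'router')

# Order-sensitive comparison, as the sync check effectively performs:
in_sync = model_enum == db_enum
# Membership comparison, showing no element is actually missing:
same_members = set(model_enum) == set(db_enum)
```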

[Yahoo-eng-team] [Bug 1861469] Re: [VPNaaS]: functional gate failed

2020-02-03 Thread Dongcan Ye
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861469

Title:
  [VPNaaS]: functional gate failed

Status in neutron:
  Fix Released

Bug description:
  Currently, the neutron-vpnaas functional gate fails after [1] merged.

  2020-01-30 08:19:38.308343 | primary | + 
/opt/stack/new/neutron/tools/configure_for_func_testing.sh:_install_base_deps:110
 :   OVS_BRANCH=v2.12.0
  2020-01-30 08:19:38.310217 | primary | + 
/opt/stack/new/neutron/tools/configure_for_func_testing.sh:_install_base_deps:111
 :   compile_ovs False /usr /var
  2020-01-30 08:19:38.312140 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:113 :   local 
_pwd=/home/zuul/workspace
  2020-01-30 08:19:38.313981 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:114 :   local 
build_modules=False
  2020-01-30 08:19:38.315741 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:115 :   local prefix=/usr
  2020-01-30 08:19:38.317353 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:116 :   local 
localstatedir=/var
  2020-01-30 08:19:38.318879 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:118 :   '[' -n /usr ']'
  2020-01-30 08:19:38.320441 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:119 :   prefix=--prefix=/usr
  2020-01-30 08:19:38.322006 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:122 :   '[' -n /var ']'
  2020-01-30 08:19:38.323614 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:123 :   
localstatedir=--localstatedir=/var
  2020-01-30 08:19:38.325395 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:compile_ovs:126 :   
prepare_for_compilation False
  2020-01-30 08:19:38.327128 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:prepare_for_compilation:37 :   local 
build_modules=False
  2020-01-30 08:19:38.328890 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:prepare_for_compilation:38 :   
OVS_DIR=/opt/stack/new/ovs
  2020-01-30 08:19:38.330435 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:prepare_for_compilation:40 :   '[' '!' 
-d /opt/stack/new/ovs ']'
  2020-01-30 08:19:38.332150 | primary | + 
/opt/stack/new/neutron/devstack/lib/ovs:prepare_for_compilation:42 :   
git_timed clone https://github.com/openvswitch/ovs.git /opt/stack/new/ovs
  2020-01-30 08:19:38.333894 | primary | + functions-common:git_timed:616   
:   local count=0
  2020-01-30 08:19:38.335601 | primary | + functions-common:git_timed:617   
:   local timeout=0
  2020-01-30 08:19:38.337175 | primary | + functions-common:git_timed:619   
:   [[ -n 0 ]]
  2020-01-30 08:19:38.338806 | primary | + functions-common:git_timed:620   
:   timeout=0
  2020-01-30 08:19:38.340764 | primary | + functions-common:git_timed:623   
:   time_start git_timed
  2020-01-30 08:19:38.343034 | primary | + functions-common:time_start:2316 
:   local name=git_timed
  2020-01-30 08:19:38.344722 | primary | + functions-common:time_start:2317 
:   local start_time=
  2020-01-30 08:19:38.346523 | primary | + functions-common:time_start:2318 
:   [[ -n '' ]]
  2020-01-30 08:19:38.349859 | primary | ++ functions-common:time_start:2321
 :   date +%s%3N
  2020-01-30 08:19:38.351868 | primary | + functions-common:time_start:2321 
:   _TIME_START[$name]=1580372378349
  2020-01-30 08:19:38.353573 | primary | + functions-common:git_timed:624   
:   timeout -s SIGINT 0 git clone https://github.com/openvswitch/ovs.git 
/opt/stack/new/ovs
  2020-01-30 08:19:38.356044 | primary | fatal: could not create work tree dir 
'/opt/stack/new/ovs': Permission denied
  2020-01-30 08:19:38.358795 | primary | + functions-common:git_timed:627   
:   [[ 128 -ne 124 ]]
  2020-01-30 08:19:38.360353 | primary | + functions-common:git_timed:628   
:   die 628 'git call failed: [git clone' 
https://github.com/openvswitch/ovs.git '/opt/stack/new/ovs]'
  2020-01-30 08:19:38.361878 | primary | + functions-common:die:193 
:   local exitcode=0
  2020-01-30 08:19:38.364694 | primary | + functions-common:die:194 
:   set +o xtrace
  2020-01-30 08:19:38.364740 | primary | [Call Trace]
  2020-01-30 08:19:38.364790 | primary | 
/opt/stack/new/neutron-vpnaas/neutron_vpnaas/tests/contrib/gate_hook.sh:28:configure_host_for_vpn_func_testing
  2020-01-30 08:19:38.364815 | primary | 
/opt/stack/new/neutron-vpnaas/tools/configure_for_vpn_func_testing.sh:46:configure_host_for_func_testing
  2020-01-30 08:19:38.364835 | primary | 
/opt/stack/new/neutron/tools/configure_for_func_testing.sh:298:_install_base_deps
  2020-01-30 08:19:38.364860 | primary | 
/opt/stack/new/neutron/tools/configure_for_func_testing.sh:111:compile_ovs
  2020-01-30 08:19:38.364880 | primary | 

[Yahoo-eng-team] [Bug 1861469] [NEW] [VPNaaS]: functional gate failed

2020-01-30 Thread Dongcan Ye
://github.com/openvswitch/ovs.git /opt/stack/new/ovs]
2020-01-30 08:19:38.367527 | primary | 
/opt/stack/new/devstack/functions-common: line 239: /opt/stack/logs/error.log: 
Permission denied
2020-01-30 08:19:38.368362 | primary | ERROR: the main setup script run by this 
job failed - exit code: 1
2020-01-30 08:19:38.368407 | primary | please look at the relevant log 
files to determine the root cause
2020-01-30 08:19:38.368430 | primary | Running devstack worlddump.py
2020-01-30 08:19:39.635594 | primary | Cleaning up host
2020-01-30 08:19:39.635666 | primary | ... this takes 3 - 4 minutes (logs at 
logs/devstack-gate-cleanup-host.txt.gz)
2020-01-30 08:20:09.592413 | primary |  [WARNING]: No hosts matched, nothing to 
do
2020-01-30 08:20:14.128533 | primary | Done.
2020-01-30 08:20:15.931191 | primary | *** FAILED with status: 1

Patch [1] builds OVS from source instead of using the openvswitch package
from the OS repository. DevStack's install_package runs with root privilege,
while the git clone of the OVS source does not, which is why it fails with
Permission denied.

[1] https://review.opendev.org/#/c/697440/

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861469


[Yahoo-eng-team] [Bug 1853223] [NEW] [VPNaaS]: Python3 RuntimeError: dictionary changed size during iteration

2019-11-19 Thread Dongcan Ye
Public bug reported:

Python 3 runtime error in the driver's report_status:
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall [-] Fixed interval 
looping call 
'neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.IPsecDriver.report_status'
 failed: RuntimeError: dictionary changed size during iteration
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall Traceback (most recent 
call last):
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.6/dist-packages/oslo_service/loopingcall.py", line 150, 
in _run_loop
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.6/dist-packages/oslo_log/helpers.py", line 67, in 
wrapper
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall return 
method(*args, **kwargs)
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 1067, in report_status
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall for process_id, 
process in self.processes.items():
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall RuntimeError: 
dictionary changed size during iteration
Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall

Please search for "RuntimeError" in the logs [1][2]:

[1]
https://f68629e21ed230feb603-193839948e82df3f7b6031d1afea5d13.ssl.cf5.rackcdn.com/659063/8/check
/neutron-vpnaas-tempest/4e5241b/controller/logs/screen-q-l3.txt.gz

[2]
https://c808c0465a7aa7421965-eb83075bad77f107ed9b57803fd20c1f.ssl.cf2.rackcdn.com/693965/1/check
/neutron-vpnaas-tempest/3dde8e5/controller/logs/screen-q-l3.txt.gz
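
The usual Python 3 fix for this pattern is to iterate over a snapshot of the
items, so another greenthread deleting a process mid-loop cannot invalidate
the iterator. A minimal illustration with made-up data, not the actual
ipsec.py code:

```python
# Sample process table; on Python 3, dict.items() is a live view, so mutating
# the dict while looping over it raises RuntimeError. Copying the items into
# a list first makes the loop immune to concurrent deletions.
processes = {'uuid-1': 'proc-1', 'uuid-2': 'proc-2', 'uuid-3': 'proc-3'}

reported = []
for process_id, process in list(processes.items()):
    if process_id == 'uuid-1':
        processes.pop('uuid-2', None)  # simulates a concurrent delete
    reported.append(process_id)
```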

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853223

Title:
  [VPNaaS]: Python3 RuntimeError: dictionary changed size during
  iteration

Status in neutron:
  New

Bug description:
  PY3 runtime error in driver report status:
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall [-] Fixed interval 
looping call 
'neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.IPsecDriver.report_status'
 failed: RuntimeError: dictionary changed size during iteration
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall Traceback (most recent 
call last):
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.6/dist-packages/oslo_service/loopingcall.py", line 150, 
in _run_loop
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.6/dist-packages/oslo_log/helpers.py", line 67, in 
wrapper
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall return 
method(*args, **kwargs)
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 1067, in report_status
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall for process_id, 
process in self.processes.items():
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall RuntimeError: 
dictionary changed size during iteration
  Nov 13 03:13:11.221694 ubuntu-bionic-rax-iad-0012769742 
neutron-l3-agent[20209]: ERROR oslo.service.loopingcall

  Please search for "RuntimeError" in the logs [1][2].

  [1]
  
https://f68629e21ed230feb603-193839948e82df3f7b6031d1afea5d13.ssl.cf5.rackcdn.com/659063/8/check
  /neutron-vpnaas-tempest/4e5241b/controller

[Yahoo-eng-team] [Bug 1852516] [NEW] [VPNaaS]: tempest gate failed

2019-11-13 Thread Dongcan Ye
Public bug reported:

Recently, the neutron-vpnaas-tempest gate has been failing in the following scenario tests:
neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in4
neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in6
neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas4in4

See log [1][2]
[1] 
https://d0f89e65b04aff25943d-bfab26b1456f69293167016566bc.ssl.cf5.rackcdn.com/693965/1/check/neutron-vpnaas-tempest/e501f2f/testr_results.html.gz
[2] 
https://736c534ce2f78bb48419-4edda77aff6f00cc876f5cc0df654845.ssl.cf2.rackcdn.com/691216/1/check/neutron-vpnaas-tempest/d673ea7/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852516

Title:
  [VPNaaS]: tempest gate failed

Status in neutron:
  New

Bug description:
  Recently, the neutron-vpnaas-tempest gate has been failing in the following scenario tests:
  neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in4
  neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in6  
  neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas4in4

  See log [1][2]
  [1] 
https://d0f89e65b04aff25943d-bfab26b1456f69293167016566bc.ssl.cf5.rackcdn.com/693965/1/check/neutron-vpnaas-tempest/e501f2f/testr_results.html.gz
  [2] 
https://736c534ce2f78bb48419-4edda77aff6f00cc876f5cc0df654845.ssl.cf2.rackcdn.com/691216/1/check/neutron-vpnaas-tempest/d673ea7/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1845628] [NEW] Check for UEFI support is insufficient

2019-09-27 Thread Dongcan Ye
Public bug reported:

Currently, when we delete an instance, swap a volume or take a snapshot, we 
always clear the NVRAM flag. This is not right: we only check whether the 
host supports UEFI booting for guests (via "_has_uefi_support").
If an instance was not booted with UEFI but the host has UEFI enabled, the 
NVRAM flag is still cleared in the guest's delete_configuration.

IMO, we need to check that both the instance (guest) and the host support
UEFI, e.g. by checking the instance's image_meta.
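A minimal sketch of the check proposed above: clear NVRAM only when the guest itself was booted with UEFI, not merely when the host supports it. The hw_firmware_type image property is modeled after nova's conventions, but this is an illustration, not the actual nova code:

```python
# Clear NVRAM only when both conditions hold: the host supports UEFI
# *and* this particular guest was booted with UEFI.

def _guest_uses_uefi(image_meta_properties):
    # In nova, the image property hw_firmware_type='uefi' marks a UEFI guest.
    return image_meta_properties.get("hw_firmware_type") == "uefi"

def should_clear_nvram(host_supports_uefi, image_meta_properties):
    return host_supports_uefi and _guest_uses_uefi(image_meta_properties)

print(should_clear_nvram(True, {"hw_firmware_type": "uefi"}))  # True
print(should_clear_nvram(True, {}))  # False: host-only support is not enough
```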

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1845628

Title:
  Check for UEFI support is insufficient

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, when we delete an instance, swap a volume or take a snapshot, we 
always clear the NVRAM flag. This is not right: we only check whether the 
host supports UEFI booting for guests (via "_has_uefi_support").
  If an instance was not booted with UEFI but the host has UEFI enabled, the 
NVRAM flag is still cleared in the guest's delete_configuration.

  IMO, we need to check that both the instance (guest) and the host support
  UEFI, e.g. by checking the instance's image_meta.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1845628/+subscriptions



[Yahoo-eng-team] [Bug 1841877] [NEW] notification message before reschedule

2019-08-28 Thread Dongcan Ye
Public bug reported:

Nova sends notifications to Ceilometer if the notification mechanism is enabled.
We use HTTP as the Ceilometer data publisher, so the notifications are 
forwarded to the user-facing web frontend.

In some situations a compute node has insufficient resources to create an 
instance, and we get a create.error message before RescheduledException is 
raised, even though the instance is then rescheduled to another compute node 
and built successfully.

This is confusing and unfriendly for users. I am not sure we really
need to send a notification before rescheduling the instance.

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: Opinion

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1841877

Title:
  notification message before reschedule

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Nova sends notifications to Ceilometer if the notification mechanism is enabled.
  We use HTTP as the Ceilometer data publisher, so the notifications are 
forwarded to the user-facing web frontend.

  In some situations a compute node has insufficient resources to create an 
instance, and we get a create.error message before RescheduledException is 
raised, even though the instance is then rescheduled to another compute node 
and built successfully.

  This is confusing and unfriendly for users. I am not sure we really
  need to send a notification before rescheduling the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1841877/+subscriptions



[Yahoo-eng-team] [Bug 1840139] [NEW] Libvirt: Wrong usage for mem_stats_period_seconds

2019-08-14 Thread Dongcan Ye
Public bug reported:

From the code of _guest_add_memory_balloon in [1], if mem_stats_period_seconds 
is set to 0 or a negative value, the memory balloon device should be disabled.
Can mem_stats_period_seconds really control whether the virtual memory balloon 
device is added? Shouldn't it only control the memory usage statistics?

But when I test with mem_stats_period_seconds=0, the virtual memory
balloon device is still added.

[1]
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
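The behavior the report expects can be sketched as follows: a non-positive mem_stats_period_seconds suppresses the balloon device entirely, not only the stats polling. This is illustrative, not nova's actual _guest_add_memory_balloon:

```python
# Expected semantics: mem_stats_period_seconds <= 0 means no virtio
# memballoon device is added to the guest at all.

def add_memory_balloon(guest_devices, mem_stats_period_seconds,
                       virt_type="kvm"):
    if virt_type not in ("kvm", "qemu"):
        # Other hypervisors get no virtio balloon here.
        return
    if mem_stats_period_seconds <= 0:
        # Skip the device when the period is 0 or negative.
        return
    guest_devices.append({"device": "memballoon",
                          "stats_period": mem_stats_period_seconds})

devices = []
add_memory_balloon(devices, 0)
print(len(devices))  # 0 -- balloon disabled
add_memory_balloon(devices, 10)
print(len(devices))  # 1 -- balloon added with a 10s stats period
```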

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840139

Title:
  Libvirt: Wrong usage for mem_stats_period_seconds

Status in OpenStack Compute (nova):
  New

Bug description:
  From the code of _guest_add_memory_balloon in [1], if mem_stats_period_seconds 
is set to 0 or a negative value, the memory balloon device should be disabled.
  Can mem_stats_period_seconds really control whether the virtual memory balloon 
device is added? Shouldn't it only control the memory usage statistics?

  But when I test with mem_stats_period_seconds=0, the virtual memory
  balloon device is still added.

  [1]
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840139/+subscriptions



[Yahoo-eng-team] [Bug 1828721] [NEW] [VPNaaS]: Check restart_check_config enabled

2019-05-12 Thread Dongcan Ye
Public bug reported:

The ipsec.conf.old and ipsec.secrets.old files are only generated
on pluto start, so on restart we should first check the
restart_check_config option, and only then check whether ipsec.conf
and ipsec.secrets have changed.
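The ordering asked for above can be sketched as follows: consult restart_check_config first, and only then diff the current files against their *.old copies. Illustrative only, not the actual neutron-vpnaas code:

```python
import filecmp
import os
import tempfile

def should_restart(restart_check_config, conf_path, secrets_path):
    """Return True when the ipsec process needs a restart."""
    if not restart_check_config:
        # Option disabled: restart unconditionally and never touch the
        # *.old files (they only exist after a pluto start anyway).
        return True
    for current in (conf_path, secrets_path):
        old = current + ".old"
        # A missing or differing *.old copy means the config changed.
        if not os.path.exists(old) or not filecmp.cmp(current, old,
                                                      shallow=False):
            return True
    return False

# Demo: identical configs with the option enabled -> no restart needed.
workdir = tempfile.mkdtemp()
conf = os.path.join(workdir, "ipsec.conf")
secrets = os.path.join(workdir, "ipsec.secrets")
for path in (conf, secrets, conf + ".old", secrets + ".old"):
    with open(path, "w") as f:
        f.write("cfg")
print(should_restart(False, conf, secrets))  # True (option disabled)
print(should_restart(True, conf, secrets))   # False (nothing changed)
```

With restart_check_config disabled, the *.old files are never consulted, so their absence after a restart can no longer cause a spurious result.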

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: In Progress


** Tags: vpnaas

** Tags added: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828721

Title:
  [VPNaaS]: Check restart_check_config enabled

Status in neutron:
  In Progress

Bug description:
  The ipsec.conf.old and ipsec.secrets.old files are only generated
  on pluto start, so on restart we should first check the
  restart_check_config option, and only then check whether ipsec.conf
  and ipsec.secrets have changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1828721/+subscriptions



[Yahoo-eng-team] [Bug 1826697] [NEW] [VPNaaS]: Strongswan functional gate failed

2019-04-27 Thread Dongcan Ye
snat_rules
2019-04-28 02:27:56.144621 | primary | 2019-04-28 02:27:56.143 | if 
self.iptables_manager.random_fully:
2019-04-28 02:27:56.147378 | primary | 2019-04-28 02:27:56.146 |   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py", line 494, in 
random_fully
2019-04-28 02:27:56.155106 | primary | 2019-04-28 02:27:56.149 | 
version = self._get_version()
2019-04-28 02:27:56.155304 | primary | 2019-04-28 02:27:56.151 |   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py", line 485, in 
_get_version
2019-04-28 02:27:56.155421 | primary | 2019-04-28 02:27:56.152 | 
version = str(self.execute(args, run_as_root=True).split()[1][1:])
2019-04-28 02:27:56.160267 | primary | 2019-04-28 02:27:56.159 |   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 147, in execute
2019-04-28 02:27:56.162698 | primary | 2019-04-28 02:27:56.162 | 
returncode=returncode)
2019-04-28 02:27:56.165290 | primary | 2019-04-28 02:27:56.164 | 
neutron_lib.exceptions.ProcessExecutionError: Exit code: 99; Stdin: ; Stdout: ; 
Stderr: 
/opt/stack/new/neutron-vpnaas/.tox/dsvm-functional-sswan/bin/neutron-rootwrap: 
Unauthorized command: iptables --version (no filter matched)

[1] https://review.opendev.org/#/c/636473/
[2] 
http://logs.openstack.org/98/653898/1/check/neutron-vpnaas-dsvm-functional-sswan/c2e8515/job-output.txt.gz
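The "Unauthorized command: iptables --version (no filter matched)" error means the rootwrap filters used by the functional job have no entry allowing the new call. A fix along these lines would be expected; the filter file name and location here are illustrative, not taken from the neutron-vpnaas tree:

```ini
# e.g. a rootwrap .filters file loaded by the dsvm-functional-sswan job
[Filters]
iptables: CommandFilter, iptables, root
```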

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1826697

Title:
  [VPNaaS]: Strongswan functional gate failed

Status in neutron:
  New

Bug description:
  Since commit 30f35e08f92e5262e7a9108684da048d11402b07 [1] added an "iptables 
--version" call to check the version, the neutron-vpnaas strongswan functional 
gate has been failing as follows:

  2019-04-28 02:27:56.052368 | primary | 2019-04-28 02:27:56.051 | {5} 
neutron_vpnaas.tests.functional.strongswan.test_strongswan_driver.TestStrongSwanScenario.test_strongswan_connection_with_non_ascii_vpnservice_name
 [18.660131s] ... FAILED
  2019-04-28 02:27:56.054781 | primary | 2019-04-28 02:27:56.054 |
  2019-04-28 02:27:56.057326 | primary | 2019-04-28 02:27:56.056 | Captured 
traceback:
  2019-04-28 02:27:56.061622 | primary | 2019-04-28 02:27:56.060 | 
~~~
  2019-04-28 02:27:56.063661 | primary | 2019-04-28 02:27:56.063 | 
Traceback (most recent call last):
  2019-04-28 02:27:56.066004 | primary | 2019-04-28 02:27:56.065 |   File 
"/opt/stack/new/neutron/neutron/tests/base.py", line 176, in func
  2019-04-28 02:27:56.068372 | primary | 2019-04-28 02:27:56.067 | 
return f(self, *args, **kwargs)
  2019-04-28 02:27:56.076076 | primary | 2019-04-28 02:27:56.070 |   File 
"neutron_vpnaas/tests/functional/strongswan/test_strongswan_driver.py", line 
258, in test_strongswan_connection_with_non_ascii_vpnservice_name
  2019-04-28 02:27:56.076220 | primary | 2019-04-28 02:27:56.072 | 
[self.private_nets[1]])
  2019-04-28 02:27:56.076350 | primary | 2019-04-28 02:27:56.075 |   File 
"neutron_vpnaas/tests/functional/common/test_scenario.py", line 487, in 
create_site
  2019-04-28 02:27:56.078818 | primary | 2019-04-28 02:27:56.078 | 
site.router = self.create_router(self.agent, site.info)
  2019-04-28 02:27:56.081207 | primary | 2019-04-28 02:27:56.080 |   File 
"neutron_vpnaas/tests/functional/common/test_scenario.py", line 445, in 
create_router
  2019-04-28 02:27:56.084563 | primary | 2019-04-28 02:27:56.083 | 
agent._process_added_router(info)
  2019-04-28 02:27:56.090754 | primary | 2019-04-28 02:27:56.087 |   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 611, in 
_process_added_router
  2019-04-28 02:27:56.093180 | primary | 2019-04-28 02:27:56.092 | 
ri.process()
  2019-04-28 02:27:56.095084 | primary | 2019-04-28 02:27:56.094 |   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 161, in call
  2019-04-28 02:27:56.096768 | primary | 2019-04-28 02:27:56.096 | 
self.logger(e)
  2019-04-28 02:27:56.099836 | primary | 2019-04-28 02:27:56.099 |   File 
"/opt/stack/new/neutron-vpnaas/.tox/dsvm-functional-sswan/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  2019-04-28 02:27:56.105455 | primary | 2019-04-28 02:27:56.104 | 
self.force_reraise()
  2019-04-28 02:27:56.107435 | primary | 2019-04-28 02:27:56.106 |   File 
"/opt/stack/new/neutron-vpnaas/.tox/dsvm-functional-sswan/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  2019-04-28 02:27:56.110099 | primary | 2

[Yahoo-eng-team] [Bug 1825456] [NEW] [VPNaaS] Add missing unit test for vpn agent

2019-04-18 Thread Dongcan Ye
Public bug reported:

The code in neutron-vpnaas [1] is missing unit tests. We can add them.

[1] https://github.com/openstack/neutron-
vpnaas/blob/master/neutron_vpnaas/services/vpn/agent.py

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1825456

Title:
  [VPNaaS] Add missing unit test for vpn agent

Status in neutron:
  New

Bug description:
  The code in neutron-vpnaas [1] is missing unit tests. We can add them.

  [1] https://github.com/openstack/neutron-
  vpnaas/blob/master/neutron_vpnaas/services/vpn/agent.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1825456/+subscriptions



[Yahoo-eng-team] [Bug 1822921] [NEW] [VPNaaS]: Skip check process status for HA backup routers

2019-04-02 Thread Dongcan Ye
Public bug reported:

Since we disable the vpn processes on backup routers, we should skip the
status check for those processes before reporting status to the neutron server.
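The proposed skip can be sketched as follows; the names and data shapes here are illustrative, not the neutron-vpnaas API:

```python
# When reporting IPsec process status, ignore processes that belong to
# HA backup routers: their vpn processes are deliberately disabled, so
# reporting them as DOWN to neutron-server would be misleading.

def collect_status(processes, backup_router_ids):
    status = {}
    for process_id, process in list(processes.items()):
        if process_id in backup_router_ids:
            # Backup router: pluto is intentionally stopped; skip it.
            continue
        status[process_id] = "ACTIVE" if process["active"] else "DOWN"
    return status

procs = {"r1": {"active": True}, "r2": {"active": False}}
print(collect_status(procs, backup_router_ids={"r2"}))  # {'r1': 'ACTIVE'}
```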

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822921

Title:
  [VPNaaS]:  Skip check process status for HA backup routers

Status in neutron:
  In Progress

Bug description:
  Since we disable the vpn processes on backup routers, we should skip the
  status check for those processes before reporting status to the neutron server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822921/+subscriptions



[Yahoo-eng-team] [Bug 1813255] [NEW] nova reschedule instance while updating instance task state

2019-01-24 Thread Dongcan Ye
.py", line 138, in wrapper
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] return f(*args, **kwargs)
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 300, in 
wrapped
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] return f(context, *args, 
**kwargs)
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2723, in 
instance_update_and_get_original
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] context, instance_uuid, 
values, expected, original=instance_ref))
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2859, in 
_instance_update
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] raise exc(**exc_props)
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] UnexpectedTaskStateError: 
Conflict updating instance 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac. Expected: 
{'task_state': [u'block_device_mapping']}. Actual: {'task_state': u'spawning'}
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]
localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR nova.compute.manager 
[instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac]
localhost nova-compute:2019-01-24 16:38:53.500 2427 INFO nova.compute.manager 
[req-2b6ee2e1-6f8e-4bc0-80d7-62e41ce02619 28be004cfb19402597149e00a9b4d813 
ab9ba89406a64c76b8240da11f4b52d3 - - -] [instance: 
0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] Terminating instance
localhost nova-compute:2019-01-24 16:38:53.500 2427 INFO nova.compute.manager 
[req-2b6ee2e1-6f8e-4bc0-80d7-62e41ce02619 28be004cfb19402597149e00a9b4d813 
ab9ba89406a64c76b8240da11f4b52d3 - - -] [instance: 
0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] Terminating instance

reschedule node nova-compute log:
localhost nova-compute:2019-01-24 16:39:19.466 28548 INFO 
nova.virt.libvirt.driver [-] [instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] 
Instance spawned successfully.
localhost nova-compute:2019-01-24 16:39:19.466 28548 INFO 
nova.virt.libvirt.driver [-] [instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] 
Instance spawned successfully.
localhost nova-compute:2019-01-24 16:39:19.467 28548 INFO nova.compute.manager 
[req-2b6ee2e1-6f8e-4bc0-80d7-62e41ce02619 28be004cfb19402597149e00a9b4d813 
ab9ba89406a64c76b8240da11f4b52d3 - - -] [instance: 
0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] Took 6.32 seconds to spawn the instance 
on the hypervisor.
localhost nova-compute:2019-01-24 16:39:19.467 28548 INFO nova.compute.manager 
[req-2b6ee2e1-6f8e-4bc0-80d7-62e41ce02619 28be004cfb19402597149e00a9b4d813 
ab9ba89406a64c76b8240da11f4b52d3 - - -] [instance: 
0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] Took 6.32 seconds to spawn the instance 
on the hypervisor.

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1813255

Title:
  nova reschedule instance while updating instance task state

Status in OpenStack Compute (nova):
  New

Bug description:
  While booting an instance, the save of the expected task_state 
"block_device_mapping" can conflict with the actual task_state "spawning". 
Nova then raises RescheduledException and reschedules the instance to another host.
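The conflict in the traceback is nova's compare-and-swap on task_state: the save passes an expected task_state, and the DB layer raises when the stored value differs. A toy model of this mechanism (illustrative classes, not nova's actual code):

```python
# instance.save() only succeeds when the stored task_state matches the
# caller's expectation; otherwise it raises, and the compute manager
# treats the failure as a reason to reschedule.

class UnexpectedTaskStateError(Exception):
    pass

class Instance:
    def __init__(self, task_state):
        self.task_state = task_state

    def save(self, new_state, expected_task_state):
        if self.task_state not in expected_task_state:
            raise UnexpectedTaskStateError(
                "Expected: %s. Actual: %s" % (expected_task_state,
                                              self.task_state))
        self.task_state = new_state

# The instance's task_state already advanced to 'spawning' concurrently.
inst = Instance(task_state="spawning")
try:
    inst.save("spawning", expected_task_state=["block_device_mapping"])
except UnexpectedTaskStateError as exc:
    print(exc)  # the same conflict shown in the log above
```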
   
  original node nova-compute log:
  localhost nova-compute:2019-01-24 16:38:52.882 2427 ERROR 
nova.compute.manager [instance: 0b6fddfa-27e1-4e71-83dd-c5cd9035dbac] Traceback 
(most recent call last):
  localhost nova-comp

[Yahoo-eng-team] [Bug 1786213] [NEW] Metering agent: failed to run ip netns command

2018-08-09 Thread Dongcan Ye
odic_task if 
ip_lib.network_namespace_exists(self.ns_name):
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 1057, in 
network_namespace_exists
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task output = 
list_network_namespaces(**kwargs)
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 1046, in 
list_network_namespaces
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task return 
privileged.list_netns(**kwargs)
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in 
_wrap
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task self.start()
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in 
start
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task channel = 
daemon.RootwrapClientChannel(context=self)
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 327, in __init__
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task raise 
FailedToDropPrivileges(msg)
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
FailedToDropPrivileges: privsep helper command exited non-zero (1)
2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task

It seems the setup_privsep() call is missing from the metering agent's init.
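A hedged sketch of the proposed fix: call setup_privsep() during agent startup, as the other neutron agents do, so the privsep daemon is launched through the configured root helper instead of bare sudo. The import path follows neutron.common.config as of this report and may differ in other releases; this is not a verbatim patch.

```python
import sys

from neutron.common import config as common_config

def main():
    common_config.init(sys.argv[1:])
    common_config.setup_logging()
    common_config.setup_privsep()  # the call the report says is missing
    # ... start the MeteringAgentWithStateReport service as before ...
```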

** Affects: neutron
     Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786213

Title:
  Metering agent: failed to run ip netns command

Status in neutron:
  New

Bug description:
  When neutron-metering-agent starts, it fails as follows:
  2018-08-09 03:15:08.504 10637 INFO oslo.privsep.daemon 
[req-6dd24e96-e82b-49a5-9d33-80e1ff572502 - - - - -] Running privsep helper: 
['sudo', 'privsep-helper', '--config-file', 
'/usr/share/neutron/neutron-dist.conf', '--config-file', 
'/etc/neutron/neutron.conf', '--config-file', 
'/etc/neutron/metering_agent.ini', '--config-dir', 
'/etc/neutron/conf.d/neutron-metering-agent', '--privsep_context', 
'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmp_JYWYs/privsep.sock']
  2018-08-09 03:15:08.525 10637 WARNING oslo.privsep.daemon [-] privsep log:
  2018-08-09 03:15:08.526 10637 WARNING oslo.privsep.daemon [-] privsep log: We 
trust you have received the usual lecture from the local System
  2018-08-09 03:15:08.527 10637 WARNING oslo.privsep.daemon [-] privsep log: 
Administrator. It usually boils down to these three things:
  2018-08-09 03:15:08.527 10637 WARNING oslo.privsep.daemon [-] privsep log:
  2018-08-09 03:15:08.528 10637 WARNING oslo.privsep.daemon [-] privsep log:
 #1) Respect the privacy of others.
  2018-08-09 03:15:08.528 10637 WARNING oslo.privsep.daemon [-] privsep log:
 #2) Think before you type.
  2018-08-09 03:15:08.528 10637 WARNING oslo.privsep.daemon [-] privsep log:
 #3) With great power comes great responsibility.
  2018-08-09 03:15:08.529 10637 WARNING oslo.privsep.daemon [-] privsep log:
  2018-08-09 03:15:08.531 10637 WARNING oslo.privsep.daemon [-] privsep log: 
sudo: no tty present and no askpass program specified
  2018-08-09 03:15:08.544 10637 CRITICAL oslo.privsep.daemon 
[req-6dd24e96-e82b-49a5-9d33-80e1ff572502 - - - - -] privsep helper command 
exited non-zero (1)

  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
[req-6dd24e96-e82b-49a5-9d33-80e1ff572502 - - - - -] Error during 
MeteringAgentWithStateReport._sync_routers_task: FailedToDropPrivileges: 
privsep helper command exited non-zero (1)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task Traceback 
(most recent call last):
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task task(self, 
context)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/services/metering/agents/metering_agent.py",
 line 189, in _sync_routers_task
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
self._update_routers(context, routers)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/services/metering/agents/metering_agent.py",
 line 212, in _up

[Yahoo-eng-team] [Bug 1786169] [NEW] DVR: Missing fixed_ips info for IPv6 subnets

2018-08-09 Thread Dongcan Ye
Public bug reported:

Reproduce Steps:

preconditions: DVR and DVR_SNAT enabled.

1. Create router, network, IPv4 subnet 
# neutron router-create test_router
# neutron net-create test_net
# neutron subnet-create test_net 40.40.40.0/24 --name test_v4_subnet

2. Create two SLAAC-enabled subnets
# neutron subnet-create --ip-version 6 --ipv6_address_mode=dhcpv6-stateless 
--ipv6_ra_mode=dhcpv6-stateless test_net fdf8:f53b:82e4::51/64
# neutron subnet-create --ip-version 6 --ipv6_address_mode=slaac 
--ipv6_ra_mode=slaac test_net fdf8:f84c:82e4::51/64

3. Attach those subnets(one v4 subnet and two v6 subnets) to router
# neutron router-interface-add test_router test_v4_subnet
# neutron router-interface-add test_router V6_SUBNET1_ID
# neutron router-interface-add test_router V6_SUBNET2_ID

4. Then set gateway for the router.
# neutron router-gateway-set test_router EXTERNAL_NETWORK

The CSNAT router interface for IPv6 gets a fixed IP from only one of the
IPv6 subnets.


If we instead set the gateway for the router first and then attach the 
interfaces, the CSNAT router interface for IPv6 gets a fixed IP from both 
IPv6 subnets. In that case, the csnat IPv6 addresses are updated in 
_update_snat_v6_addrs_after_intf_update [1] after each internal interface is added.

So we also need to handle multiple IPv6 subnets correctly in the first case.

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py
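The proposed handling can be sketched as follows: when building the csnat port's fixed_ips, request an IP on every attached IPv6 subnet, not just the first one found. Data shapes and names here are illustrative, not neutron's internals:

```python
# Collect one fixed-IP request per attached IPv6 subnet so the csnat
# port covers all of them, matching the gateway-set-first behavior.

def csnat_fixed_ips(router_interfaces):
    fixed_ips = []
    for port in router_interfaces:
        for ip in port["fixed_ips"]:
            if ip["ip_version"] == 6:
                fixed_ips.append({"subnet_id": ip["subnet_id"]})
    return fixed_ips

ports = [{"fixed_ips": [{"subnet_id": "v4", "ip_version": 4}]},
         {"fixed_ips": [{"subnet_id": "v6-1", "ip_version": 6}]},
         {"fixed_ips": [{"subnet_id": "v6-2", "ip_version": 6}]}]
print(csnat_fixed_ips(ports))  # both IPv6 subnets are included
```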

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786169

Title:
  DVR: Missing fixed_ips info for IPv6 subnets

Status in neutron:
  New

Bug description:
  Reproduce Steps:

  preconditions: DVR and DVR_SNAT enabled.

  1. Create router, network, IPv4 subnet 
  # neutron router-create test_router
  # neutron net-create test_net
  # neutron subnet-create test_net 40.40.40.0/24 --name test_v4_subnet

  2. Create two SLAAC-enabled subnets
  # neutron subnet-create --ip-version 6 --ipv6_address_mode=dhcpv6-stateless 
--ipv6_ra_mode=dhcpv6-stateless test_net fdf8:f53b:82e4::51/64
  # neutron subnet-create --ip-version 6 --ipv6_address_mode=slaac 
--ipv6_ra_mode=slaac test_net fdf8:f84c:82e4::51/64

  3. Attach those subnets(one v4 subnet and two v6 subnets) to router
  # neutron router-interface-add test_router test_v4_subnet
  # neutron router-interface-add test_router V6_SUBNET1_ID
  # neutron router-interface-add test_router V6_SUBNET2_ID

  4. Then set gateway for the router.
  # neutron router-gateway-set test_router EXTERNAL_NETWORK

  The CSNAT router interface for IPv6 gets a fixed IP from only one of the
  IPv6 subnets.

  
  If we instead set the gateway for the router first and then attach the 
interfaces, the CSNAT router interface for IPv6 gets a fixed IP from both 
IPv6 subnets. In that case, the csnat IPv6 addresses are updated in 
_update_snat_v6_addrs_after_intf_update [1] after each internal interface is added.

  So we also need to handle multiple IPv6 subnets correctly in the first case.

  [1]
  https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1786169/+subscriptions



[Yahoo-eng-team] [Bug 1781354] [NEW] VPNaaS: IPsec siteconnection status DOWN while using IKE v2

2018-07-12 Thread Dongcan Ye
Public bug reported:

When using an IKE policy with version v2, the IPsec site connection status
is always DOWN, even though the network traffic is OK.

From the ipsec status output we can see that the IPsec connection is
established:

# ip netns exec snat-a4d93552-c534-4a2c-96f7-c9b0ea918ba7 ipsec whack --ctlbase 
/var/lib/neutron/ipsec/a4d93552-c534-4a2c-96f7-c9b0ea918ba7/var/run/pluto 
--status
000 Total IPsec connections: loaded 3, active 1
000
000 State Information: DDoS cookies not required, Accepting new IKE connections
000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
000 IPsec SAs: total(1), authenticated(1), anonymous(0)
000
000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 (PARENT 
SA established); EVENT_SA_REPLACE in 2364s; newest IPSEC; eroute owner; 
isakmp#1; idle; import:admin initiate
000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" esp.2d6840c8@172.16.2.130 
esp.5d0c4043@172.16.2.123 tun.0@172.16.2.130 tun.0@172.16.2.123 ref=0 
refhim=4294901761 Traffic: ESPin=0B ESPout=0B! ESPmax=0B
000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 (PARENT 
SA established); EVENT_SA_REPLACE in 2574s; newest ISAKMP; isakmp#0; idle; 
import:admin initiate
000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" ref=0 refhim=0 Traffic:
000
000 Bare Shunt list:
000

I think we should also match "PARENT SA" when IKEv2 is used. [1]

[1] https://libreswan.org/wiki/How_to_read_status_output
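As a rough illustration (hypothetical function names; the real parsing lives in the neutron-vpnaas ipsec driver), a status matcher could accept both the IKEv1 and IKEv2 wording reported by libreswan:

```python
import re

# Hypothetical sketch: a connection counts as established if its status
# line reports either an ISAKMP SA (IKEv1) or a PARENT SA (IKEv2).
SA_ESTABLISHED = re.compile(r'(ISAKMP|PARENT) SA established')

def connection_active(status_output, conn_id):
    """Return True if any line for conn_id reports an established SA."""
    for line in status_output.splitlines():
        if conn_id in line and SA_ESTABLISHED.search(line):
            return True
    return False

# An IKEv2 line as seen in the whack --status output above.
ikev2_line = ('000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 '
              'STATE_PARENT_I3 (PARENT SA established); newest IPSEC')
print(connection_active(ikev2_line, 'b42f6ee6'))
```

With a pattern that only looks for "ISAKMP SA established", the IKEv2 line above would never match, which is consistent with the connection staying DOWN while traffic flows.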

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1781354

Title:
  VPNaaS: IPsec siteconnection status DOWN while using IKE v2

Status in neutron:
  New

Bug description:
  When using an IKE policy with version v2, the IPsec site connection
  status is always DOWN, but the network traffic is OK.

  From the ipsec status output we can see that the IPsec connection is
  established:

  # ip netns exec snat-a4d93552-c534-4a2c-96f7-c9b0ea918ba7 ipsec whack 
--ctlbase 
/var/lib/neutron/ipsec/a4d93552-c534-4a2c-96f7-c9b0ea918ba7/var/run/pluto 
--status
  000 Total IPsec connections: loaded 3, active 1
  000
  000 State Information: DDoS cookies not required, Accepting new IKE 
connections
  000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
  000 IPsec SAs: total(1), authenticated(1), anonymous(0)
  000
  000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 
(PARENT SA established); EVENT_SA_REPLACE in 2364s; newest IPSEC; eroute owner; 
isakmp#1; idle; import:admin initiate
  000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" esp.2d6840c8@172.16.2.130 
esp.5d0c4043@172.16.2.123 tun.0@172.16.2.130 tun.0@172.16.2.123 ref=0 
refhim=4294901761 Traffic: ESPin=0B ESPout=0B! ESPmax=0B
  000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 
(PARENT SA established); EVENT_SA_REPLACE in 2574s; newest ISAKMP; isakmp#0; 
idle; import:admin initiate
  000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" ref=0 refhim=0 Traffic:
  000
  000 Bare Shunt list:
  000

  I think we should also match "PARENT SA" when IKEv2 is used. [1]

  [1] https://libreswan.org/wiki/How_to_read_status_output

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1781354/+subscriptions



[Yahoo-eng-team] [Bug 1781156] [NEW] Randomly choose from multiple IPv6 stateless subnets

2018-07-11 Thread Dongcan Ye
Public bug reported:

If we create a dual-stack network with both an IPv4 subnet and two IPv6
stateless subnets, a created port will get one IPv4 and two IPv6 addresses:

# neutron port-create test
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Created a new port:
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   |   
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | unbound   
  |
| binding:vnic_type | normal
  |
| created_at| 2018-07-11T07:58:14Z  
  |
| description   |   
  |
| device_id |   
  |
| device_owner  |   
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {"subnet_id": "36c4f1cc-4043-4d62-a6ee-db5704dc929a", 
"ip_address": "30.20.30.3"}   |
|   | {"subnet_id": "1bb187de-dce2-429f-8e0f-f0e5357d5f49", 
"ip_address": "fdf8:f53b:82e4:0:f816:3eff:fe05:ef6e"} |
|   | {"subnet_id": "b7624e84-b956-41d2-a4d6-ca6a150200fc", 
"ip_address": "fdf8:f53c:82e4:0:f816:3eff:fe05:ef6e"} |
| id| b7bc35ea-8c26-4fab-9ef4-8b009d3cba4a  
  |
| mac_address   | fa:16:3e:05:ef:6e 
  |
| name  |   
  |
| network_id| 75f2f23a-ab63-4560-bd61-92023700840d  
  |
| port_security_enabled | True  
  |
| project_id| 213ea3d880074bbdab84918d70747a20  
  |
| qos_policy_id |   
  |
| revision_number   | 2 
  |
| security_groups   | 04efec82-a93c-4d19-ad52-a34a7e1a558c  
  |
| status| DOWN  
  |
| tags  |   
  |
| tenant_id | 213ea3d880074bbdab84918d70747a20  
  |
| updated_at| 2018-07-11T07:58:15Z  
  |
+---+-+

For IPv4 and stateful IPv6 addresses, we choose one of the subnets from
the network; I think the same behavior is also suitable for IPv6 stateless
addresses.
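A minimal sketch of the proposed behavior, using hypothetical subnet dicts rather than the actual neutron IPAM code: pick one stateless IPv6 subnet at random instead of allocating an address on every one.

```python
import random

# Hypothetical sketch of the proposal: instead of allocating an address
# on every SLAAC/dhcpv6-stateless subnet of the network, pick one of
# them, as is already done for IPv4 and stateful IPv6 subnets.
def pick_stateless_subnet(subnets):
    stateless = [s for s in subnets
                 if s.get('ipv6_address_mode') in ('slaac',
                                                   'dhcpv6-stateless')]
    return random.choice(stateless) if stateless else None

subnets = [
    {'id': 'v4', 'ip_version': 4, 'ipv6_address_mode': None},
    {'id': 'v6-1', 'ip_version': 6, 'ipv6_address_mode': 'dhcpv6-stateless'},
    {'id': 'v6-2', 'ip_version': 6, 'ipv6_address_mode': 'slaac'},
]
print(pick_stateless_subnet(subnets)['id'])
```

Under this scheme the port in the example above would get one IPv4 address and one IPv6 address, from a randomly chosen stateless subnet.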

** Affects: neutron
 Importance: Undecided
     Assignee: Dong

[Yahoo-eng-team] [Bug 1779813] [NEW] Alembic migrations: Only exists contract version in database

2018-07-03 Thread Dongcan Ye
Public bug reported:

I had a clean environment (code version: stable/queens) and ran DB upgrade
operations.

Step 1. First upgrade to Mitaka:
# neutron-db-manage upgrade mitaka

Result: everything runs OK; neutron and its subprojects are upgraded to the
Mitaka version.

Step 2. Then run the following command:
# neutron-db-manage upgrade --expand

Result: it hits an error here; exception:
# neutron-db-manage upgrade --expand
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade (expand) for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53
INFO  [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70
INFO  [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90
INFO  [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4
INFO  [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426
INFO  [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524
INFO  [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b
INFO  [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73
INFO  [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502
INFO  [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee
INFO  [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048
INFO  [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4
INFO  [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37
INFO  [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa
INFO  [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf
INFO  [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4
INFO  [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e
INFO  [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc
INFO  [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d
INFO  [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70
INFO  [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c
INFO  [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c
INFO  [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da
INFO  [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192
INFO  [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9
INFO  [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6
INFO  [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f
INFO  [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee
INFO  [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c
  OK
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Traceback (most recent call last):
  File "/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in 
sys.exit(main())
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 653, in main
return_val |= bool(CONF.command.func(config, CONF.command.name))
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 172, in do_upgrade
run_sanity_checks(config, revision)
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 637, in run_sanity_checks
script_dir.run_env()
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/script/base.py", line 
427, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 
81, in load_python_file
module = load_module_py(module_id, path)
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/compat.py", line 
141, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/networking_infoblox/neutron/db/migration/alembic_migrations/env.py",
 line 88, in 
run_migrations_online()
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/networking_infoblox/neutron/db/migration/alembic_migrations/env.py",
 line 79, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 836, in run_migrations
self.get_context().run_migrations(**kw)
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/runtime/migration.py", 
line 321, in run_migrations
for step in self._migrations_fn(heads, self):
  File 

[Yahoo-eng-team] [Bug 1770549] Re: Set instance's description failed

2018-05-11 Thread Dongcan Ye
We can pass --os-compute-api-version with openstackclient, so this bug
seems invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1770549

Title:
  Set instance's description failed

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Creating an instance with novaclient and with openstackclient yields
different values for the description.
  novaclient: the description is None.
  openstackclient: the description is the instance's name.

  With novaclient the API request version is 2.53, but with openstackclient
it is 2.1.
  If we want to add a description using openstackclient (not implemented
yet), the instance's description will always be set to the instance's name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1770549/+subscriptions



[Yahoo-eng-team] [Bug 1770549] [NEW] Set instance's description failed

2018-05-10 Thread Dongcan Ye
Public bug reported:

Creating an instance with novaclient and with openstackclient yields
different values for the description.
novaclient: the description is None.
openstackclient: the description is the instance's name.

With novaclient the API request version is 2.53, but with openstackclient it
is 2.1.
If we want to add a description using openstackclient (not implemented yet),
the instance's description will always be set to the instance's name.

So I think we do not need to check the API version here.

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1770549

Title:
  Set instance's description failed

Status in OpenStack Compute (nova):
  New

Bug description:
  Creating an instance with novaclient and with openstackclient yields
different values for the description.
  novaclient: the description is None.
  openstackclient: the description is the instance's name.

  With novaclient the API request version is 2.53, but with openstackclient
it is 2.1.
  If we want to add a description using openstackclient (not implemented
yet), the instance's description will always be set to the instance's name.

  So I think we do not need to check the API version here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1770549/+subscriptions



[Yahoo-eng-team] [Bug 1760562] [NEW] [VPNaaS] Check subnets used by IPsec site connection

2018-04-02 Thread Dongcan Ye
Public bug reported:

While removing a router interface, we should check whether the subnet is
used by an IPsec site connection.
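A minimal sketch of the proposed check, with made-up names (the real validation would live in the VPNaaS service plugin and query the database): refuse the interface removal if the subnet is referenced by any IPsec site connection.

```python
# Hypothetical sketch: before removing a router interface, reject the
# operation if the subnet is the local subnet of any IPsec connection.
class SubnetInUseByVPN(Exception):
    pass

def check_subnet_not_in_vpn(subnet_id, ipsec_site_connections):
    """Raise SubnetInUseByVPN if subnet_id backs an IPsec connection."""
    for conn in ipsec_site_connections:
        if subnet_id in conn.get('local_subnet_ids', []):
            raise SubnetInUseByVPN(
                'Subnet %s is used by IPsec site connection %s'
                % (subnet_id, conn['id']))

conns = [{'id': 'conn-1', 'local_subnet_ids': ['subnet-a']}]
check_subnet_not_in_vpn('subnet-b', conns)  # not in use: passes silently
```

Removing 'subnet-a' from the router would raise instead, surfacing the dependency to the caller before the interface disappears.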

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1760562

Title:
  [VPNaaS] Check subnets used by IPsec site connection

Status in neutron:
  In Progress

Bug description:
  While removing a router interface, we should check whether the subnet
  is used by an IPsec site connection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1760562/+subscriptions



[Yahoo-eng-team] [Bug 1751984] Re: Update tags for QoS policy failed

2018-02-26 Thread Dongcan Ye
** Project changed: neutron => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1751984

Title:
  Update tags for QoS policy failed

Status in python-openstackclient:
  New

Bug description:
  $ openstack network qos policy create test
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 16088664-18c8-4722-981e-0d852fd37343 |
  | is_default  | False|
  | name| test |
  | project_id  | 67f5a69dc9ac4983bd5072d2ee302ec3 |
  | rules   | []   |
  | shared  | False|
  +-+--+

  $ curl -g -i -X PUT 
http://20.30.40.5:9696/v2.0/qos/policies/16088664-18c8-4722-981e-0d852fd37343/tags
 -H "User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.18.4 
CPython/2.7.12" -H "Content-Type: application/json" -H "X-Auth-Token: 
gABalNw0b6n88tsaJfUwB33PXMDJmHAcNRYB5Vomf0Q3-d5NQ2esV3bELJWXhM2basoVK7VQ-hAVqBV4cLZ1mmVstY6eD3BZ2gEOycm1sJo5QqvBy3u0-sUBQw1lBeL67eRkeVrgB5JrcCp2sRxLMN65cwtm6Aha8_JuFqIBRoNt-n_gxLg"
 -d '{"tags": ["test1"]}'
  HTTP/1.1 400 Bad Request
  Content-Type: application/json
  Content-Length: 118
  X-Openstack-Request-Id: req-9f011921-3321-4d65-a3a4-cc04a9f87bdb
  Date: Tue, 27 Feb 2018 04:21:21 GMT

  {"NeutronError": {"message": "Invalid input for operation: Invalid
  tags body.", "type": "InvalidInput", "detail": ""}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1751984/+subscriptions



[Yahoo-eng-team] [Bug 1751984] [NEW] Update tags for QoS policy failed

2018-02-26 Thread Dongcan Ye
Public bug reported:

$ openstack network qos policy create test
+-+--+
| Field   | Value|
+-+--+
| description |  |
| id  | 16088664-18c8-4722-981e-0d852fd37343 |
| is_default  | False|
| name| test |
| project_id  | 67f5a69dc9ac4983bd5072d2ee302ec3 |
| rules   | []   |
| shared  | False|
+-+--+

$ curl -g -i -X PUT 
http://20.30.40.5:9696/v2.0/qos/policies/16088664-18c8-4722-981e-0d852fd37343/tags
 -H "User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.18.4 
CPython/2.7.12" -H "Content-Type: application/json" -H "X-Auth-Token: 
gABalNw0b6n88tsaJfUwB33PXMDJmHAcNRYB5Vomf0Q3-d5NQ2esV3bELJWXhM2basoVK7VQ-hAVqBV4cLZ1mmVstY6eD3BZ2gEOycm1sJo5QqvBy3u0-sUBQw1lBeL67eRkeVrgB5JrcCp2sRxLMN65cwtm6Aha8_JuFqIBRoNt-n_gxLg"
 -d '{"tags": ["test1"]}'
HTTP/1.1 400 Bad Request
Content-Type: application/json
Content-Length: 118
X-Openstack-Request-Id: req-9f011921-3321-4d65-a3a4-cc04a9f87bdb
Date: Tue, 27 Feb 2018 04:21:21 GMT

{"NeutronError": {"message": "Invalid input for operation: Invalid tags
body.", "type": "InvalidInput", "detail": ""}}

** Affects: python-openstackclient
 Importance: Undecided
     Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

** Description changed:

- $ openstack floating ip create public
- +-+--+
- | Field   | Value|
- +-+--+
- | created_at  | 2018-02-27T04:31:48Z |
- | description |  |
- | fixed_ip_address| None |
- | floating_ip_address | 172.24.4.7   |
- | floating_network_id | 1164bec7-b79a-4fa1-8498-6839fe6b0b0e |
- | id  | 400cf656-4670-4dee-b44b-b8fffb98afae |
- | name| 172.24.4.7   |
- | port_id | None |
- | project_id  | 67f5a69dc9ac4983bd5072d2ee302ec3 |
- | qos_policy_id   | None |
- | revision_number | 0|
- | router_id   | None |
- | status  | DOWN |
- | subnet_id   | None |
- | updated_at  | 2018-02-27T04:31:48Z |
- +-+--+
+ $ openstack network qos policy create test
+ +-+--+
+ | Field   | Value|
+ +-+--+
+ | description |  |
+ | id  | 16088664-18c8-4722-981e-0d852fd37343 |
+ | is_default  | False|
+ | name| test |
+ | project_id  | 67f5a69dc9ac4983bd5072d2ee302ec3 |
+ | rules   | []   |
+ | shared  | False|
+ +-+--+
  
  $ curl -g -i -X PUT 
http://20.30.40.5:9696/v2.0/qos/policies/16088664-18c8-4722-981e-0d852fd37343/tags
 -H "User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.18.4 
CPython/2.7.12" -H "Content-Type: application/json" -H "X-Auth-Token: 
gABalNw0b6n88tsaJfUwB33PXMDJmHAcNRYB5Vomf0Q3-d5NQ2esV3bELJWXhM2basoVK7VQ-hAVqBV4cLZ1mmVstY6eD3BZ2gEOycm1sJo5QqvBy3u0-sUBQw1lBeL67eRkeVrgB5JrcCp2sRxLMN65cwtm6Aha8_JuFqIBRoNt-n_gxLg"
 -d '{"tags": ["test1"]}'
  HTTP/1.1 400 Bad Request
  Content-Type: application/json
  Content-Length: 118
  X-Openstack-Request-Id: req-9f011921-3321-4d65-a3a4-cc04a9f87bdb
  Date: Tue, 27 Feb 2018 04:21:21 GMT
  
  {"NeutronError": {"message": "Invalid input for operation: Invalid tags
  body.", "type": "InvalidInput", "detail": ""}}

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1751984

Title:
  Update tags for QoS policy failed

Status in python-openstackclient:
  New

Bug description:
  $ openstack network qos policy create test
  +-+--+

[Yahoo-eng-team] [Bug 1736755] Re: unit tests error in FIP creation

2017-12-13 Thread Dongcan Ye
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736755

Title:
  unit tests error in FIP creation

Status in neutron:
  Invalid

Bug description:
  While debugging unit test for [1] using tox command "tox -e venv --
  python -m testtools.run
  
neutron.tests.unit.extensions.test_l3.L3AgentDbIntTestCase.test_l3_agent_routers_query_floatingips".

  I found that the floating IP is created with these params:
  (Pdb) data
  {'floatingip': {'tenant_id': '46f70361-ba71-4bd0-9769-3573fd227c4b', 
'port_id': u'3dca5c4e-dee5-4a9c-afbe-d77494c42223', 'floating_network_id': 
u'2bdc683e-5b0c-46ad-a85a-9fc138e5778f'}}

  But these params raise an error when using neutronclient:
  # neutron floatingip-create --port-id 5b129110-d6ba-4e0f-8d56-3fce7d052213 
public

  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://20.30.40.5:9696/v2.0/floatingips -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}d98e0f7fa754f69fc26bd427b244a335f7f8d97a" -d 
'{"floatingip": {"floating_network_id": "30c2a624-7c53-46a2-a733-b196e7d72b40", 
"port_id": "5b129110-d6ba-4e0f-8d56-3fce7d052213"}}'
  DEBUG: keystoneauth.session RESP: [400] Content-Type: application/json 
Content-Length: 147 X-Openstack-Request-Id: 
req-9998bac0-3f87-4db3-98a3-9c98789d275b Date: Wed, 06 Dec 2017 15:02:58 GMT 
Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Invalid input for operation: IP 
allocation requires subnet_id or ip_address.", "type": "InvalidInput", 
"detail": ""}}

  Function floatingip_with_assoc in [2] creates a FIP with only the FIP
  network and a private port, so I think lots of UTs need to be
  amended.

  
  [1] https://review.openstack.org/#/c/521707/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/extensions/test_l3.py#L484

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736755/+subscriptions



[Yahoo-eng-team] [Bug 1736755] [NEW] unit tests error in FIP creation

2017-12-06 Thread Dongcan Ye
Public bug reported:

While debugging unit test for [1] using tox command "tox -e venv --
python -m testtools.run
neutron.tests.unit.extensions.test_l3.L3AgentDbIntTestCase.test_l3_agent_routers_query_floatingips".

I found that the floating IP is created with these params:
(Pdb) data
{'floatingip': {'tenant_id': '46f70361-ba71-4bd0-9769-3573fd227c4b', 'port_id': 
u'3dca5c4e-dee5-4a9c-afbe-d77494c42223', 'floating_network_id': 
u'2bdc683e-5b0c-46ad-a85a-9fc138e5778f'}}

But these params raise an error when using neutronclient:
# neutron floatingip-create --port-id 5b129110-d6ba-4e0f-8d56-3fce7d052213 
public

DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://20.30.40.5:9696/v2.0/floatingips -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}d98e0f7fa754f69fc26bd427b244a335f7f8d97a" -d 
'{"floatingip": {"floating_network_id": "30c2a624-7c53-46a2-a733-b196e7d72b40", 
"port_id": "5b129110-d6ba-4e0f-8d56-3fce7d052213"}}'
DEBUG: keystoneauth.session RESP: [400] Content-Type: application/json 
Content-Length: 147 X-Openstack-Request-Id: 
req-9998bac0-3f87-4db3-98a3-9c98789d275b Date: Wed, 06 Dec 2017 15:02:58 GMT 
Connection: keep-alive 
RESP BODY: {"NeutronError": {"message": "Invalid input for operation: IP 
allocation requires subnet_id or ip_address.", "type": "InvalidInput", 
"detail": ""}}

Function floatingip_with_assoc in [2] creates a FIP with only the FIP
network and a private port, so I think lots of UTs need to be amended.


[1] https://review.openstack.org/#/c/521707/
[2] 
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/extensions/test_l3.py#L484

** Affects: neutron
     Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: unittest

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736755

Title:
  unit tests error in FIP creation

Status in neutron:
  New

Bug description:
  While debugging unit test for [1] using tox command "tox -e venv --
  python -m testtools.run
  
neutron.tests.unit.extensions.test_l3.L3AgentDbIntTestCase.test_l3_agent_routers_query_floatingips".

  I found that the floating IP is created with these params:
  (Pdb) data
  {'floatingip': {'tenant_id': '46f70361-ba71-4bd0-9769-3573fd227c4b', 
'port_id': u'3dca5c4e-dee5-4a9c-afbe-d77494c42223', 'floating_network_id': 
u'2bdc683e-5b0c-46ad-a85a-9fc138e5778f'}}

  But these params raise an error when using neutronclient:
  # neutron floatingip-create --port-id 5b129110-d6ba-4e0f-8d56-3fce7d052213 
public

  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://20.30.40.5:9696/v2.0/floatingips -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}d98e0f7fa754f69fc26bd427b244a335f7f8d97a" -d 
'{"floatingip": {"floating_network_id": "30c2a624-7c53-46a2-a733-b196e7d72b40", 
"port_id": "5b129110-d6ba-4e0f-8d56-3fce7d052213"}}'
  DEBUG: keystoneauth.session RESP: [400] Content-Type: application/json 
Content-Length: 147 X-Openstack-Request-Id: 
req-9998bac0-3f87-4db3-98a3-9c98789d275b Date: Wed, 06 Dec 2017 15:02:58 GMT 
Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Invalid input for operation: IP 
allocation requires subnet_id or ip_address.", "type": "InvalidInput", 
"detail": ""}}

  Function floatingip_with_assoc in [2] creates a FIP with only the FIP
  network and a private port, so I think lots of UTs need to be
  amended.

  
  [1] https://review.openstack.org/#/c/521707/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/extensions/test_l3.py#L484

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736755/+subscriptions



[Yahoo-eng-team] [Bug 1732890] Re: floatingip-create:Ignore floating_ip_address when using floating_ip_address and subnet

2017-11-20 Thread Dongcan Ye
** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732890

Title:
  floatingip-create:Ignore floating_ip_address when using
  floating_ip_address and subnet

Status in neutron:
  Confirmed

Bug description:
  When I created a floating IP with both "floating_ip_address" and "subnet",
it ignored "floating_ip_address".
  $neutron floatingip-create --floating-ip-address 172.24.4.25 --subnet 
d5ece368-35fb-4537-be84-eda656250974
  Created a new floatingip:
  +-+--+
  | Field   | Value|
  +-+--+
  | created_at  | 2017-11-17T09:42:57Z |
  | description |  |
  | fixed_ip_address|  |
  | floating_ip_address | 172.24.4.10  |
  | floating_network_id | fa18e1d7-1f33-48c0-a77f-f192f3c1c6df |
  | id  | 4d6129a4-9076-4e79-b3f0-b05ce68deb05 |
  | port_id |  |
  | project_id  | f0f9361fbf8e495b97eeadae6a81e14d |
  | revision_number | 1|
  | router_id   |  |
  | status  | DOWN |
  | tenant_id   | f0f9361fbf8e495b97eeadae6a81e14d |
  | updated_at  | 2017-11-17T09:42:57Z |
  +-+--+

  This is my REQ:
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.10.10.7:9696/v2.0/floatingips.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}0996a50cdaac248681cedb7000dbe71c7bd1a3e0" -d '{"floatingip": 
{"floating_network_id": "fa18e1d7-1f33-48c0-a77f-f192f3c1c6df", "subnet_id": 
"d5ece368-35fb-4537-be84-eda656250974", "floating_ip_address": "172.24.4.25"}}'
  And this is my RESP:
  RESP BODY: {"floatingip": {"router_id": null, "status": "DOWN", 
"description": "", "tenant_id": "f0f9361fbf8e495b97eeadae6a81e14d", 
"created_at": "2017-11-17T09:42:57Z", "updated_at": "2017-11-17T09:42:57Z", 
"floating_network_id": "fa18e1d7-1f33-48c0-a77f-f192f3c1c6df", 
"fixed_ip_address": null, "floating_ip_address": "172.24.4.10", 
"revision_number": 1, "project_id": "f0f9361fbf8e495b97eeadae6a81e14d", 
"port_id": null, "id": "4d6129a4-9076-4e79-b3f0-b05ce68deb05"}}

  I think we should make sure the "floating_ip_address" belongs to the
"subnet" and then allocate it.
  Otherwise we should report an error message when both parameters are set
at the same time.
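The first option can be sketched with the stdlib ipaddress module. This is a hypothetical helper, not actual neutron code: either the requested address belongs to the given subnet and is honored, or the request is rejected.

```python
import ipaddress

# Hypothetical sketch of the suggested server-side validation: when both
# floating_ip_address and subnet are supplied, honor the address only if
# it falls inside the subnet's CIDR; otherwise reject the request.
def validate_fip_request(floating_ip, subnet_cidr):
    if ipaddress.ip_address(floating_ip) in ipaddress.ip_network(subnet_cidr):
        return floating_ip
    raise ValueError(
        'floating_ip_address %s does not belong to subnet %s'
        % (floating_ip, subnet_cidr))

print(validate_fip_request('172.24.4.25', '172.24.4.0/24'))
```

Applied to the request above, 172.24.4.25 would be allocated as asked instead of being silently replaced by 172.24.4.10.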

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732890/+subscriptions



[Yahoo-eng-team] [Bug 1732890] Re: floatingip-create:Ignore floating_ip_address when using floating_ip_address and subnet

2017-11-19 Thread Dongcan Ye
@Brian Haley, the reproduction step is to specify both subnet and
floating-ip-address:

# neutron floatingip-create --floating-ip-address 172.24.4.101 --subnet
0c280593-3066-4393-bbdc-028b24139314 public

The result is:
Created a new floatingip:
+-+--+
| Field   | Value|
+-+--+
| created_at  | 2017-11-20T02:22:07Z |
| description |  |
| fixed_ip_address|  |
| floating_ip_address | 172.24.4.3   |
| floating_network_id | 30c2a624-7c53-46a2-a733-b196e7d72b40 |
| id  | c94e45c2-05a4-4c00-9cb1-4168db7de6e4 |
| port_id |  |
| project_id  | a349811205044d119b27a9f09a06bf3e |
| revision_number | 0|
| router_id   |  |
| status  | DOWN |
| tags|  |
| tenant_id   | a349811205044d119b27a9f09a06bf3e |
| updated_at  | 2017-11-20T02:22:07Z |
+-+--+

When using openstackclient, the specified floating-ip-address is always
used, so I think both floating-ip-address and subnet should be optional.

** Changed in: neutron
   Status: Invalid => Confirmed

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732890

Title:
  floatingip-create:Ignore floating_ip_address when using
  floating_ip_address and subnet

Status in python-neutronclient:
  Confirmed

Bug description:
  When I created a floating IP with both "floating_ip_address" and "subnet",
it ignored "floating_ip_address".
  $neutron floatingip-create --floating-ip-address 172.24.4.25 --subnet 
d5ece368-35fb-4537-be84-eda656250974
  Created a new floatingip:
  +-+--+
  | Field   | Value|
  +-+--+
  | created_at  | 2017-11-17T09:42:57Z |
  | description |  |
  | fixed_ip_address|  |
  | floating_ip_address | 172.24.4.10  |
  | floating_network_id | fa18e1d7-1f33-48c0-a77f-f192f3c1c6df |
  | id  | 4d6129a4-9076-4e79-b3f0-b05ce68deb05 |
  | port_id |  |
  | project_id  | f0f9361fbf8e495b97eeadae6a81e14d |
  | revision_number | 1|
  | router_id   |  |
  | status  | DOWN |
  | tenant_id   | f0f9361fbf8e495b97eeadae6a81e14d |
  | updated_at  | 2017-11-17T09:42:57Z |
  +-+--+

  This is my REQ:
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.10.10.7:9696/v2.0/floatingips.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}0996a50cdaac248681cedb7000dbe71c7bd1a3e0" -d '{"floatingip": 
{"floating_network_id": "fa18e1d7-1f33-48c0-a77f-f192f3c1c6df", "subnet_id": 
"d5ece368-35fb-4537-be84-eda656250974", "floating_ip_address": "172.24.4.25"}}'
  And this is my RESP:
  RESP BODY: {"floatingip": {"router_id": null, "status": "DOWN", 
"description": "", "tenant_id": "f0f9361fbf8e495b97eeadae6a81e14d", 
"created_at": "2017-11-17T09:42:57Z", "updated_at": "2017-11-17T09:42:57Z", 
"floating_network_id": "fa18e1d7-1f33-48c0-a77f-f192f3c1c6df", 
"fixed_ip_address": null, "floating_ip_address": "172.24.4.10", 
"revision_number": 1, "project_id": "f0f9361fbf8e495b97eeadae6a81e14d", 
"port_id": null, "id": "4d6129a4-9076-4e79-b3f0-b05ce68deb05"}}

  I think we should make sure the "floating_ip_address" belongs to the "subnet" 
and then create it.
  Or we should report an error message when both parameters are set at the same 
time.
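The check suggested here can be sketched with Python's ipaddress module; the function name and signature below are illustrative, not Neutron's actual validation code:

```python
import ipaddress

def validate_floating_ip(floating_ip_address, subnet_cidr):
    # Reject a requested floating IP that is outside the subnet's CIDR.
    # The real server-side code would first look the subnet up by
    # subnet_id; this helper takes the CIDR directly for illustration.
    ip = ipaddress.ip_address(floating_ip_address)
    net = ipaddress.ip_network(subnet_cidr)
    if ip not in net:
        raise ValueError("floating_ip_address %s does not belong to "
                         "subnet %s" % (floating_ip_address, subnet_cidr))
    return str(ip)

# The address from the request above, assuming the subnet's CIDR is
# 172.24.4.0/24:
validate_floating_ip("172.24.4.25", "172.24.4.0/24")
```

With a check like this, a mismatched floating_ip_address/subnet pair would fail fast instead of being silently ignored.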

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1732890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730959] Re: [RFE] Add timestamp to LBaaS resources

2017-11-13 Thread Dongcan Ye
Marked invalid in Octavia:
https://storyboard.openstack.org/#!/story/2001277

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730959

Title:
  [RFE] Add timestamp to LBaaS resources

Status in neutron:
  Invalid

Bug description:
  Currently most Neutron resources support timestamps (like created_at, 
updated_at).
  This would also be useful for LBaaS-related resources, for the sake of 
monitoring or querying resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730959] [NEW] [RFE] Add timestamp to LBaaS resources

2017-11-08 Thread Dongcan Ye
Public bug reported:

Currently most Neutron resources support timestamps (like created_at, 
updated_at).
This would also be useful for LBaaS-related resources, for the sake of 
monitoring or querying resources.

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730959

Title:
  [RFE] Add timestamp to LBaaS resources

Status in neutron:
  New

Bug description:
  Currently most Neutron resources support timestamps (like created_at, 
updated_at).
  This would also be useful for LBaaS-related resources, for the sake of 
monitoring or querying resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1726677] [NEW] Missing RBAC access_as_external unit tests

2017-10-23 Thread Dongcan Ye
Public bug reported:

Currently, most of the RBAC tests in [1] are related to the shared action.
We can add some tests to cover the access_as_external situation.

[1]
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_rbac_db_mixin.py

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: unittest

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726677

Title:
  Missing RBAC access_as_external unit tests

Status in neutron:
  New

Bug description:
  Currently, most of the RBAC tests in [1] are related to the shared action.
  We can add some tests to cover the access_as_external situation.

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_rbac_db_mixin.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1726677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723912] [NEW] Refactor securitygroup fullstack tests

2017-10-16 Thread Dongcan Ye
Public bug reported:

Currently, all securitygroup fullstack tests are located in one method: 
test_securitygroup.
As Jakub Libosvar suggests, we can separate them into one test per scenario.

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: fullstack

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723912

Title:
  Refactor securitygroup fullstack tests

Status in neutron:
  New

Bug description:
  Currently, all securitygroup fullstack tests are located in one method: 
test_securitygroup.
  As Jakub Libosvar suggests, we can separate them into one test per scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1723912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1722989] [NEW] Fullstack job failed while allocating port

2017-10-11 Thread Dongcan Ye
Public bug reported:

The fullstack job failed while allocating port:

Example:
http://logs.openstack.org/53/511353/1/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/bbaf465/testr_results.html.gz
http://logs.openstack.org/64/485564/4/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/cd49955/testr_results.html.gz

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 197, in setUp
self._setUp()
  File "neutron/tests/common/exclusive_resources/port.py", line 35, in _setUp
super(ExclusivePort, self)._setUp()
  File "neutron/tests/common/exclusive_resources/resource_allocator.py", line 
34, in _setUp
self.resource = self.ra.allocate()
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py",
 line 274, in inner
return f(*args, **kwargs)
  File "neutron/tests/common/exclusive_resources/resource_allocator.py", line 
76, in allocate
resource = str(self._allocator_function())
  File "neutron/tests/common/net_helpers.py", line 228, in 
get_free_namespace_port
return get_unused_port(used_ports, start, end)
  File "neutron/tests/common/net_helpers.py", line 196, in get_unused_port
end = int(port_range.split()[0]) - 1
ValueError: invalid literal for int() with base 10: 'State'
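The failing line feeds the first token of the command output to int(), so a header line whose first word is 'State' blows up. A tolerant parse could be sketched as follows (the function name and exact output format are assumptions, not the real net_helpers code):

```python
def parse_port_range_end(output):
    # Skip any non-numeric line (such as a header line starting with
    # 'State') and use the first numeric token found, instead of
    # passing whatever appears first straight to int().
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            return int(fields[0]) - 1
    raise ValueError("no numeric port range found in: %r" % output)

# A header line no longer breaks the parse:
parse_port_range_end("State Recv-Q Send-Q\n32768\t60999")  # returns 32767
```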

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: fullstack

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1722989

Title:
  Fullstack job failed while allocating port

Status in neutron:
  New

Bug description:
  The fullstack job failed while allocating port:

  Example:
  
http://logs.openstack.org/53/511353/1/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/bbaf465/testr_results.html.gz
  
http://logs.openstack.org/64/485564/4/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/cd49955/testr_results.html.gz

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 197, in setUp
  self._setUp()
File "neutron/tests/common/exclusive_resources/port.py", line 35, in _setUp
  super(ExclusivePort, self)._setUp()
File "neutron/tests/common/exclusive_resources/resource_allocator.py", line 
34, in _setUp
  self.resource = self.ra.allocate()
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py",
 line 274, in inner
  return f(*args, **kwargs)
File "neutron/tests/common/exclusive_resources/resource_allocator.py", line 
76, in allocate
  resource = str(self._allocator_function())
File "neutron/tests/common/net_helpers.py", line 228, in 
get_free_namespace_port
  return get_unused_port(used_ports, start, end)
File "neutron/tests/common/net_helpers.py", line 196, in get_unused_port
  end = int(port_range.split()[0]) - 1
  ValueError: invalid literal for int() with base 10: 'State'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1722989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707819] [NEW] Allowed address pairs allows update with invalid cidr

2017-07-31 Thread Dongcan Ye
Public bug reported:

Subnet info:
$ neutron subnet-show 68a42a05-2024-44b3-9086-e97704452724
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.20.0.2", "end": "10.20.0.254"} |
| cidr              | 10.20.0.0/24                                 |
| created_at        | 2017-04-21T07:08:39Z                         |
| description       |                                              |
| dns_nameservers   |                                              |
| enable_dhcp       | False                                        |
| gateway_ip        | 10.20.0.1                                    |
| host_routes       |                                              |
| id                | 68a42a05-2024-44b3-9086-e97704452724         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | test_subnet                                  |
| network_id        | 9cd01eb4-906a-4c68-b705-0520bfe1b1e6         |
| project_id        | 6d0a93fb8cfc4c2f84e3936d95a17bad             |
| revision_number   | 2                                            |
| service_types     |                                              |
| subnetpool_id     |                                              |
| tags              |                                              |
| tenant_id         | 6d0a93fb8cfc4c2f84e3936d95a17bad             |
| updated_at        | 2017-04-21T07:08:39Z                         |
+-------------------+----------------------------------------------+


$ neutron port-update 31250c3c-69ec-462c-8ec8-195beeeff3f2  
--allowed-address-pairs type=dict list=true ip_address=10.20.0.201/24
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Updated port: 31250c3c-69ec-462c-8ec8-195beeeff3f2
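A stricter validation could reject such entries with Python's ipaddress module, whose ip_network() is strict by default and refuses a CIDR with host bits set; the helper below is only a sketch, not Neutron's actual allowed-address-pairs validator:

```python
import ipaddress

def validate_address_pair(ip_address):
    # Accept either a plain host address or a valid CIDR, but reject a
    # CIDR with host bits set, such as the 10.20.0.201/24 accepted by
    # the port-update above (ip_network raises ValueError for it).
    if "/" not in ip_address:
        return str(ipaddress.ip_address(ip_address))
    return str(ipaddress.ip_network(ip_address))

validate_address_pair("10.20.0.0/24")  # accepted
try:
    validate_address_pair("10.20.0.201/24")
except ValueError:
    pass  # rejected: host bits set in the CIDR
```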

** Affects: neutron
     Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707819

Title:
  Allowed address pairs allows update with invalid cidr

Status in neutron:
  New

Bug description:
  Subnet info:
  $ neutron subnet-show 68a42a05-2024-44b3-9086-e97704452724
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | allocation_pools  | {"start": "10.20.0.2", "end": "10.20.0.254"} |
  | cidr              | 10.20.0.0/24                                 |
  | created_at        | 2017-04-21T07:08:39Z                         |
  | description       |                                              |
  | dns_nameservers   |                                              |
  | enable_dhcp       | False                                        |
  | gateway_ip        | 10.20.0.1                                    |
  | host_routes       |                                              |
  | id                | 68a42a05-2024-44b3-9086-e97704452724         |
  | ip_version        | 4                                            |
  | ipv6_address_mode |                                              |
  | ipv6_ra_mode      |                                              |
  | name              | test_subnet                                  |
  | network_id        | 9cd01eb4-906a-4c68-b705-0520bfe1b1e6         |
  | project_id        | 6d0a93fb8cfc4c2f84e3936d95a17bad             |
  | revision_number   | 2                                            |
  | service_types     |                                              |
  | subnetpool_id     |                                              |
  | tags              |                                              |
  | tenant_id         | 6d0a93fb8cfc4c2f84e3936d95a17bad             |
  | updated_at        | 2017-04-21T07:08:39Z                         |
  +-------------------+----------------------------------------------+

  
  $ neutron port-update 31250c3c-69ec-462c-8ec8-195beeeff3f2  
--allowed-address-pairs type=dict list=true ip_address=10.20.0.201/24
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Updated port: 31250c3c-69ec-462c-8ec8-195beeeff3f2

To manage notifications about this bug go t

[Yahoo-eng-team] [Bug 1698791] [NEW] [RFE] Add support getting segment vlan usage

2017-06-19 Thread Dongcan Ye
Public bug reported:

In some private clouds, admin users need to know which VLAN IDs are
available for management purposes, and this is also helpful for
creating a VLAN provider network with a specified VLAN ID.

We can implement an API that retrieves the used VLAN IDs from the database.
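The core of such an API is just the set difference between the configured VLAN range and the allocated IDs; a toy sketch (the real implementation would query the ML2 VLAN allocation table rather than take a set):

```python
def available_vlan_ids(used_ids, start=1, end=4094):
    # Return the VLAN IDs in [start, end] that are not already used.
    # 'used_ids' stands in for the allocations read from the database.
    used = set(used_ids)
    return [vid for vid in range(start, end + 1) if vid not in used]

available_vlan_ids({100, 101, 103}, start=100, end=105)  # [102, 104, 105]
```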

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1698791

Title:
  [RFE] Add support getting segment vlan usage

Status in neutron:
  New

Bug description:
  In some private clouds, admin users need to know which VLAN IDs are
  available for management purposes, and this is also helpful for
  creating a VLAN provider network with a specified VLAN ID.

  We can implement an API that retrieves the used VLAN IDs from the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1698791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697410] [NEW] Add router interface checking for duplicated subnets should ignore external network

2017-06-12 Thread Dongcan Ye
Public bug reported:

When adding a router interface by subnet, Neutron checks the router's
attached ports, and this can cause adding a router interface to fail
in some situations.

===Steps to reproduce===
$ neutron net-create test
$ neutron subnet-create test 192.168.138.0/24 --name test-subnet
$ neutron router-create test-router

# In this step, ext-net has cidr 192.168.128.0/20
$ neutron router-gateway-set test-router ext-net

$ neutron router-interface-add test-router test-subnet
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Bad router request: Cidr 192.168.138.0/24 of subnet 
72a72809-371b-47e1-a70f-15fb9a342760 overlaps with cidr 192.168.128.0/20 of 
subnet ec039c18-2eba-47c6-b219-fdc76a0caf66.
Neutron server returns request_ids: ['req-8b9b3d8f-44c9-437c-9b2b-98d1761d4b1c']
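The proposed behaviour, skipping subnets on the external (gateway) network when checking for overlaps, can be sketched with ipaddress.ip_network().overlaps(); the data shapes below are illustrative, not Neutron's internal ones:

```python
import ipaddress

def conflicting_cidrs(new_cidr, attached_subnets):
    # attached_subnets: iterable of (cidr, is_external) pairs already
    # attached to the router. Subnets on the external network are
    # skipped, so a gateway CIDR such as 192.168.128.0/20 no longer
    # blocks adding 192.168.138.0/24 as a router interface.
    new = ipaddress.ip_network(new_cidr)
    return [cidr for cidr, is_external in attached_subnets
            if not is_external
            and new.overlaps(ipaddress.ip_network(cidr))]

# The gateway CIDR from the reproduction above is ignored:
conflicting_cidrs("192.168.138.0/24", [("192.168.128.0/20", True)])  # []
```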

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

** Summary changed:

- Add router interface for duplicated subnets should not check external network
+ Add router interface checking for duplicated subnets should ignore external 
network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697410

Title:
  Add router interface checking for duplicated subnets should ignore
  external network

Status in neutron:
  New

Bug description:
  When adding a router interface by subnet, Neutron checks the router's
  attached ports, and this can cause adding a router interface to fail
  in some situations.

  ===Steps to reproduce===
  $ neutron net-create test
  $ neutron subnet-create test 192.168.138.0/24 --name test-subnet
  $ neutron router-create test-router

  # In this step, ext-net has cidr 192.168.128.0/20
  $ neutron router-gateway-set test-router ext-net

  $ neutron router-interface-add test-router test-subnet
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Bad router request: Cidr 192.168.138.0/24 of subnet 
72a72809-371b-47e1-a70f-15fb9a342760 overlaps with cidr 192.168.128.0/20 of 
subnet ec039c18-2eba-47c6-b219-fdc76a0caf66.
  Neutron server returns request_ids: 
['req-8b9b3d8f-44c9-437c-9b2b-98d1761d4b1c']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696309] [NEW] net-ip-availability-list not support filter by project id

2017-06-07 Thread Dongcan Ye
Public bug reported:

Listing network IP availability filtered by project ID is not supported.

$ neutron net-ip-availability-list --project-id 6d0a93fb8cfc4c2f84e3936d95a17bad
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
+--------------------------------------+----------------------------------+--------------+-----------+----------+
| network_id                           | tenant_id                        | network_name | total_ips | used_ips |
+--------------------------------------+----------------------------------+--------------+-----------+----------+
| 34bed001-306a-4b1c-a441-c9f6bf95b361 | b76ff5120e234f11a7e7a35a5b60277e | private-net  |       253 |        2 |
| 4ebdcf94-6ec9-498c-a2c6-b7746aaf09f5 | c04b15f261854c13bff01610313e7b99 | dvr-net      |       253 |        4 |
| e374af03-4461-4316-bf8c-d95f5ed8526c | 274428ff074a4b639809fb28a52c2621 | private-net  |       253 |        5 |
| bd1aa0a3-fe2f-42b1-b4e0-6405d4609279 | ed343dbff2384a07bf5871f0cac018f5 | private-net  |       253 |        2 |
| 3a4a15f6-5eb3-4f4e-acf5-940131030e9f | c04b15f261854c13bff01610313e7b99 | ceph-net     |       253 |        7 |
| 9cd01eb4-906a-4c68-b705-0520bfe1b1e6 | 6d0a93fb8cfc4c2f84e3936d95a17bad | net12        |       253 |        1 |
+--------------------------------------+----------------------------------+--------------+-----------+----------+

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696309

Title:
  net-ip-availability-list not support filter by project id

Status in neutron:
  New

Bug description:
  Listing network IP availability filtered by project ID is not supported.

  $ neutron net-ip-availability-list --project-id 
6d0a93fb8cfc4c2f84e3936d95a17bad
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  +--------------------------------------+----------------------------------+--------------+-----------+----------+
  | network_id                           | tenant_id                        | network_name | total_ips | used_ips |
  +--------------------------------------+----------------------------------+--------------+-----------+----------+
  | 34bed001-306a-4b1c-a441-c9f6bf95b361 | b76ff5120e234f11a7e7a35a5b60277e | private-net  |       253 |        2 |
  | 4ebdcf94-6ec9-498c-a2c6-b7746aaf09f5 | c04b15f261854c13bff01610313e7b99 | dvr-net      |       253 |        4 |
  | e374af03-4461-4316-bf8c-d95f5ed8526c | 274428ff074a4b639809fb28a52c2621 | private-net  |       253 |        5 |
  | bd1aa0a3-fe2f-42b1-b4e0-6405d4609279 | ed343dbff2384a07bf5871f0cac018f5 | private-net  |       253 |        2 |
  | 3a4a15f6-5eb3-4f4e-acf5-940131030e9f | c04b15f261854c13bff01610313e7b99 | ceph-net     |       253 |        7 |
  | 9cd01eb4-906a-4c68-b705-0520bfe1b1e6 | 6d0a93fb8cfc4c2f84e3936d95a17bad | net12        |       253 |        1 |
  +--------------------------------------+----------------------------------+--------------+-----------+----------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684519] [NEW] Add missing unittests in segment db

2017-04-20 Thread Dongcan Ye
Public bug reported:

Most of the unit tests for the segments db are missing in [1].
[1] 
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_segments_db.py

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684519

Title:
  Add missing unittests in segment db

Status in neutron:
  New

Bug description:
  Most of the unit tests for the segments db are missing in [1].
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_segments_db.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684158] [NEW] Add tenant_id attribute for ha network

2017-04-19 Thread Dongcan Ye
Public bug reported:

Currently, creating an HA router creates an HA network, but the network lacks a 
tenant_id attribute.
It is worth knowing the HA network's owner for administrative purposes.

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684158

Title:
  Add tenant_id attribute for ha network

Status in neutron:
  New

Bug description:
  Currently, creating an HA router creates an HA network, but the network lacks 
a tenant_id attribute.
  It is worth knowing the HA network's owner for administrative purposes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654210] [NEW] Get more detail about tag extension

2017-01-05 Thread Dongcan Ye
Public bug reported:

Currently we only know that the Neutron tags extension is supported in some 
external systems, like Kuryr.
This is insufficient; we need to know which network objects (network, subnet or 
others) support tags.
In the Kuryr project we need this check before mapping Neutron resources to 
Docker resources.

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654210

Title:
  Get more detail about tag extension

Status in neutron:
  New

Bug description:
  Currently we only know that the Neutron tags extension is supported in some 
external systems, like Kuryr.
  This is insufficient; we need to know which network objects (network, subnet 
or others) support tags.
  In the Kuryr project we need this check before mapping Neutron resources to 
Docker resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1654210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641535] [NEW] FIP failed to remove in router's standby node

2016-11-14 Thread Dongcan Ye
Public bug reported:

ENV

1. Server side:
   enable router_distributed and l3_ha

2. Agent side:
   all L3 agent mode is dvr_snat (include network nodes and compute nodes)


How to reproduce:
=================
associate floatingip  -->  disassociate floatingip  --> reassociate floatingip

We hit trace info in l3 agent:
http://paste.openstack.org/show/589071/


Analysis
========
When we process a floatingip (in the situation where the router is ha + 
dvr), ha_router only removes the floatingip if the ha state is 'master'[1], and 
dvr_local_router removes its related IP rule.
When we then reassociate the floatingip, it hits an RTNETLINK error, because 
the related IP rule has already been deleted.


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L273
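One possible shape for a fix is to make the rule removal idempotent, so that deleting an already-removed rule on a standby node is a no-op instead of an error. A toy sketch, not the actual dvr_local_router code:

```python
def delete_fip_rule(rules, rule):
    # Idempotent removal: on a standby node the rule may already be
    # gone (ha_router only removes the FIP itself on the master), so
    # a missing rule is treated as a no-op, mirroring 'ip rule del'
    # failing with "RTNETLINK answers: No such file or directory".
    try:
        rules.remove(rule)
    except ValueError:
        pass  # already deleted; nothing to do
    return rules

rules = ["from 10.0.0.5 lookup 16"]
delete_fip_rule(rules, "from 10.0.0.5 lookup 16")  # removed
delete_fip_rule(rules, "from 10.0.0.5 lookup 16")  # no error second time
```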

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: l3-dvr-backlog l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641535

Title:
  FIP failed to remove in router's standby node

Status in neutron:
  New

Bug description:
  ENV
  
  1. Server side:
 enable router_distributed and l3_ha

  2. Agent side:
 all L3 agent mode is dvr_snat (include network nodes and compute nodes)

  
  How to reproduce:
  =================
  associate floatingip  -->  disassociate floatingip  --> reassociate floatingip

  We hit trace info in l3 agent:
  http://paste.openstack.org/show/589071/

  
  Analysis
  ========
  When we process a floatingip (in the situation where the router is ha + 
dvr), ha_router only removes the floatingip if the ha state is 'master'[1], and 
dvr_local_router removes its related IP rule.
  When we then reassociate the floatingip, it hits an RTNETLINK error, because 
the related IP rule has already been deleted.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L273

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613642] [NEW] Add a config option for address scope

2016-08-16 Thread Dongcan Ye
Public bug reported:

Now in the L3 agent, we always initialize the address scope iptables rules[1]
even though we may not use the address scope function. In my view, this
procedure is unnecessary.

Can we add a config option to the L3 agent, so that if we want to use address
scopes we can enable it and initialize the address scope iptables? Not sure
about this, please correct me if I'm wrong.

[1]
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/router_info.py#L857

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1613642

Title:
  Add a config option for address scope

Status in neutron:
  New

Bug description:
  Now in the L3 agent, we always initialize the address scope iptables rules[1]
  even though we may not use the address scope function. In my view, this
  procedure is unnecessary.

  Can we add a config option to the L3 agent, so that if we want to use
  address scopes, we can enable it and initialize the address scope
  iptables? Not sure about this, please correct me if I'm wrong.

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/router_info.py#L857

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1613642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605066] [NEW] [Neutron][VPNaaS] Failed to create ipsec site connection

2016-07-21 Thread Dongcan Ye
Public bug reported:

Code repo: neutron-vpnaas master
OS: Centos7
ipsec device driver: libreswan-3.15-5.el7_1.x86_64

In /etc/neutron/vpn_agent.ini, vpn_device_driver is
neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver.

Before running neutron-vpn-agent, I checked the ipsec status and it seemed normal:
# ipsec verify
Verifying installed system and configuration files

Version check and ipsec on-path [OK]
Libreswan 3.15 (netkey) on 3.10.0-123.el7.x86_64
Checking for IPsec support in kernel[OK]
 NETKEY: Testing XFRM related proc values
 ICMP default/send_redirects[OK]
 ICMP default/accept_redirects  [OK]
 XFRM larval drop   [OK]
Pluto ipsec.conf syntax [OK]
Hardware random device  [N/A]
Two or more interfaces found, checking IP forwarding[OK]
Checking rp_filter  [OK]
Checking that pluto is running  [OK]
 Pluto listening for IKE on udp 500 [OK]
 Pluto listening for IKE/NAT-T on udp 4500  [OK]
 Pluto ipsec.secret syntax  [OK]
Checking 'ip' command   [OK]
Checking 'iptables' command [OK]
Checking 'prelink' command does not interfere with FIPS
Checking for obsolete ipsec.conf options [OK]
Opportunistic Encryption[DISABLED]

After creating the ikepolicy, ipsecpolicy and vpn service, creating an 
ipsec-site-connection failed;
the ipsec whack --ctlbase status code in vpn-agent.log returns 1, which means 
pluto is not running.

Then I traced the code; I think the problem is in the enable() function, where 
the call to self.ensure_configs()[1] may have some problems.
ensure_configs[2] is overridden in libreswan_ipsec.py; I am not sure whether the 
root cause is ipsec checknss (which creates the nssdb).
If the call to self.ensure_configs() fails, we can't start the ipsec pluto daemon.


Here is the running ipsec process:
# ps aux |grep ipsec
root 3  0.0  0.0   9648  1368 pts/17   S+   12:59   0:00 /bin/sh 
/sbin/ipsec checknss 
/opt/stack/data/neutron/ipsec/f75151f6-ef01-4a68-9747-eb52f4e629f5/etc
root 4  0.0  0.0  37400  3300 pts/17   S+   12:59   0:00 certutil -N -d 
sql:/etc/ipsec.d --empty-password
root 25893  0.0  0.0   9040   668 pts/0S+   13:40   0:00 grep 
--color=auto ipsec
root 26396  0.0  0.1 335268  4588 ?Ssl  08:58   0:00 
/usr/libexec/ipsec/pluto --config /etc/ipsec.conf --nofork

[1] 
https://github.com/openstack/neutron-vpnaas/blob/master/neutron_vpnaas/services/vpn/device_drivers/ipsec.py#L304
[2] 
https://github.com/openstack/neutron-vpnaas/blob/master/neutron_vpnaas/services/vpn/device_drivers/libreswan_ipsec.py#L59

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605066

Title:
  [Neutron][VPNaaS] Failed to create ipsec site connection

Status in neutron:
  New

Bug description:
  Code repo: neutron-vpnaas master
  OS: Centos7
  ipsec device driver: libreswan-3.15-5.el7_1.x86_64

  In /etc/neutron/vpn_agent.ini, vpn_device_driver is
  neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver.

  Before running neutron-vpn-agent, I checked the ipsec status and it seemed normal:
  # ipsec verify
  Verifying installed system and configuration files

  Version check and ipsec on-path   [OK]
  Libreswan 3.15 (netkey) on 3.10.0-123.el7.x86_64
  Checking for IPsec support in kernel  [OK]
   NETKEY: Testing XFRM related proc values
   ICMP default/send_redirects  [OK]
   ICMP default/accept_redirects[OK]
   XFRM larval drop [OK]
  Pluto ipsec.conf syntax   [OK]
  Hardware random device[N/A]
  Two or more interfaces found, checking IP forwarding  [OK]
  Checking rp_filter[OK]
  Checking that pluto is running[OK]
   Pluto listening for IKE on udp 500   [OK]
   Pluto listening for IKE/NAT-T on udp 4500[OK]
   Pluto ipsec.secret syntax[OK]
  Checking 'ip' command [OK]
  Checking 'iptables' command   [OK]
  Checking 'prelink' command does not interfere with FIPS
  Checking for obsolete ipsec.conf options   [OK]
  Opportunistic Encryption  [DISABLED]

  After create ikepolicy, ipsecpolicy and vpn service, create an 
ipsec-site-connection failed,
  ipsec whack --ctlbase status code in vpn-agent.log returns 

[Yahoo-eng-team] [Bug 1602320] [NEW] ha + distributed router: keepalived process kill vrrp child process

2016-07-12 Thread Dongcan Ye
Public bug reported:

Code Repo: mitaka
keepalived version: 1.2.13
node mode: 4 nodes(containers), dvr_snat(l3 agent_mode)
OS: Centos 7

I configure both router_distributed and l3_ha to True. Then I create a
router; using the neutron l3-agent-list-hosting-router command, the result
shows 1 active, 3 standby.

Then I add a router interface, and there is more than 1 active.
I traced /var/log/messages on the originally active l3 agent node:
2016-07-12T16:33:32.083140+08:00 localhost Keepalived[1320437]: VRRP child 
process(1320438) died: Respawning
2016-07-12T16:33:32.083613+08:00 localhost Keepalived[1320437]: Starting VRRP 
child process, pid=1340135

Strace info:
http://paste.openstack.org/show/530791/

This does not fail every time; sometimes there is only one active agent.
It may be related to the environment, because I cannot reproduce it in VMs.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602320

Title:
  ha + distributed router: keepalived process kills vrrp child process

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1602320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595795] [NEW] [BGP][devstack] Install bgp failed because of Permission denied

2016-06-23 Thread Dongcan Ye
Public bug reported:

Environment:
OS: Ubuntu 14.04
Code repo: master

Installing Neutron BGP in DevStack fails with a "Permission denied" error:
http://paste.openstack.org/show/521784/

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: l3-bgp

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595795

Title:
  [BGP][devstack] Install bgp failed because of Permission denied

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595795/+subscriptions



[Yahoo-eng-team] [Bug 1589969] [NEW] [qos][postgresql] neutron qos-bandwidth-limit-rule-create failed

2016-06-07 Thread Dongcan Ye
Public bug reported:

Neutron version is Liberty and db backend is PostgreSQL.

Using the following commands to create a QoS rate limit:
$ neutron qos-policy-create bw-limiter
$ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 \
  --max-burst-kbps 300

ERROR log in neutron server:
http://paste.openstack.org/show/508633/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589969

Title:
  [qos][postgresql] neutron qos-bandwidth-limit-rule-create failed

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589969/+subscriptions



[Yahoo-eng-team] [Bug 1589745] [NEW] availability_zone missing in dhcp and l3 conf

2016-06-06 Thread Dongcan Ye
Public bug reported:

The availability_zone attribute can now be defined for the dhcp-agent and
l3-agent, and we can follow the networking guide [1] to configure the
availability zone of those two agents.
But oslo-config-generator does not generate the availability_zone option in
dhcp_agent.ini or l3_agent.ini.

[1]http://docs.openstack.org/mitaka/networking-guide/adv-config-
availability-zone.html
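Until the sample files are regenerated, the documented option can be added by hand; a minimal fragment (the az1 value is only an example, assuming the option lives in the [AGENT] section as the guide describes):

```ini
# dhcp_agent.ini and l3_agent.ini -- documented in [1] but absent from
# the generated sample files:
[AGENT]
availability_zone = az1
```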

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589745

Title:
   availability_zone missing in dhcp and l3 conf

Status in neutron:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589745/+subscriptions



[Yahoo-eng-team] [Bug 1586058] Re: Online an new neutron l3 agent, the router ha_state is active

2016-05-26 Thread Dongcan Ye
$ neutron port-show 960ea43e-5644-47f8-ab30-a6cdd6dde664
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:host_id       | node-A                                                                                |
| binding:profile       | {}                                                                                    |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": false}                                       |
| binding:vif_type      | ovs                                                                                   |
| binding:vnic_type     | normal                                                                                |
| created_at            | 2016-05-27T01:24:32                                                                   |
| description           |                                                                                       |
| device_id             | a54ffb2d-bbfe-4c4b-85be-5c09140273c2                                                  |
| device_owner          | network:router_ha_interface                                                           |
| dns_name              |                                                                                       |
| extra_dhcp_opts       |                                                                                       |
| fixed_ips             | {"subnet_id": "097af809-cd4b-4801-adf4-26ce04584791", "ip_address": "169.254.192.20"} |
| id                    | 960ea43e-5644-47f8-ab30-a6cdd6dde664                                                  |
| mac_address           | fa:16:3e:c0:6f:24                                                                     |
| name                  | HA port tenant a17b3b9772b74811a3d62ebefd65d3b0                                       |
| network_id            | dc3c02c5-c1c5-4950-83db-939740e73916                                                  |
| port_security_enabled | False                                                                                 |
| security_groups       |                                                                                       |
| status                | ACTIVE                                                                                |
| tenant_id             |                                                                                       |
| updated_at            | 2016-05-27T02:28:02                                                                   |
+-----------------------+---------------------------------------------------------------------------------------+

$ ip netns exec qrouter-a54ffb2d-bbfe-4c4b-85be-5c09140273c2 ping 169.254.192.21
PING 169.254.192.21 (169.254.192.21) 56(84) bytes of data.
From 169.254.192.20 icmp_seq=1 Destination Host Unreachable
From 169.254.192.20 icmp_seq=2 Destination Host Unreachable
From 169.254.192.20 icmp_seq=3 Destination Host Unreachable
From 169.254.192.20 icmp_seq=4 Destination Host Unreachable
From 169.254.192.20 icmp_seq=5 Destination Host Unreachable
From 169.254.192.20 icmp_seq=6 Destination Host Unreachable
From 169.254.192.20 icmp_seq=7 Destination Host Unreachable
From 169.254.192.20 icmp_seq=8 Destination Host Unreachable


@Assaf Muller, it seems strange. I did another test on another node: stop
the l3 agent, create a router (both ha and ha+distributed), then start the
l3 agent. That case looks OK: there is only one active HA router, and the
VIP address exists in only one namespace.

Thanks for your kind reply, I will do more tests.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586058

Title:
  Online a new neutron l3 agent, the router ha_state is active

Status in neutron:
  Invalid

Bug description:
  When we manually bring a new l3 agent online, after it syncs router
  info from the Neutron server, neutron-keepalived-state-change always
  writes master to its state file and notifies the other agents, even
  though there is still an active HA router on another node.

  === Steps to reproduce ===
  1. stop neutron-l3-agent on node-A
  2. create an HA + distributed router, test_router
  3. the l3 agents hosting the router:
  $ neutron l3-agent-list-hosting-router test_router
  
+--+++---+--+
  | id 

[Yahoo-eng-team] [Bug 1586058] [NEW] Online a new neutron l3 agent, the router ha_state is active

2016-05-26 Thread Dongcan Ye
Public bug reported:

When we manually bring a new l3 agent online, after it syncs router info
from the Neutron server, neutron-keepalived-state-change always writes
master to its state file and notifies the other agents, even though there
is still an active HA router on another node.

=== Steps to reproduce ===
1. stop neutron-l3-agent on node-A
2. create an HA + distributed router, test_router
3. the l3 agents hosting the router:
$ neutron l3-agent-list-hosting-router test_router
+--+++---+--+
| id   | host   | admin_state_up | alive | 
ha_state |
+--+++---+--+
| 678cc6a6-3b28-462d-9374-4a7fb42193dc | compute1   | True   | :-)   | 
standby  |
| de274597-e35b-4ae5-baba-01c26bbed5b9 | compute2   | True   | :-)   | 
active   |
| 1c301ec4-9320-44e0-a4e0-a0441fee849a | network| True   | :-)   | 
standby  |
| 4ae490f7-8791-4460-8545-c172af58de8f | network2   | True   | :-)   | 
standby  |
+--+++---+--+

4. start neutron-l3-agent on node-A; after the sync, there are two
active HA routers, on compute2 and node-A.
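The expected behaviour can be sketched as a toy election rule. Everything below (function name, state strings, logic shape) is illustrative only, not Neutron's actual implementation:

```python
# Toy model of what a freshly synced agent *should* write to its state
# file: defer to an existing master instead of claiming mastership.
def state_to_write(peer_states):
    """Return the HA state a newly started agent should record,
    given the states currently reported by its peers."""
    # The bug described above: the new agent writes "master"
    # unconditionally, ignoring peer_states entirely.
    return "backup" if "master" in peer_states else "master"

print(state_to_write(["master", "standby", "standby"]))  # backup
print(state_to_write(["standby", "standby"]))            # master
```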

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586058

Title:
  Online a new neutron l3 agent, the router ha_state is active

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586058/+subscriptions



[Yahoo-eng-team] [Bug 1563684] [NEW] VMware: rebuild an instance failed

2016-03-30 Thread Dongcan Ye
Public bug reported:

Description
===
Nova version: master
Virt driver: VCDriver


Steps to reproduce
=
1. Boot an instance with Nova.
2. Create a volume with Cinder.
3. Attach the volume to the instance.
4. After the attach succeeds, create a snapshot of the instance with the
nova image-create command.
5. Rebuild the instance with the snapshot image.

In nova-compute, error info:
2016-03-30 12:16:14.801 9290 DEBUG oslo_vmware.exceptions [-] Fault 
GenericVmConfigFault not matched. get_fault_class 
/usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:250
2016-03-30 12:16:14.801 9290 ERROR oslo_vmware.common.loopingcall [-] in fixed 
duration looping call
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall Traceback 
(most recent call last):
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, 
in _inner
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 417, in _poll_task
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall raise 
task_ex
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall 
VMwareDriverException: Disk 
'/vmfs/volumes/546a3532-ca1f27f1-e66a-1458d04cf670/volume-dbbdd17d-bb5c-4804-9d30-a6bd8a96eca3/volume-dbbdd17d-bb5c-4804-9d30-a6bd8a96eca3.vmdk'
 cannot be opened for writing. It might be shared with some other VM.
2016-03-30 12:16:14.801 9290 TRACE oslo_vmware.common.loopingcall
2016-03-30 12:16:14.802 9290 ERROR nova.compute.manager 
[req-b0c16cc2-fd52-4c85-9ff9-c67ff68ee410 4412e38ec9814b96a03e63097ec51f1a 
8f75187cd29f4715881f450646fc6e08 - - -] [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] Setting instance vm_state to ERROR
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] Traceback (most recent call last):
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6418, in 
_error_out_instance_on_exception
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] yield
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3087, in 
rebuild_instance
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] self._rebuild_default_impl(**kwargs)
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2934, in 
_rebuild_default_impl
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] 
block_device_info=new_block_device_info)
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/driver.py", line 481, in 
spawn
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] admin_password, network_info, 
block_device_info)
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 683, in 
spawn
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] 
vm_util.power_on_instance(self._session, instance, vm_ref=vm_ref)
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vm_util.py", line 1394, 
in power_on_instance
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] session._wait_for_task(poweron_task)
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/driver.py", line 681, in 
_wait_for_task
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] return self.wait_for_task(task_ref)
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05]   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 380, in 
wait_for_task
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 
e338fd04-859f-4fa4-8f1d-cd2e297b0c05] return evt.wait()
2016-03-30 12:16:14.802 9290 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1562266] Re: VMware: resize instance change instance's hypervisor_hostname

2016-03-29 Thread Dongcan Ye
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562266

Title:
  VMware: resize instance change instance's  hypervisor_hostname

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova version: master
  Virt driver: VCDriver

  Nova compute1   <-->   VMware cluster1
  Nova compute2   <-->   VMware cluster2

  Resizing an instance (original hypervisor_hostname: domain-c9,
  cluster2) in VMware changes the instance's hypervisor_hostname: by the
  time we verified, it had changed to domain-c7 (cluster1).

  Then we checked in vCenter: the instance was still located in cluster2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562266/+subscriptions



[Yahoo-eng-team] [Bug 1562266] [NEW] VMware: resize instance change instance's hypervisor_hostname

2016-03-26 Thread Dongcan Ye
Public bug reported:

Nova version: master
Virt driver: VCDriver

Nova compute1   <-->   VMware cluster1
Nova compute2   <-->   VMware cluster2

Resizing an instance (original hypervisor_hostname: domain-c9, cluster2)
in VMware changes the instance's hypervisor_hostname: by the time we
verified, it had changed to domain-c7 (cluster1).

Then we checked in vCenter: the instance was still located in cluster2;
it had just migrated from one ESXi host to another.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562266

Title:
  VMware: resize instance change instance's  hypervisor_hostname

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562266/+subscriptions



[Yahoo-eng-team] [Bug 1555644] [NEW] VMware: Extending virtual disk failed with error: capacity

2016-03-10 Thread Dongcan Ye
le 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] return hubs.get_hub().switch()
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] return self.greenlet.switch()
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, 
in _inner
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] self.f(*self.args, **self.kw)
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 417, in _poll_task
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] raise task_ex
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] VMwareDriverException: 指定的参数错误。
(English: "The specified parameter is incorrect.")
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] capacity
2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]
2016-03-10 17:31:56.352 3211 INFO nova.compute.manager 
[req-a3e93241-5f54-485a-a7f0-2e1e0ebad92d 4412e38ec9814b96a03e63097ec51f1a 
8f75187cd29f4715881f450646fc6e08 - - -] [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] Terminating instance


Because the cache_image_folder already exists, the image size is not
updated. In the _use_disk_image_as_linked_clone function, root_gb is
greater than the cached image file size, so _extend_virtual_disk is
executed and fails.
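The stale-size condition can be sketched as follows; the cache dict and the cached_size helper are invented for illustration and are not the actual nova code:

```python
# Minimal sketch: a cache entry whose recorded size is fixed at first
# fetch and never refreshed, as described above.
cache = {}  # image_id -> size (GB) recorded when the folder was created

def cached_size(image_id, current_size_gb):
    # setdefault keeps the FIRST recorded value: if the cache folder
    # already exists, the new size is silently ignored.
    return cache.setdefault(image_id, current_size_gb)

first = cached_size("img", 2.3)  # first boot records 2.3G
# After Scenario A the cached disk was grown, but the record stays stale:
stale = cached_size("img", 5.0)
print(stale)  # 2.3 -> Scenario B extends from the wrong base size
```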

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555644

Title:
  VMware: Extending virtual disk failed with error: capacity

Status in OpenStack Compute (nova):
  New

Bug description:
  Scenario A:
  1. image disk type: sparse
  2. image size(2.3G)
  3. flavor1(root_disk: 5G)
  4. use_linked_clone

  Scenario B:
  1. image disk type: sparse
  2. image size(2.3G)
  3. flavor2(root_disk: 6G)
  4. use_linked_clone

  I boot an instance with a sparse image disk (image size 2.3G) and a
Nova flavor with a 5G root disk; everything works (Scenario A).

  Then I boot another instance with a new flavor whose root disk is 6G
(Scenario B), and it raises an error:
  2016-03-10 17:31:56.350 3211 ERROR nova.compute.manager 
[req-a3e93241-5f54-485a-a7f0-2e1e0ebad92d 4412e38ec9814b96a03e63097ec51f1a 
8f75187cd29f4715881f450646fc6e08 - - -] [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] Instance failed to spawn
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] Traceback (most recent call last):
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2461, in 
_build_resources
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] yield resources
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2333, in 
_build_and_run_instance
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] block_device_info=block_device_info)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/driver.py", line 480, in 
spawn
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] admin_password, network_info, 
block_device_info)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 636, in 
spawn
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] 
self._use_disk_image_as_linked_clone(vm_ref, vi)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
2

[Yahoo-eng-team] [Bug 1549597] Re: VMware: Getting the wrong image cache folder makes VM creation slow

2016-02-24 Thread Dongcan Ye
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1549597

Title:
  VMware: Getting the wrong image cache folder makes VM creation slow

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova virt driver: VMware   repo: master

  In imagecache.py we build the image cache folder as
  "self._base_folder/image_id" [1]. If we use the
  remove_unused_base_images option in nova.conf, it ensures the image
  cache folder is unique per compute node.

  But when fetching an image, the cache_image_folder function receives
  self.datastore as a parameter [2]; the datastore IP address may differ
  from the nova compute node's host_ip, so we never get the right image
  cache folder.

  [1] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/imagecache.py/#L200
  [2] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py/#L113

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1549597/+subscriptions



[Yahoo-eng-team] [Bug 1549597] [NEW] VMware: Getting the wrong image cache folder makes VM creation slow

2016-02-24 Thread Dongcan Ye
Public bug reported:

Nova virt driver: VMware   repo: master

In imagecache.py we build the image cache folder as
"self._base_folder/image_id" [1]. If we use the remove_unused_base_images
option in nova.conf, it ensures the image cache folder is unique per
compute node.

But when fetching an image, the cache_image_folder function receives
self.datastore as a parameter [2]; the datastore IP address may differ
from the nova compute node's host_ip, so we never get the right image
cache folder.

[1] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/imagecache.py/#L200
[2] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py/#L113
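A minimal sketch of why keying the folder off the wrong IP breaks cache lookups; the path layout and function signature here are assumptions for illustration, not the real nova/virt/vmwareapi layout:

```python
import posixpath

def cache_image_folder(base_folder, node_ip, image_id):
    # The per-node cache folder is keyed off an IP address. If the
    # datastore's IP is passed instead of the compute node's host_ip,
    # the resulting folder never matches the one other code paths use.
    return posixpath.join(base_folder, node_ip, image_id)

host_path = cache_image_folder("vmware_base", "10.0.0.5", "image-1")
ds_path = cache_image_folder("vmware_base", "10.0.0.9", "image-1")
print(host_path == ds_path)  # False: the cache lookup always misses
```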

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vmware

** Tags added: vmware

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1549597

Title:
  VMware: Getting the wrong image cache folder makes VM creation slow

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1549597/+subscriptions



[Yahoo-eng-team] [Bug 1532104] [NEW] VMware: Make http connection pool configurable

2016-01-08 Thread Dongcan Ye
Public bug reported:

When nova-compute restarts, it triggers SOAP requests to vCenter Server,
and oslo.vmware sets the default HTTP pool size to 10. [1]

This is not sufficient in a large-scale environment; the nova-compute log
contains many warnings like:
WARNING urllib3.connectionpool [-] Connection pool is full, discarding
connection

In this situation users cannot set the HTTP pool size; the only way to
tune the connection pool is to modify the oslo.vmware code. We can make
it configurable on the Nova vCenter driver side.

[1] https://review.openstack.org/#/c/206804/
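A minimal sketch of the proposed fix, assuming a driver-side option named connection_pool_size; the class and option names are invented, and Nova would read the option via oslo.config rather than a plain dict:

```python
DEFAULT_POOL_SIZE = 10  # the library's hard-coded default [1]

class Session:
    """Stand-in for the vCenter API session object."""
    def __init__(self, pool_size=DEFAULT_POOL_SIZE):
        self.pool_size = pool_size  # handed down to the HTTP pool

def make_session(conf):
    # The driver reads its own option and threads it through to the
    # session, instead of relying on the library default.
    return Session(pool_size=conf.get("connection_pool_size",
                                      DEFAULT_POOL_SIZE))

print(make_session({"connection_pool_size": 50}).pool_size)  # 50
print(make_session({}).pool_size)                            # 10
```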

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532104

Title:
  VMware: Make http connection pool configurable

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532104/+subscriptions



[Yahoo-eng-team] [Bug 1530042] [NEW] Missing iptables-ipv6 package causes ovs agent create flow failed

2015-12-29 Thread Dongcan Ye
RuntimeError(m)
2015-12-29 10:59:50.607 1 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError: 
2015-12-29 10:59:50.607 1 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip6tables-save', '-c']
2015-12-29 10:59:50.607 1 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 96
2015-12-29 10:59:50.607 1 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
2015-12-29 10:59:50.607 1 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: 
'/usr/bin/neutron-rootwrap: Executable not found: ip6tables-save (filter match 
= ip6tables-save)\n'
2015-12-29 10:59:50.607 1 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
2015-12-29 10:59:50.611 1 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Agent rpc_loop - 
iteration:1005 completed. Processed ports statistics: {'ancillary': {'removed': 
0, 'added': 0}, 'regular': {'updated': 0, 'added': 0, 'removed': 0}}. 
Elapsed:1.472 rpc_loop 
/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1349
2015-12-29 10:59:51.139 1 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Agent rpc_loop - 
iteration:1006 started rpc_loop 
/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1248

This problem can be avoided by using the neutron sanity-check tool.
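
An illustrative sketch of the kind of check such a tool performs (this is not
the actual neutron sanity_check code): verify that the ip6tables binaries
exist on PATH before the agent tries to call them through rootwrap.

```python
# Illustrative sketch: fail early if the ip6tables binaries the agent
# will invoke are not installed. Not the actual neutron sanity_check code.
import shutil

REQUIRED_BINARIES = ["ip6tables-save", "ip6tables-restore"]


def missing_binaries(names):
    """Return the executables that cannot be found on PATH."""
    return [name for name in names if shutil.which(name) is None]


if __name__ == "__main__":
    missing = missing_binaries(REQUIRED_BINARIES)
    if missing:
        print("Missing executables: " + ", ".join(missing))
```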

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1530042

Title:
  Missing iptables-ipv6 package causes ovs agent create flow failure

Status in neutron:
  New

Bug description:
  In some scenarios, like OpenStack Kolla, the system may lack the iptables-ipv6 
package.
  This makes the ip6tables-save and ip6tables-restore commands unavailable. In 
this situation, enabling security groups may cause an ovs agent error:

  2015-12-29 10:59:50.020 1 DEBUG neutron.agent.securitygroups_rpc [-] 
Preparing device filters for 1 new devices setup_port_filters 
/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py:345
  2015-12-29 10:59:50.021 1 INFO neutron.agent.securitygroups_rpc [-] Preparing 
filters for devices set([u'5735d7ab-6acf-4240-82a3-857da9ddeaf6'])
  2015-12-29 10:59:50.022 1 DEBUG neutron.agent.securitygroups_rpc [-] Get 
security group information for devices via rpc 
[u'5735d7ab-6acf-4240-82a3-857da9ddeaf6'] security_group_info_for_devices 
/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py:99
  2015-12-29 10:59:50.023 1 DEBUG neutron.openstack.common.rpc.amqp [-] Making 
synchronous call on q-plugin ... multicall 
/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py:566
  2015-12-29 10:59:50.023 1 DEBUG neutron.openstack.common.rpc.amqp [-] MSG_ID 
is c49309ed837d4b9e8470314dece984e1 multicall 
/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py:569
  2015-12-29 10:59:50.024 1 DEBUG neutron.openstack.common.rpc.amqp [-] 
UNIQUE_ID is 0fd2207db4ab4f37831c913ce9ba4c18. _add_unique_id 
/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py:350
  2015-12-29 10:59:50.103 1 DEBUG neutron.agent.linux.iptables_firewall [-] 
Preparing device (5735d7ab-6acf-4240-82a3-857da9ddeaf6) filter 
prepare_port_filter 
/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_firewall.py:75
  2015-12-29 10:59:50.104 1 DEBUG neutron.agent.securitygroups_rpc [-] Update 
security group information for ports [u'5735d7ab-6acf-4240-82a3-857da9ddeaf6'] 
prepare_devices_filter 
/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py:228
  2015-12-29 10:59:50.105 1 DEBUG neutron.agent.securitygroups_rpc [-] Update 
security group information _update_security_group_info 
/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py:234
  2015-12-29 10:59:50.106 1 DEBUG neutron.agent.linux.iptables_firewall [-] 
Update rules of security group (1f21c2f6-8da1-4e90-af81-6ea9f0197c5f) 
update_security_group_rules 
/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_firewall.py:67
  2015-12-29 10:59:50.107 1 DEBUG neutron.agent.linux.iptables_firewall [-] 
Update members of security group (1f21c2f6-8da1-4e90-af81-6ea9f0197c5f) 
update_security_group_members 
/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_firewall.py:71
  2015-12-29 10:59:50.111 1 DEBUG neutron.openstack.common.lockutils [-] Got 
semaphore "iptables" lock 
/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py:171
  2015-12-29 10:59:50.112 1 DEBUG neutron.openstack.common.lockutils [-] 
Attempting to grab file lock "iptables" lock 
/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py:181
  2015-12-29 10:59:50.113 1 DEBUG neutron.openstack

[Yahoo-eng-team] [Bug 1489857] [NEW] Create ICMP security group rule with type but no code succeeds

2015-08-28 Thread Dongcan Ye
Public bug reported:

If the user creates an ICMP security group rule, specifying the required ICMP 
type but no code, it will succeed.
I think neutron-server should raise an error here.
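
A hypothetical sketch of the validation the report asks for (not the actual
neutron-server code): reject a rule whose ICMP type/code pair is incomplete
instead of silently accepting it.

```python
# Hypothetical validation sketch, not the real neutron-server code:
# reject an ICMP rule when only one half of the type/code pair is set.

def validate_icmp_rule(icmp_type, icmp_code):
    """Raise ValueError when only one half of the type/code pair is set."""
    if icmp_type is not None and icmp_code is None:
        raise ValueError(
            "ICMP type %s given without an ICMP code" % icmp_type)
    if icmp_type is None and icmp_code is not None:
        raise ValueError(
            "ICMP code %s given without an ICMP type" % icmp_code)


validate_icmp_rule(8, 0)   # echo-request with a code: accepted
```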

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489857

Title:
  Create ICMP security group rule with type but no code succeeds

Status in neutron:
  New

Bug description:
  If the user creates an ICMP security group rule, specifying the required ICMP 
type but no code, it will succeed.
  I think neutron-server should raise an error here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488764] [NEW] Create IPSec site connection with IPSec policy that specifies AH-ESP protocol error

2015-08-26 Thread Dongcan Ye
Public bug reported:

Create IPSec site connection with IPSec policy that specifies AH-ESP
protocol leads to the following error:


2015-08-26 13:29:10.976 ERROR neutron.agent.linux.utils 
[req-7b4a7ccc-286e-4267-9d50-d84afa5b5663 demo 
99b8d178a6784d749920414ac08bce66] 
Command: ['ip', 'netns', 'exec', 
u'qrouter-552bb850-4b33-4bf9-8d6a-c7f47f6e2d27', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.conf',
 u'a9587a5c-ff6e-4257-89c1-475300fc8622']
Exit code: 34
Stdin: 
Stdout: 034 Must do at AH or ESP, not neither. 

Stderr: WARNING: /opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-
c7f47f6e2d27/etc/ipsec.co

2015-08-26 13:29:10.976 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec 
[req-7b4a7ccc-286e-4267-9d50-d84afa5b5663 demo 
99b8d178a6784d749920414ac08bce66] Failed to enable vpn process on router 
552bb850-4b33-4bf9-8d6a-c7f47f6e2d27
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Traceback (most recent call last):
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 251, in enable
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   self.start()
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 433, in start
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   ipsec_site_conn['id']
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 332, in _execute
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   extra_ok_codes=extra_ok_codes)
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File /opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 719, in execute
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   extra_ok_codes=extra_ok_codes, **kwargs)
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File /opt/stack/neutron/neutron/agent/linux/utils.py, line 153, in execute
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   raise RuntimeError(m)
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
RuntimeError: 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Command: ['ip', 'netns', 'exec', 
u'qrouter-552bb850-4b33-4bf9-8d6a-c7f47f6e2d27', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.conf',
 u'a9587a5c-ff6e-4257-89c1-475300fc8622']
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Exit code: 34
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stdin: 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stdout: 034 Must do at AH or ESP, not neither. 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stderr: WARNING: 
/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.co
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec


It seems Openswan doesn't support the combined AH-ESP protocol.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488764

Title:
  Create IPSec site connection with IPSec policy that specifies AH-ESP
  protocol error

Status in neutron:
  New

Bug description:
  Create IPSec site connection with IPSec policy that specifies AH-ESP
  protocol leads to the following error:

  
  2015-08-26 13:29:10.976 ERROR neutron.agent.linux.utils 
[req-7b4a7ccc-286e-4267-9d50-d84afa5b5663 demo 
99b8d178a6784d749920414ac08bce66] 
  Command: ['ip', 'netns', 'exec', 
u'qrouter-552bb850-4b33-4bf9-8d6a-c7f47f6e2d27', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.conf',
 u'a9587a5c-ff6e-4257-89c1-475300fc8622']

[Yahoo-eng-team] [Bug 1405135] Re: Neutron lbaas can't update vip session-persistence's cookie_name

2015-08-25 Thread Dongcan Ye
** Changed in: neutron
 Assignee: Dongcan Ye (hellochosen) => (unassigned)

** Changed in: neutron
    Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405135

Title:
  Neutron lbaas can't update vip  session-persistence's cookie_name

Status in neutron:
  Invalid

Bug description:
  When I try to update the load balancer VIP session_persistence's
  cookie_name, the neutron client returns OK, but the database still shows
  NULL.

  Show the vip info:
  $ neutron lb-vip-show 38f4d333-66a6-4012-8edf-b8549238aa22
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | address             | 10.10.136.62                         |
  | admin_state_up      | True                                 |
  | connection_limit    | -1                                   |
  | description         |                                      |
  | id                  | 38f4d333-66a6-4012-8edf-b8549238aa22 |
  | name                | ye_vip                               |
  | pool_id             | 7f4c278b-1630-4299-9478-b8653d345ec6 |
  | port_id             | d213d0e6-557d-4b10-8ee9-b2c70ccdc7a8 |
  | protocol            | HTTP                                 |
  | protocol_port       | 80                                   |
  | session_persistence | {"type": "SOURCE_IP"}                |
  | status              | ACTIVE                               |
  | status_description  |                                      |
  | subnet_id           | b5017991-b63c-4bd0-a7e5-b3eaa8d81c23 |
  | tenant_id           | 5b969b39b06a4528bbd4198315377eb0     |
  +---------------------+--------------------------------------+

  Use lb-vip-update command update cookie_name[1]:
  $ neutron lb-vip-update 38f4d333-66a6-4012-8edf-b8549238aa22  
--session-persistence type=dict type=SOURCE_IP,[cookie_name=test]

  
  In the database, as we can see, it is still NULL.

  mysql> select * from sessionpersistences where 
vip_id="38f4d333-66a6-4012-8edf-b8549238aa22";
  +--------------------------------------+-----------+-------------+
  | vip_id                               | type      | cookie_name |
  +--------------------------------------+-----------+-------------+
  | 38f4d333-66a6-4012-8edf-b8549238aa22 | SOURCE_IP | NULL        |
  +--------------------------------------+-----------+-------------+
  1 row in set (0.00 sec)

  [1]https://wiki.openstack.org/wiki/Neutron/LBaaS/CLI

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486836] [NEW] floating ip can't work because of instance mac address conflict

2015-08-19 Thread Dongcan Ye
Public bug reported:

Problem Description
===================

1.  Create an instance; its fixed_ip is allocated by the dhcp agent.
2.  Associate a floating_ip with the instance, then ssh or ping the floating_ip.
3.  Delete the instance; the floating_ip is disassociated.
4.  Create another instance, specifying the same fixed_ip from step 1 that the 
dhcp agent auto-allocated.
5.  Associate the same floating_ip from step 2; then ssh or ping to the 
floating_ip will fail.

The reason for this issue is that the ARP entry in the router namespace
still holds the old instance's MAC address.
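
A sketch of the usual fix for this class of bug: refresh the ARP entry in the
router namespace when a fixed IP is reused with a new MAC, instead of leaving
the stale neighbour entry behind. The function below only builds the command
line; the helper name and exact invocation are illustrative, not the actual
neutron l3-agent code.

```python
# Illustrative sketch, not the real neutron l3-agent code: build the
# 'ip neigh replace' command that would refresh a stale ARP entry inside
# the router namespace.

def arp_update_cmd(namespace, device, ip_address, mac_address):
    """Build an 'ip neigh replace' command for the router namespace."""
    return ["ip", "netns", "exec", namespace,
            "ip", "neigh", "replace", ip_address,
            "lladdr", mac_address, "nud", "permanent", "dev", device]


# The agent would hand this to a rootwrap-aware execute() helper, e.g.:
# utils.execute(arp_update_cmd("qrouter-<id>", "qr-xxxx",
#                              "10.0.0.5", "fa:16:3e:aa:bb:cc"),
#               run_as_root=True)
```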

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486836

Title:
  floating ip can't work because of instance mac address conflict

Status in neutron:
  New

Bug description:
  Problem Description
  ===================

  1.  Create an instance; its fixed_ip is allocated by the dhcp agent.
  2.  Associate a floating_ip with the instance, then ssh or ping the floating_ip.
  3.  Delete the instance; the floating_ip is disassociated.
  4.  Create another instance, specifying the same fixed_ip from step 1 that the 
dhcp agent auto-allocated.
  5.  Associate the same floating_ip from step 2; then ssh or ping to the 
floating_ip will fail.

  The reason for this issue is that the ARP entry in the router namespace
  still holds the old instance's MAC address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470052] [NEW] vm failed to start when image adapter type is scsi

2015-06-30 Thread Dongcan Ye
Public bug reported:

I followed the OpenStack Configuration Reference [1], downloaded the cloud 
image [2], and converted it to vmdk:

$ qemu-img convert -f qcow2 trusty-server-cloudimg-amd64-disk1.img -O
vmdk ubuntu.vmdk

and then use glance image-create command to upload image:

$ glance image-create --name ubuntu-thick-scsi --is-public True --disk-format 
vmdk --container-format bare --property vmware_adaptertype=lsiLogic  \  
   --property vmware_disktype=preallocated --property 
vmware_ostype=ubuntu64Guest  ubuntu.vmdk  
+-------------------------------+--------------------------------------+
| Property                      | Value                                |
+-------------------------------+--------------------------------------+
| Property 'vmware_adaptertype' | lsiLogic                             |
| Property 'vmware_disktype'    | preallocated                         |
| Property 'vmware_ostype'      | ubuntu64Guest                        |
| checksum                      | 676e7fc58d2314db6a264c11804b2d4c     |
| container_format              | bare                                 |
| created_at                    | 2015-06-26T23:55:36                  |
| deleted                       | False                                |
| deleted_at                    | None                                 |
| disk_format                   | vmdk                                 |
| id                            | e79d4815-932b-4be6-b90c-0515f826c615 |
| is_public                     | True                                 |
| min_disk                      | 0                                    |
| min_ram                       | 0                                    |
| name                          | ubuntu-thick-scsi                    |
| owner                         | 93a022fd03d94b649d0127498e6149cf     |
| protected                     | False                                |
| size                          | 852230144                            |
| status                        | active                               |
| updated_at                    | 2015-06-26T23:56:39                  |
| virtual_size                  | None                                 |
+-------------------------------+--------------------------------------+

I created an instance in the dashboard successfully, but it failed to boot into 
the guest system.
I suspect the instance does not have a controller that supports a scsi disk; 
when using ide, the instance runs well.


[1]http://docs.openstack.org/kilo/config-reference/content/vmware.html
[2]http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470052

Title:
  vm failed to start when image adapter type is scsi

Status in OpenStack Compute (Nova):
  New

Bug description:
  I followed the OpenStack Configuration Reference [1], downloaded the cloud 
image [2]
  and converted it to vmdk:

  $ qemu-img convert -f qcow2 trusty-server-cloudimg-amd64-disk1.img -O
  vmdk ubuntu.vmdk

  and then use glance image-create command to upload image:

  $ glance image-create --name ubuntu-thick-scsi --is-public True 
--disk-format vmdk --container-format bare --property 
vmware_adaptertype=lsiLogic  \  
 --property vmware_disktype=preallocated --property 
vmware_ostype=ubuntu64Guest  ubuntu.vmdk  
  +-------------------------------+--------------------------------------+
  | Property                      | Value                                |
  +-------------------------------+--------------------------------------+
  | Property 'vmware_adaptertype' | lsiLogic                             |
  | Property 'vmware_disktype'    | preallocated                         |
  | Property 'vmware_ostype'      | ubuntu64Guest                        |
  | checksum                      | 676e7fc58d2314db6a264c11804b2d4c     |
  | container_format              | bare                                 |
  | created_at                    | 2015-06-26T23:55:36                  |
  | deleted                       | False                                |
  | deleted_at                    | None                                 |
  | disk_format                   | vmdk                                 |
  | id                            | e79d4815-932b-4be6-b90c-0515f826c615 |
  | is_public                     | True                                 |
  | min_disk                      | 0                                    |
  | min_ram                       | 0                                    |
  | name                          | ubuntu-thick-scsi                    |
  | owner                         | 93a022fd03d94b649d0127498e6149cf     |
  | protected   

[Yahoo-eng-team] [Bug 1449526] [NEW] When vlan id is out of range of config file, creating network still succeeds.

2015-04-28 Thread Dongcan Ye
Public bug reported:

In neutron ml2 config file /etc/neutron/plugins/ml2/ml2_conf.ini,
specify vlan range from 1002 to 1030,

[ml2_type_vlan]
network_vlan_ranges =physnet2:1002:1030

When I specify vlan id 1070, the vlan id is out of the range in the ml2 config
file. But the network is created successfully.

In the bug fix, I will check the vlan segmentation id against the config file's
vlan range.
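
An illustrative sketch of the proposed check (hypothetical names, not the
actual ML2 type-driver code): parse a network_vlan_ranges entry and reject a
segmentation id that falls outside the configured range.

```python
# Illustrative sketch of validating a requested VLAN id against the
# configured network_vlan_ranges. Not the actual ML2 type-driver code.

def parse_vlan_range(entry):
    """Parse 'physnet2:1002:1030' into (physnet, minimum, maximum)."""
    physnet, vlan_min, vlan_max = entry.split(":")
    return physnet, int(vlan_min), int(vlan_max)


def vlan_id_allowed(entry, physical_network, vlan_id):
    """True only when the id lies inside the configured range."""
    physnet, vlan_min, vlan_max = parse_vlan_range(entry)
    return physnet == physical_network and vlan_min <= vlan_id <= vlan_max


# vlan 1070 is outside physnet2:1002:1030, so creation should fail:
allowed = vlan_id_allowed("physnet2:1002:1030", "physnet2", 1070)
```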

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449526

Title:
  When vlan id is out of range of config file, creating network still
  succeeds.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron ml2 config file /etc/neutron/plugins/ml2/ml2_conf.ini,
  specify vlan range from 1002 to 1030,

  [ml2_type_vlan]
  network_vlan_ranges =physnet2:1002:1030

  When I specify vlan id 1070, the vlan id is out of the range in the ml2
  config file. But the network is created successfully.

  In the bug fix, I will check the vlan segmentation id against the config
  file's vlan range.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441980] [NEW] When port group not found in ESXi, get TypeError: 'NoneType' object is not iterable

2015-04-09 Thread Dongcan Ye
Public bug reported:

In my environment, the nova version is 2014.1.3, using nova-network vlan mode.

When creating an instance with the VLAN manager, the instance gets network info 
from the VIF, such as the bridge name.
If the bridge name and vlan id are not found in the cluster, 
network_util.get_vlanid_and_vswitch_for_portgroup returns None, 
but in the function ensure_vlan_bridge there are two variables unpacking that 
None value, 
so it raises TypeError: 'NoneType' object is not iterable.

2015-04-09 03:34:20.399 9891 ERROR nova.compute.manager 
[req-5c2cef4b-e0b7-4adf-a48d-8fc19d53ed64 4fdbdb45ea224242b9c983a4d80ae639 
188a8948c92d47099be6bddf09782e1f] [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] Instance failed to spawn
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] Traceback (most recent call last):
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1737, in _spawn
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] block_device_info)
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 632, in 
spawn
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] admin_password, network_info, 
block_device_info)
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py, line 301, in 
spawn
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] vif_infos = _get_vif_infos()
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py, line 291, in 
_get_vif_infos
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] self._is_neutron)
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vif.py, line 153, in 
get_network_ref
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] create_vlan=create_vlan)
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vif.py, line 86, in 
ensure_vlan_bridge
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] pg_vlanid, pg_vswitch = 
_get_pg_info(session, bridge, cluster)
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] TypeError: 'NoneType' object is not 
iterable
2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]


Should network_util.get_vlanid_and_vswitch_for_portgroup return a proper value 
when the port group is not found on the ESXi host,
or should it raise an exception?
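
A sketch of the second option (raising an exception), using hypothetical
stand-ins for the vmwareapi helpers rather than the actual nova code. With a
dedicated exception, the caller can no longer try to unpack None, so the
confusing "'NoneType' object is not iterable" disappears.

```python
# Illustrative stand-ins for the vmwareapi helpers, not the real nova
# code: raise a clear error instead of returning None to an unpack site.

class PortGroupNotFound(Exception):
    pass


def get_vlanid_and_vswitch_for_portgroup(portgroups, pg_name):
    """Return (vlan_id, vswitch) or raise instead of returning None."""
    info = portgroups.get(pg_name)
    if info is None:
        raise PortGroupNotFound("port group %s not found" % pg_name)
    return info


def ensure_vlan_bridge(portgroups, bridge):
    # Unpacking is now safe: a missing port group raises a clear error
    # before we get here.
    pg_vlanid, pg_vswitch = get_vlanid_and_vswitch_for_portgroup(
        portgroups, bridge)
    return pg_vlanid, pg_vswitch
```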

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441980

Title:
  When port group not found in ESXi, get TypeError: 'NoneType' object
  is not iterable

Status in OpenStack Compute (Nova):
  New

Bug description:
  In my environment, the nova version is 2014.1.3, using nova-network vlan
  mode.

  When creating an instance with the VLAN manager, the instance gets network 
info from the VIF, such as the bridge name.
  If the bridge name and vlan id are not found in the cluster, 
network_util.get_vlanid_and_vswitch_for_portgroup returns None, 
  but in the function ensure_vlan_bridge there are two variables unpacking that 
None value, 
  so it raises TypeError: 'NoneType' object is not iterable.

  2015-04-09 03:34:20.399 9891 ERROR nova.compute.manager 
[req-5c2cef4b-e0b7-4adf-a48d-8fc19d53ed64 4fdbdb45ea224242b9c983a4d80ae639 
188a8948c92d47099be6bddf09782e1f] [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] Instance failed to spawn
  2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] Traceback (most recent call last):
  2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1737, in _spawn
  2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 
37da5bc7-0a23-4a81-8cf6-cf5cefe5851a] block_device_info)
  2015-04-09 03:34:20.399 9891 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1422270] [NEW] Nova vmwareapi session passes more arguments when raising an exception

2015-02-16 Thread Dongcan Ye
Public bug reported:

On Icehouse nova-2014.1.3, when a vmware api session method call meets
VimFaultException, it passes two arguments to
error_util.get_fault_class().

Code like this:
raise error_util.get_fault_class(fault)(str(excep))

But the __init__ of the class returned by error_util.get_fault_class() does 
not accept a message argument.

This raises a type error:
TypeError: __init__() takes exactly 1 argument (2 given)
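
A minimal reproduction of the mismatch, with hypothetical stand-in classes
(not the real nova error_util module): the raising site instantiates the
returned class with one message argument, so every fault class must accept
that message in its __init__.

```python
# Hypothetical stand-ins for nova's error_util fault classes, showing why
# a fault class whose __init__ takes no message argument blows up with
# TypeError when called as get_fault_class(fault)(str(excep)).

class BrokenFault(Exception):
    def __init__(self):            # no message parameter ->
        super().__init__()         # TypeError when called with one


class FixedFault(Exception):
    def __init__(self, message):   # accepts the message argument
        super().__init__(message)


_FAULT_MAP = {"broken": BrokenFault, "fixed": FixedFault}


def get_fault_class(fault_name):
    return _FAULT_MAP[fault_name]


exc = get_fault_class("fixed")("boom")      # works: __init__ matches
# get_fault_class("broken")("boom") would raise the TypeError above
```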

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422270

Title:
  Nova vmwareapi session passes more arguments when raising an exception

Status in OpenStack Compute (Nova):
  New

Bug description:
  On Icehouse nova-2014.1.3, when a vmware api session method call meets
  VimFaultException, it passes two arguments to
  error_util.get_fault_class().

  Code like this:
  raise error_util.get_fault_class(fault)(str(excep))

  But the __init__ of the class returned by error_util.get_fault_class() 
does not accept a message argument.

  This raises a type error:
  TypeError: __init__() takes exactly 1 argument (2 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413847] Re: Neutron client returns 400 when request uri is too long

2015-02-04 Thread Dongcan Ye
Retested on master; neutron-server returns the correct response.

** Changed in: neutron
    Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Dongcan Ye (hellochosen) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413847

Title:
  Neutron client returns 400 when request uri is too long

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When I get security group rules as admin, the neutron client returns 400:

   ERROR: neutronclient.shell <html><body><h1>400 Bad request</h1>
  Your browser sent an invalid request.
  </body></html>
  Traceback (most recent call last):
File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 554, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 83, in 
run_command
  return cmd.run(known_args)
File /usr/lib/python2.6/site-packages/neutronclient/common/command.py, 
line 34, in run
  return super(OpenStackCommand, self).run(parsed_args)
File /usr/lib/python2.6/site-packages/cliff/display.py, line 84, in run
  column_names, data = self.take_action(parsed_args)
File /usr/lib/python2.6/site-packages/neutronclient/common/command.py, 
line 40, in take_action
  return self.get_data(parsed_args)
File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/__init__.py, line 
615, in get_data
  self.extend_list(data, parsed_args)
File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/securitygroup.py, 
line 171, in extend_list
  _get_sec_group_list(sec_group_ids[i: i + chunk_size]))
File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/securitygroup.py, 
line 153, in _get_sec_group_list
  **search_opts).get('security_groups', [])
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
101, in with_params
  ret = self.function(instance, *args, **kwargs)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
476, in list_security_groups
  retrieve_all, **_params)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1330, in list
  for r in self._pagination(collection, path, **params):
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1343, in _pagination
  res = self.get(path, params=params)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1316, in get
  headers=headers, params=params)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1301, in retry_request
  headers=headers, params=params)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1244, in do_request
  self._handle_fault_response(status_code, replybody)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1211, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
81, in exception_handler_v20
  message=message)
  NeutronClientException: <html><body><h1>400 Bad request</h1>
  Your browser sent an invalid request.
  </body></html>

  I have printed the URI length; it's larger than the default max URI length 
for eventlet.wsgi.server.
  In the neutron client code, the security-group ids list is already split, so 
the URI length should be less than 8192. (In my environment, after the split, 
uri_len is 8159.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413847] [NEW] Neutron client returns 400 when request uri is too long

2015-01-22 Thread Dongcan Ye
Public bug reported:

When I get security group rules as admin, the neutron client returns 400:

 ERROR: neutronclient.shell <html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 554, in 
run_subcommand
return run_command(cmd, cmd_parser, sub_argv)
  File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 83, in 
run_command
return cmd.run(known_args)
  File /usr/lib/python2.6/site-packages/neutronclient/common/command.py, line 
34, in run
return super(OpenStackCommand, self).run(parsed_args)
  File /usr/lib/python2.6/site-packages/cliff/display.py, line 84, in run
column_names, data = self.take_action(parsed_args)
  File /usr/lib/python2.6/site-packages/neutronclient/common/command.py, line 
40, in take_action
return self.get_data(parsed_args)
  File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/__init__.py, line 
615, in get_data
self.extend_list(data, parsed_args)
  File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/securitygroup.py, 
line 171, in extend_list
_get_sec_group_list(sec_group_ids[i: i + chunk_size]))
  File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/securitygroup.py, 
line 153, in _get_sec_group_list
**search_opts).get('security_groups', [])
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
101, in with_params
ret = self.function(instance, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
476, in list_security_groups
retrieve_all, **_params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1330, in list
for r in self._pagination(collection, path, **params):
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1343, in _pagination
res = self.get(path, params=params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1316, in get
headers=headers, params=params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1301, in retry_request
headers=headers, params=params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1244, in do_request
self._handle_fault_response(status_code, replybody)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
1211, in _handle_fault_response
exception_handler_v20(status_code, des_error_body)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
81, in exception_handler_v20
message=message)
NeutronClientException: <html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

I have printed the URI length; it is larger than the default max URI length
for eventlet.wsgi.server.
The neutron client code already splits the security-group ID list, so the
URI length stays under 8192. (In my environment, the URI length after
splitting is 8159.)
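The splitting approach described above can be sketched as follows. This is illustrative code, not neutronclient's actual implementation; the base path, the per-ID `id=<uuid>&` cost, and the 8192-byte limit (eventlet.wsgi's default) are assumptions for the sketch:

```python
MAX_URI_LEN = 8192   # eventlet.wsgi.server default; assumption for this sketch

def chunked_id_lists(base_uri, ids, max_uri_len=MAX_URI_LEN):
    """Yield sub-lists of IDs whose query string keeps the URI under the limit."""
    chunk, length = [], len(base_uri)
    for item in ids:
        extra = len("id=") + len(item) + 1   # "id=<uuid>&" per ID
        if chunk and length + extra > max_uri_len:
            yield chunk
            chunk, length = [], len(base_uri)
        chunk.append(item)
        length += extra
    if chunk:
        yield chunk

base = "/v2.0/security-groups.json?"          # illustrative path
chunks = list(chunked_id_lists(base, ["x" * 36] * 500))
print([len(c) for c in chunks])
```

With 500 UUID-sized IDs each chunk's URI stays under the limit, which matches the behavior the client already has; the 400 here comes from the server side despite the URI being 8159 bytes.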

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413847

Title:
  Neutron client return 400 when request uri is too long

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I get security group rule by  admin, neutron client return 400:

   ERROR: neutronclient.shell <html><body><h1>400 Bad request</h1>
  Your browser sent an invalid request.
  </body></html>
  Traceback (most recent call last):
File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 554, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 83, in 
run_command
  return cmd.run(known_args)
File /usr/lib/python2.6/site-packages/neutronclient/common/command.py, 
line 34, in run
  return super(OpenStackCommand, self).run(parsed_args)
File /usr/lib/python2.6/site-packages/cliff/display.py, line 84, in run
  column_names, data = self.take_action(parsed_args)
File /usr/lib/python2.6/site-packages/neutronclient/common/command.py, 
line 40, in take_action
  return self.get_data(parsed_args)
File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/__init__.py, line 
615, in get_data
  self.extend_list(data, parsed_args)
File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/securitygroup.py, 
line 171, in extend_list
  _get_sec_group_list(sec_group_ids[i: i + chunk_size]))
File 
/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/securitygroup.py, 
line 153, in _get_sec_group_list
  **search_opts).get('security_groups', [])
File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 
101, in with_params
  ret = self.function(instance, *args, **kwargs)
File 

[Yahoo-eng-team] [Bug 1405135] [NEW] [neutron_lbaas] Neutron lbaas can't update vip session-persistence's cookie_name

2014-12-23 Thread Dongcan Ye
Public bug reported:

When I want to update a load balancer VIP's session_persistence cookie_name,
neutron-client returns OK, but the database still shows NULL.

Show the vip info:
$ neutron lb-vip-show 38f4d333-66a6-4012-8edf-b8549238aa22
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| address             | 10.10.136.62                         |
| admin_state_up      | True                                 |
| connection_limit    | -1                                   |
| description         |                                      |
| id                  | 38f4d333-66a6-4012-8edf-b8549238aa22 |
| name                | ye_vip                               |
| pool_id             | 7f4c278b-1630-4299-9478-b8653d345ec6 |
| port_id             | d213d0e6-557d-4b10-8ee9-b2c70ccdc7a8 |
| protocol            | HTTP                                 |
| protocol_port       | 80                                   |
| session_persistence | {"type": "SOURCE_IP"}                |
| status              | ACTIVE                               |
| status_description  |                                      |
| subnet_id           | b5017991-b63c-4bd0-a7e5-b3eaa8d81c23 |
| tenant_id           | 5b969b39b06a4528bbd4198315377eb0     |
+---------------------+--------------------------------------+

Use the lb-vip-update command to update cookie_name [1]:
$ neutron lb-vip-update 38f4d333-66a6-4012-8edf-b8549238aa22  
--session-persistence type=dict type=SOURCE_IP,[cookie_name=test]


As we can see, in the database cookie_name is still NULL.

mysql> select * from sessionpersistences where
vip_id='38f4d333-66a6-4012-8edf-b8549238aa22';
+--------------------------------------+-----------+-------------+
| vip_id                               | type      | cookie_name |
+--------------------------------------+-----------+-------------+
| 38f4d333-66a6-4012-8edf-b8549238aa22 | SOURCE_IP | NULL        |
+--------------------------------------+-----------+-------------+
1 row in set (0.00 sec)

[1]https://wiki.openstack.org/wiki/Neutron/LBaaS/CLI
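For comparison, this is a hypothetical sketch of the request body such an update would carry. Field names follow the LBaaS v1 API; the values are illustrative. It is worth noting that in the v1 API cookie_name is documented as meaningful only with type APP_COOKIE; with SOURCE_IP the server may discard it, which would be consistent with the NULL seen above:

```python
import json

# Hypothetical PUT body for updating a VIP's session persistence.
# cookie_name is only meaningful when type is APP_COOKIE.
body = {
    "vip": {
        "session_persistence": {
            "type": "APP_COOKIE",
            "cookie_name": "test",
        }
    }
}
print(json.dumps(body, sort_keys=True))
```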

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405135

Title:
  [neutron_lbaas] Neutron lbaas can't update vip  session-persistence's
  cookie_name

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I want to update a load balancer VIP's session_persistence
  cookie_name, neutron-client returns OK, but the database still shows
  NULL.

  Show the vip info:
  $ neutron lb-vip-show 38f4d333-66a6-4012-8edf-b8549238aa22
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | address             | 10.10.136.62                         |
  | admin_state_up      | True                                 |
  | connection_limit    | -1                                   |
  | description         |                                      |
  | id                  | 38f4d333-66a6-4012-8edf-b8549238aa22 |
  | name                | ye_vip                               |
  | pool_id             | 7f4c278b-1630-4299-9478-b8653d345ec6 |
  | port_id             | d213d0e6-557d-4b10-8ee9-b2c70ccdc7a8 |
  | protocol            | HTTP                                 |
  | protocol_port       | 80                                   |
  | session_persistence | {"type": "SOURCE_IP"}                |
  | status              | ACTIVE                               |
  | status_description  |                                      |
  | subnet_id           | b5017991-b63c-4bd0-a7e5-b3eaa8d81c23 |
  | tenant_id           | 5b969b39b06a4528bbd4198315377eb0     |
  +---------------------+--------------------------------------+

  Use the lb-vip-update command to update cookie_name [1]:
  $ neutron lb-vip-update 38f4d333-66a6-4012-8edf-b8549238aa22  
--session-persistence type=dict type=SOURCE_IP,[cookie_name=test]

  
  As we can see, in the database cookie_name is still NULL.

  mysql> select * from sessionpersistences where
  vip_id='38f4d333-66a6-4012-8edf-b8549238aa22';
  +--------------------------------------+-----------+-------------+
  | vip_id                               | type      | cookie_name |
  +--------------------------------------+-----------+-------------+
  | 38f4d333-66a6-4012-8edf-b8549238aa22 | SOURCE_IP | NULL        |
  +--------------------------------------+-----------+-------------+
  1 row in set (0.00 sec)

  [1]https://wiki.openstack.org/wiki/Neutron/LBaaS/CLI

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https

[Yahoo-eng-team] [Bug 1364814] [NEW] Neutron multiple api workers can't send cast message to agent when use zeromq

2014-09-03 Thread Dongcan Ye
Public bug reported:

When I set api_workers > 0 in the Neutron configuration and delete or add a
router interface, the Neutron L3 agent can't receive the message from the
Neutron Server.
In this situation, the L3 agent's report_state can cast to the Neutron Server,
and it can receive messages sent from the Neutron Server via the call method.

Obviously, the Neutron Server can use the cast method to send messages to the
L3 agent, so why does casting routers_updated fail? This also occurs with
other Neutron agents.

Then I made a test: I wrote some code at Neutron Server startup and in
l3_router_plugin that casts a periodic message to the L3 agent directly.
The L3 agent's rpc-zmq-receiver log file shows that it receives the message
from the Neutron Server.

By the way, everything works well when api_workers = 0.
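The cast/call distinction above can be illustrated with a minimal in-process sketch (class and method bodies are illustrative; this is not oslo.messaging code): a cast is fire-and-forget, so if no worker ever drains the queue the sender never notices, while a call blocks until a reply arrives.

```python
import queue

class FakeRpc:
    """cast = fire-and-forget; call = send a message and wait for a reply."""

    def __init__(self):
        self.inbox = queue.Queue()

    def cast(self, method, **kwargs):
        # No reply channel: if no worker ever drains the inbox (as in the
        # bug above), the sender has no way to notice the loss.
        self.inbox.put((method, kwargs, None))

    def call(self, method, **kwargs):
        reply = queue.Queue()
        self.inbox.put((method, kwargs, reply))
        self._serve_pending()      # a real system runs a server loop instead
        return reply.get(timeout=1)

    def _serve_pending(self):
        # Drain all queued messages; only call messages get a reply.
        while not self.inbox.empty():
            method, kwargs, reply = self.inbox.get()
            if reply is not None:
                reply.put(("handled", method, kwargs))

rpc = FakeRpc()
rpc.cast("routers_updated", routers=["r1"])       # queued, no reply expected
print(rpc.call("sync_routers", host="agent-1"))   # blocks until the reply
```

This is only the semantics; the bug itself is about the ZeroMQ transport failing to deliver casts when multiple API workers are forked.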

Test environment:
neutron(master) + oslo.messaging(master) + zeromq

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364814

Title:
  Neutron multiple api workers can't send cast message to agent when use
  zeromq

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I set api_workers > 0 in the Neutron configuration and delete or add a
router interface, the Neutron L3 agent can't receive the message from the
Neutron Server.
  In this situation, the L3 agent's report_state can cast to the Neutron
Server, and it can receive messages sent from the Neutron Server via the call
method.

  Obviously, the Neutron Server can use the cast method to send messages to
  the L3 agent, so why does casting routers_updated fail? This also occurs
  with other Neutron agents.

  Then I made a test: I wrote some code at Neutron Server startup and in
  l3_router_plugin that casts a periodic message to the L3 agent directly.
  The L3 agent's rpc-zmq-receiver log file shows that it receives the
  message from the Neutron Server.

  By the way, everything works well when api_workers = 0.

  Test environment:
  neutron(master) + oslo.messaging(master) + zeromq

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361088] [NEW] Get VM metadata information by l3 agent, resource temporarily unavailable

2014-08-25 Thread Dongcan Ye
Public bug reported:

When booting a VM with an assigned name and password, I hit a run-time error.
In the L3 agent configuration file I have enabled enable_metadata_proxy.

Trace info from l3-agent.log:

2014-08-18 16:56:11.971 3281 ERROR neutron.agent.linux.utils 
[req-3c9892ce-0d64-4cdd-ac27-dd8736076c18 None]
Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-2123c965-410d-4dc0-ab3c-240c0969b525', 'neutron-ns-metadata-proxy', 
'--pid_file=/var/lib/neutron/external/pids/2123c965-410d-4dc0-ab3c-240c0969b525.pid',
 '--metadata_proxy_socket=/var/lib/neutron/metadata_proxy', 
'--router_id=2123c965-410d-4dc0-ab3c-240c0969b525', 
'--state_path=/var/lib/neutron', '--metadata_port=9697', '--verbose', 
'--log-file=neutron-ns-metadata-proxy-2123c965-410d-4dc0-ab3c-240c0969b525.log',
 '--log-dir=/var/log/neutron']
Exit code: 1
Stdout: ''
Stderr: '2014-08-18 16:56:11.908 3861 INFO neutron.common.config [-] 
Logging enabled!\n2014-08-18 16:56:11.916 3861 ERROR neutron.agent.linux.daemon 
[-] Error while handling pidfile: 
/var/lib/neutron/external/pids/2123c965-410d-4dc0-ab3c-240c0969b525.pid\n2014-08-18
 16:56:11.916 3861 TRACE neutron.agent.linux.daemon Traceback (most recent call 
last):\n2014-08-18 16:56:11.916 3861 TRACE neutron.agent.linux.daemon   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/daemon.py, line 37, in 
__init__\n2014-08-18 16:56:11.916 3861 TRACE neutron.agent.linux.daemon 
fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\n2014-08-18 16:56:11.916 
3861 TRACE neutron.agent.linux.daemon IOError: [Errno 11] Resource temporarily 
unavailable\n2014-08-18 16:56:11.916 3861 TRACE neutron.agent.linux.daemon \n'
2014-08-18 16:56:11.972 3281 ERROR neutron.agent.l3_agent 
[req-3c9892ce-0d64-4cdd-ac27-dd8736076c18 None] Failed synchronizing routers
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent Traceback (most 
recent call last):
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 879, in 
_sync_routers_task
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
self._process_routers(routers, all_routers=True)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 812, in 
_process_routers
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
self._router_added(r['id'], r)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 368, in 
_router_added
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
self._spawn_metadata_proxy(ri.router_id, ri.ns_name)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 409, in 
_spawn_metadata_proxy
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
pm.enable(callback)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/external_process.py, 
line 54, in enable
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
ip_wrapper.netns.execute(cmd)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 466, in 
execute
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
check_exit_code=check_exit_code)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py, line 78, in 
execute
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent RuntimeError:

When spawning neutron-ns-metadata-proxy, taking the exclusive file lock on the
pidfile (named after the router ID) fails.
But the router already exists when neutron-ns-metadata-proxy starts.
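The failing step in the traceback, fcntl.flock with LOCK_EX | LOCK_NB on a pidfile that is already locked, can be reproduced in isolation. This sketch uses a temporary file in place of the real pidfile; two separate open() calls create independent file descriptions, so the second non-blocking lock attempt fails with errno 11:

```python
import errno
import fcntl
import tempfile

# Stand-in for /var/lib/neutron/external/pids/<router-id>.pid
path = tempfile.mkstemp(suffix=".pid")[1]

first = open(path, "a+")
fcntl.flock(first, fcntl.LOCK_EX | fcntl.LOCK_NB)    # first proxy: succeeds

second = open(path, "a+")                            # second spawn attempt
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError as exc:                               # IOError on Python 2
    assert exc.errno in (errno.EAGAIN, errno.EWOULDBLOCK)
    print("Errno 11: Resource temporarily unavailable")
```

This matches the observed behavior: a metadata proxy for the router is already holding the lock when the agent tries to spawn another one.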

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Get VM metadata infomation by l3 agent, resource  temporarily unavailable
+ Get VM metadata information by l3 agent, resource  temporarily unavailable

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361088

Title:
  Get VM metadata information by l3 agent, resource  temporarily
  unavailable

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When booting a VM with an assigned name and password, I hit a run-time
  error. In the L3 agent configuration file I have enabled
  enable_metadata_proxy.

  Trace info from l3-agent.log:

  2014-08-18 16:56:11.971 3281 ERROR neutron.agent.linux.utils 
[req-3c9892ce-0d64-4cdd-ac27-dd8736076c18 None]
  Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-2123c965-410d-4dc0-ab3c-240c0969b525', 'neutron-ns-metadata-proxy',