[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-06 Thread Tuan
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in Sahara:
  New

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  For consistency, we should use only that function when generating
  UUIDs.
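
  As an illustration only (not part of the original report), a minimal
  sketch of the intended usage, assuming the wrapper now lives in
  oslo.utils:

    from oslo_utils import uuidutils

    # Preferred: the common wrapper returns a canonical string UUID.
    resource_id = uuidutils.generate_uuid()

    # Rather than calling the stdlib directly:
    #     import uuid
    #     resource_id = str(uuid.uuid4())

    # The wrapper also pairs with a validation helper.
    assert uuidutils.is_uuid_like(resource_id)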

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-06 Thread Tuan
** Also affects: sahara
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in neutron:
  Fix Released
Status in Sahara:
  New

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  For consistency, we should use only that function when generating
  UUIDs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429684] Re: Nova and Brick can log each other out of iscsi sessions

2016-11-06 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429684

Title:
  Nova and Brick can log each other out of iscsi sessions

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  Brick and nova are not synchronized with the same connect_volume lock.
  This can cause nova or cinder to log out of an iSCSI portal while the
  other one is attempting to use it, if nova and cinder are running on
  the same node.

  This may seem like a rare situation, but it occurs frequently in our
  CI system because we perform many operations involving both Nova and
  Brick concurrently, most likely when attaching/detaching a volume to
  an instance while also attaching to the node directly for image
  operations.
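
  For illustration only, a minimal sketch of the kind of shared,
  cross-process file lock that would close this race, assuming both
  services agree on the lock name, prefix and lock_path (the values
  below are made up):

    from oslo_concurrency import lockutils

    # Both the Nova volume driver and Brick/Cinder would need to hold the
    # same external (file-based) lock around iSCSI login, rescan and logout.
    @lockutils.synchronized('connect_volume', 'os-brick-', external=True,
                            lock_path='/var/lock/openstack')
    def connect_volume(connection_properties):
        # iscsiadm login / --rescan runs here, serialized across processes.
        pass

    @lockutils.synchronized('connect_volume', 'os-brick-', external=True,
                            lock_path='/var/lock/openstack')
    def disconnect_volume(connection_properties):
        # iscsiadm --logout runs here, never concurrently with connect_volume.
        pass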


  In the case below, cinder logged out of the iSCSI session while nova
  was retrying rescans to detect the new LUN.

  Cinder volume logs:

  2015-03-07 17:27:14.288 28940 DEBUG oslo_concurrency.processutils [-]
  Running cmd (subprocess): sudo cinder-rootwrap
  /etc/cinder/rootwrap.conf iscsiadm -m node -T
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p
  10.250.119.127:3260 --logout execute /usr/local/lib/python2.7/dist-
  packages/oslo_concurrency/processutils.py:199

  2015-03-07 17:27:14.875 28940 DEBUG oslo_concurrency.processutils [-]
  CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf iscsiadm -m node
  -T iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815
  -p 10.250.119.127:3260 --logout" returned: 0 in 0.588s execute
  /usr/local/lib/python2.7/dist-
  packages/oslo_concurrency/processutils.py:225

  2015-03-07 17:27:14.876 28940 DEBUG cinder.brick.initiator.connector
  [-] iscsiadm ('--logout',): stdout=Logging out of session [sid: 1,
  target:
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815,
  portal: 10.250.119.127,3260]

  Logout of [sid: 1, target:
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815,
  portal: 10.250.119.127,3260] successful.


  Nova compute logs:


  2015-03-07 17:27:12.617 DEBUG nova.virt.libvirt.volume [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] iscsiadm ('--rescan',):
  stdout=Rescanning session [sid: 1, target:
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815,
  portal: 10.250.119.127,3260]

   stderr= _run_iscsiadm
  /opt/stack/new/nova/nova/virt/libvirt/volume.py:364

  2015-03-07 17:27:12.617 WARNING nova.virt.libvirt.volume [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] ISCSI volume not yet found at:
  vdb. Will rescan & retry.  Try number: 0

  2015-03-07 17:27:12.618 DEBUG oslo_concurrency.processutils [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] Running cmd (subprocess): sudo
  nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p
  10.250.119.127:3260 --rescan execute /usr/local/lib/python2.7/dist-
  packages/oslo_concurrency/processutils.py:199

  2015-03-07 17:27:13.503 DEBUG oslo_concurrency.processutils [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] CMD "sudo nova-rootwrap
  /etc/nova/rootwrap.conf iscsiadm -m node -T
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p
  10.250.119.127:3260 --rescan" returned: 0 in 0.885s execute
  /usr/local/lib/python2.7/dist-
  packages/oslo_concurrency/processutils.py:225

  2015-03-07 17:27:13.504 DEBUG nova.virt.libvirt.volume [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] iscsiadm ('--rescan',):
  stdout=Rescanning session [sid: 1, target:
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815,
  portal: 10.250.119.127,3260]

   stderr= _run_iscsiadm
  /opt/stack/new/nova/nova/virt/libvirt/volume.py:364

  2015-03-07 17:27:14.504 WARNING nova.virt.libvirt.volume [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] ISCSI volume not yet found at:
  vdb. Will rescan & retry.  Try number: 1

  2015-03-07 17:27:14.505 DEBUG oslo_concurrency.processutils [req-
  55f33c70-ec85-4041-aaf6-205f74abf979
  VolumesV1SnapshotTestJSON-1966398854
  VolumesV1SnapshotTestJSON-1188982339] Running cmd (subprocess): sudo
  nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T
  iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p
  10.250.119.127:3260 --rescan execute 

[Yahoo-eng-team] [Bug 1620341] Re: Removing unused base images removes backing files of active instances

2016-11-06 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620341

Title:
  Removing unused base images removes backing files of active instances

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I have been experiencing a dangerous issue: backing files located in
  the _base folder on shared storage are being removed by nova-compute.
  This happens on the Juno, Kilo and Liberty releases. The shared
  storage mount /var/lib/nova/instances is configured on NFSv3, and
  lock files for the backing image IDs of the affected files exist in
  the /var/lib/nova/instances/locks/ folder. I do not know exactly how
  the mechanism that protects _base files from deletion works - whether
  it depends on the locks folder or on locking files on the shared
  storage - but in my view this is a design bug, and the mechanism
  should be redesigned so that it does not rely on the client, which is
  in fact the compute node. It has a serious impact on the stability
  and security of user data.

  I would like to ask for a new cleaning system to be considered,
  because the current cleaning worker is designed for independent
  compute nodes without shared storage and does not appear to have been
  adapted well to configurations with shared storage. The developers
  might consider a central mechanism that determines which _base files
  are used and unused from the database, rather than relying on what is
  or is not running locally on a compute node.

  I cannot reproduce this problem any more because I had to disable the
  cleanup of unused base images and deploy my own, safer worker.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639685] [NEW] glance image-create creates an image with no name

2016-11-06 Thread Ravichandran Valavandan
Public bug reported:

I expected the glance image-create command to return the usage syntax;
instead it did this:

[rvalavan@AlfredWallace ~]$ glance image-create
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | None                                 |
| created_at       | 2016-11-07T04:07:50Z                 |
| disk_format      | None                                 |
| id               | 5173c08b-97e5-4e73-8479-3c0a3263bc74 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | None                                 |
| owner            | 7315fbd1e2d84b6fa107c02c604c70b5     |
| protected        | False                                |
| size             | None                                 |
| status           | queued                               |
| tags             | []                                   |
| updated_at       | 2016-11-07T04:07:50Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
[rvalavan@AlfredWallace ~]$ glance image-list
+--------------------------------------+------+
| ID                                   | Name |
+--------------------------------------+------+
| 5173c08b-97e5-4e73-8479-3c0a3263bc74 |      |
+--------------------------------------+------+

[rvalavan@AlfredWallace ~]$ glance --version
2.5.0


[rvalavan@AlfredWallace ~]$ nova --version
6.0.0
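
For comparison only (not part of the original report), a minimal
python-glanceclient sketch with placeholder credentials and file names; the
v2 images API treats name, disk_format and container_format as optional,
which is why the CLI happily queues a nameless image, and supplying them
avoids the situation above:

    from keystoneauth1.identity import v2
    from keystoneauth1 import session
    from glanceclient import Client

    auth = v2.Password(auth_url='http://192.168.154.32:5000/v2.0',
                       username='demo', password='secret', tenant_name='demo')
    glance = Client('2', session=session.Session(auth=auth))

    # Passing the usual attributes explicitly yields a named, usable image.
    image = glance.images.create(name='cirros-0.3.4', disk_format='qcow2',
                                 container_format='bare')
    glance.images.upload(image.id, open('cirros-0.3.4-x86_64-disk.img', 'rb'))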


After deleting the image, I recreated it with the debug option:

[rvalavan@AlfredWallace ~]$ glance --debug image-create
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET 
http://192.168.154.32:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.154.32
DEBUG:requests.packages.urllib3.connectionpool:"GET /v2.0 HTTP/1.1" 200 233
DEBUG:keystoneclient.session:RESP: [200] Date: Mon, 07 Nov 2016 04:17:11 GMT 
Server: Apache/2.4.6 (CentOS) Vary: X-Auth-Token,Accept-Encoding 
x-openstack-request-id: req-ba78762c-d651-475d-864f-dc54ea1aec15 
Content-Encoding: gzip Content-Length: 233 Connection: close Content-Type: 
application/json
RESP BODY: {"version": {"status": "deprecated", "updated": 
"2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://192.168.154.32:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://192.168.154.32:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: 
192.168.154.32
DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 
1099
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET 
http://192.168.154.32:9292/v2/schemas/image -H "User-Agent: 
python-glanceclient" -H "Content-Type: application/octet-stream" -H 
"X-Auth-Token: {SHA1}63edd458acbd166ddc0f3e59bcd10fb0c9878e87"
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.154.32
DEBUG:requests.packages.urllib3.connectionpool:"GET /v2/schemas/image HTTP/1.1" 
200 4137
DEBUG:keystoneclient.session:RESP: [200] Content-Type: application/json; 
charset=UTF-8 Content-Length: 4137 X-Openstack-Request-Id: 
req-ea95ff03-78fd-4ec3-91a4-e68f86421f8d Date: Mon, 07 Nov 2016 04:17:11 GMT 
Connection: keep-alive
RESP BODY: {"additionalProperties": {"type": "string"}, "name": "image", 
"links": [{"href": "{self}", "rel": "self"}, {"href": "{file}", "rel": 
"enclosure"}, {"href": "{schema}", "rel": "describedby"}], "properties": 
{"status": {"readOnly": true, "enum": ["queued", "saving", "active", "killed", 
"deleted", "pending_delete", "deactivated"], "type": "string", "description": 
"Status of the image"}, "tags": {"items": {"type": "string", "maxLength": 255}, 
"type": "array", "description": "List of strings related to the image"}, 
"kernel_id": {"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
 "type": ["null", "string"], "description": "ID of image stored in Glance that 
should be used as the kernel when booting an AMI-style image.", "is_base": 
false}, "container_format": {"enum": [null, "ami", "ari", "aki", "bare", "ovf", 
"ova", "docker"], "type": ["null", "string"], "description": "Format of the 
container"}, "min_ram": {"type": "integer", "d
 escription": "Amount of ram (in MB) required to boot image."}, "ramdisk_id": 
{"pattern": 

[Yahoo-eng-team] [Bug 1639566] [NEW] [RFE] Add support for local SNAT

2016-11-06 Thread Igor Shafran
Public bug reported:

[Existing problem]
Currently, when the User wants to allow multiple VMs to access external 
networks (e.g. internet), he can either assign a floating IP to each VM (DNAT), 
or assign just one floating IP to the router that he uses as a default gateway 
for all the VMs (SNAT).

The downside of DNAT is that the number of external IP addresses is very
limited, and therefore it requires that the User either "switch"
floating IPs between VMs (complicated), or obtain enough external IPs
(expensive).

The downside of SNAT is that all outbound traffic from the VMs that use
it as default gateway will go through the server that hosts the router
(a Neutron Network Node), effectively creating a network bottleneck and
single point of failure for multiple VMs.

[Proposal]
Add an additional SNAT model (henceforth referred to as "Local SNAT") that 
places the NAT/PAT on each Compute Node, and lets the underlying networking 
infrastructure decide how to handle the outbound traffic. In order for this 
design to work in a real world deployment, the underlying networking 
infrastructure needs to allow Compute Nodes to access the external network 
(e.g. WWW).

When the Compute Node can route outbound traffic, VMs hosted on it do
not need to be routed through the Network Node. Instead, they will be
routed locally from the Compute Node.

This will require changes to the local routing rules on each Compute
Node.
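
For illustration only (this is not part of the proposal), the kind of
per-compute-node NAT rule implied above, sketched in Python with made-up
subnet and interface names:

    # Conceptual sketch: a local SNAT/masquerade rule of the sort each
    # compute node would manage (all values below are hypothetical).
    import subprocess

    subprocess.check_call([
        'iptables', '-t', 'nat', '-A', 'POSTROUTING',
        '-s', '10.0.0.0/24',   # tenant VM subnet on this compute node
        '-o', 'eth0',          # the node's external-facing interface
        '-j', 'MASQUERADE',
    ])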

The change should be reflected in Neutron database, as it affects router
Ports configuration and should be persistent.

[Benefits]
Improvement is to be expected, since outbound traffic is routed locally rather 
than through the Network Node, effectively reducing the network bottleneck on 
the Network Node.


[Functionality difference]
The main functionality difference between the Neutron reference implementation 
of SNAT and "Local SNAT", is that with Neutron SNAT the User reserves an 
external IP address (from a limited pre-allocated pool), which is used to 
masquerade multiple VMs of that same user (therefore, sharing the same external 
IP).

With the "Local SNAT" solution, in contrast, the User may not reserve
any external IP in Neutron, and the "external IP" from which each VM
will go out is arbitrarily selected by the underlying networking
infrastructure (similar to the way external IPs are allocated to home
internet routers, or to mobile phones).

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Summary changed:

- Add support for local SNAT
+ [RFE] Add support for local SNAT

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639566

Title:
  [RFE] Add support for local SNAT

Status in neutron:
  New

Bug description:
  [Existing problem]
  Currently, when the User wants to allow multiple VMs to access external 
networks (e.g. internet), he can either assign a floating IP to each VM (DNAT), 
or assign just one floating IP to the router that he uses as a default gateway 
for all the VMs (SNAT).

  The downside of DNAT is that the number of external IP addresses is
  very limited, and therefore it requires that the User either "switch"
  floating IPs between VMs (complicated), or obtain enough external IPs
  (expensive).

  The downside of SNAT is that all outbound traffic from the VMs that
  use it as default gateway will go through the server that hosts the
  router (a Neutron Network Node), effectively creating a network
  bottleneck and single point of failure for multiple VMs.

  [Proposal]
  Add an additional SNAT model (henceforth referred to as "Local SNAT") that 
places the NAT/PAT on each Compute Node, and lets the underlying networking 
infrastructure decide how to handle the outbound traffic. In order for this 
design to work in a real world deployment, the underlying networking 
infrastructure needs to allow Compute Nodes to access the external network 
(e.g. WWW).

  When the Compute Node can route outbound traffic, VMs hosted on it do
  not need to be routed through the Network Node. Instead, they will be
  routed locally from the Compute Node.

  This will require changes to the local routing rules on each Compute
  Node.

  The change should be reflected in Neutron database, as it affects
  router Ports configuration and should be persistent.

  [Benefits]
  Improvement is to be expected, since outbound traffic is routed
  locally rather than through the Network Node, effectively reducing
  the network bottleneck on the Network Node.

  
  [Functionality difference]
  The main functionality difference between the Neutron reference 
implementation of SNAT and "Local SNAT", is that with Neutron SNAT the User 
reserves an external IP address (from a limited pre-allocated pool), which is 
used to masquerade multiple VMs of that same user (therefore, sharing the same 
external IP).

  With the "Local SNAT" solution, in contrast, the User may not reserve
  any 

[Yahoo-eng-team] [Bug 1604397] Re: [SRU] python-swiftclient is missing in requirements.txt (for glare)

2016-11-06 Thread Launchpad Bug Tracker
This bug was fixed in the package python-glance-store - 0.18.0-0ubuntu3

---
python-glance-store (0.18.0-0ubuntu3) zesty; urgency=medium

  * d/gbp.conf: Update default config.

 -- Corey Bryant   Fri, 04 Nov 2016 08:09:18
-0400

** Changed in: python-glance-store (Ubuntu Zesty)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1604397

Title:
  [SRU] python-swiftclient is missing in requirements.txt (for glare)

Status in Ubuntu Cloud Archive:
  In Progress
Status in Ubuntu Cloud Archive newton series:
  In Progress
Status in Glance:
  New
Status in python-glance-store package in Ubuntu:
  Fix Released
Status in python-glance-store source package in Yakkety:
  Fix Committed
Status in python-glance-store source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Test Case]
  I'm using UCA glance packages (version "13.0.0~b1-0ubuntu1~cloud0").
  And I've got this error:
  <30>Jul 18 16:03:45 node-2 glance-glare[17738]: ERROR: Store swift could not 
be configured correctly. Reason: Missing dependency python_swiftclient.

  Installing "python-swiftclient" fix the problem.

  In master
  (https://github.com/openstack/glance/blob/master/requirements.txt)
  package "python-swiftclient" is not included in requirements.txt. So
  UCA packages don't have proper dependencies.

  I think requirements.txt should be updated (add python-swiftclient
  there). This change should affect UCA packages.

  [Regression Potential]
  Minimal as this just adds a new dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1604397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639539] [NEW] security group/security group rule bulk creation is broken

2016-11-06 Thread Isaku Yamahata
Public bug reported:

security group/security group rule bulk creation is (potentially)
broken.

neutron/db/securitygroups_db.SecurityGroupDbMixin.create_security_group[_rule]_bulk
calls _create_bulk, which calls create_security_group[_rule]() within a db
transaction. This implementation isn't correct because
create_security_group[_rule] invokes the AFTER_CREATE callback, which
shouldn't be called within a db transaction.

The solution is to implement its own bulk create operation, like
neutron.plugins.ml2.plugin.Ml2Plugin._create_bulk_ml2, which knows about
precommit/postcommit.
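
For illustration only, a rough sketch of the precommit/postcommit split
described above, with an invented _create_sg_precommit helper: do the DB
work inside the transaction, and emit AFTER_CREATE only after it commits:

    from neutron.callbacks import events, registry, resources

    def create_security_group_bulk(self, context, security_groups):
        created = []
        with context.session.begin(subtransactions=True):
            for sg in security_groups['security_groups']:
                # Hypothetical helper that only performs the DB insert.
                created.append(self._create_sg_precommit(context, sg))
        # Notify only once the transaction has actually committed.
        for sg in created:
            registry.notify(resources.SECURITY_GROUP, events.AFTER_CREATE,
                            self, context=context, security_group=sg)
        return created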

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639539

Title:
  security group/security group rule bulk creation is broken

Status in neutron:
  New

Bug description:
  security group/security group rule bulk creation is (potentially)
  broken.

  neutron/db/securitygroups_db.SecurityGroupDbMixin.create_security_group[_rule]_bulk
  calls _create_bulk, which calls create_security_group[_rule]() within
  a db transaction. This implementation isn't correct because
  create_security_group[_rule] invokes the AFTER_CREATE callback, which
  shouldn't be called within a db transaction.

  The solution is to implement its own bulk create operation, like
  neutron.plugins.ml2.plugin.Ml2Plugin._create_bulk_ml2, which knows
  about precommit/postcommit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639539/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595515] Re: IpConntrackManager class in ip_conntrack.py should be a singleton to be used by both SG and FWaaS

2016-11-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/38
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=468b2f1b8bfc6091ba7384f4f9266fdd1b15a2b4
Submitter: Jenkins
Branch:master

commit 468b2f1b8bfc6091ba7384f4f9266fdd1b15a2b4
Author: Chandan Dutta Chowdhury 
Date:   Thu Jun 23 18:16:55 2016 +0530

IP Conntrack Manager changes for FWaaS v2

IpConntrackManager class should be a singleton
to be used by both SG and FWaaS v2 API at the same time

Change-Id: I4a9f3d9b3ac7afe989c0efb1fa4e7fd792cd9610
Closes-Bug: 1595515


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595515

Title:
  IpConntrackManager class in ip_conntrack.py should be a singleton to
  be used by both SG and FWaaS

Status in neutron:
  Fix Released

Bug description:
  The FWaaS v2 API is going to configure security rules at the port
  level. It will need to use the connection tracking and zone
  configuration methods defined in ip_conntrack.py and
  iptables_firewall.py in the neutron project.

  Some methods in the IptablesFirewallDriver in iptables_firewall need
  to be moved to the IpConntrackManager class in ip_conntrack.py. As
  IpConntrackManager will be used by both the SG and FWaaS v2 APIs, and
  both of them can be used at the same time, IpConntrackManager should
  be a singleton responsible for allocating and reclaiming the zones
  assigned to ports.
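
  For illustration only, a minimal sketch of a module-level singleton
  accessor of the kind described above (names and arguments are
  illustrative, not the actual fix):

    from neutron.agent.linux import ip_conntrack

    _CONNTRACK_MANAGER = None

    def get_conntrack_manager(*args, **kwargs):
        """Return one process-wide IpConntrackManager shared by SG and FWaaS."""
        global _CONNTRACK_MANAGER
        if _CONNTRACK_MANAGER is None:
            _CONNTRACK_MANAGER = ip_conntrack.IpConntrackManager(*args, **kwargs)
        return _CONNTRACK_MANAGER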

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp