[Yahoo-eng-team] [Bug 1945747] [NEW] GET security group rule is missing description attribute

2021-10-01 Thread Salvatore Orlando
Public bug reported:

The description attribute is missing from the dict built by
_make_security_group_rule_dict.

Create a security group rule with a description:

stack@bionic-template:~/devstack$ openstack security group rule create 
--description "test rule" --remote-ip 0.0.0.0/0 --ingress 
ff57f76f-93a0-4bf3-b538-c88df40fdc40
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| created_at        | 2021-10-01T06:35:50Z                                 |
| description       | test rule                                            |
| direction         | ingress                                              |
| ether_type        | IPv4                                                 |
| id                | 389eb45e-58ac-471c-b966-a3c8784009f7                 |
| location          | cloud='', project.domain_id='default',               |
|                   | project.domain_name=,                                |
|                   | project.id='f2527eb734c745eca32b1dfbd9107563',       |
|                   | project.name='admin', region_name='RegionOne', zone= |
| name              | None                                                 |
| port_range_max    | None                                                 |
| port_range_min    | None                                                 |
| project_id        | f2527eb734c745eca32b1dfbd9107563                     |
| protocol          | None                                                 |
| remote_group_id   | None                                                 |
| remote_ip_prefix  | None                                                 |
| revision_number   | 0                                                    |
| security_group_id | ff57f76f-93a0-4bf3-b538-c88df40fdc40                 |
| tags              | []                                                   |
| updated_at        | 2021-10-01T06:35:50Z                                 |
+-------------------+------------------------------------------------------+


Example GET response (no description attribute):

RESP BODY: {"security_group_rule": {"id":
"389eb45e-58ac-471c-b966-a3c8784009f7", "tenant_id":
"f2527eb734c745eca32b1dfbd9107563", "security_group_id":
"ff57f76f-93a0-4bf3-b538-c88df40fdc40", "ethertype": "IPv4",
"direction": "ingress", "protocol": null, "port_range_min": null,
"port_range_max": null, "remote_ip_prefix": "0.0.0.0/0",
"remote_group_id": null, "local_ip_prefix": null, "created_at":
"2021-10-01T06:35:50Z", "updated_at": "2021-10-01T06:35:50Z",
"revision_number": 0, "project_id": "f2527eb734c745eca32b1dfbd9107563"}}

Potential fix (patch applies to stable/ussuri, not master)

diff --git a/neutron/db/securitygroups_db.py b/neutron/db/securitygroups_db.py
index 28238358ae..0c848bbe38 100644
--- 
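
For illustration only, a minimal sketch of the kind of one-line change the
patch makes. The dict layout below mirrors the usual shape of
_make_security_group_rule_dict but is not copied from the actual patch, and
the helper is written as a standalone function so it can run in isolation:

def make_security_group_rule_dict(security_group_rule, fields=None):
    # Illustrative stand-in for _make_security_group_rule_dict: build the
    # API response dict from the stored rule.
    res = {
        'id': security_group_rule['id'],
        'tenant_id': security_group_rule['tenant_id'],
        'security_group_id': security_group_rule['security_group_id'],
        'ethertype': security_group_rule['ethertype'],
        'direction': security_group_rule['direction'],
        'protocol': security_group_rule['protocol'],
        'port_range_min': security_group_rule['port_range_min'],
        'port_range_max': security_group_rule['port_range_max'],
        'remote_ip_prefix': security_group_rule['remote_ip_prefix'],
        'remote_group_id': security_group_rule['remote_group_id'],
        # The missing attribute: without a line like this the description
        # never makes it into GET/list responses.
        'description': security_group_rule.get('description'),
    }
    if fields:
        res = {k: v for k, v in res.items() if k in fields}
    return res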

[Yahoo-eng-team] [Bug 1816740] [NEW] FWaaS v2 - incorrect shared rule check

2019-02-20 Thread Salvatore Orlando
Public bug reported:

Reference: http://git.openstack.org/cgit/openstack/neutron-
fwaas/tree/neutron_fwaas/db/firewall/v2/firewall_db_v2.py#n644

def _check_if_rules_shared_for_policy_shared(self, context, fwp_db, fwp):
    if fwp['shared']:
        rules_in_db = fwp_db.rule_associations
        for entry in rules_in_db:
            fwr_db = self._get_firewall_rule(context,
                                             entry.firewall_rule_id)
            if not fwp_db['shared']:
                raise f_exc.FirewallPolicySharingConflict(
                    firewall_rule_id=fwr_db['id'],
                    firewall_policy_id=fwp_db['id'])

The logic above will always raise an exception if a policy is changed
from not shared to shared. There is most likely a typo in:

if not fwp_db['shared']:

as it should be:

if not fwr_db['shared']:

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816740

Title:
  FWaaS v2 - incorrect shared rule check

Status in neutron:
  New

Bug description:
  Reference: http://git.openstack.org/cgit/openstack/neutron-
  fwaas/tree/neutron_fwaas/db/firewall/v2/firewall_db_v2.py#n644

  def _check_if_rules_shared_for_policy_shared(self, context, fwp_db, fwp):
      if fwp['shared']:
          rules_in_db = fwp_db.rule_associations
          for entry in rules_in_db:
              fwr_db = self._get_firewall_rule(context,
                                               entry.firewall_rule_id)
              if not fwp_db['shared']:
                  raise f_exc.FirewallPolicySharingConflict(
                      firewall_rule_id=fwr_db['id'],
                      firewall_policy_id=fwp_db['id'])

  The logic above will always raise an exception if a policy is changed
  from not shared to shared. There is most likely a typo in:

  if not fwp_db['shared']:

  as it should be:

  if not fwr_db['shared']:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570259] [NEW] Pecan: 'fields' query parameter not handled anywhere

2016-04-14 Thread Salvatore Orlando
Public bug reported:

The pecan framework currently does not handle the 'fields' query
parameter properly: when specified, it is not sent down to the plugin.
Instead, field selection happens while processing the response.

This is not entirely wrong, but since plugins have the capability of
doing field selection themselves, they should be allowed to use it.
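
A minimal sketch of the intended behaviour (the controller shape and
parameter handling below are illustrative, not actual Neutron code; core
plugin get_* methods do accept a fields argument):

def collection_get(plugin, context, query_params):
    # e.g. GET /v2.0/networks?fields=id&fields=name
    fields = query_params.get('fields') or []
    if isinstance(fields, str):
        fields = [fields]
    # Let the plugin restrict the returned attributes itself instead of
    # stripping them from the response afterwards.
    return plugin.get_networks(context, fields=fields)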

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570259

Title:
  Pecan: 'fields' query parameter not handled anywhere

Status in neutron:
  In Progress

Bug description:
  The pecan framework currently does not handle the 'fields' query
  parameter properly: when specified, it is not sent down to the plugin.
  Instead, field selection happens while processing the response.

  This is not entirely wrong, but since plugins have the capability of
  doing field selection themselves, they should be allowed to use it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542507] [NEW] Pecan: quota list does not include tenant_id

2016-02-05 Thread Salvatore Orlando
Public bug reported:

The operation for listing quotas for all tenants omits a small but
important detail - the tenant_id of each response item.

This is happening because the quota resource is now treated as any other
resource and is therefore subject to policy checks, response formatting, etc.
Since the RESOURCE_ATTRIBUTE_MAP for this resource has no tenant_id attribute,
the attribute is removed as alien (unexpected data returned by the plugin).

We should really add that attribute to the RESOURCE_ATTRIBUTE_MAP.
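
A sketch of the kind of entry that would need to be added; the flags below
are illustrative and simply mirror how other Neutron resources declare
tenant_id in their attribute maps:

RESOURCE_ATTRIBUTE_MAP = {
    'quotas': {
        # Declare tenant_id so it is no longer stripped from responses as
        # an unknown ("alien") attribute.
        'tenant_id': {'allow_post': False, 'allow_put': False,
                      'required_by_policy': True,
                      'is_visible': True},
        # ... the existing per-resource quota attributes stay unchanged ...
    },
}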

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542507

Title:
  Pecan: quota list does not include tenant_id

Status in neutron:
  In Progress

Bug description:
  The operation for listing quotas for all tenants omits a small but
  important detail - the tenant_id of each response item.

  This is happening because the quota resource is now treated as any other
  resource and is therefore subject to policy checks, response formatting,
  etc. Since the RESOURCE_ATTRIBUTE_MAP for this resource has no tenant_id
  attribute, the attribute is removed as alien (unexpected data returned by
  the plugin).

  We should really add that attribute to the RESOURCE_ATTRIBUTE_MAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542277] [NEW] Pecan: startup fails to associate plugins with some resources

2016-02-05 Thread Salvatore Orlando
Public bug reported:

In some cases, legitimate resources for supported extensions end up
without an associated plugin.

This happens because the startup process explicitly checks the
supported_extension_aliases attribute, whereas it should instead do a
deeper check leveraging the helper method
get_supported_extension_aliases provided by the extension manager.

For instance, the rbac extension will not work until this is fixed.
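
A rough sketch of the difference; the extension manager helper is named as
in the report, but its exact signature and the surrounding startup code are
assumptions:

def plugin_for_alias(plugins, ext_mgr, required_alias):
    for plugin in plugins:
        # Shallow check used today - misses aliases the plugin supports
        # only indirectly (e.g. rbac):
        #   if required_alias in plugin.supported_extension_aliases: ...
        # Deeper check through the extension manager helper:
        if required_alias in ext_mgr.get_supported_extension_aliases(plugin):
            return plugin
    return None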

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542277

Title:
  Pecan: startup fails to associate plugins with some resources

Status in neutron:
  New

Bug description:
  In some cases, legit resources for supported extensions end up not
  having a plugin associated.

  This happens because the startup process explicitly checks the
  supported_extension_aliases attribute, whereas it should instead do a
  deeper check leveraging the helper method
  get_supported_extension_aliases provided by the extension manager.

  for instance the rbac extension will not work until this is fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537924] [NEW] Pecan: filter values are not converted

2016-01-25 Thread Salvatore Orlando
Public bug reported:

In order to work as expected, the code at [1] needs a call like [2].
Otherwise every filter value is sent down to the plugin as a string, and the
plugin will not like that.


[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/controllers/root.py#n159
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/api_common.py#n32
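
A minimal sketch of the missing conversion step (not actual Neutron code; it
only relies on the convert_to callables that resources declare in their
attribute maps):

def convert_filter_values(raw_filters, attr_info):
    # raw_filters, e.g. {'admin_state_up': ['True'], 'vlan_id': ['100']},
    # comes straight from the query string: every value is still a string.
    converted = {}
    for name, values in raw_filters.items():
        convert = attr_info.get(name, {}).get('convert_to')
        converted[name] = [convert(v) if convert else v for v in values]
    return converted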

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537924

Title:
  Pecan: filter values are not converted

Status in neutron:
  New

Bug description:
  The code [1], in order to work as expected, needs a call like [2].
  Otherwise every filter value is sent down to the plugin as a string and the 
plugin will not like that.

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/controllers/root.py#n159
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/api_common.py#n32

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537929] [NEW] Pecan: controller lookup for resources with dashes fails

2016-01-25 Thread Salvatore Orlando
Public bug reported:

The controller lookup process is unable to route the request to any resource 
with a dash in it.
This happens because the controllers are stored according to their resource 
name, where dashes are replaced by underscores, and the pecan lookup process, 
quite dumbly, does not perform this simple conversion.

The author of the code in question should consider an alternative career
path far away from programming and engineering in general.
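
A minimal sketch of the missing conversion (the controller registry and the
pecan hook-up around it are assumed):

def find_controller(controllers, collection):
    # Controllers are registered with underscores, e.g. 'security_groups',
    # while the URL uses 'security-groups'; translate before the lookup.
    return controllers.get(collection.replace('-', '_'))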

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537929

Title:
  Pecan: controller lookup for resources with dashes fails

Status in neutron:
  New

Bug description:
  The controller lookup process is unable to route the request to any resource 
with a dash in it.
  This happens because the controllers are stored according to their resource 
name, where dashes are replaced by underscores, and the pecan lookup process, 
quite dumbly, does not perform this simple conversion.

  The author of the code in question should consider an alternative
  career path far away from programming and engineering in general.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537936] [NEW] Pecan: put does not return resource names in responses

2016-01-25 Thread Salvatore Orlando
Public bug reported:

Clients are hardly going to work with an issue like this: the updated
resource comes back without its resource name in the response body.
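
Purely illustrative of the shape difference (the payload contents are made
up):

# What clients expect from PUT /v2.0/networks/{id} - the singular resource
# name wraps the attributes:
expected = {'network': {'id': 'NET_ID', 'name': 'renamed'}}

# What the pecan controller returns when the resource name is omitted:
actual = {'id': 'NET_ID', 'name': 'renamed'}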

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537936

Title:
  Pecan: put does not return resource names in responses

Status in neutron:
  New

Bug description:
  Clients are hardly going to work with an issue like this: the updated
  resource comes back without its resource name in the response body.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528510] [NEW] Pecan: startup assumes controllers specify plugins

2015-12-22 Thread Salvatore Orlando
Public bug reported:

At startup, the Pecan API server associates a plugin (core or service) to every 
Neutron resource.
With this association, every Pecan controller gets a plugin where calls should 
be dispatched.

However, this association is not performed for 'pecanized extensions'
[1]. A 'pecanized' extension is a Neutron API extension which is able to
return Pecan controllers. The plugin association is instead currently
performed only for those extensions for which a controller is generated
on-the-fly using the generic CollectionController and ItemController.

This approach has the drawback that the API extension descriptor should have 
the logic to identify a plugin for the API itself.
While this is not a bad idea, it requires extensions descriptors to identify a 
plugin, thus duplicating, in a way, what's already done by the extension 
manager.

For this reason it is advisable to do plugin association for all
extensions during pecan startup, at least until the Pecan framework no
longer relies on the home-grown extension manager.


[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/startup.py#n86

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528510

Title:
  Pecan: startup assumes controllers specify plugins

Status in neutron:
  New

Bug description:
  At startup, the Pecan API server associates a plugin (core or service) to 
every Neutron resource.
  With this association, every Pecan controller gets a plugin where calls 
should be dispatched.

  However, this association is not performed for 'pecanized extensions'
  [1]. A 'pecanized' extension is a Neutron API extension which is able
  to return Pecan controllers. The plugin association is instead
  currently performed only for those extensions for which a controller
  is generated on-the-fly using the generic CollectionController and
  ItemController.

  This approach has the drawback that the API extension descriptor should have 
the logic to identify a plugin for the API itself.
  While this is not a bad idea, it requires extensions descriptors to identify 
a plugin, thus duplicating, in a way, what's already done by the extension 
manager.

  For this reason it is advisable to do plugin association for all
  extensions during pecan startup, at least until the Pecan framework no
  longer relies on the home-grown extension manager.


  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/startup.py#n86

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506935] [NEW] Release Liberty

2015-10-16 Thread Salvatore Orlando
Public bug reported:

The vmware-nsx subproject owners kindly ask the neutron release team to
push a tag for the Liberty release.

Thanks in advance.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: release-subproject

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506935

Title:
  Release Liberty

Status in neutron:
  New
Status in vmware-nsx:
  New

Bug description:
  The vmware-nsx subproject owners kindly ask the neutron release team
  to push a tag for the Liberty release.

  Thanks in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505831] [NEW] Pecan: policy evaluation error can trigger 500 response

2015-10-13 Thread Salvatore Orlando
Public bug reported:

In [1], if policy_method == enforce, a PolicyNotAuthorized exception is
triggered. However, the exception translation hook is not called, most likely
because the on_error hook is not installed for the other policy hooks.
This might be intentional and should therefore not be considered a pecan bug.

The policy hook should take this into account and handle the exception.

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/policy_enforcement.py#n94
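
A sketch of how the policy hook could handle the failure itself instead of
relying on the translation hook; import paths and the exact exception class
are assumptions, not taken from the referenced hook code:

import pecan
from oslo_policy import policy as oslo_policy


def enforce_or_403(enforce, context, action, target):
    try:
        # e.g. neutron.policy.enforce(context, action, target)
        enforce(context, action, target)
    except oslo_policy.PolicyNotAuthorized:
        # Translate the authorization failure to a 403 here, rather than
        # letting it bubble up as a 500.
        pecan.abort(403)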

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505831

Title:
  Pecan: policy evaluation error can trigger 500 response

Status in neutron:
  New

Bug description:
  In [1], if policy_method == enforce, a PolicyNotAuthorized exception is
  triggered. However, the exception translation hook is not called, most
  likely because the on_error hook is not installed for the other policy
  hooks. This might be intentional and should therefore not be considered a
  pecan bug.

  The policy hook should take this into account and handle the
  exception.

  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/policy_enforcement.py#n94

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505843] [NEW] Pecan: quota management API broken

2015-10-13 Thread Salvatore Orlando
Public bug reported:

The quota management APIs in Pecan simply do not work.

The pecan controller framework tries to treat quota as a resource, and
even creates resource and collection controllers for this resource.
However, this fails as the plugins do not implement a quota interface.

In the current WSGI framework quota management is indeed performed by a
special controller which interacts directly with the driver and
implements its own authZ logic.

The pecan framework should implement quota management correctly,
possibly avoiding carrying over "special" behaviours from the current WSGI
framework.

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505843

Title:
  Pecan: quota management API broken

Status in neutron:
  In Progress

Bug description:
  The quota management APIs in Pecan simply do not work.

  The pecan controller framework tries to treat quota as a resource, and
  even creates resource and collection controllers for this resource.
  However, this fails as the plugins do not implement a quota interface.

  In the current WSGI framework quota management is indeed performed by
  a special controller which interacts directly with the driver and
  implements its own authZ logic.

  The pecan framework should implement quota management correctly,
  possibly avoiding carrying over "special" behaviours from the current
  WSGI framework.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505406] [NEW] Queries for fetching quotas are not scoped

2015-10-12 Thread Salvatore Orlando
Public bug reported:

get_tenant_quotas retrieves quotas for a tenant without scoping the
query to the tenant_id issuing the request [1]; even if the API
extension has an explicit authorisation check (...) [2], it is advisable
to scope the query so that this problem is avoided.

This is particularly relevant because, with the pecan framework, quota
management APIs are no longer "special" from an authZ perspective, but
use the same authorization hook as any other API.


[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/quota/driver.py#n50
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/quotasv2.py#n87
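
A sketch of the scoped query under assumed model and column names (a Quota
model with tenant_id/resource/limit); the context attributes used are the
standard Neutron request-context ones:

def get_tenant_quotas(context, quota_model, tenant_id):
    query = context.session.query(quota_model).filter_by(tenant_id=tenant_id)
    if not context.is_admin:
        # Non-admin callers can only ever see their own quotas, regardless
        # of the tenant_id they asked for.
        query = query.filter_by(tenant_id=context.tenant_id)
    return {row.resource: row.limit for row in query}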

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505406

Title:
  Queries for fetching quotas are not scoped

Status in neutron:
  New

Bug description:
  get_tenant_quotas retrieves quotas for a tenant without scoping the
  query to the tenant_id issuing the request [1]; even if the API
  extension has an explicit authorisation check (...) [2], it is
  advisable to scope the query so that this problem is avoided.

  This is particularly relevant because, with the pecan framework, quota
  management APIs are no longer "special" from an authZ perspective,
  but use the same authorization hook as any other API.

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/quota/driver.py#n50
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/quotasv2.py#n87

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501948] [NEW] Quota enforcement does not work in Pecan

2015-10-01 Thread Salvatore Orlando
Public bug reported:

Pecan still uses old-style, reservation-less quota enforcement [1]

Unfortunately this just does not work.
There are two independent issues:
- only extension resources are being registered with the quota engine, because 
resource registration for core resources used to happen in the "API router" 
[2]. This is clear from the following message in the logs:

DEBUG neutron.pecan_wsgi.hooks.quota_enforcement [req-6643e848-0cec-
45d9-88d8-35f49a60b8b5 demo 3f3039040f0e434d8e10d7f43dabfe75] Unknown
quota resources ['network']

- the enforcement hook still passes the plural name to the resource's count
method. The plural resource name parameter was removed during Liberty
[3] as it was not necessary, and this causes a non-negligible issue:
the plural is now interpreted as the tenant_id. [4]


[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/v2/router.py
[3] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota/resource.py#n134
[4] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py#n48

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501948

Title:
  Quota enforcement does not work in Pecan

Status in neutron:
  In Progress

Bug description:
  Pecan still uses old-style, reservation-less quota enforcement [1]

  Unfortunately this just does not work.
  There are two independent issues:
  - only extension resources are being registered with the quota engine, 
because resource registration for core resources used to happen in the "API 
router" [2]. This is clear from the following message in the logs:

  DEBUG neutron.pecan_wsgi.hooks.quota_enforcement [req-6643e848-0cec-
  45d9-88d8-35f49a60b8b5 demo 3f3039040f0e434d8e10d7f43dabfe75] Unknown
  quota resources ['network']

  - the enforcement hook still passes the plural name to the resource's
  count method. The plural resource name parameter was removed during
  Liberty [3] as it was not necessary, and this causes a non-negligible
  issue: the plural is now interpreted as the tenant_id. [4]

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/v2/router.py
  [3] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota/resource.py#n134
  [4] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py#n48

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499339] [NEW] sec_group rule quota usage unreliable

2015-09-24 Thread Salvatore Orlando
Public bug reported:

Security group rules are now being deleted with query.delete();
while efficient, this prevents sqlalchemy events from being fired (see
http://docs.openstack.org/developer/neutron/devref/quota.html#exceptions-and-caveats).

It might be worth having this fixed before releasing RC-1, even if the impact
of this bug is not really serious.
After a delete the quota tracker is not marked as dirty, and therefore it
reports incorrect (higher) usage data.
As a result a tenant might not be allowed to use all of its quota (but just
total - 1). This will however be fixed by the next get operation.
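
A generic SQLAlchemy illustration (not Neutron code) of why the bulk delete
breaks event-based usage tracking: query.delete() bypasses per-object ORM
delete events, while deleting a loaded instance through the session fires
them.

from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Rule(Base):
    __tablename__ = 'rules'
    id = Column(Integer, primary_key=True)


@event.listens_for(Rule, 'after_delete')
def mark_usage_dirty(mapper, connection, target):
    # In Neutron this is roughly where the quota tracker marks usage dirty.
    print('usage tracker notified for rule', target.id)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add_all([Rule(id=1), Rule(id=2)])
    session.commit()

    # Bulk delete: efficient, but no after_delete event - tracker not told.
    session.query(Rule).filter_by(id=1).delete()

    # ORM delete: the event fires and usage can be marked dirty.
    session.delete(session.get(Rule, 2))
    session.commit()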

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499339

Title:
  sec_group rule quota usage unreliable

Status in neutron:
  New

Bug description:
  Security group rules are now being deleted with query.delete();
  while efficient, this prevents sqlalchemy events from being fired (see
  http://docs.openstack.org/developer/neutron/devref/quota.html#exceptions-and-caveats).

  It might be worth having this fixed before releasing RC-1, even if the
  impact of this bug is not really serious.
  After a delete the quota tracker is not marked as dirty, and therefore it
  reports incorrect (higher) usage data.
  As a result a tenant might not be allowed to use all of its quota (but just
  total - 1). This will however be fixed by the next get operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499358] [NEW] M2: Quota usage tracking needs tests

2015-09-24 Thread Salvatore Orlando
Public bug reported:

Functional tests are needed to avoid breaking the mechanisms that
regulate quota usage tracking in ML2.

With proper tests in place these bugs could have been avoided:
https://bugs.launchpad.net/neutron/+bug/1499339
https://bugs.launchpad.net/neutron/+bug/1497459

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499358

Title:
  M2: Quota usage tracking needs tests

Status in neutron:
  New

Bug description:
  Functional tests are needed to avoid breaking the mechanisms that
  regulate quota usage tracking in ML2.

  With proper tests in place these bugs could have been avoided:
  https://bugs.launchpad.net/neutron/+bug/1499339
  https://bugs.launchpad.net/neutron/+bug/1497459

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497459] [NEW] port usage tracking not reliable anymore

2015-09-18 Thread Salvatore Orlando
Public bug reported:

Patch https://review.openstack.org/#/c/13/23 modified
neutron.db.ipam_backend in order to ensure a sqlalchemy event is triggered
when deleting a port. This caused an issue when the transaction isolation
level is below repeatable read, as the sqlalchemy ORM mapper throws an
exception if the record is deleted by another transaction.
Patch https://review.openstack.org/#/c/224289/ fixed this but reinstated
query.delete, which does not trigger the sqlalchemy event.

It might be worth considering just handling the sqlalchemy orm exception
in this case; alternatively usage tracking for ports might be disabled.

A related question is why the logic for deleting a port resides in the
ipam module, but probably it should not be answered here.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497459

Title:
  port usage tracking not reliable anymore

Status in neutron:
  In Progress

Bug description:
  Patch https://review.openstack.org/#/c/13/23 modified
  neutron.db.ipam_backend in order to ensure a sqlalchemy event is triggered
  when deleting a port. This caused an issue when the transaction isolation
  level is below repeatable read, as the sqlalchemy ORM mapper throws an
  exception if the record is deleted by another transaction.
  Patch https://review.openstack.org/#/c/224289/ fixed this but reinstated
  query.delete, which does not trigger the sqlalchemy event.

  It might be worth considering just handling the sqlalchemy orm
  exception in this case; alternatively usage tracking for ports might
  be disabled.

  A related question is why the logic for deleting a port resides in the
  ipam module, but probably it should not be answered here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488282] Re: Gate failures with 'the resource could not be found'

2015-08-26 Thread Salvatore Orlando
Actually, the root cause for the failure I observed was different.

This appears to be a genuine nova error where a server is deleted by
another test while the list operation is in progress. It is also
interesting that nova fails with a 404 here - this appears to really be
a bug.

In support of my thesis I can provide examples where the same failure
trace occurs with[1], [2] and without [3], [4] neutron

Also, during a server list operation there is no interaction between
nova and neutron.

[1] 
http://logs.openstack.org/04/215604/15/gate/gate-tempest-dsvm-neutron-full/37eb7aa/console.html
[2] 
http://logs.openstack.org/04/215604/15/gate/gate-tempest-dsvm-neutron-full/37eb7aa/logs/screen-n-api.txt.gz#_2015-08-26_15_32_57_698
[3] 
http://logs.openstack.org/67/214067/2/gate/gate-tempest-dsvm-full/1b348a3/console.html
[4] 
http://logs.openstack.org/67/214067/2/gate/gate-tempest-dsvm-full/1b348a3/logs/screen-n-api.txt.gz#_2015-08-25_14_50_13_779

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488282

Title:
  Gate failures with 'the resource could not be found'

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  There have been spurious failures happening in the gate. The most
  prominent one is:

  
  ft1.186: 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_list_servers_by_admin_with_all_tenants[id-9f5579ae-19b4-4985-a091-2a5d56106580]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2015-08-24 22:55:50,083 32355 INFO [tempest_lib.common.rest_client] 
Request (ServersAdminTestJSON:test_list_servers_by_admin_with_all_tenants): 404 
GET 
http://127.0.0.1:8774/v2/fb99c79318b54e668713b25afc52f81a/servers/detail?all_tenants=
 0.834s
  2015-08-24 22:55:50,083 32355 DEBUG[tempest_lib.common.rest_client] 
Request - Headers: {'X-Auth-Token': 'omitted', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
  Body: None
  Response - Headers: {'content-length': '78', 'date': 'Mon, 24 Aug 2015 
22:55:50 GMT', 'connection': 'close', 'content-type': 'application/json; 
charset=UTF-8', 'x-compute-request-id': 
'req-387b21a9-4ada-48ee-89ed-9acfe5274ef7', 'status': '404'}
  Body: {"itemNotFound": {"message": "The resource could not be found.",
  "code": 404}}
  }}}

  Traceback (most recent call last):
    File "tempest/api/compute/admin/test_servers.py", line 81, in test_list_servers_by_admin_with_all_tenants
      body = self.client.list_servers(detail=True, **params)
    File "tempest/services/compute/json/servers_client.py", line 159, in list_servers
      resp, body = self.get(url)
    File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 271, in get
      return self.request('GET', url, extra_headers, headers)
    File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 643, in request
      resp, resp_body)
    File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 695, in _error_checker
      raise exceptions.NotFound(resp_body)
  tempest_lib.exceptions.NotFound: Object not found
  Details: {u'code': 404, u'message': u'The resource could not be found.'}

  
  but there are other similar failure modes. This seems to be related to bug 
#1269284

  The logstash query:

  message:"tempest_lib.exceptions.NotFound: Object not found" AND
  build_name:"gate-tempest-dsvm-neutron-full"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdF9saWIuZXhjZXB0aW9ucy5Ob3RGb3VuZDogT2JqZWN0IG5vdCBmb3VuZFwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS10ZW1wZXN0LWRzdm0tbmV1dHJvbi1mdWxsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA0NjIwNzcyMjksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481346] [NEW] MH: router delete might return a 500 error

2015-08-04 Thread Salvatore Orlando
Public bug reported:

If a logical router has been removed from the backend, and the DB is in an
inconsistent state where no NSX mapping is stored for the neutron
logical router, the backend will fail when attempting deletion of the
router, causing the neutron operation to return a 500.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: neutron/juno
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481346

Title:
  MH: router delete might return a 500 error

Status in neutron:
  New
Status in neutron juno series:
  New
Status in vmware-nsx:
  New

Bug description:
  If a logical router has been removed from the backend, and the DB is
  in an inconsistent state where no NSX mapping is stored for the neutron
  logical router, the backend will fail when attempting deletion of the
  router, causing the neutron operation to return a 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479309] [NEW] Wrong pre-delete checks for distributed routers

2015-07-29 Thread Salvatore Orlando
Public bug reported:

The pre-delete checks [1] do not take into account DVR interfaces. This
means that they will fail to raise an error when deleting a router with
DVR interfaces on it, thus causing the router to be removed from the
backend and leaving the system in an inconsistent state (as the
subsequent db operation will fail)


[1] 
http://git.openstack.org/cgit/openstack/vmware-nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1573

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: neutron/juno
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479309

Title:
  Wrong pre-delete checks for distributed routers

Status in neutron:
  New
Status in neutron juno series:
  New
Status in vmware-nsx:
  New

Bug description:
  The pre-delete checks [1] do not take into account DVR interfaces.
  This means that they will fail to raise an error when deleting a
  router with DVR interfaces on it, thus causing the router to be
  removed from the backend and leaving the system in an inconsistent
  state (as the subsequent db operation will fail)

  
  [1] 
http://git.openstack.org/cgit/openstack/vmware-nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1573

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478879] [NEW] Enable extra dhcp opt extension

2015-07-28 Thread Salvatore Orlando
Public bug reported:

This extension can be supported without effort by the NSX-mh plugin and
it should be added to supported extension aliases.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: neutron/juno
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478879

Title:
  Enable extra dhcp opt extension

Status in neutron:
  New
Status in neutron juno series:
  New
Status in vmware-nsx:
  New

Bug description:
  This extension can be supported without effort by the NSX-mh plugin
  and it should be added to supported extension aliases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477975] [NEW] too many dumps of log options values

2015-07-24 Thread Salvatore Orlando
Public bug reported:

A gate test run usually dumps them twice [1], and the dumps become three if
there are API workers.
If RPC workers are enabled as well, there will be four dumps.

[1] http://logs.openstack.org/08/188608/17/check/gate-tempest-dsvm-
neutron-full/625c79d/logs/screen-q-svc.txt.gz

oslo_service is dumping option values already. Probably neutron does not
need to do that anymore.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477975

Title:
  too many dumps of log options values

Status in neutron:
  In Progress

Bug description:
  A gate test run usually dumps them twice [1], and the dumps become three
  if there are API workers.
  If RPC workers are enabled as well, there will be four dumps.

  [1] http://logs.openstack.org/08/188608/17/check/gate-tempest-dsvm-
  neutron-full/625c79d/logs/screen-q-svc.txt.gz

  oslo_service is dumping option values already. Probably neutron does
  not need to do that anymore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474019] [NEW] test for subscriptions - ignore

2015-07-13 Thread Salvatore Orlando
Public bug reported:

you know what? meh.

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474019

Title:
  test for subscriptions - ignore

Status in neutron:
  Invalid

Bug description:
  you know what? meh.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468934] Re: Neutron might use robust quota enforcement

2015-07-07 Thread Salvatore Orlando
Sorted.
I was wondering indeed where did the rfe bug go!

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: vmware-nsx

** Changed in: neutron
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468934

Title:
  Neutron might use robust quota enforcement

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron can allow exceeding the quota in certain cases.  Some
  investigation revealed that quotas in Neutron are subject to a race
  where parallel requests can each check quota and find there is just
  enough left to fulfill its individual request.

  Neutron has no concept of reservation and optimistically assumes that
  a resource count before performing a request is all that's needed.

  Also, it does not take into account at all that API operations might
  create resources as a side effect, and that resources can be created
  even from RPC calls.

  The goal of this RFE is to ensure quota enforcement is done in a decent way 
in Neutron.
  Yeah, even quota management is pretty terrible, but let's start with quota 
enforcement

  Oh... by the way, the patches are already under review [1]

  Note: I am filing this RFE as the patches [1] did not land by the
  liberty-1 deadline and I failed to resubmit the already approved Kilo
  spec [2] because I'm an indolent procrastinator.

  
  [1] 
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/better-quotas,n,z
  [2] 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo-backlog/better-quotas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466642] [NEW] Intermittent failure in AgentManagementTestJSON.test_list_agent

2015-06-18 Thread Salvatore Orlando
Public bug reported:

This failure is fairly rare (6 occurrences in 48 hours):
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy5hcGkuYWRtaW4udGVzdF9hZ2VudF9tYW5hZ2VtZW50LkFnZW50TWFuYWdlbWVudFRlc3RKU09OLnRlc3RfbGlzdF9hZ2VudFwiIEFORCBtZXNzYWdlOlwiRkFJTEVEXCIgbWVzc2FnZTpcIm5ldXRyb24udGVzdHMuYXBpLmFkbWluLnRlc3RfYWdlbnRfbWFuYWdlbWVudC5BZ2VudE1hbmFnZW1lbnRUZXN0SlNPTi50ZXN0X2xpc3RfYWdlbnRcIiBBTkQgbWVzc2FnZTpcIkZBSUxFRFwiIEFORCB0YWdzOmNvbnNvbGUuaHRtbCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNDY1ODM3MzIxMn0=

Query:
message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
AND message:"FAILED"
message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
AND message:"FAILED" AND tags:console.html

the failure itself is rather silly. The test expects description to be
None, whereas it is an empty string -
http://logs.openstack.org/08/188608/6/check/check-neutron-dsvm-
api/fea6d1d/console.html#_2015-06-18_14_32_40_302

Note: it looks similar to 1442494 but the failure mode is quite
different.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466642

Title:
  Intermittent failure in AgentManagementTestJSON.test_list_agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This failure is fairly rare (6 occurrences in 48 hours):
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy5hcGkuYWRtaW4udGVzdF9hZ2VudF9tYW5hZ2VtZW50LkFnZW50TWFuYWdlbWVudFRlc3RKU09OLnRlc3RfbGlzdF9hZ2VudFwiIEFORCBtZXNzYWdlOlwiRkFJTEVEXCIgbWVzc2FnZTpcIm5ldXRyb24udGVzdHMuYXBpLmFkbWluLnRlc3RfYWdlbnRfbWFuYWdlbWVudC5BZ2VudE1hbmFnZW1lbnRUZXN0SlNPTi50ZXN0X2xpc3RfYWdlbnRcIiBBTkQgbWVzc2FnZTpcIkZBSUxFRFwiIEFORCB0YWdzOmNvbnNvbGUuaHRtbCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNDY1ODM3MzIxMn0=

  Query:
  
  message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
  AND message:"FAILED"
  message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
  AND message:"FAILED" AND tags:console.html

  the failure itself is rather silly. The test expects description to be
  None, whereas it is an empty string -
  http://logs.openstack.org/08/188608/6/check/check-neutron-dsvm-
  api/fea6d1d/console.html#_2015-06-18_14_32_40_302

  Note: it looks similar to 1442494 but the failure mode is quite
  different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463363] [NEW] NSX-mh: Decimal RXTX factor not honoured

2015-06-09 Thread Salvatore Orlando
Public bug reported:

A decimal RXTX factor, which is allowed by nova flavors, is not honoured
by the NSX-mh plugin, but simply truncated to integer.

To reproduce:

* Create a neutron queue
* Create a neutron net / subnet using the queue
* Create a new flavor which uses an RXTX factor other than an integer value
* Boot a VM on the net above using the flavor
* View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)
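
Illustrative arithmetic only (not plugin code), consistent with the
behaviour described in the last step above: the reported results match the
rxtx factor being truncated with int() before scaling the queue bandwidth.

max_bandwidth = 1024  # max bandwidth on the parent queue

for rxtx_factor in (1.2, 3.4, 5.0):
    truncated = int(rxtx_factor) * max_bandwidth        # observed behaviour
    expected = int(round(rxtx_factor * max_bandwidth))  # honouring decimals
    print(rxtx_factor, truncated, expected)
# 1.2 -> 1024 vs 1229, 3.4 -> 3072 vs 3482, 5.0 -> 5120 vs 5120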

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in OpenStack Neutron (virtual network service):
  New
Status in VMware NSX:
  New

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin, but simply truncated to integer.

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463184] Re: Failing to launch VM if RX_TX*Network Qos is greater than Maximum value is 2147483

2015-06-09 Thread Salvatore Orlando
Triaging

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Changed in: vmware-nsx
   Status: New = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463184

Title:
  Failing to launch VM if RX_TX*Network Qos is greater than Maximum
  value is 2147483

Status in VMware NSX:
  Incomplete

Bug description:
  Unable to launch VMs when the net QOS rate, that is the flavor's RX_TX
  factor multiplied by the network QOS max bandwidth value, is greater than
  2147483.

  ERROR Server Error Message: Invalid bandwidth settings. Maximum value
  is 2147483.

  
  If the default bandwidth is 1 GB/s and RX_TX for a flavor is set to 5, it
  does not work and fails while launching an instance.

  neutron server log :

  neutron.openstack.common.rpc.com ERROR Failed to publish message to
  topic 'notifications.info': [Errno 32] Broken pipe#012Traceback (most
  recent call last):#012  File /usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py, line 579, in
  ensure#012return method(*args, **kwargs)#012  File
  /usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py, line 690, in
  _publish#012publisher = cls(self.conf, self.channel, topic,
  **kwargs)#012  File /usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py, line 392, in
  __init__#012super(NotifyPublisher, self).__init__(conf, channel,
  topic, **kwargs)#012  File /usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py, line 368, in
  __init__#012**options)#012  File /usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py, line 315, in
  __init__#012self.reconnect(channel)#012  File /usr/lib/python2.7
  /dist-packages/neutron/openstack/common/rpc/impl_kombu.py, line 395,
  in reconnect#012super(NotifyPublisher,
  self).reconnect(channel)#012  File /usr/lib/python2.7/dist-
  packages/neutron/openstack/common/rpc/impl_kombu.py, line 323, in
  reconnect#012routing_key=self.routing_key)#012  File
  /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 82, in
  __init__#012self.revive(self._channel)#012  File
  /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 216, in
  revive#012self.declare()#012  File /usr/lib/python2.7/dist-
  packages/kombu/messaging.py, line 102, in declare#012
  self.exchange.declare()#012  File /usr/lib/python2.7/dist-
  packages/kombu/entity.py, line 166, in declare#012nowait=nowait,
  passive=passive,#012  File /usr/lib/python2.7/dist-
  packages/amqp/channel.py, line 604, in exchange_declare#012
  self._send_method((40, 10), args)#012  File /usr/lib/python2.7/dist-
  packages/amqp/abstract_channel.py, line 62, in _send_method#012
  self.channel_id, method_sig, a

  neutron.plugins.vmware.api_client log :

  ERROR Server Error Message: Invalid bandwidth settings. Maximum value
  is 2147483.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1463184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463090] Re: Neutron QOS does not work with fraction RX_TX flavor value

2015-06-09 Thread Salvatore Orlando
*** This bug is a duplicate of bug 1463363 ***
https://bugs.launchpad.net/bugs/1463363

My apologies, I did not notice your bug report as it was targeting
neutron rather than vmware-nsx; I filed another one as this was
discovered independently in internal testing.

** This bug has been marked a duplicate of bug 1463363
   NSX-mh: Decimal RXTX factor not honoured

** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463090

Title:
  Neutron QOS does not work with fraction RX_TX flavor value

Status in VMware NSX:
  New

Bug description:
  We have noticed this in Icehouse. The Neutron QOS queue does not get
  created with a scaled max value when the RX_TX value of the nova flavor
  is a fraction, e.g. 0.5 or 1.5.

  It worked correctly when we tested with an RX_TX value of 5.0. Here we
  saw the neutron queue get created with a max value of 5120, since the
  queue value attached to the network is 1024.

  However, for fractional flavor values the neutron queue max value
  stayed the same at 1024. To reproduce:

  1. Create flavors with the appropriate RX_TX values

  2. Create neutron queue with 1024 max value

  3. Create network with queue id for earlier neutron queue

  4. Spin VMs with appropriate flavor

  5. Run neutron queue-list on controller. Check for max value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1463090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463085] Re: Fraction RX_TX value not getting applied to Neutron Queue QOS

2015-06-09 Thread Salvatore Orlando
*** This bug is a duplicate of bug 1463363 ***
https://bugs.launchpad.net/bugs/1463363

** This bug is no longer a duplicate of bug 1463090
   Neutron QOS does not work with fraction RX_TX flavor value
** This bug has been marked a duplicate of bug 1463363
   NSX-mh: Decimal RXTX factor not honoured

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463085

Title:
  Fraction RX_TX value not getting applied to Neutron Queue QOS

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I noticed this in Icehouse using Neutron backed by NSX.

  Not sure if it is related to a similar issue that was fixed :
  https://review.openstack.org/#/c/188560/1

  Following are steps to reproduce it :

  1. Create flavor with RX_TX as 0.25 and 1.5

  2. Create a neutron qos queue with max bandwidth value of 1024

  3. Create a network with queue_id attached

  4. Spin a VM with flavor having RX_TX value of 0.25

  5. Execute neutron queue-list.

  
  +--------------------------------------+--------------+-----+------+-------------+------+---------+
  | id                                   | name         | min | max  | qos_marking | dscp | default |
  +--------------------------------------+--------------+-----+------+-------------+------+---------+
  | 1b67eeb5-8568-43da-a7de-4bf4248b874d | qostestqueue |   0 | 1024 | untrusted   |    0 | False   |
  | efe223ab-c8aa-46bc-bc62-504dba6f7960 | qostestqueue |   0 | 1024 | untrusted   |    0 | False   |
  +--------------------------------------+--------------+-----+------+-------------+------+---------+

  Expected max value was 256; instead we see 1024. A similar issue is seen
  when creating a flavor with RX_TX of 1.5.

  It works for non-fractional RX_TX values, e.g. for an RX_TX of 5 we do see a
  neutron queue with a max value of 5120.
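
  For reference, a minimal sketch of the scaling behaviour expected above. This
  is illustrative only, not the plugin's actual code, and the helper name is
  made up:

      # Hypothetical illustration of the expected QoS queue scaling.
      def scale_queue_max(queue_max, rxtx_factor):
          return int(queue_max * float(rxtx_factor))

      print(scale_queue_max(1024, 0.25))  # 256  - expected, but 1024 is observed
      print(scale_queue_max(1024, 1.5))   # 1536 - expected, but 1024 is observed
      print(scale_queue_max(1024, 5))     # 5120 - matches the non-fractional case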

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461102] Re: cascade in orm relationships shadows ON DELETE CASCADE

2015-06-02 Thread Salvatore Orlando
** Changed in: neutron
   Status: New => Opinion

** Changed in: neutron
   Importance: Medium => Wishlist

** Changed in: neutron
 Milestone: liberty-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461102

Title:
  cascade in orm relationships shadows ON DELETE CASCADE

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In [1] there is a good discussion on how the 'cascade' property
  specified for sqlachemy.orm.relationship interacts with the 'ON DELETE
  CASCADE' specified in DDL.

  I stumbled on this when I was doing some DB access profiling and
  noticed multiple DELETE statements were emitted for a delete subnet
  operation [2], whereas I expected a single DELETE statement only; I
  expected that the cascade behaviour configured on db tables would have
  taken care of DNS servers, host routes, etc.

  What is happening is that sqlalchemy is performing orm-level cascading
  rather than relying on the database foreign key cascade options. And
  it's doing this because we told it to do so. As the SQLAlchemy
  documentation points out [3] there is no need to add the complexity of
  orm relationships if foreign keys are correctly configured on the
  database, and the passive_deletes option should be used.

  Enabling such option in place of all the cascade options for relationship 
caused a single DELETE statement to be issued [4].
  This is not a massive issue (possibly the time spent in extra queries is just 
.5ms), but surely it is something worth doing - if nothing else because it 
seems Neutron is not using SQLAlchemy in the correct way.

  As someone who's been doing this mistake for ages, for what is worth
  this has been for me a moment where I realized that sometimes it's
  good to be told RTFM.

  
  [1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
  [2] http://paste.openstack.org/show/256289/
  [3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
  [4] http://paste.openstack.org/show/256301/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461102] [NEW] cascade in orm relationships shadows ON DELETE CASCADE

2015-06-02 Thread Salvatore Orlando
Public bug reported:

In [1] there is a good discussion on how the 'cascade' property
specified for sqlachemy.orm.relationship interacts with the 'ON DELETE
CASCADE' specified in DDL.

I stumbled on this when I was doing some DB access profiling and noticed
multiple DELETE statements were emitted for a delete subnet operation
[2], whereas I expected a single DELETE statement only; I expected that
the cascade behaviour configured on db tables would have taken care of
DNS servers, host routes, etc.

What is happening is that sqlalchemy is performing orm-level cascading
rather than relying on the database foreign key cascade options. And
it's doing this because we told it to do so. As the SQLAlchemy
documentation points out [3] there is no need to add the complexity of
orm relationships if foreign keys are correctly configured on the
database, and the passive_deletes option should be used.

Enabling such option in place of all the cascade options for relationship 
caused a single DELETE statement to be issued [4].
This is not a massive issue (possibly the time spent in extra queries is just 
.5ms), but surely it is something worth doing - if nothing else because it 
seems Neutron is not using SQLAlchemy in the correct way.

As someone who's been doing this mistake for ages, for what is worth
this has been for me a moment where I realized that sometimes it's good
to be told RTFM.


[1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
[2] http://paste.openstack.org/show/256289/
[3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
[4] http://paste.openstack.org/show/256301/
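
As an illustration of the suggested pattern (simplified models with made-up
names, not Neutron's actual ones): declare ON DELETE CASCADE on the foreign
key and set passive_deletes=True on the relationship, so the database - not
the ORM - removes the children.

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(String(36), primary_key=True)
        # passive_deletes=True: do not load children and emit per-row DELETEs;
        # the database's ON DELETE CASCADE does the work in a single statement.
        dns_nameservers = relationship('DNSNameServer',
                                       backref='subnet',
                                       passive_deletes=True)

    class DNSNameServer(Base):
        __tablename__ = 'dnsnameservers'
        address = Column(String(128), primary_key=True)
        subnet_id = Column(String(36),
                           ForeignKey('subnets.id', ondelete='CASCADE'),
                           primary_key=True)

With this layout, deleting a Subnet and flushing the session emits a single
DELETE for the subnet and lets the foreign key cascade clean up the DNS
server rows.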

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: db

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461102

Title:
  cascade in orm relationships shadows ON DELETE CASCADE

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In [1] there is a good discussion on how the 'cascade' property
  specified for sqlachemy.orm.relationship interacts with the 'ON DELETE
  CASCADE' specified in DDL.

  I stumbled on this when I was doing some DB access profiling and
  noticed multiple DELETE statements were emitted for a delete subnet
  operation [2], whereas I expected a single DELETE statement only; I
  expected that the cascade behaviour configured on db tables would have
  taken care of DNS servers, host routes, etc.

  What is happening is that sqlalchemy is performing orm-level cascading
  rather than relying on the database foreign key cascade options. And
  it's doing this because we told it to do so. As the SQLAlchemy
  documentation points out [3] there is no need to add the complexity of
  orm relationships if foreign keys are correctly configured on the
  database, and the passive_deletes option should be used.

  Enabling such option in place of all the cascade options for relationship 
caused a single DELETE statement to be issued [4].
  This is not a massive issue (possibly the time spent in extra queries is just 
.5ms), but surely it is something worth doing - if nothing else because it 
seems Neutron is not using SQLAlchemy in the correct way.

  As someone who's been doing this mistake for ages, for what is worth
  this has been for me a moment where I realized that sometimes it's
  good to be told RTFM.

  
  [1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
  [2] http://paste.openstack.org/show/256289/
  [3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
  [4] http://paste.openstack.org/show/256301/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453322] [NEW] The strange case of the quota_items config option

2015-05-08 Thread Salvatore Orlando
Public bug reported:

The quota_items option [1] declares which resources will be subject to quota 
limiting.
Then, for each value listed there, the code will look for an option named 
quota_<resource_name> when registering resources with the quota engine. This 
happens at module load time.

This is pretty much how networks, ports, and subnets have been registering 
themselves with the quota engine so far.
All the other resources for which neutron does quota limiting, are a bit 
smarter, and register themselves when the respective API extensions are loaded.

Indeed there are config options for routers, floating ips, and other resources, 
which are not listed in quota_items. While this is not an error, it is surely 
confusing for operators. 
In order to avoid making the configuration schema dependent on the value for a 
conf option (eg: requiring a quota_meh option if 'meh' is in quota_items), the 
system picks a 'default limit' for all resources for which there is no 
corresponding limit. This avoids failures but is, again, fairly confusing.
Registration happens at module load. This is probably not great from a 
practical perspective. It's also bad for maintainability, and it is a bit ugly 
from a coding style perspective.
And finally it is unclear why resource registration is done in a way for core 
resources and in another way for all other resources. Consistency is somewhat 
important.

The quota_items option should probably just be deprecated.

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota.py#n35
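
For illustration, a simplified re-creation of the load-time registration
pattern described above (hypothetical dictionary-based config, not the actual
neutron.quota code):

    DEFAULT_QUOTA = -1  # stand-in for the 'default limit' mentioned above

    conf = {
        'quota_items': ['network', 'subnet', 'port', 'meh'],
        'quota_network': 10,
        'quota_subnet': 10,
        'quota_port': 50,
        # note: no 'quota_meh' option is defined
    }

    def register_resources(conf):
        # For each resource named in quota_items, look up quota_<resource_name>;
        # resources without a matching option silently get the default limit.
        registry = {}
        for resource in conf['quota_items']:
            registry[resource] = conf.get('quota_%s' % resource, DEFAULT_QUOTA)
        return registry

    print(register_resources(conf))
    # {'network': 10, 'subnet': 10, 'port': 50, 'meh': -1}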

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453322

Title:
  The strange case of the quota_items config option

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The quota_items option [1] declares which resources will be subject to quota 
limiting.
  Then the code, for each value, there, will look for an option named 
quota_resource_name, when registering resources with the quota engine. This 
happens at module load time.

  This is pretty much how networks, ports, and subnets have been registering 
themselves with the quota engine so far.
  All the other resources for which neutron does quota limiting, are a bit 
smarter, and register themselves when the respective API extensions are loaded.

  Indeed there are config options for routers, floating ips, and other 
resources, which are not listed in quota_items. While this is not an error, it 
is surely confusing for operators. 
  In order to avoid making the configuration schema dependent on the value for 
a conf option (eg: requiring a quota_meh option if 'meh' is in quota_items), 
the system picks a 'default limit' for all resources for which there is no 
corresponding limit. This avoid failures but is, again, fairly confusing.
  Registration happens at module load. This is probably not great from a 
practical perspective. It's also bad for maintainability, and it is a bit ugly 
from a coding style perspective.
  And finally it is unclear why resource registration is done in a way for core 
resources and in another way for all other resources. Consistency is somewhat 
important.

  the quota_items options should probably just be deprecated

  [1]
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota.py#n35

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450244] [NEW] In admin context is_advsvc should be True

2015-04-29 Thread Salvatore Orlando
Public bug reported:

Currently the is_advsvc setting on the Context object is always calculated with 
a policy check [1].
When is_admin is set to True the Context is being explicitly built to have 
admin rights. 
This seems kind of reasonable. It will still be possible to define policies 
where a user with an advsvc role can perform operations not even an admin can 
do (if that makes any sense).
This applies just to those contexts which are built inside the business logic 
to gain access to the whole database.

I am not sure if this can be of any practical use - for instance it might serve 
a similar purpose to get_admin_context.
However, it will spare an unnecessary check in the policy engine.
Moreover, it is going to simplify quite a bit the implementation of lightweight 
unit tests with a minimal harness. For instance unit tests which only cover DB 
operations.

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/context.py#n68
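
A minimal sketch of the proposed shortcut (simplified, hypothetical class -
not the actual neutron.context module):

    class Context(object):
        def __init__(self, user_id=None, tenant_id=None, is_admin=False,
                     is_advsvc=None):
            self.user_id = user_id
            self.tenant_id = tenant_id
            self.is_admin = is_admin
            if is_advsvc is None:
                # Proposed behaviour: an admin context implies advsvc rights,
                # sparing the policy engine check performed in [1].
                is_advsvc = self.is_admin or self._check_advsvc_policy()
            self.is_advsvc = is_advsvc

        def _check_advsvc_policy(self):
            # Placeholder for the policy check referenced in [1].
            return False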

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450244

Title:
  In admin context is_advsvc should be True

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently the is_advsvc setting on the Context object is always calculated 
with a policy check [1].
  When is_admin is set to True the Context is being explicitly built to have 
admin rights. 
  This seems kind of reasonable. It will still be possible to define policies 
when a user with a advsvc role can perform operations not even an admin can 
do (if that makes any sense).
  This just for those contexts which are built inside the business logic to 
gain access to the whole database.

  I am not sure if this can be of any practical use - for instance it might 
serve a similar purpose of get_admin_context.
  However, it will spare an unnecessary check in the policy engine.
  Moreover, It is going to simplify quite a bit implementation of light unit 
tests with minimal harness. For instance unit tests which only cover DB 
operations.

  [1]
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/context.py#n68

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449462] [NEW] read_deleted in neutron context serves no purpose

2015-04-28 Thread Salvatore Orlando
Public bug reported:

According to the docstring at [1] I can specify read_deleted='yes' or 'only' to 
see deleted records when performing queries.
However, Neutron does not perform soft deletes.
Also I kind of know a little Neutron's DB management layer and I'm pretty sure 
it never uses read_deleted anywhere.
As far as I remember no plugin makes use of it either.
According to git history this was added with an initial commit for Neutron 
context [2], which was probably more or less a cut & paste from nova.

It is worth removing that parameter before somebody actually tries
to set it to 'yes' or 'only'.

[1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/context.py#n44
[2] https://review.openstack.org/#/c/7952/

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449462

Title:
  read_deleted in neutron context serves no purpose

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  According to the docstring at [1] I can specify read_delete=yes or only to 
see deleted records when performing queries.
  However, Neutron does not perform soft deletes.
  Also I kind of know a little Neutron's DB management layer and I'm pretty 
sure it never uses read_deleted anywhere.
  As far as I remember no plugin makes use of it either.
  According to git history this was added with an initial commit for Neutron 
context [2], which was probably more or less a cut  paste from nova.

  It is worth removing that parameter before somebody actually and tries
  to set it to 'yes' or 'only'

  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/context.py#n44
  [2] https://review.openstack.org/#/c/7952/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448067] Re: Failed to shutdown neutron

2015-04-24 Thread Salvatore Orlando
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448067

Title:
  Failed to shutdown neutron

Status in Grenade - OpenStack upgrade testing:
  Fix Released

Bug description:
  From grenade log [1]
  28 hits in 7 days (most hits are in the past 36 hours) [2]. Failure rate: 100%

  2015-04-23 18:52:09.774 | 1 main 
/opt/stack/new/grenade/projects/50_neutron/shutdown.sh
  2015-04-23 18:52:09.775 | + die 'Failed to shutdown neutron'

  It seems somehow related to bug 1285232, which recently resurfaced.
  Attaching the bug report to neutron as well, as the problem might ultimately 
be there.

  [1] 
http://logs.openstack.org/10/176710/2/check/check-grenade-dsvm-neutron/db257bc/logs/grenade.sh.txt.gz
  [2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHNodXRkb3duIG5ldXRyb25cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTg3MTkzNzczMn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1448067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448067] [NEW] Failed to shutdown neutron

2015-04-24 Thread Salvatore Orlando
Public bug reported:

From grenade log [1]
28 hits in 7 days (most hits are in the past 36 hours) [2]. Failure rate: 100%

2015-04-23 18:52:09.774 | 1 main 
/opt/stack/new/grenade/projects/50_neutron/shutdown.sh
2015-04-23 18:52:09.775 | + die 'Failed to shutdown neutron'

It seems somehow related to bug 1285232, which recently resurfaced.
Attaching the bug report to neutron as well, as the problem might ultimately be 
there.

[1] 
http://logs.openstack.org/10/176710/2/check/check-grenade-dsvm-neutron/db257bc/logs/grenade.sh.txt.gz
[2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHNodXRkb3duIG5ldXRyb25cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTg3MTkzNzczMn0=

** Affects: grenade
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448067

Title:
  Failed to shutdown neutron

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  From grenade log [1]
  28 hits in 7 days (most hits are in the past 36 hours) [2]. Failure rate: 100%

  2015-04-23 18:52:09.774 | 1 main 
/opt/stack/new/grenade/projects/50_neutron/shutdown.sh
  2015-04-23 18:52:09.775 | + die 'Failed to shutdown neutron'

  It seems somehow related to bug 1285232 which recently resurface.
  Attaching the bug report to neutron as well as the problem might ultimately 
be there.

  [1] 
http://logs.openstack.org/10/176710/2/check/check-grenade-dsvm-neutron/db257bc/logs/grenade.sh.txt.gz
  [2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHNodXRkb3duIG5ldXRyb25cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTg3MTkzNzczMn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1448067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447344] [NEW] DHCP agent: metadata network broken for DVR

2015-04-22 Thread Salvatore Orlando
Public bug reported:

When the 'metadata network' feature is enabled, the DHCP agent code at [1] will 
not spawn a metadata proxy for DVR routers.
This should be fixed.

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/dhcp/agent.py#n357

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: juno-backport-potential kilo-backport-potential l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447344

Title:
  DHCP agent: metadata network broken for DVR

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When the 'metadata network' feature is enabled, the DHCP at [1] will not 
spawn a metadata proxy for DVR routers.
  This should be fixed.

  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/dhcp/agent.py#n357

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446021] [NEW] Remove get_admin_roles in neutron/policy.py

2015-04-19 Thread Salvatore Orlando
Public bug reported:

There is a comment in that method that clearly states its temporary
nature.


# NOTE(salvatore-orlando): This function provides a solution for
# populating implicit contexts with the appropriate roles so that
# they correctly pass policy checks, and will become superseded
# once all explicit policy checks are removed from db logic and
# plugin modules. For backward compatibility it returns the literal
# admin if ADMIN_CTX_POLICY is not defined

link:
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/policy.py#n473

It is probably time this function and all associated logic are removed - as
they constitute a blocker for the adoption of oslo_policy.

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446021

Title:
  Remove get_admin_roles in neutron/policy.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There is a comment in that method that clearly states its temporary
  nature.

  
  # NOTE(salvatore-orlando): This function provides a solution for
  # populating implicit contexts with the appropriate roles so that
  # they correctly pass policy checks, and will become superseded
  # once all explicit policy checks are removed from db logic and
  # plugin modules. For backward compatibility it returns the literal
  # admin if ADMIN_CTX_POLICY is not defined

  link:
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/policy.py#n473

  It is probably time function and all associated logic is removed - as
  it constitutes a blocker for adoption of oslo_policy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445690] [NEW] legacy admin rule does not work and is not needed anymore

2015-04-17 Thread Salvatore Orlando
Public bug reported:

in neutron/policy.py:

def check_is_admin(context):
    """Verify context has admin rights according to policy settings."""
init()
# the target is user-self
credentials = context.to_dict()
target = credentials
# Backward compatibility: if ADMIN_CTX_POLICY is not
# found, default to validating role:admin
admin_policy = (ADMIN_CTX_POLICY if ADMIN_CTX_POLICY in _ENFORCER.rules
else 'role:admin')
return _ENFORCER.enforce(admin_policy, target, credentials)

if ADMIN_CTX_POLICY is not specified the enforcer checks role:admin,
which since it does not exist among rules loaded from file, defaults to
TrueCheck. This is wrong, and to an extent even dangerous because if
ADMIN_CTX_POLICY is missing, then every context would be regarded as an
admin context. Thankfully this was only for backward compatibility and
is not necessary anymore.

A similar mistake is made for ADVSVC_CTX_POLICY. This is even more
puzzling because there was no backward compatibility requirement there.

Obviously the unit tests supposed to ensure the correct behaviour of the
backward compatibility tweak are validating something completely
different.
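
One possible corrected form (sketch only, reusing the module-level names from
the snippet above; the actual fix may differ): drop the backward-compatibility
fallback so a missing rule fails closed instead of collapsing into an
always-true check.

    def check_is_admin(context):
        """Verify context has admin rights according to policy settings."""
        init()
        credentials = context.to_dict()
        # No 'role:admin' fallback: if ADMIN_CTX_POLICY is not defined,
        # fail closed rather than defaulting to TrueCheck.
        if ADMIN_CTX_POLICY not in _ENFORCER.rules:
            return False
        return _ENFORCER.enforce(ADMIN_CTX_POLICY, credentials, credentials)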

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445690

Title:
  legacy admin rule does not work and is not needed anymore

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  in neutron/policy.py:

  def check_is_admin(context):
      """Verify context has admin rights according to policy settings."""
  init()
  # the target is user-self
  credentials = context.to_dict()
  target = credentials
  # Backward compatibility: if ADMIN_CTX_POLICY is not
  # found, default to validating role:admin
  admin_policy = (ADMIN_CTX_POLICY if ADMIN_CTX_POLICY in _ENFORCER.rules
  else 'role:admin')
  return _ENFORCER.enforce(admin_policy, target, credentials)

  if ADMIN_CTX_POLICY is not specified the enforcer checks role:admin,
  which since it does not exist among rules loaded from file, defaults
  to TrueCheck. This is wrong, and to an extent even dangerous because
  if ADMIN_CTX_POLICY is missing, then every context would be regarded
  as an admin context. Thankfully this was only for backward
  compatibility and is not necessary anymore.

  A similar mistake is made for ADVSVC_CTX_POLICY. This is even more
  puzzling because there was no backward compatibility requirement
  there.

  Obviously the unit tests supposed to ensure the correct behaviour of
  the backward compatibility tweak are validating something completely
  different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445169] [NEW] Quota model does not use HasTenant mixin

2015-04-16 Thread Salvatore Orlando
Public bug reported:

This is a minor issue, but in the future (i.e. when we'll finally move to 
renaming tenant_id to project_id) this could cause problems.
As it's a super easy fix it might be worth doing it.
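
The fix would look roughly like this (sketch only, assuming the HasTenant
mixin in neutron.db.model_base keeps providing the tenant_id column; column
names and types here are illustrative):

    import sqlalchemy as sa

    from neutron.db import model_base

    class Quota(model_base.BASEV2, model_base.HasId, model_base.HasTenant):
        """Single quota override for a tenant.

        HasTenant supplies tenant_id, so a future rename to project_id only
        needs to touch the mixin, not this model.
        """
        resource = sa.Column(sa.String(255))
        limit = sa.Column(sa.Integer)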

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445169

Title:
  Quota model does not use HasTenant mixin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is a minor issue, but in the future (ie: when we'll finally move to 
renaming tenant_id to project_id) this could cause problem.
  As it's a super easy fix it might be worth doing it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439817] [NEW] IP set full error in kernel log

2015-04-02 Thread Salvatore Orlando
Public bug reported:

This is appearing in some logs upstream:
http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-
neutron-full-non-isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06

And it has also been reported by andreaf in IRC as having been observed
downstream.

Logstash is not very helpful as this manifests only with a job currently in the 
experimental queue.
As said job runs in non-isolated mode, accrual of elements in the IP set until 
it reaches saturation is one of the things that might need to be investigated.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439817

Title:
  IP set full error in kernel log

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is appearing in some logs upstream:
  http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-
  neutron-full-non-
  isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06

  And it has also been reported by andreaf in IRC as having been
  observed downstream.

  Logstash is not very helpful as this manifests only with a job currently in 
the experimental queue.
  As said job runs in non-isolated mode, accrual of elements in the IP set 
until it reaches saturation is one of the things that might need to be 
investigated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433554] Re: DVR: metadata network not created for NSX-mh

2015-03-24 Thread Salvatore Orlando
Addressed for stable/juno by: https://review.openstack.org/167295

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433554

Title:
  DVR: metadata network not created for NSX-mh

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  In Progress
Status in VMware NSX:
  Fix Committed

Bug description:
  When creating a distributed router, instances attached to it do not
  have metadata access.

  This is happening because the metadata network is not being created
  and connected to the router - since the process for handling metadata
  network has not been updated with the new interface type for DVR
  router ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433553] Re: DVR: remove interface fails on NSX-mh

2015-03-24 Thread Salvatore Orlando
Addressed for stable/juno by: https://review.openstack.org/167295

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433553

Title:
  DVR: remove interface fails on NSX-mh

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  In Progress
Status in VMware NSX:
  Fix Committed

Bug description:
  The DVR mixin, which the MH plugin is now using, assumes that routers
  are deployed on l3 agents, which is not the case for VMware plugins.

  While it is generally wrong that a backend agnostic management layer
  makes assumptions about the backend, the VMware plugins should work
  around this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433550] Re: DVR: VMware NSX plugins do not need centralized snat interfaces

2015-03-24 Thread Salvatore Orlando
Addressed for stable/juno by: https://review.openstack.org/167295

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => In Progress

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433550

Title:
  DVR: VMware NSX plugins do not need centralized snat interfaces

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  In Progress
Status in VMware NSX:
  Fix Committed

Bug description:
  When creating a distributed router, a centralized SNAT port is
  created.

  However since the NSX backend does not need it to implement
  distributed routing, this is just a waste of resources (port and IP
  address). Also, it might confuse users with admin privileges as they
  won't know what these ports are doing.

  So even if they do no harm they should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430523] [NEW] Deprecate the config quota driver

2015-03-10 Thread Salvatore Orlando
Public bug reported:

This driver is still in the neutron code base, but it's probably unused and 
already bitrotting.
We have been defaulting for several release cycles to the DB driver.

The config driver is very trivial and has a lot of shortcomings, which make it 
pretty much incompatible with 'modern day' openstack.
For instance:
- it is unable to set per-tenant quotas
- switch to/from db quota driver not supported
- need to restart server to change quota limits

it is time we move towards its deprecation, with aim of removal for
liberty
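
A minimal sketch of what the deprecation could look like (hypothetical, not
the actual patch): keep the driver working but warn loudly when it is used.

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    class ConfDriver(object):
        """Configuration-file based quota driver (deprecated)."""

        def __init__(self):
            LOG.warning("The configuration-based quota driver is deprecated "
                        "and is planned for removal in Liberty; please switch "
                        "to the DB quota driver.")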

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: quota

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430523

Title:
  Deprecate the config quota driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This driver is still in the neutron code base, but it's probably unused and 
already bitrotting.
  We have been defaulting for several release cycles to the DB driver.

  The config driver is very trivial and has a lot of shortcomings, which make 
it pretty much incompatible with 'modern day' openstack.
  For instance:
  - it is unable to set per-tenant quotas
  - switch to/from db quota driver not supported
  - need to restart server to change quota limits

  it is time we move towards its deprecation, with aim of removal for
  liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430519] [NEW] Quota drivers should not throw QuotaResourceUknown

2015-03-10 Thread Salvatore Orlando
Public bug reported:

Resource registration for Quota enforcement and management is performed
by the quota engine [1].

However the task of verifying whether a resource is registered is left
to the drivers [2]. This is conceptually wrong, and it also has the not
so nice effect that the engine must pass registered resources to the
driver as a parameter in limit_check.

[1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota.py#n238
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/quota_db.py#n115

This was discovered during the implementation of a reservation system
within the quota engine. The bug, albeit not critical, grants for a
standalone patch not squashed into the commits for
https://blueprints.launchpad.net/neutron/+spec/better-quotas
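
A simplified sketch (hypothetical method names, not the actual engine/driver
code) of moving the unknown-resource check out of the driver and into the
engine that owns the registrations:

    class QuotaEngine(object):
        def __init__(self, driver):
            self._driver = driver
            self._resources = {}

        def register_resource(self, resource_name, resource):
            self._resources[resource_name] = resource

        def limit_check(self, context, tenant_id, **values):
            # The engine owns resource registration, so it - not the driver -
            # rejects unknown resources; the driver only enforces limits.
            unknown = set(values) - set(self._resources)
            if unknown:
                raise ValueError("Unknown quota resources: %s" % sorted(unknown))
            return self._driver.limit_check(context, tenant_id,
                                            self._resources, values)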

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: quota

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430519

Title:
  Quota drivers should not throw QuotaResourceUknown

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Resource registration for Quota enforcement and management is
  performed by the quota engine [1].

  However the task of verifying whether a resource is registered is left
  to the drivers [2]. This is conceptually wrong, and it also has the
  not so nice effect that the engine must pass registered resources to
  the driver as a parameter in limit_check.

  [1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota.py#n238
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/quota_db.py#n115

  This was discovered during the implementation of a reservation system
  within the quota engine. The bug, albeit not critical, grants for a
  standalone patch not squashed into the commits for
  https://blueprints.launchpad.net/neutron/+spec/better-quotas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332502] Re: Intermittent UT failure for VMware 'adv' plugin

2015-02-27 Thread Salvatore Orlando
This plugin does not exist anymore

** Changed in: neutron
   Status: Incomplete => Won't Fix

** Changed in: neutron
 Assignee: Salvatore Orlando (salvatore-orlando) => (unassigned)

** Changed in: neutron
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332502

Title:
  Intermittent UT failure for VMware 'adv' plugin

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Failure occurs in
  
neutron.tests.unit.vmware.vshield.test_vpnaas_plugin.TestVpnPlugin.test_create_vpnservice_with_invalid_route

  logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTFwiIEFORCBtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy51bml0LnZtd2FyZS52c2hpZWxkLnRlc3RfdnBuYWFzX3BsdWdpbi5UZXN0VnBuUGx1Z2luLnRlc3RfY3JlYXRlX3ZwbnNlcnZpY2Vfd2l0aF9pbnZhbGlkX3JvdXRlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzMjYzNzAzNzI4fQ==

  Error log:
  
http://logs.openstack.org/47/101447/3/check/gate-neutron-python26/69da3af/console.html

  Introduced by: offending patch not yet known - could be a latent
  problem accidentally uncovered by other patches.

  Hits in past 7 days: 9 (1 in gate queue)

  
  setting priority as high as for anything affecting gate stability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426121] [NEW] vmw nsx: add/remove interface on dvr is broken

2015-02-26 Thread Salvatore Orlando
Public bug reported:

When the NSX specific extension was dropped in favour of the community
one, there was a side effect that unfortunately caused add/remove
interface operations to fail when executed passing a subnet id.

This should be fixed soon and backported to Juno.
Icehouse is not affected.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: neutron/juno
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: vmware-nsx
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Summary changed:

- add/remove interface on dist router is broken
+ vmw nsx: add/remove interface on dvr is broken

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426121

Title:
  vmw nsx: add/remove interface on dvr is broken

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  New
Status in VMware NSX:
  New

Bug description:
  When the NSX specific extension was dropped in favour of the community
  one, there was a side effect that unfortunately caused add/remove
  interface operations to fail when executed passing a subnet id.

  This should be fixed soon and backported to Juno.
  Icehouse is not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420278] [NEW] API workers might not work when sync thread is enabled

2015-02-10 Thread Salvatore Orlando
Public bug reported:

API workers are started with a fork().
It is well known that this operation uses CoW on the child process - and this 
should not constitute a problem.
However, the status synch thread is started at plugin initialization and might 
be already running when the API workers are forked.
The NSX API client,  extensively used by this thread, uses an eventlet 
semaphore to grab backend connections from a pool.
It is therefore possible that when a worker process is forked it receives 
semaphores which are in a busy state. Once forked these semaphores are new 
objects, and they will never be unblocked. The API worker therefore simply 
hangs.

This behaviour has been confirmed by observation on the field
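
A toy illustration of the failure mode and of one common mitigation
(hypothetical class, not the NSX API client; a plain threading.Semaphore
stands in for the eventlet one): re-create pool state after fork() instead of
inheriting it.

    import os
    import threading

    class ConnectionPool(object):
        """Toy pool showing why forking with a held semaphore can hang."""

        def __init__(self, size):
            self._size = size
            self._reset()

        def _reset(self):
            self._sem = threading.Semaphore(self._size)
            self._owner_pid = os.getpid()

        def acquire(self):
            # If the process was forked while the parent held the semaphore,
            # the child inherits it in a busy state and would block forever.
            # Re-creating the pool in the child avoids that.
            if self._owner_pid != os.getpid():
                self._reset()
            self._sem.acquire()

        def release(self):
            self._sem.release()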

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Affects: neutron/icehouse
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: neutron/juno
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: vmware-nsx
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/icehouse
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420278

Title:
  API workers might not work when sync thread is enabled

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in neutron icehouse series:
  New
Status in neutron juno series:
  New
Status in VMware NSX:
  In Progress

Bug description:
  API workers are started with a fork().
  It is well known that this operation uses CoW on the child process - and this 
should not constitute a problem.
  However, the status synch thread is started at plugin initialization and 
might be already running when the API workers are forked.
  The NSX API client,  extensively used by this thread, uses an eventlet 
semaphore to grab backend connections from a pool.
  It is therefore possible that when a worker process is forked it receives 
semaphore which are in busy state. Once forked these semaphores are new 
objects, and they will never be unblocked. The API worker therefore simply 
hangs.

  This behaviour has been confirmed by observation on the field

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416179] Re: Missing API to get list of provider network types

2015-02-08 Thread Salvatore Orlando
This is probably more of a missing feature on the server side.
Without the server exposing the available provider types, the only thing the 
CLI could do is return a hardcoded list, which is probably not the right thing 
to do.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Status: New => Invalid

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416179

Title:
  Missing API to get list of provider network types

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Python client library for Neutron:
  Invalid

Bug description:
  There is currently no way to get the list of available provider
  network types from Neutron. The list may vary based on which plugin is
  used. Allowing clients to query for this list will remove having to
  hardcode the types in clients such as Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417180] [NEW] Unable to load extensions with same module name

2015-02-02 Thread Salvatore Orlando
Public bug reported:

https://review.openstack.org/#/c/151375/ introduced a check for avoiding
loading an extension if its module name was already scanned previously.

While this is generally fine, it does not take into account that
different extensions might have the same module name, even if different
paths. This applies for instance in this case:

salvatore@ubuntu:/opt/stack/neutron$ find ./neutron -name qos.py
./neutron/plugins/cisco/extensions/qos.py
./neutron/plugins/vmware/extensions/qos.py

both cisco and vmware plugins declare extensions in a module named
'qos'. However, such extensions are completely different as they have a
different alias  (and in this case can hardly work together in the same
deployment).

In this specific case, the vmware plugin is the one for which the
extension is not being loaded since the cisco plugin's module is read
first. While this does not break the plugin, it does break unit tests as
for some reason the cisco's plugin extension path info is not removed
once the cisco unit tests complete running.

It should be however noted that in general it might be ok to have
distinct extensions in modules with the same name. Extension aliases are
instead supposed to be unique and should be used to discriminate
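
A simplified sketch (hypothetical helper, not the actual ExtensionManager
code) of de-duplicating on the extension alias rather than on the module name:

    def load_extensions(extensions):
        """Keep one extension per alias; module names may legitimately repeat.

        'extensions' is any iterable of objects exposing get_alias(), e.g.
        both the cisco and vmware 'qos' modules, whose aliases differ.
        """
        loaded = {}
        for ext in extensions:
            alias = ext.get_alias()
            if alias in loaded:
                # The same alias really is a duplicate; the same module name
                # is not.
                continue
            loaded[alias] = ext
        return loaded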

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417180

Title:
  Unable to load extensions with same module name

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  https://review.openstack.org/#/c/151375/ introduced a check for
  avoiding loading an extension if its module name was already scanned
  previously.

  While this is generally fine, it does not take into account that
  different extensions might have the same module name, even if
  different paths. This applies for instance in this case:

  salvatore@ubuntu:/opt/stack/neutron$ find ./neutron -name qos.py
  ./neutron/plugins/cisco/extensions/qos.py
  ./neutron/plugins/vmware/extensions/qos.py

  both cisco and vmware plugins declare extensions in a module named
  'qos'. However, such extensions are completely different as they have
  a different alias  (and in this case can hardly work together in the
  same deployment).

  In this specific case, the vmware plugin is the one for which the
  extension is not being loaded since the cisco plugin's module is read
  first. While this does not break the plugin, it does break unit tests
  as for some reason the cisco's plugin extension path info is not
  removed once the cisco unit tests complete running.

  It should be however noted that in general it might be ok to have
  distinct extensions in modules with the same name. Extension aliases
  are instead supposed to be unique and should be used to discriminate

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416593] [NEW] vmware unit tests are still skipped in openstack/neutron

2015-01-30 Thread Salvatore Orlando
Public bug reported:

Despite having fixed the problem that caused the skip during the adv
services spin off, the vmware unit tests are still skipped because the
__init__.py file is missing (they're just not a test package for testr)

This has recently caused a regression which has been correctly caught in
the repository where the decomposed plugin is being prepped up. Until
the decomposition is complete however, it might be worth ensuring unit
tests are executed as part of the py27 job on openstack/neutron

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416593

Title:
  vmware unit tests are still skipped in openstack/neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Despite having fixed the problem that caused the skip during the adv
  services spin off, the vmware unit tests are still skipped because the
  __init__.py file is missing (they're just not a test package for
  testr)

  This has recently caused a regression which has been correctly caught
  in the repository where the decomposed plugin is being prepped up.
  Until the decomposition is complete however, it might be worth
  ensuring unit tests are executed as part of the py27 job on
  openstack/neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416596] [NEW] vmware unit tests broken

2015-01-30 Thread Salvatore Orlando
Public bug reported:

commit 79c97120de9cff4d0992b5d41ff4bbf05e890f89 introduced a constraint which 
causes a vmware unit test to fail.
This unit test indeed directly exercises the plugin - creating a context with 
get_admin_context. For such context, tenant_id is None, and the DB constraint 
on the default security group table fails.

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: vmware-nsx
 Importance: Undecided
 Status: New

** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416596

Title:
  vmware unit tests broken

Status in OpenStack Neutron (virtual network service):
  New
Status in VMware NSX:
  New

Bug description:
  commit 79c97120de9cff4d0992b5d41ff4bbf05e890f89 introduced a constraint which 
causes a vmware unit test to fail.
  This unit test indeed directly exercises the plugin - creating a context with 
get_admin_context. For such context, tenant_id is None, and the DB constraint 
on the default security group table fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414199] [NEW] Delete subnet can fail for SLAAC/DHCP_STATELESS with 409

2015-01-23 Thread Salvatore Orlando
Public bug reported:

The routine for deleting a subnet checks whether it's ok to remove a
subnet in the following way:

1) Query IP allocations on the subnet for ports that can be automatically 
deleted
2) Remove those allocations
3) Check again subnet allocations. If there is any other allocation this means 
that there are some IPs that cannot be automatically deleted
4) Raise a conflict exception

In the case of SLAAC or DHCP_STATELESS IPv6 subnets, every IP address can be 
automatically deleted - and that's where the problem lies.
Indeed the query performed at step #3 ([1], and [2] for the ML2 plugin) might 
cause a failure during subnet deletion if:
- The transaction isolation level is set to READ COMMITTED
- The subnet address mode is either SLAAC or DHCP STATELESS
- A port is created concurrently with the delete subnet procedure and an IP 
address is assigned to it.

These circumstances are quite unlikely to occur, but far from
impossible. They are indeed seen in gate tests [3].

It is advisable to provide a fix for this issue. To this end it is
probably worth noting that check #3 is rather pointless for subnets
with automatic address mode.
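
A possible direction, sketched with illustrative names (this is not the
actual Neutron code): skip the step #3 re-check when every address on the
subnet is auto-assigned.

    AUTO_ADDRESS_MODES = ('slaac', 'dhcpv6-stateless')

    def allocations_block_subnet_delete(subnet, remaining_allocations):
        # For SLAAC / DHCPv6-stateless subnets any allocation created
        # concurrently can still be removed automatically, so finding one
        # here should not translate into a 409.
        if subnet.get('ipv6_address_mode') in AUTO_ADDRESS_MODES:
            return False
        return bool(remaining_allocations)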


[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/db_base_plugin_v2.py#n1240
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/plugin.py#n810
[3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6Y29uc29sZSBBTkQgYnVpbGRfc3RhdHVzOkZBSUxVUkUgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay1ncmVuYWRlLWRzdm0tbmV1dHJvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIyMDQ1MDA1OTE3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414199

Title:
  Delete subnet can fail for SLAAC/DHCP_STATELESS with 409

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The routine for deleting a subnet checks whether it's ok to remove a
  subnet in the following way:

  1) Query IP allocations on the subnet for ports that can be automatically 
deleted
  2) Remove those allocations
  3) Check again subnet allocations. If there is any other allocation this 
means that there are some IPs that cannot be automatically deleted
  4) Raise a conflict exception

  In the case of SLAAC or DHCP_STATELESS IPv6 subnets, every IP address can be 
automatically deleted - and that's where the problem lies.
  Indeed the query performed at step #3 ([1], and [2] for the ML2 plugin) might 
cause a failure during subnet deletion if:
  - The transaction isolation level is set to READ COMMITTED
  - The subnet address mode is either SLAAC or DHCP STATELESS
  - A port is created concurrently with the delete subnet procedure and an IP 
address is assigned to it.

  These circumstances are quite unlikely to occur, but far from
  impossible. They are indeed seen in gate tests [3].

  It is advisable to provide a fix for this issue. To this end it is
  probably worth noting that check #3 is rather pointless for
  subnets with automatic address mode.

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/db_base_plugin_v2.py#n1240
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/plugin.py#n810
  [3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6Y29uc29sZSBBTkQgYnVpbGRfc3RhdHVzOkZBSUxVUkUgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay1ncmVuYWRlLWRzdm0tbmV1dHJvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIyMDQ1MDA1OTE3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376037] Re: NSX: switch chaining logic is obsolete

2015-01-14 Thread Salvatore Orlando
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: Salvatore Orlando (salvatore-orlando) => (unassigned)

** Changed in: vmware-nsx
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: vmware-nsx
   Status: New => In Progress

** Changed in: vmware-nsx
   Importance: Undecided => Medium

** Changed in: neutron
   Status: In Progress => Won't Fix

** No longer affects: vmware-nsx

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** Changed in: vmware-nsx
   Status: New => In Progress

** Changed in: vmware-nsx
   Importance: Undecided => Medium

** Changed in: vmware-nsx
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
Milestone: kilo-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376037

Title:
  NSX: switch chaining logic is obsolete

Status in OpenStack Neutron (virtual network service):
  Won't Fix
Status in neutron juno series:
  New
Status in VMware NSX:
  In Progress

Bug description:
  The NSX plugin implements logical switch chaining to support flat/vlan
  neutron networks with a very large number of ports on NSX backends for
  which the number of ports per logical switch is limited.

  This limitation, however, pertains exclusively to old and discontinued
  versions of NSX, and therefore the corresponding logic for creating such
  chained switches can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410777] [NEW] Floating IP ops lock wait timeout

2015-01-14 Thread Salvatore Orlando
Public bug reported:

Under heavy load floating IP operations can trigger a lock wait timeout,
thus causing the operation itself to fail.

The reason for the timeout is the usual untimely eventlet yield which
can be triggered in many places during the operation. The chances of
this happening are increased by the fact that _update_fip_assoc (called
within a DB transaction) does several interactions with the NSX backend.

Unfortunately it is not practical to change the logic of the plugin in a
way such that _update_fip_assoc does not go to the backend anymore,
especially because the fix would be so extensive that it would be hardly
backportable. An attempt in this direction also did not provide a
solution: https://review.openstack.org/#/c/138078/
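
The problematic pattern, in hedged pseudo-code (method and variable names are
illustrative, not the plugin's exact code):

    def associate_floatingip(self, context, floatingip_id, fip, external_port):
        # Anti-pattern: REST calls to the NSX backend happen while the DB
        # transaction is open, so the row lock taken on the floating IP is
        # held across eventlet yields on the HTTP socket.
        with context.session.begin(subtransactions=True):
            fip_db = self._get_floatingip(context, floatingip_id)  # row locked
            # Under load, concurrent floating IP operations wait on the same
            # rows and eventually hit "lock wait timeout exceeded".
            self._update_fip_assoc(context, fip, fip_db, external_port)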

** Affects: neutron
 Importance: Undecided
 Status: Won't Fix

** Affects: neutron/juno
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: vmware-nsx
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: vmware-nsx
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410777

Title:
  Floating IP ops lock wait timeout

Status in OpenStack Neutron (virtual network service):
  Won't Fix
Status in neutron juno series:
  New
Status in VMware NSX:
  In Progress

Bug description:
  Under heavy load floating IP operations can trigger a lock wait
  timeout, thus causing the operation itself to fail.

  The reason for the timeout is the usual untimely eventlet yield which
  can be triggered in many places during the operation. The chances of
  this happening are increased by the fact that _update_fip_assoc
  (called within a DB transaction) does several interactions with the
  NSX backend.

  Unfortunately it is not practical to change the logic of the plugin in
  a way such that _update_fip_assoc does not go to the backend anymore,
  especially because the fix would be so extensive that it would be
  hardly backportable. An attempt in this direction also did not provide
  a solution: https://review.openstack.org/#/c/138078/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1410777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405311] [NEW] Incorrect check for security groups in create_port

2014-12-23 Thread Salvatore Orlando
Public bug reported:

The check will fail if security groups in the request body are an empty
string.

In http://git.openstack.org/cgit/stackforge/vmware-
nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1127 the
code should not raise if security groups are an empty list

This causes tempest's smoke and full test suites to fail always
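
A hedged sketch of the kind of check that avoids the failure (the constants
are the usual Neutron ones, the surrounding function is illustrative):

    from neutron.api.v2 import attributes as attr
    from neutron.extensions import securitygroup as ext_sg

    def validate_port_security_groups(port_data):
        sgids = port_data.get(ext_sg.SECURITYGROUPS)
        # An unset attribute or an empty list is not a request for security
        # groups; only raise when explicit, non-empty groups are passed.
        if attr.is_attr_set(sgids) and sgids:
            raise ValueError("security groups not supported here")  # stand-in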

** Affects: neutron
 Importance: Critical
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: vmware-nsx
 Importance: Critical
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
Milestone: None => kilo-2

** Changed in: neutron
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405311

Title:
  Incorrect check for security groups in create_port

Status in OpenStack Neutron (virtual network service):
  New
Status in VMware NSX:
  New

Bug description:
  The check will fail if security groups in the request body are an
  empty string.

  In http://git.openstack.org/cgit/stackforge/vmware-
  nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1127 the
  code should not raise if security groups are an empty list

  This causes tempest's smoke and full test suites to fail always

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238439] Re: admin can not delete External Network because floatingip

2014-11-20 Thread Salvatore Orlando
** No longer affects: neutron/juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1238439

Title:
  admin can not delete External Network because floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  In Progress

Bug description:
  Hi,

  In the admin role, I create an External Network and router, and create
  tenant A and user A.

  Now user A logs in, creates a network and router, creates VM1 and assigns
  a Floating IP; access works perfectly.

  Now I try, in the admin role, to delete it all.

  1: delete userA, no problem
  2: delete tenantA, no problem
  3: delete vm1, no problem
  4: delete router, no problem
  5: delete the External Network: an error is reported. I try to delete the 
port in the sub panel, which also fails.
  Checking the Neutron server log:

  TRACE neutron.api.v2.resource L3PortInUse: Port 2e5fa663-22e0-4c9e-
  87cc-e89c12eff955 has owner network:floatingip and therefore cannot be
  deleted directly via the port API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1238439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238439] Re: admin can not delete External Network because floatingip

2014-11-19 Thread Salvatore Orlando
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1238439

Title:
  admin can not delete External Network because floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  New

Bug description:
  Hi,

  In the admin role, I create an External Network and router, and create
  tenant A and user A.

  Now user A logs in, creates a network and router, creates VM1 and assigns
  a Floating IP; access works perfectly.

  Now I try, in the admin role, to delete it all.

  1: delete userA, no problem
  2: delete tenantA, no problem
  3: delete vm1, no problem
  4: delete router, no problem
  5: delete the External Network: an error is reported. I try to delete the 
port in the sub panel, which also fails.
  Checking the Neutron server log:

  TRACE neutron.api.v2.resource L3PortInUse: Port 2e5fa663-22e0-4c9e-
  87cc-e89c12eff955 has owner network:floatingip and therefore cannot be
  deleted directly via the port API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1238439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357055] Re: Race to delete shared subnet in Tempest neutron full jobs

2014-10-08 Thread Salvatore Orlando
I came to the same conclusions as Alex: the servers are not deleted
hence the error.

However, the logging which Alex is asking for is already there.
Indeed here are the delete operations on teardown for a failing test:

salvatore@trustillo:~$ cat tempest.txt.gz | grep -i 
ServerRescueNegativeTestJSON.*tearDownClass.*DELETE
2014-10-07 17:49:04.444 25908 INFO tempest.common.rest_client 
[req-75c758b3-d8cb-48d6-9cb6-3670147aca41 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 202 DELETE 
http://127.0.0.1:8774/v2/829473406bb545c895a5cd0320624812/os-volumes/ffc6-ba25-413d-8ff1-839d3643299d
 0.135s
2014-10-07 17:49:04.444 25908 DEBUG tempest.common.rest_client 
[req-75c758b3-d8cb-48d6-9cb6-3670147aca41 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 202 DELETE 
http://127.0.0.1:8774/v2/829473406bb545c895a5cd0320624812/os-volumes/ffc6-ba25-413d-8ff1-839d3643299d
 0.135s
2014-10-07 17:52:21.452 25908 INFO tempest.common.rest_client 
[req-d0fa5615-9e64-4faa-bd8d-2ad1ac6afb53 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/0ecd1539-70af-4500-aa4d-9e131fa1fffc 0.238s
2014-10-07 17:52:21.452 25908 DEBUG tempest.common.rest_client 
[req-d0fa5615-9e64-4faa-bd8d-2ad1ac6afb53 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/0ecd1539-70af-4500-aa4d-9e131fa1fffc 0.238s
2014-10-07 17:52:21.513 25908 INFO tempest.common.rest_client 
[req-89562c45-7448-41bb-8e3e-0beec8460aab None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 409 DELETE 
http://127.0.0.1:9696/v2.0/subnets/9614d778-66b3-4b81-83fc-f7a47602ceb2 0.060s
2014-10-07 17:52:21.514 25908 DEBUG tempest.common.rest_client 
[req-89562c45-7448-41bb-8e3e-0beec8460aab None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 409 DELETE 
http://127.0.0.1:9696/v2.0/subnets/9614d778-66b3-4b81-83fc-f7a47602ceb2 0.060s


No DELETE request for the servers is issued.
For a successful test, instead, the two servers are deleted.

2014-09-26 11:48:05.532 7755 INFO tempest.common.rest_client 
[req-6d9072aa-dbcb-4398-b4c4-46aeb2140e4b None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 202 DELETE 
http://127.0.0.1:8774/v2/0cdef32ce1b746fa957a013a3638b3ec/os-volumes/a24f4dc2-bfb5-4a60-be05-9051f08cc447
 0.086s
2014-09-26 11:50:06.733 7755 INFO tempest.common.rest_client 
[req-f3752c9f-8de5-4bde-98c9-879c5a37ff44 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:8774/v2/0cdef32ce1b746fa957a013a3638b3ec/servers/1d754b6e-128f-4b42-88ab-9dbefedd887f
 0.155s
2014-09-26 11:50:06.882 7755 INFO tempest.common.rest_client 
[req-dcb05efc-229a-4f40-ac67-2e95a80373c0 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:8774/v2/0cdef32ce1b746fa957a013a3638b3ec/servers/d26d1144-58d2-4900-93af-bf7fecdd7a60
 0.148s
2014-09-26 11:50:09.531 7755 INFO tempest.common.rest_client 
[req-73246abe-5e0e-4d16-88dc-75ca05593b2c None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/755eeb0d-6eb8-4acd-b36d-91bd4787cf4e 0.180s
2014-09-26 11:50:09.583 7755 INFO tempest.common.rest_client 
[req-549a5b98-dbb1-43ae-b4ce-f8182ebc10e2 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/subnets/999c8033-7684-4fc0-a09e-1ccb70196278 0.051s
2014-09-26 11:50:09.662 7755 INFO tempest.common.rest_client 
[req-c74637c0-f985-42c4-b5b0-f980bc431858 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/networks/ef773f01-7b53-4207-bc75-120563f36d7f 0.078s
2014-09-26 11:50:09.809 7755 INFO tempest.common.rest_client [-] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:35357/v2.0/users/fe26c8cb709644e1862d1b69c63b802b 0.146s
2014-09-26 11:50:09.877 7755 INFO tempest.common.rest_client 
[req-ea281fc3-70e2-487d-977b-7aa65db86722 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/security-groups/10b16895-5b18-48c5-88e4-9c384eb9d1b8 
0.043s
2014-09-26 11:50:10.016 7755 INFO tempest.common.rest_client [-] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:35357/v2.0/tenants/0cdef32ce1b746fa957a013a3638b3ec 0.137s

This happens consistently.
Also note that in the case of the failing tests the same events are logged both 
at DEBUG and INFO level. This might indicate that a concurrency problem among 
test runners is installing an additional log handler, but I have no idea 
whether this is even possible.
What is probably happening is that the servers class variable gets reset, and 
therefore the servers are not removed on resource_cleanup.

However, this still has to be proved. Further logging might be added to this 
end, which might help validate this hypothesis (I could not find any clue 
through static code and log 

[Yahoo-eng-team] [Bug 1376211] Re: Retry mechanism does not work on startup when used with MySQL

2014-10-01 Thread Salvatore Orlando
This bug affects neutron since, after a reboot, the service may fail at
startup if it is started before the mysql service.
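
A hedged sketch of the behaviour one would expect at startup (plain
SQLAlchemy is used for illustration; oslo.db internals differ):

    import time
    import sqlalchemy
    from sqlalchemy import exc

    def create_engine_with_retry(url, max_retries=10, retry_interval=2):
        # Keep retrying the first connection instead of letting the initial
        # query escape and crash the service while MySQL is still starting.
        for attempt in range(max_retries):
            try:
                engine = sqlalchemy.create_engine(url)
                engine.connect().close()
                return engine
            except exc.OperationalError:
                time.sleep(retry_interval)
        raise RuntimeError('database not reachable after %d attempts'
                           % max_retries)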

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376211

Title:
  Retry mechanism does not work on startup when used with MySQL

Status in OpenStack Neutron (virtual network service):
  New
Status in Oslo Database library:
  New

Bug description:
  This is initially revealed as Red Hat bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1144181

  The problem shows up when Neutron or any other oslo.db based projects
  start while MySQL server is not up yet. Instead of retrying connection
  as per max_retries and retry_interval, service just crashes with
  return code 1.

  This is because during engine initialization, engine.execute(SHOW
  VARIABLES LIKE 'sql_mode') is called, which opens the connection,
  *before* _test_connection() succeeds. So the server just bail out to
  sys.exit() at the top of the stack.

  This behaviour was checked for both oslo.db 0.4.0 and 1.0.1.

  I suspect this is a regression from the original db code from oslo-
  incubator though I haven't checked it specifically.

  The easiest way to reproduce the traceback is:

  1. stop MariaDB.
  2. execute the following Python script:

  '''
  import oslo.db.sqlalchemy.session

  url = 'mysql://neutron:123456@10.35.161.235/neutron'
  engine = oslo.db.sqlalchemy.session.EngineFacade(url)
  '''

  The following traceback can be seen in service log:

  2014-10-01 13:46:10.588 5812 TRACE neutron Traceback (most recent call last):
  2014-10-01 13:46:10.588 5812 TRACE neutron   File /usr/bin/neutron-server, 
line 10, in module
  2014-10-01 13:46:10.588 5812 TRACE neutron sys.exit(main())
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/server/__init__.py, line 47, in main
  2014-10-01 13:46:10.588 5812 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/service.py, line 105, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron LOG.exception(_('Unrecoverable 
error: please check log '
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py, line 
82, in __exit__
  2014-10-01 13:46:10.588 5812 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/service.py, line 102, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron service.start()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/service.py, line 73, in start
  2014-10-01 13:46:10.588 5812 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/service.py, line 168, in _run_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron app = 
config.load_paste_app(app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/neutron/common/config.py, line 182, in 
load_paste_app
  2014-10-01 13:46:10.588 5812 TRACE neutron app = 
deploy.loadapp(config:%s % config_path, name=app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
  2014-10-01 13:46:10.588 5812 TRACE neutron return loadobj(APP, uri, 
name=name, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
  2014-10-01 13:46:10.588 5812 TRACE neutron return context.create()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 710, in create
  2014-10-01 13:46:10.588 5812 TRACE neutron return 
self.object_type.invoke(self)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 144, in invoke
  2014-10-01 13:46:10.588 5812 TRACE neutron **context.local_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/deploy/util.py, line 56, in fix_call
  2014-10-01 13:46:10.588 5812 TRACE neutron val = callable(*args, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/urlmap.py, line 25, in urlmap_factory
  2014-10-01 13:46:10.588 5812 TRACE neutron app = loader.get_app(app_name, 
global_conf=global_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
  2014-10-01 13:46:10.588 5812 TRACE neutron name=name, 

[Yahoo-eng-team] [Bug 1376037] [NEW] NSX: switch chaining logic is obsolete

2014-09-30 Thread Salvatore Orlando
Public bug reported:

The NSX plugin implements logical switch chaining to support flat/vlan
neutron networks with a very large number of ports on NSX backends for
which the number of ports per logical switch is limited.

This limitation, however, pertains exclusively to old and discontinued
versions of NSX, and therefore the corresponding logic for creating such
chained switches can be removed.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376037

Title:
  NSX: switch chaining logic is obsolete

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The NSX plugin implements logical switch chaining to support flat/vlan
  neutron networks with a very large number of ports on NSX backends for
  which the number of ports per logical switch is limited.

  This limitation, however, pertains exclusively to old and discontinued
  versions of NSX, and therefore the corresponding logic for creating such
  chained switches can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374398] Re: Non admin user can update router port

2014-09-29 Thread Salvatore Orlando
I am tempted to mark it as invalid rather than won't fix - however it
would be a possible bug if the router interface was added by an admin
user, in which case I think the router port belongs to the admin rather
than the tenant.

Even in that case however, we'll have to discuss whether it's ok for an
admin to create the router port on behalf of the tenant and assign it to
the tenant itself.

The behaviour reported in this bug report depicts a tenant which messes up its 
own network configuration.
If a deployer wants to prevent scenarios like this, he should be able to add 
a policy that disallows non-admin updates to ports for which 
device_owner=network:router_interface.


** Changed in: neutron
   Status: Won't Fix => Invalid

** Changed in: neutron
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374398

Title:
  Non admin user can update router port

Status in OpenStack Neutron (virtual network service):
  Incomplete

Bug description:
  Non admin user can update router's port 
http://paste.openstack.org/show/115575/.
  This can caused problems as server's won't get information about this change 
until next DHCP request so connectivity to and from this network will be lost.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267310] Re: port-list should not list the dhcp ports for normal user

2014-09-29 Thread Salvatore Orlando
The DHCP port belongs to the tenant, which is therefore entitled to see
it.

Deployers wishing to prevent that MIGHT configure policies to remove network 
ports from responses.
This is possible in theory, even if I would strongly advise against it, as this 
kind of setting ends up making openstack applications not portable across 
deployments.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267310

Title:
  port-list should not list the dhcp ports for normal user

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  With a non-admin user, I can list the dhcp ports, and if I try to
  update the fixed ips of these dhcp ports, it does not reflect to the
  dhcp agent at all, I mean the nic device's ip in the dhcp namespace.

  So I think we should not allow a normal user to view the dhcp ports in the 
first place.
  [root@controller ~]# neutron port-list
  
+--+--+---+--+
  | id   | name | mac_address   | fixed_ips 
   |
  
+--+--+---+--+
  | 1a5a2236-9b66-4b6d-953d-664fad6be3bb |  | fa:16:3e:cf:52:b3 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.3} 
 |
  | 381e244e-4012-4a49-83d3-f252fa4e41a1 |  | fa:16:3e:cf:94:bd | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.7} 
 |
  | 3bba05d3-10ec-49f1-9335-1103f791584b |  | fa:16:3e:fe:aa:6f | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.6} 
 |
  | 939d5696-0780-40c6-a626-a9a9df933553 |  | fa:16:3e:c7:5b:73 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.4} 
 |
  | ad89d303-9e8c-43bb-a029-b341340a92bb |  | fa:16:3e:21:6d:98 | 
{subnet_id: c8e59b09-60d3-4996-8692-02334ee0e658, ip_address: 
192.168.230.3} |
  | cb350109-39d3-444c-bc33-538c22415171 |  | fa:16:3e:f4:d3:e8 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.5} 
 |
  | d1e79c7c-d500-475f-8e21-2c1958f0a136 |  | fa:16:3e:2d:c7:a1 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.1} 
 |
  | ddc076f6-16aa-4f12-9745-2ac27dd5a38a |  | fa:16:3e:e0:04:44 | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.8} 
 |
  | f2a4df5c-e719-46cc-9bdb-bf9771a2c205 |  | fa:16:3e:01:73:5e | 
{subnet_id: e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.2} 
 |
  
+--+--+---+--+
  [root@controller ~]# neutron port-show 1a5a2236-9b66-4b6d-953d-664fad6be3bb
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 
dhcpd3377d3c-a0d1-5d71-9947-f17125c357bb-20f45603-b76a-4a89-9674-0127e39fc895   
|
  | device_owner  | network:dhcp
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {subnet_id: 
e38cf289-3b4b-4684-90e0-d44d2ee1cb90, ip_address: 10.0.1.3} |
  | id| 1a5a2236-9b66-4b6d-953d-664fad6be3bb
|
  | mac_address   | fa:16:3e:cf:52:b3   
|
  | name  | 
|
  | network_id| 20f45603-b76a-4a89-9674-0127e39fc895
|
  | security_groups   | 
|
  | status| ACTIVE  
|
  | tenant_id | c8a625a4c71b401681e25e3ad294b255
|
  

[Yahoo-eng-team] [Bug 1358206] Re: ovsdb_monitor.SimpleInterfaceMonitor throws eventlet.timeout.Timeout(5)

2014-09-18 Thread Salvatore Orlando
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358206

Title:
  ovsdb_monitor.SimpleInterfaceMonitor throws
  eventlet.timeout.Timeout(5)

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  This is found during functional testing, when .start() is called with
  block=True during sightly high load.

  This suggest the default timeout needs to be rised to make this module
  work in all situations.

  
https://review.openstack.org/#/c/112798/14/neutron/agent/linux/ovsdb_monitor.py
  (I will extract patch from here)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370112] [NEW] NSX plugin should set VNIC_TYPE

2014-09-16 Thread Salvatore Orlando
Public bug reported:

This nova commit: 
http://git.openstack.org/cgit/openstack/nova/commit/?id=a8a5d44c8aca218f00649232c2b8a46aee59b77e
made VNIC_TYPE a compulsory port bindings attribute.

This broke the NSX plugin, which is now unable to boot VMs. Probably 
other plugins are affected as well.
Whether VNIC_TYPE is really a required attribute is questionable; the fact that 
port bindings is such a messy interface that it can cause this kind of breakage 
is at least annoying.

Regardless, all plugins must now adapt. 
This will also be fixed once a general fix for bug 1370077 is introduced - 
nevertheless, the NSX plugin can't risk staying broken any longer, and its 
3rd party integration tests are disabled because of this. For this reason 
we're opening a bug specific to this plugin to fast-track a fix for it.
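
A hedged sketch of the kind of change implied above (the constants come from
neutron.extensions.portbindings; the actual plugin fix may differ):

    from neutron.extensions import portbindings

    def extend_port_dict_binding(port_res):
        # Advertise a vnic_type so Nova's new check on binding:vnic_type
        # passes; 'normal' is the plain software-switch vNIC type.
        port_res[portbindings.VIF_TYPE] = portbindings.VIF_TYPE_OVS
        port_res[portbindings.VNIC_TYPE] = portbindings.VNIC_NORMAL
        return port_res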

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370112

Title:
  NSX plugin should set VNIC_TYPE

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  This nova commit: 
http://git.openstack.org/cgit/openstack/nova/commit/?id=a8a5d44c8aca218f00649232c2b8a46aee59b77e
  made VNIC_TYPE a compulsory port bindings attribute.

  This broke the NSX plugin, which is now unable to boot VMs. Probably 
other plugins are affected as well.
  Whether VNIC_TYPE is really a required attribute is questionable; the fact that 
port bindings is such a messy interface that it can cause this kind of breakage 
is at least annoying.

  Regardless, all plugins must now adapt. 
  This will also be fixed once a general fix for bug 1370077 is introduced - 
nevertheless, the NSX plugin can't risk staying broken any longer, and its 
3rd party integration tests are disabled because of this. For this reason 
we're opening a bug specific to this plugin to fast-track a fix for it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349617] Re: SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer

2014-09-04 Thread Salvatore Orlando
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349617

Title:
  SSHException: Error reading SSH protocol banner[Errno 104] Connection
  reset by peer

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  New

Bug description:
  Noticed a drop in categorized bugs on grenade jobs, so looking at
  latest I see this:

  http://logs.openstack.org/63/108363/5/gate/gate-grenade-dsvm-partial-
  ncpu/1458072/console.html

  Running this query:

  message:Failed to establish authenticated ssh connection to cirros@
  AND message:(Error reading SSH protocol banner[Errno 104] Connection
  reset by peer). Number attempts: 18. Retry after 19 seconds. AND
  tags:console

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGVzdGFibGlzaCBhdXRoZW50aWNhdGVkIHNzaCBjb25uZWN0aW9uIHRvIGNpcnJvc0BcIiBBTkQgbWVzc2FnZTpcIihFcnJvciByZWFkaW5nIFNTSCBwcm90b2NvbCBiYW5uZXJbRXJybm8gMTA0XSBDb25uZWN0aW9uIHJlc2V0IGJ5IHBlZXIpLiBOdW1iZXIgYXR0ZW1wdHM6IDE4LiBSZXRyeSBhZnRlciAxOSBzZWNvbmRzLlwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA2NTkwMTEwMzMyLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  I get 28 hits in 7 days, and it seems to be very particular to grenade
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1349617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349617] Re: SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer

2014-09-04 Thread Salvatore Orlando
So this is what I found out.

Instance log from a failing instance [1]. The important bit there is
"cirros-apply-local already run per instance", and not "no userdata for
datasource" as initially thought. That was just me being stupid and
thinking the public key was part of user data. That was really silly.

"cirros-apply-local already run per instance" seems to appear in the console 
log for all SSH protocol banner failures [2]. The presence of duplicates makes 
it difficult to prove correlation with SSH protocol banner failures.
However, the key here is that local testing revealed that when the SSH 
connection fails there is no authorized_keys file in /home/cirros/.ssh. This 
obviously explains the authentication failure. Whether the subsequent SSH 
protocol banner errors are due to the cited MTU problems or to something else 
has yet to be clarified.
What is certain is that cirros processes the data source containing the public 
SSH key before starting sshd. So the auth failures cannot be due to the init 
process not yet being complete.

The cirros initialization process executes a set of steps on an instance basis. 
These steps include setting public ssh keys.
"On an instance basis" means that these steps are not executed at each boot, 
but once per instance.

cirros-apply local [3] is the step which processes, among other things, ssh 
public keys.
It is called by the cirros-per script [4], which at the end of its execution 
writes a marker file [5]. The cirros-per process terminates immediately if, 
when executed, the marker file is already present [6].

During the failing test the following has been observed:

from the console log: 
[3.696172] rtc_cmos 00:01: setting system clock to 2014-09-04 19:05:27 UTC 
(1409857527)

from the cirros-apply marker directory:
$ ls -le /var/lib/cirros/sem/
total 3
-rw-r--r--1 root root35 Thu Sep  4 13:06:28 2014 
instance.197ce1ac-e2df-4d3a-b392-4803383ddf74.check-version
-rw-r--r--1 root root22 Thu Sep  4 13:05:07 2014 
instance.197ce1ac-e2df-4d3a-b392-4803383ddf74.cirros-apply-local
-rw-r--r--1 root root24 Thu Sep  4 13:06:31 2014 
instance.197ce1ac-e2df-4d3a-b392-4803383ddf74.userdata

As cirros defaults to MDT (UTC -6), this means the apply-local marker was 
written BEFORE the instance boot.
This is consistent with the situation we're seeing, where the failure always 
occurs after events such as resize or stop.
The ssh public key should be applied on the first boot of the VM. When it's 
restarted the process is skipped, as the key should already be there. 
Unfortunately the key isn't there, which is a bit of a mystery, especially 
since the instance is powered off in a graceful way thanks to [7].

Nevertheless, when an instance receives a shutdown signal it sends a TERM signal 
to all processes, meaning that the apply-local step spawned by cirros-per at [4] 
can be killed before it actually writes the key.
However, even if cirros-per retrieves the return code, it writes the marker 
in any case [5]. 
This creates a situation where the marker can be present without the 
apply-local phase having actually completed. As a result it is possible to 
have guests without an SSH public key, which manifests as the failure 
reported in this bug.

Why is this happening only recently?
It seems a paradox, but [7] might be the reason.
This patch (and a few similar others) introduced soft instance shutdown. Soft 
instance shutdown avoids the abrupt shutdown which can actually leave the 
cirros-init process incomplete.
However, since cirros-per writes the marker regardless of whether the process 
it called terminated with 0 or another code, it does not guarantee a successful 
completion.

On the other hand, introducing [7] added a delay before stopping the
instance. For instance, in case [8] it took 13 seconds. Previously
tempest was just immediately powering off the instance, not giving it a
chance to run cirros-init. Now, with the added delay, the cirros-init
process is executed, and this might be the reason that this failure,
which was previously occasional, has recently become the biggest gate
breaker.

What are the possible solutions?
1) Fix cirros-per to not write the marker if the called process returned a 
non-zero value. The feasibility of this depends on whether cirros-apply can be 
considered idempotent.
2) Adjust tempest to wait a little after the instance becomes ACTIVE. It could 
wait a fixed amount of time just to ensure instance initialization is completed.
3) Your proposal here.


[1] http://paste.openstack.org/show/106049/
[2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiY2lycm9zLWFwcGx5LWxvY2FsIGFscmVhZHkgcnVuIHBlciBpbnN0YW5jZVwiICBBTkQgdGFnczpjb25zb2xlIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA5ODc0NDg4NDM0fQ==
[3] 
http://bazaar.launchpad.net/~cirros-dev/cirros/0.3/view/head:/src/sbin/cirros-apply
[4] 

[Yahoo-eng-team] [Bug 1362528] [NEW] cirros starts with file system in read only mode

2014-08-28 Thread Salvatore Orlando
Public bug reported:

Query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

The VM boots incorrectly, the SSH service does not start, and the
connection fails.

http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-pg-
full/603e3c6/console.html#_2014-08-26_08_59_39_951


Only observed with neutron, 1 gate hit in 7 days.
No hint about the issue in syslog or libvirt logs.

** Affects: neutron
 Importance: Medium
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362528

Title:
  cirros starts with file system in read only mode

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  The VM boots incorrectly, the SSH service does not start, and the
  connection fails.

  http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-
  pg-full/603e3c6/console.html#_2014-08-26_08_59_39_951

  
  Only observed with neutron, 1 gate hit in 7 days.
  No hint about the issue in syslog or libvirt logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362618] [NEW] Old confusing policies in policy.json

2014-08-28 Thread Salvatore Orlando
Public bug reported:

The following policies have not been used since grizzly:

"subnets:private:read": "rule:admin_or_owner",
"subnets:private:write": "rule:admin_or_owner",
"subnets:shared:read": "rule:regular_user",
"subnets:shared:write": "rule:admin_only",

Keeping them confuses users and leads them to think this syntax for
specifying policies is still valid.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362618

Title:
  Old confusing policies in policy.json

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following policies have not been used since grizzly:

  "subnets:private:read": "rule:admin_or_owner",
  "subnets:private:write": "rule:admin_or_owner",
  "subnets:shared:read": "rule:regular_user",
  "subnets:shared:write": "rule:admin_only",

  Keeping them confuses users and leads them to think this syntax for
  specifying policies is still valid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361176] [NEW] DB: Some tables still explicitly set mysql_engine

2014-08-25 Thread Salvatore Orlando
Public bug reported:

After commit 466e89970f11918a809aafe8a048d138d4664299, migrations should
no longer explicitly specify the engine used for MySQL.

There are still some migrations which do that, and they should be
amended.
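
A hedged sketch of what an amended migration looks like (table and column
names are made up for illustration):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # No mysql_engine='InnoDB' keyword: after commit 466e8997 the engine
        # is configured globally, so migrations must not pass it explicitly.
        op.create_table(
            'example_table',
            sa.Column('id', sa.String(length=36), primary_key=True),
            sa.Column('name', sa.String(length=255), nullable=True),
        )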

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361176

Title:
  DB: Some tables still explicitly set mysql_engine

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  After commit 466e89970f11918a809aafe8a048d138d4664299, migrations should
  no longer explicitly specify the engine used for MySQL.

  There are still some migrations which do that, and they should be
  amended.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349617] Re: test_volume_boot_pattern fails in grenade with SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer

2014-08-24 Thread Salvatore Orlando
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided = High

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
   Importance: High = Critical

** Changed in: neutron
Milestone: None = juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349617

Title:
  test_volume_boot_pattern fails in grenade with SSHException: Error
  reading SSH protocol banner[Errno 104] Connection reset by peer

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Noticed a drop in categorized bugs on grenade jobs, so looking at
  latest I see this:

  http://logs.openstack.org/63/108363/5/gate/gate-grenade-dsvm-partial-
  ncpu/1458072/console.html

  Running this query:

  message:Failed to establish authenticated ssh connection to cirros@
  AND message:(Error reading SSH protocol banner[Errno 104] Connection
  reset by peer). Number attempts: 18. Retry after 19 seconds. AND
  tags:console

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGVzdGFibGlzaCBhdXRoZW50aWNhdGVkIHNzaCBjb25uZWN0aW9uIHRvIGNpcnJvc0BcIiBBTkQgbWVzc2FnZTpcIihFcnJvciByZWFkaW5nIFNTSCBwcm90b2NvbCBiYW5uZXJbRXJybm8gMTA0XSBDb25uZWN0aW9uIHJlc2V0IGJ5IHBlZXIpLiBOdW1iZXIgYXR0ZW1wdHM6IDE4LiBSZXRyeSBhZnRlciAxOSBzZWNvbmRzLlwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA2NTkwMTEwMzMyLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  I get 28 hits in 7 days, and it seems to be very particular to grenade
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1349617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360011] Re: SSH Auth fails in AdvancedNetworkOps scenario

2014-08-22 Thread Salvatore Orlando
This failure occurs only in two tests:

test_server_connectivity_resize - after going to ACTIVE from VERIFY_RESIZE
test_server_connectivity_start_stop - after going to ACTIVE from SHUTOFF

The mysql job has 2.5 times more failures than the postgres job. This is
probably not down to the DB backend, but to the fact that postgres jobs
do not use config drive.

Recent changes in the shutoff process might be the cause of this failure.
Therefore adding nova to the affected projects.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360011

Title:
  SSH Auth fails in AdvancedNetworkOps scenario

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Affects all neutron full jobs and check-grenade-dsvm-partial-ncpu
  The latter runs nova network.

  In the past 7 days:
  105 hits (12 in gate)
  grenade: 30
  neutron-standard: 1
  neutron-full: 74

  in the past 36 hours:
  72 hits (8 in gate)
  grenade: 0
  neutron-standard: 1
  neutron-full: 71

  Something apparently has fixed the issue in the grenade test but
  screwed the neutron tests.

  Logstash query (from console, as there is no clue in logs) available
  at [1]

  
  The issue manifests as a failure to authenticate to the server (SSH server 
responds).
  then paramiko starts returning errors like [2], until the timeout expires

  [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVFJBQ0VcIiBBTkQgbWVzc2FnZTpcIlNTSEV4Y2VwdGlvbjogRXJyb3IgcmVhZGluZyBTU0ggcHJvdG9jb2wgYmFubmVyW0Vycm5vIDEwNF0gQ29ubmVjdGlvbiByZXNldCBieSBwZWVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0yMFQxMTo1NDoyMCswMDowMCIsInRvIjoiMjAxNC0wOC0yMVQyMzo1NDoyMCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4NjY1MjkzODA2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
  [2] 
http://logs.openstack.org/10/98010/5/gate/gate-tempest-dsvm-neutron-full/aca3f89/console.html#_2014-08-21_08_36_14_931

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360011] [NEW] SSH Auth fails in AdvancedNetworkOps scenario

2014-08-21 Thread Salvatore Orlando
Public bug reported:

Affects all neutron full jobs and check-grenade-dsvm-partial-ncpu
The latter runs nova network.

In the past 7 days:
105 hits (12 in gate)
grenade: 30
neutron-standard: 1
neutron-full: 74

in the past 36 hours:
72 hits (8 in gate)
grenade: 0
neutron-standard: 1
neutron-full: 71

Something apparently has fixed the issue in the grenade test but screwed
the neutron tests.

Logstash query (from console, as there is no clue in logs) available at
[1]


The issue manifests as a failure to authenticate to the server (SSH server 
responds).
then paramiko starts returning errors like [2], until the timeout expires

[1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVFJBQ0VcIiBBTkQgbWVzc2FnZTpcIlNTSEV4Y2VwdGlvbjogRXJyb3IgcmVhZGluZyBTU0ggcHJvdG9jb2wgYmFubmVyW0Vycm5vIDEwNF0gQ29ubmVjdGlvbiByZXNldCBieSBwZWVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0yMFQxMTo1NDoyMCswMDowMCIsInRvIjoiMjAxNC0wOC0yMVQyMzo1NDoyMCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4NjY1MjkzODA2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
[2] 
http://logs.openstack.org/10/98010/5/gate/gate-tempest-dsvm-neutron-full/aca3f89/console.html#_2014-08-21_08_36_14_931

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360011

Title:
  SSH Auth fails in AdvancedNetworkOps scenario

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Affects all neutron full jobs and check-grenade-dsvm-partial-ncpu
  The latter runs nova network.

  In the past 7 days:
  105 hits (12 in gate)
  grenade: 30
  neutron-standard: 1
  neutron-full: 74

  in the past 36 hours:
  72 hits (8 in gate)
  grenade: 0
  neutron-standard: 1
  neutron-full: 71

  Something apparently has fixed the issue in the grenade test but
  screwed the neutron tests.

  Logstash query (from console, as there is no clue in logs) available
  at [1]

  
  The issue manifests as a failure to authenticate to the server (SSH server 
responds).
  then paramiko starts returning errors like [2], until the timeout expires

  [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVFJBQ0VcIiBBTkQgbWVzc2FnZTpcIlNTSEV4Y2VwdGlvbjogRXJyb3IgcmVhZGluZyBTU0ggcHJvdG9jb2wgYmFubmVyW0Vycm5vIDEwNF0gQ29ubmVjdGlvbiByZXNldCBieSBwZWVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0yMFQxMTo1NDoyMCswMDowMCIsInRvIjoiMjAxNC0wOC0yMVQyMzo1NDoyMCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4NjY1MjkzODA2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
  [2] 
http://logs.openstack.org/10/98010/5/gate/gate-tempest-dsvm-neutron-full/aca3f89/console.html#_2014-08-21_08_36_14_931

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357314] [NEW] NSX: floating ip status can be incorrectly reset

2014-08-15 Thread Salvatore Orlando
Public bug reported:

This method can return None if:
1) an active floating ip is associated
2) a down floating ip is disassociated

Due to the default value for status being ACTIVE, this implies that when
a floating IP is associated at create time its status is reset.
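
A minimal sketch of the idea (the constant and method names below are
assumptions, not the actual plugin code): compute the status explicitly
instead of returning None and falling back to the ACTIVE default.

    from neutron.common import constants as l3_const

    def _floatingip_status(self, associated):
        # an associated floating IP is ACTIVE, a disassociated one is DOWN
        if associated:
            return l3_const.FLOATINGIP_STATUS_ACTIVE
        return l3_const.FLOATINGIP_STATUS_DOWN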

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: python-neutronclient

** Changed in: neutron
   Importance: Undecided = High

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
Milestone: None = juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357314

Title:
  NSX: floating ip status can be incorrectly reset

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This method can return None if:
  1) an active floating ip is associated
  2) a down floating ip is disassociated

  Due to the default value for status being ACTIVE, this implies that
  when a floating IP is associated at create time its status is reset.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357048] [NEW] NSX: restriction on distributed update must be lifted

2014-08-14 Thread Salvatore Orlando
Public bug reported:

From version 4.1 the NSX backend no longer has any restriction on
transforming centralized routers into distributed ones.
Version 3.x instead could not transform distributed routers into centralized
ones, which is in any case consistent with the current DVR extension.

The current restriction specific for the NSX plugin must therefore be
lifted.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357048

Title:
  NSX: restriction on distributed update must be lifted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  From version 4.1 the NSX backend no longer has any restriction on
transforming centralized routers into distributed ones.
  Version 3.x instead could not transform distributed routers into
centralized ones, which is in any case consistent with the current DVR
extension.

  The current restriction specific for the NSX plugin must therefore be
  lifted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355882] [NEW] get_floating_ip_pools for neutron v2 API inconsistent with nova network API

2014-08-12 Thread Salvatore Orlando
Public bug reported:

Commit e00bdd7aa8c1ac9f1ae5057eb2f774f34a631845 changed
get_floating_ip_pools so that it now returns a list of names rather
than a list whose elements are in the form {'name': 'pool_name'}.

The implementation of this method in nova.network.neutron_v2.api has not
been adjusted, thus causing
tempest.api.compute.floating_ips.test_list_floating_ips.FloatingIPDetailsTestJSON
to always fail with neutron.

The fix is straightforward.
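
A hedged sketch of the adjustment (a method of the neutronv2 API class;
helper names are assumptions about that module): return a flat list of pool
names, matching the new nova-network behaviour, rather than a list of
{'name': ...} dicts.

    def get_floating_ip_pools(self, context):
        client = neutronv2.get_client(context)
        pools = self._get_floating_ip_pools(client)
        # fall back to the network id when the external network has no name
        return [n['name'] or n['id'] for n in pools]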

** Affects: nova
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: neutron-full-job

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355882

Title:
  get_floating_ip_pools for neutron v2 API inconsistent with nova
  network API

Status in OpenStack Compute (Nova):
  New

Bug description:
  Commit e00bdd7aa8c1ac9f1ae5057eb2f774f34a631845 changed
  get_floating_ip_pools so that it now returns a list of names
  rather than a list whose elements are in the form {'name':
  'pool_name'}.

  The implementation of this method in nova.network.neutron_v2.api has
  not been adjusted, thus causing
  tempest.api.compute.floating_ips.test_list_floating_ips.FloatingIPDetailsTestJSON
  to always fail with neutron.

  The fix is straightforward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355502] [NEW] NSX - add note in configuration files regarding distributed routers

2014-08-11 Thread Salvatore Orlando
Public bug reported:

In order to leverage distributed routing with the NSX plugin, the
replication_mode parameter should be set to 'service'.
Otherwise the backend will throw 409 errors resulting in 500 NSX errors.

This should be noted in the configuration files.

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355502

Title:
  NSX - add note in configuration files regarding distributed routers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In order to leverage distributed routing with the NSX plugin, the
replication_mode parameter should be set to 'service'.
  Otherwise the backend will throw 409 errors resulting in 500 NSX errors.

  This should be noted in the configuration files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353936] [NEW] Nova/Neutron: raise 404 if floating IP not found

2014-08-07 Thread Salvatore Orlando
Public bug reported:

Recent commit [1] changed the floating_ips API code to ensure the
FloatingIPPoolNotFound exception, raised by nova.network.neutronv2.api,
is handled.

However, a 400 error is raised in this case.
This is not correct, as the expected error is a 404.

404 is indeed the error expected by tempest negative cases, and the
error that would be raised in this situation by nova/network.

It might be argued that 400 might be more appropriate as the root cause
is a bad pool id in a request, but currently tempest and nova-network
are enforcing a different logic, so neutron should just adhere to it.

[1]
http://git.openstack.org/cgit/openstack/nova/commit/?id=e7d7fbecbdd6d15849f0f59b25755ae4385c0385
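
A hedged sketch of the expected handling in the API extension (module and
attribute names follow common nova patterns, but are assumptions here):
translate the missing pool into a 404 instead of a 400.

    import webob.exc

    from nova import exception

    try:
        address = self.network_api.allocate_floating_ip(context, pool)
    except exception.FloatingIpPoolNotFound as ex:
        raise webob.exc.HTTPNotFound(explanation=ex.format_message())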

** Affects: nova
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: neutron-full-job

** Tags added: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353936

Title:
  Nova/Neutron: raise 404 if floating IP not found

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Recent commit [1] changed the floating_ips API code to ensure the
  FloatingIPPoolNotFound exception, raised by
  nova.network.neutronv2.api, is handled.

  However, a 400 error is raised in this case.
  This is not correct, as the expected error is a 404.

  404 is indeed the error expected by tempest negative cases, and the
  error that would be raised in this situation by nova/network.

  It might be argued that 400 might be more appropriate as the root
  cause is a bad pool id in a request, but currently tempest and nova-
  network are enforcing a different logic, so neutron should just adhere
  to it.

  [1]
  
http://git.openstack.org/cgit/openstack/nova/commit/?id=e7d7fbecbdd6d15849f0f59b25755ae4385c0385

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354188] [NEW] Small glitch in heal script

2014-08-07 Thread Salvatore Orlando
Public bug reported:

If the script detects that a foreign key must be removed, and the
columns that it references must be removed as well, then the foreign key
removal would fail as the column would not exist anymore.

This has been detected by accident in the following way:

on a grizzly deployment running the vmware plugin, disable it and switch to
another plugin. Then upgrade to havana.
Then enable the vmware plugin again and upgrade to head. The upgrade process
will fail.

As a matter of fact, this error will likely show up with the migration
reorganization in progress, so it's worth fixing.
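
An illustrative sketch of the ordering fix (table, column and constraint
names are placeholders): the foreign key must be dropped while the
referencing column still exists, i.e. before the column itself is removed.

    from alembic import op

    def drop_column_with_fk(table_name, column_name, fk_name):
        op.drop_constraint(fk_name, table_name, type_='foreignkey')
        op.drop_column(table_name, column_name)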

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354188

Title:
  Small glitch in heal script

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  If the script detects that a foreign key must be removed, and the
  columns that it references must be removed as well, then the foreign
  key removal would fail as the column would not exist anymore.

  This has been detected by accident in the following way:

  on a grizzly deployment running the vmware plugin, disable it and switch to
another plugin. Then upgrade to havana.
  Then enable the vmware plugin again and upgrade to head. The upgrade
process will fail.

  As a matter of fact, this error will likely show up with the
  migration reorganization in progress, so it's worth fixing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354195] [NEW] Migration fails on downgrade

2014-08-07 Thread Salvatore Orlando
Public bug reported:

Happens if the healing step has been executed or load balancing is
enabled and if the downgrade target revision is havana or earlier.

Traceback (most recent call last):
  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 175, in main
    CONF.command.func(config, CONF.command.name)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 85, in do_upgrade_downgrade
    do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 63, in do_alembic_command
    getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 151, in downgrade
    script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 203, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 212, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 58, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py", line 125, in <module>
    run_migrations_online()
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py", line 109, in run_migrations_online
    options=build_options())
  File "<string>", line 7, in run_migrations
  File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 688, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 258, in run_migrations
    change(**kw)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/e197124d4b9_add_unique_constrain.py", line 62, in downgrade
    type_='unique'
  File "<string>", line 7, in drop_constraint
  File "<string>", line 1, in <lambda>
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 329, in go
    return fn(*arg, **kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 841, in drop_constraint
    self.impl.drop_constraint(const)
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 138, in drop_constraint
    self._exec(schema.DropConstraint(const))
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in _exec
    conn.execute(construct, *multiparams, **params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 727, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 67, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 781, in _execute_ddl
    compiled
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 954, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1116, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 189, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 947, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 435, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1553, "Cannot drop index
'uniq_member0pool_id0address0port': needed in a foreign key constraint")
'ALTER TABLE members DROP INDEX uniq_member0pool_id0address0port' ()
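
A hedged sketch of a possible downgrade fix (the unique constraint name is
taken from the error above, while the foreign key and referenced table names
are assumptions): MySQL refuses to drop an index that backs a foreign key,
so the foreign key must be dropped first and recreated afterwards.

    from alembic import op

    def downgrade():
        op.drop_constraint('members_ibfk_1', 'members', type_='foreignkey')
        op.drop_constraint('uniq_member0pool_id0address0port', 'members',
                           type_='unique')
        op.create_foreign_key('members_ibfk_1', 'members', 'pools',
                              ['pool_id'], ['id'])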

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354195

Title:
  Migration fails on downgrade

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Happens if the healing step has been executed or load balancing is
  enabled and if the downgrade target revision is havana or earlier.

  Traceback (most recent call last):
    File "/usr/local/bin/neutron-db-manage", line 10, in <module>
      sys.exit(main())
    File "/opt/stack/neutron/neutron/db/migration/cli.py", line 175, in main
      CONF.command.func(config, CONF.command.name)
    File "/opt/stack/neutron/neutron/db/migration/cli.py", line 85, in do_upgrade_downgrade

[Yahoo-eng-team] [Bug 1354218] [NEW] heal script is not idempotent

2014-08-07 Thread Salvatore Orlando
Public bug reported:

tested with mysql
1) upgrade head
2) downgrade havana
3) upgrade head
4) BOOM - http://paste.openstack.org/show/91769/
5) This is easy [1]
6) Try 1-3 again
7) BOOM - http://paste.openstack.org/show/91770/
8) This is easy as well [2]
9) Repeat again steps 1-3
10) BOOM - http://paste.openstack.org/show/91771/

I'm clueless so far about the last failure.

[1]

--- a/neutron/db/migration/alembic_migrations/heal_script.py
+++ b/neutron/db/migration/alembic_migrations/heal_script.py
@@ -103,12 +103,12 @@ def parse_modify_command(command):
 #  autoincrement=None, existing_type=None,
 #  existing_server_default=False, existing_nullable=None,
 #  existing_autoincrement=None, schema=None, **kw)
+bind = op.get_bind()
 for modified, schema, table, column, existing, old, new in command:
 if modified.endswith('type'):
 modified = 'type_'
 elif modified.endswith('nullable'):
 modified = 'nullable'
-bind = op.get_bind()
 insp = sqlalchemy.engine.reflection.Inspector.from_engine(bind)
 if column in insp.get_primary_keys(table) and new:
 return


[2]
--- a/neutron/db/migration/alembic_migrations/heal_script.py
+++ b/neutron/db/migration/alembic_migrations/heal_script.py
@@ -103,12 +103,12 @@ def parse_modify_command(command):
 #  autoincrement=None, existing_type=None,
 #  existing_server_default=False, existing_nullable=None,
 #  existing_autoincrement=None, schema=None, **kw)
+bind = op.get_bind()
 for modified, schema, table, column, existing, old, new in command:
 if modified.endswith('type'):
 modified = 'type_'
 elif modified.endswith('nullable'):
 modified = 'nullable'
-bind = op.get_bind()
 insp = sqlalchemy.engine.reflection.Inspector.from_engine(bind)
 if column in insp.get_primary_keys(table) and new:
 return
@@ -123,7 +123,7 @@ def parse_modify_command(command):
 existing['existing_server_default'] = default.arg
 else:
 existing['existing_server_default'] = default.arg.compile(
-dialect=bind.engine.name)
+dialect=bind.dialect)
 kwargs.update(existing)
 op.alter_column(table, column, **kwargs)

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354218

Title:
  heal script is not idempotent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  tested with mysql
  1) upgrade head
  2) downgrade havana
  3) upgrade head
  4) BOOM - http://paste.openstack.org/show/91769/
  5) This is easy [1]
  6) Try 1-3 again
  7) BOOM - http://paste.openstack.org/show/91770/
  8) This is easy as well [2]
  9) Repeat again steps 1-3
  10) BOOM - http://paste.openstack.org/show/91771/

  I'm clueless so far about the last failure.

  [1]

  --- a/neutron/db/migration/alembic_migrations/heal_script.py
  +++ b/neutron/db/migration/alembic_migrations/heal_script.py
  @@ -103,12 +103,12 @@ def parse_modify_command(command):
   #  autoincrement=None, existing_type=None,
   #  existing_server_default=False, existing_nullable=None,
   #  existing_autoincrement=None, schema=None, **kw)
  +bind = op.get_bind()
   for modified, schema, table, column, existing, old, new in command:
   if modified.endswith('type'):
   modified = 'type_'
   elif modified.endswith('nullable'):
   modified = 'nullable'
  -bind = op.get_bind()
   insp = sqlalchemy.engine.reflection.Inspector.from_engine(bind)
   if column in insp.get_primary_keys(table) and new:
   return

  
  [2]
  --- a/neutron/db/migration/alembic_migrations/heal_script.py
  +++ b/neutron/db/migration/alembic_migrations/heal_script.py
  @@ -103,12 +103,12 @@ def parse_modify_command(command):
   #  autoincrement=None, existing_type=None,
   #  existing_server_default=False, existing_nullable=None,
   #  existing_autoincrement=None, schema=None, **kw)
  +bind = op.get_bind()
   for modified, schema, table, column, existing, old, new in command:
   if modified.endswith('type'):
   modified = 'type_'
   elif modified.endswith('nullable'):
   modified = 'nullable'
  -bind = op.get_bind()
   insp = sqlalchemy.engine.reflection.Inspector.from_engine(bind)
   if column in insp.get_primary_keys(table) and new:
   return
  @@ -123,7 +123,7

[Yahoo-eng-team] [Bug 1348584] [NEW] KeyError in nova.compute.api.API.external_instance_event

2014-07-25 Thread Salvatore Orlando
Public bug reported:

The fix for bug 1333654 ensured events for instances without a host are not
accepted.
However, instances without a host are still being passed to the compute
API layer.

This is likely to result in KeyErrors such as the one found here:
http://logs.openstack.org/51/109451/2/check/check-tempest-dsvm-neutron-
full/ad70f74/logs/screen-n-api.txt.gz#_2014-07-25_01_41_48_068

The fix for this bug should be straightforward.
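
A hedged sketch of the fix (the structure loosely follows
nova.compute.api.API.external_instance_event; the details are assumptions):
skip instances without a host instead of looking their host up later and
hitting a KeyError.

    def external_instance_event(self, context, instances, events):
        hosts_by_instance = {}
        for instance in instances:
            if not instance.host:
                # already rejected at the API layer; be defensive here too
                continue
            hosts_by_instance[instance.uuid] = instance.host
        # ... dispatch the events using hosts_by_instance ...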

** Affects: nova
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348584

Title:
  KeyError in nova.compute.api.API.external_instance_event

Status in OpenStack Compute (Nova):
  New

Bug description:
  The fix for bug 1333654 ensured events for instances without a host are not
accepted.
  However, instances without a host are still being passed to the compute
API layer.

  This is likely to result in KeyErrors such as the one found here:
  http://logs.openstack.org/51/109451/2/check/check-tempest-dsvm-
  neutron-full/ad70f74/logs/screen-n-api.txt.gz#_2014-07-25_01_41_48_068

  The fix for this bug should be straightforward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341791] [NEW] NSX net-gateway extension: cannot update devices

2014-07-14 Thread Salvatore Orlando
Public bug reported:

Once a network gateway is defined, the NSX extension does not allow the
device list to be updated:
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/extensions/networkgw.py#L44

This forces users to destroy the gateway, recreate it, and re-establish
all connections every time a gateway device needs to be replaced.

Allowing for gateway device update, which is supported by the backend,
will spare the users a lot of pain.
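
A heavily hedged sketch of the API-level change (the attribute map below is
a simplified assumption, not the actual extension definition): allow PUT on
the 'devices' attribute so the device list can be updated in place.

    RESOURCE_ATTRIBUTE_MAP = {
        'network_gateways': {
            'devices': {'allow_post': True,
                        'allow_put': True,  # was False
                        'is_visible': True},
        },
    }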

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

** Changed in: neutron
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341791

Title:
  NSX net-gateway extension: cannot update devices

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Once a network gateway is defined, the NSX extension does not allow
  the device list to be updated:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/extensions/networkgw.py#L44

  This forces users to destroy the gateway, recreate it, and re-establish
  all connections every time a gateway device needs to be replaced.

  Allowing for gateway device update, which is supported by the backend,
  will spare the users a lot of pain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340210] [NEW] NSX: _convert_to_nsx_transport_zones should not be in the plugin class

2014-07-10 Thread Salvatore Orlando
Public bug reported:

This is clearly a utility function, and should therefore be moved into
the neutron.plugins.vmware.common.nsx_utils module.

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340210

Title:
  NSX: _convert_to_nsx_transport_zones should not be in the plugin class

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is clearly a utility function, and should therefore be moved into
  the neutron.plugins.vmware.common.nsx_utils module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340431] [NEW] NSX: network gateway connection doesn't validate vlan id

2014-07-10 Thread Salvatore Orlando
Public bug reported:

When the transport type for a network gateway connection is vlan, the
neutron code does not validate that the segmentation id is between 0 and
4095.

The request is then sent to NSX, where it fails. However, a 500 error is
returned to the neutron API user because of the backend failure.

The operation should return a 400 and possibly not reach the backend at
all.
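
A hedged sketch of the missing check (the exception usage follows common
neutron patterns; the dictionary keys are assumptions): fail with a 400
before the request ever reaches the NSX backend.

    from neutron.common import exceptions as n_exc

    def validate_vlan_connection(gw_connection):
        seg_id = gw_connection.get('segmentation_id')
        if gw_connection.get('segmentation_type') == 'vlan' and (
                seg_id is None or not 0 <= int(seg_id) <= 4095):
            raise n_exc.InvalidInput(
                error_message="segmentation_id must be between 0 and 4095 "
                              "when segmentation_type is 'vlan'")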

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340431

Title:
  NSX: network gateway connection doesn't validate vlan id

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the transport type for a network gateway connection is vlan, the
  neutron code does not validate that the segmentation id is between 0
  and 4095.

  The request is then sent to NSX, where it fails. However, a 500 error
  is returned to the neutron API user because of the backend failure.

  The operation should return a 400 and possibly not reach the backend
  at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338672] [NEW] Nova might spawn without waiting for network-vif-plugged event

2014-07-07 Thread Salvatore Orlando
Public bug reported:

This applies only when the nova/neutron event reporting mechanism is
enabled.

It has been observed that in some cases Nova spawns an instance without
waiting for network-vif-plugged event, even if the vif was unplugged and
then plugged again.

This happens because the status of the VIF in the network info cache is not 
updated when such events are received.
Therefore the cache contains an out-of-date value and the VIF might already be 
in status ACTIVE when the instance is being spawned. However there is no 
guarantee that this would be the actual status of the VIF.

For example, in this case there are only two occurrences of nova
starting to wait for 'network-vif-plugged' on f800d4a8-0a01-475f-
bd34-8d975ce6f1ab. However, this instance is used in
tempest.api.compute.servers.test_server_actions, and the tests in this
suite should trigger more than 2 events requiring a respawn of an
instance after unplugging vifs.

From what can be gathered from the logs, this issue, if confirmed, should
occur only when actions such as stop, resize, or reboot_hard are executed
on an instance.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338672

Title:
  Nova might spawn without waiting for network-vif-plugged event

Status in OpenStack Compute (Nova):
  New

Bug description:
  This applies only when the nova/neutron event reporting mechanism is
  enabled.

  It has been observed that in some cases Nova spawns an instance
  without waiting for network-vif-plugged event, even if the vif was
  unplugged and then plugged again.

  This happens because the status of the VIF in the network info cache is not 
updated when such events are received.
  Therefore the cache contains an out-of-date value and the VIF might already 
be in status ACTIVE when the instance is being spawned. However there is no 
guarantee that this would be the actual status of the VIF.

  For example, in this case there are only two occurrences of nova
  starting to wait for 'network-vif-plugged' on f800d4a8-0a01-475f-
  bd34-8d975ce6f1ab. However, this instance is used in
  tempest.api.compute.servers.test_server_actions, and the tests in this
  suite should trigger more than 2 events requiring a respawn of an
  instance after unplugging vifs.

  From what can be gathered from the logs, this issue, if confirmed, should
  occur only when actions such as stop, resize, or reboot_hard are executed
  on an instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329546] Re: Upon rebuild instances might never get to Active state

2014-07-07 Thread Salvatore Orlando
Contrary to what is claimed in the bug description, the actual root cause
is a different one, and it's in neutron.

For events like rebuilding or rebooting an instance, a VIF disappears and
reappears rather quickly.
In this case the OVS agent loop starts processing the VIF, and then skips
processing when it realizes the VIF is no longer on the integration bridge.

However, it keeps the VIF in the set of 'current' VIFs. This means that when
the VIF is plugged again it is not processed, hence the problem.

Removing nova from affected projects. Patch will follow up soon.
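
A minimal sketch of the idea (method and attribute names are illustrative,
not the actual agent code): ports skipped because they already left the
integration bridge are forgotten, so they are treated as newly added when
they are plugged again.

    def _process_added_or_updated_ports(self, port_info):
        skipped = self._treat_devices_added_or_updated(
            port_info['added'] | port_info['updated'])
        # drop skipped devices from the ports the agent considers current
        port_info['current'] -= set(skipped)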



** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1329546

Title:
  Upon rebuild instances might never get to Active state

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  VMware mine sweeper for Neutron (*) recently showed a 100% failure
  rate on tempest.api.compute.v3.servers.test_server_actions

  Logs for two instances of these failures are available at [1] and [2]
  The failure manifested as an instance unable to go active after a rebuild.
  A bit of instrumentation and log analysis revealed no obvious error on the 
neutron side - and also that the instance was actually in running state even 
if its take state was rebuilding/spawning

  N-API logs [3] revealed that the instance spawn was timing out on a
  missed notification from neutron regarding VIF plug - however the same
  log showed such notification was received [4]

  It turns out that, after rebuild, the instance network cache had still
  'active': False for the instance's VIF, even if the status for the
  corresponding port was 'ACTIVE'. This happened because after the
  network-vif-plugged event was received, nothing triggered a refresh of
  the instance network info. For this reason, the VM, after a rebuild,
  kept waiting for an event which obviously was never sent from neutron.

  While this manifested only on mine sweeper - this appears to be a nova bug - 
manifesting in vmware minesweeper only because of the way the plugin 
synchronizes with the backend for reporting the operational status of a port.
  A simple solution for this problem would be to reload the instance network 
info cache when network-vif-plugged events are received by nova. (But as the 
reporter knows nothing about nova this might be a very bad idea as well)

  [1] http://208.91.1.172/logs/neutron/98278/2/413209/testr_results.html
  [2] http://208.91.1.172/logs/neutron/73234/34/413213/testr_results.html
  [3] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-06-06_01_46_36_219
  [4] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=DEBUG#_2014-06-06_01_41_31_767

  (*) runs libvirt/KVM + NSX

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1329546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333654] Re: Timeout waiting for vif plugging callback for instance

2014-07-01 Thread Salvatore Orlando
Addressed by: https://review.openstack.org/#/c/103865/

I'm not sure if I'm missing something in the commit message, but it was
not automatically added.

** No longer affects: neutron

** Changed in: nova
   Status: New = In Progress

** Changed in: nova
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333654

Title:
  Timeout waiting for vif plugging callback for instance

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The neutron full job is exhibiting a rather high number of cases where 
network-vif-plugged timeouts are reported.
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlRpbWVvdXQgd2FpdGluZyBmb3IgdmlmIHBsdWdnaW5nIGNhbGxiYWNrIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNjA5MTk0NDg4LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  95.78% of these messages appear for the neutron full job. However, only a
fraction of those cause build failures, but that's because of the way the
tests are executed.
  This error is currently being masked by another bug as tempest tries to get 
the console log of a VM in error state: 
https://bugs.launchpad.net/tempest/+bug/1332414

  This bug will target both neutron and nova pending a better triage.
  Fixing this is of paramount importance to get the full job running.

  Note: This is different from
  https://bugs.launchpad.net/nova/+bug/1321872 and
  https://bugs.launchpad.net/nova/+bug/1329546

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333654] [NEW] Timeout waiting for vif plugging callback for instance

2014-06-24 Thread Salvatore Orlando
Public bug reported:

The neutron full job is exhibiting a rather high number of cases where 
network-vif-plugged timeouts are reported.
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlRpbWVvdXQgd2FpdGluZyBmb3IgdmlmIHBsdWdnaW5nIGNhbGxiYWNrIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNjA5MTk0NDg4LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

95.78% of these messages appear for the neutron full job. However, only a
fraction of those cause build failures, but that's because of the way the
tests are executed.
This error is currently being masked by another bug as tempest tries to get the 
console log of a VM in error state: 
https://bugs.launchpad.net/tempest/+bug/1332414

This bug will target both neutron and nova pending a better triage.
Fixing this is of paramount importance to get the full job running.

Note: This is different from
https://bugs.launchpad.net/nova/+bug/1321872 and
https://bugs.launchpad.net/nova/+bug/1329546

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: neutron-full-job

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided = High

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
Milestone: None = juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333654

Title:
  Timeout waiting for vif plugging callback for instance

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  The neutron full job is exhibiting a rather high number of cases where 
network-vif-plugged timeouts are reported.
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlRpbWVvdXQgd2FpdGluZyBmb3IgdmlmIHBsdWdnaW5nIGNhbGxiYWNrIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNjA5MTk0NDg4LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  95.78% of these messages appear for the neutron full job. However, only a
fraction of those cause build failures, but that's because of the way the
tests are executed.
  This error is currently being masked by another bug as tempest tries to get 
the console log of a VM in error state: 
https://bugs.launchpad.net/tempest/+bug/1332414

  This bug will target both neutron and nova pending a better triage.
  Fixing this is of paramount importance to get the full job running.

  Note: This is different from
  https://bugs.launchpad.net/nova/+bug/1321872 and
  https://bugs.launchpad.net/nova/+bug/1329546

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332502] [NEW] Intermittent UT failure for VMware 'adv' plugin

2014-06-20 Thread Salvatore Orlando
Public bug reported:

Failure occurs in
neutron.tests.unit.vmware.vshield.test_vpnaas_plugin.TestVpnPlugin.test_create_vpnservice_with_invalid_route

logstash query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTFwiIEFORCBtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy51bml0LnZtd2FyZS52c2hpZWxkLnRlc3RfdnBuYWFzX3BsdWdpbi5UZXN0VnBuUGx1Z2luLnRlc3RfY3JlYXRlX3ZwbnNlcnZpY2Vfd2l0aF9pbnZhbGlkX3JvdXRlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzMjYzNzAzNzI4fQ==

Error log:
http://logs.openstack.org/47/101447/3/check/gate-neutron-python26/69da3af/console.html

Introduced by: offending patch not yet known - could be a latent problem
accidentally uncovered by other patches.

Hits in past 7 days: 9 (1 in gate queue)


Setting priority to high, as for anything affecting gate stability.

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: gate-failure vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332502

Title:
  Intermittent UT failure for VMware 'adv' plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Failure occurs in
  
neutron.tests.unit.vmware.vshield.test_vpnaas_plugin.TestVpnPlugin.test_create_vpnservice_with_invalid_route

  logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTFwiIEFORCBtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy51bml0LnZtd2FyZS52c2hpZWxkLnRlc3RfdnBuYWFzX3BsdWdpbi5UZXN0VnBuUGx1Z2luLnRlc3RfY3JlYXRlX3ZwbnNlcnZpY2Vfd2l0aF9pbnZhbGlkX3JvdXRlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzMjYzNzAzNzI4fQ==

  Error log:
  
http://logs.openstack.org/47/101447/3/check/gate-neutron-python26/69da3af/console.html

  Introduced by: offending patch not yet known - could be a latent
  problem accidentally uncovered by other patches.

  Hits in past 7 days: 9 (1 in gate queue)

  
  Setting priority to high, as for anything affecting gate stability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332017] [NEW] NSX: lock NSX sync cache while doing synchronization

2014-06-19 Thread Salvatore Orlando
Public bug reported:

This is somewhat related to bug 1329650, but is more about an enhancement
than a fix.
Basically, in order to avoid any sort of race in access to the NSX sync
cache, NSX sync operations should acquire a lock before operating on the NSX
sync cache.
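
A minimal sketch of the idea, assuming the oslo-incubator lockutils module
(its exact location in the neutron tree may differ): take the same named
lock in every routine that reads or updates the NSX sync cache.

    from neutron.openstack.common import lockutils

    @lockutils.synchronized('nsx-sync-cache')
    def _synchronize_state(self, *args, **kwargs):
        # all reads and updates of the sync cache happen under the lock
        pass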

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332017

Title:
  NSX: lock NSX sync cache while doing synchronization

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is somewhat related to bug 1329650, but is more about an enhancement
than a fix.
  Basically, in order to avoid any sort of race in access to the NSX sync
cache, NSX sync operations should acquire a lock before operating on the NSX
sync cache.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332062] [NEW] NSX: raise when the dscp is specified and qos_marking is trusted

2014-06-19 Thread Salvatore Orlando
Public bug reported:

The plugin currently just logs the information and creates a queue without
the dscp setting.
This results in the creation of an object which differs from the user request.

It would be more correct to raise a 400 error to notify the user of the
invalid setting.
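
A hedged sketch of the check (the exception usage follows common neutron
patterns; the attribute names are assumptions): reject the request instead
of silently creating a queue that differs from what was asked for.

    from neutron.common import exceptions as n_exc

    def validate_qos_queue(qos_queue):
        if qos_queue.get('qos_marking') == 'trusted' and (
                qos_queue.get('dscp') is not None):
            raise n_exc.InvalidInput(
                error_message="dscp cannot be configured when qos_marking "
                              "is set to 'trusted'")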

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: icehouse-backport-potential vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332062

Title:
  NSX: raise when the dscp is specified and qos_marking is trusted

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The plugin currently just logs the information and creates a queue without
the dscp setting.
  This results in the creation of an object which differs from the user
request.

  It would be more correct to raise a 400 error to notify the user of
  the invalid setting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332353] [NEW] NSX: wrong src IP address in VM connection via floating IP

2014-06-19 Thread Salvatore Orlando
Public bug reported:

Scenario:

Two VMs on the same network (VM_1 and VM_2) with internal addresses INT_1
and INT_2 both associated with floating IPs FIP_1 and FIP_2.

VM_1 connects to VM_2 (e.g.: ssh) through VM_2 floating IP
e.g.: VM_1 ssh user@FIP_2

on VM_2 the ssh connection has:
- INT_2 as local address
- FIP_2 as remote address

This is not entirely correct.
It would be advisable to have FIP_1 as the remote address instead.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: icehouse-backport-potential vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332353

Title:
  NSX: wrong src IP address in VM connection via floating IP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Scenario:

  Two VMs on the same network (VM_1 and VM_2) with internal addresses
  INT_1 and INT_2 both associated with floating IPs FIP_1 and FIP_2.

  VM_1 connects to VM_2 (e.g.: ssh) through VM_2 floating IP
  e.g.: VM_1 ssh user@FIP_2

  on VM_2 the ssh connection has:
  - INT_2 as local address
  - FIP_2 as remote address

  This is not entirely correct.
  It would be advisable to have FIP_1 as the remote address instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329546] [NEW] Upon rebuild instances might never get to Active state

2014-06-12 Thread Salvatore Orlando
Public bug reported:

VMware mine sweeper for Neutron (*) recently showed a 100% failure rate
on tempest.api.compute.v3.servers.test_server_actions

Logs for two instances of these failures are available at [1] and [2]
The failure manifested as an instance unable to go active after a rebuild.
A bit of instrumentation and log analysis revealed no obvious error on the 
neutron side - and also that the instance was actually in running state even 
if its task state was rebuilding/spawning.

N-API logs [3] revealed that the instance spawn was timing out on a
missed notification from neutron regarding VIF plug - however the same
log showed such notification was received [4]

It turns out that, after rebuild, the instance network cache had still
'active': False for the instance's VIF, even if the status for the
corresponding port was 'ACTIVE'. This happened because after the
network-vif-plugged event was received, nothing triggered a refresh of
the instance network info. For this reason, the VM, after a rebuild,
kept waiting for an event which obviously was never sent from neutron.

While this manifested only on mine sweeper - this appears to be a nova bug - 
manifesting in vmware minesweeper only because of the way the plugin 
synchronizes with the backend for reporting the operational status of a port.
A simple solution for this problem would be to reload the instance network info 
cache when network-vif-plugged events are received by nova. (But as the 
reporter knows nothing about nova this might be a very bad idea as well)

[1] http://208.91.1.172/logs/neutron/98278/2/413209/testr_results.html
[2] http://208.91.1.172/logs/neutron/73234/34/413213/testr_results.html
[3] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-06-06_01_46_36_219
[4] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=DEBUG#_2014-06-06_01_41_31_767

(*) runs libvirt/KVM + NSX

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329546

Title:
  Upon rebuild instances might never get to Active state

Status in OpenStack Compute (Nova):
  New

Bug description:
  VMware mine sweeper for Neutron (*) recently showed a 100% failure
  rate on tempest.api.compute.v3.servers.test_server_actions

  Logs for two instances of these failures are available at [1] and [2]
  The failure manifested as an instance unable to go active after a rebuild.
  A bit of instrumentation and log analysis revealed no obvious error on the 
neutron side - and also that the instance was actually in running state even 
if its task state was rebuilding/spawning.

  N-API logs [3] revealed that the instance spawn was timing out on a
  missed notification from neutron regarding VIF plug - however the same
  log showed such notification was received [4]

  It turns out that, after rebuild, the instance network cache had still
  'active': False for the instance's VIF, even if the status for the
  corresponding port was 'ACTIVE'. This happened because after the
  network-vif-plugged event was received, nothing triggered a refresh of
  the instance network info. For this reason, the VM, after a rebuild,
  kept waiting for an event which obviously was never sent from neutron.

  While this manifested only on mine sweeper - this appears to be a nova bug - 
manifesting in vmware minesweeper only because of the way the plugin 
synchronizes with the backend for reporting the operational status of a port.
  A simple solution for this problem would be to reload the instance network 
info cache when network-vif-plugged events are received by nova. (But as the 
reporter knows nothing about nova this might be a very bad idea as well)

  [1] http://208.91.1.172/logs/neutron/98278/2/413209/testr_results.html
  [2] http://208.91.1.172/logs/neutron/73234/34/413213/testr_results.html
  [3] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-06-06_01_46_36_219
  [4] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=DEBUG#_2014-06-06_01_41_31_767

  (*) runs libvirt/KVM + NSX

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329560] [NEW] NSX sync thread might fail because of a KeyError

2014-06-12 Thread Salvatore Orlando
Public bug reported:

In a few instances the following failure has been observed:
http://208.91.1.172/logs/neutron/94439/5/413257/logs/screen-q-svc.txt.gz?#_2014-06-12_06_09_35_248

As a result, the NSX sync thread fails - operational statuses are not
updated anymore - and VMs fail to boot because of that.

The issue appears when a not-yet-cached logical port is deleted while the
synchronization is in progress.
The failure does not show up in the synchronization run during which the
element is deleted, but rather in the next run of the sync thread.
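
A minimal defensive sketch (the cache structure shown is an assumption):
removing an element that was never cached should be a no-op rather than a
KeyError in the following sync run.

    def remove_port_from_cache(cache, port_id):
        # dict.pop with a default never raises, unlike "del cache[port_id]"
        cache.pop(port_id, None)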

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1329560

Title:
  NSX sync thread might fail because of a KeyError

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In a few instances the following failure has been observed:
  
http://208.91.1.172/logs/neutron/94439/5/413257/logs/screen-q-svc.txt.gz?#_2014-06-12_06_09_35_248

  As a result, the NSX sync thread fails - operational statuses are not
  updated anymore - and VMs fail to boot because of that.

  The issue appears when a not-yet-cached logical port is deleted while the
synchronization is in progress.
  The failure does not show up in the synchronization run during which the
element is deleted, but rather in the next run of the sync thread.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1329560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328181] [NEW] NSX: remove_router_interface might fail because of NAT rule mismatch

2014-06-09 Thread Salvatore Orlando
Public bug reported:

The remove_router_interface for the VMware NSX plugin expects a precise number 
of SNAT rules for a subnet.
If the actual number of NAT rules differs from the expected one, an exception 
is raised.

The reasons for this might be:
- earlier failure in remove_router_interface
- NSX API client tampering with NSX objects
- etc.

In any case, the remove_router_interface operation should succeed
removing every match for the NAT rule to delete from the NSX logical
router.

sample traceback: http://paste.openstack.org/show/83427/
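
An illustrative sketch only (the helper names below are assumptions, not the
actual nsxlib API): delete every NAT rule matching the subnet being
detached, whatever their number, instead of asserting an exact count.

    def remove_subnet_snat_rules(cluster, router_id, subnet_cidr):
        matching = query_nat_rules(cluster, router_id,
                                   source_ip_addresses=subnet_cidr)
        for rule in matching:
            delete_nat_rule(cluster, router_id, rule['uuid'])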

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: havana-backport-potential icehouse-backport-potential vmware

** Summary changed:

- NSX: remote_router_interface might fail because of NAT rule mismatch
+ NSX: remove_router_interface might fail because of NAT rule mismatch

** Tags added: havana-backport-potential icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328181

Title:
  NSX: remove_router_interface might fail because of NAT rule mismatch

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The remove_router_interface for the VMware NSX plugin expects a precise 
number of SNAT rules for a subnet.
  If the actual number of NAT rules differs from the expected one, an exception 
is raised.

  The reasons for this might be:
  - earlier failure in remove_router_interface
  - NSX API client tampering with NSX objects
  - etc.

  In any case, the remove_router_interface operation should succeed
  removing every match for the NAT rule to delete from the NSX logical
  router.

  sample traceback: http://paste.openstack.org/show/83427/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320278] [NEW] NSX: Fetch gateway device id mappings more efficiently

2014-05-16 Thread Salvatore Orlando
Public bug reported:

In:
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/plugins/base.py#L2008

a query is performed for each gateway device.
This could be optimized into a single query.
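
A hedged sketch (the model and attribute names are assumptions about the
gateway device mapping table): a single query with an IN clause instead of
one query per device.

    def get_nsx_device_ids(context, device_ids):
        query = context.session.query(NetworkGatewayDevice).filter(
            NetworkGatewayDevice.id.in_(device_ids))
        return dict((device.id, device.nsx_id) for device in query)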

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1320278

Title:
  NSX: Fetch gateway device id mappings more efficiently

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/plugins/base.py#L2008

  a query is performed for each gateway device.
  This could be optimized into a single query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1320278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319354] [NEW] intermittent failure in unit tests because of netscaler lbaas driver

2014-05-14 Thread Salvatore Orlando
Public bug reported:

In some cases the python 27 jobs fail at startup with the following message:
import 
err...@pneutron.tests.unit.services.loadbalancer.drivers.netscaler.test_netscaler_driver

This caused a few failures in upstream checks, although it's now hard to
say which ones hit the gate as both check and gate queues seem to be
running 'gate-neutron-python27'.

logstash:
http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOmdhdGUtbmV1dHJvbi1weXRob24yNyBBTkQgdGFnczpjb25zb2xlICBBTkQgbWVzc2FnZTplcnJvcnNAUG5ldXRyb24udGVzdHMudW5pdC5zZXJ2aWNlcy5sb2FkYmFsYW5jZXIuZHJpdmVycy5uZXRzY2FsZXIudGVzdF9uZXRzY2FsZXJfZHJpdmVyIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTQtMDUtMDVUMDc6MDM6MjQrMDA6MDAiLCJ0byI6IjIwMTQtMDUtMDVUMTQ6MDM6MjQrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQwMDA2NjMyNTcxOH0=

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1319354

Title:
  intermittent failure in unit tests because of netscaler lbaas driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some cases the python 27 jobs fail at startup with the following message:
  import 
err...@pneutron.tests.unit.services.loadbalancer.drivers.netscaler.test_netscaler_driver

  This caused a few failures in upstream checks, although it's now hard
  to say which ones hit the gate as both check and gate queues seem to
  be running 'gate-neutron-python27'.

  logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOmdhdGUtbmV1dHJvbi1weXRob24yNyBBTkQgdGFnczpjb25zb2xlICBBTkQgbWVzc2FnZTplcnJvcnNAUG5ldXRyb24udGVzdHMudW5pdC5zZXJ2aWNlcy5sb2FkYmFsYW5jZXIuZHJpdmVycy5uZXRzY2FsZXIudGVzdF9uZXRzY2FsZXJfZHJpdmVyIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTQtMDUtMDVUMDc6MDM6MjQrMDA6MDAiLCJ0byI6IjIwMTQtMDUtMDVUMTQ6MDM6MjQrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQwMDA2NjMyNTcxOH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1319354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319423] [NEW] NSX: l2 gw extension unnecessarily extends fault map

2014-05-14 Thread Salvatore Orlando
Public bug reported:

Almost all the exceptions added to the fault map in [1] are already
extending a mapped exception.

Extending the fault map could then be avoided.

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/dbexts/networkgw_db.py#L89

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1319423

Title:
  NSX: l2 gw extension unnecessarily extends fault map

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Almost all the exceptions added to the fault map in [1] are already
  extending a mapped exception.

  Extending the fault map could then be avoided.

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/dbexts/networkgw_db.py#L89

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1319423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

