[Yahoo-eng-team] [Bug 1524231] [NEW] [RFE] neutron fwaas should support sharing a firewall with specific tenants

2015-12-09 Thread zhaobo
Public bug reported:

[Existing problem]
Today fwaas only has a boolean 'shared' field: when it is True, the firewall 
can be fetched by all tenants. There are stronger requirements now: an 
enterprise with a strong firewall (a large set of vetted fw rules/policies) 
may want to share or sell its fw service to selected tenants through our 
cloud system.

[Proposal]
Neutron could not fulfill this until the RBAC policy framework was introduced 
in the Liberty release. I think we could build on the existing RBAC policy 
mechanism and extend it to more resources that fit this use case, controlling 
firewall sharing the way network sharing is controlled today, or with finer 
granularity.
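For comparison, sharing a network with a single tenant already works through 
the RBAC API; this RFE asks for an analogous capability for firewalls. A 
sketch of what that could look like, where the firewall object type is 
hypothetical and does not exist yet:

    # existing: share a network with one tenant (Liberty RBAC API)
    neutron rbac-create --type network --action access_as_shared \
        --target-tenant <consumer-project-id> <network-id>

    # proposed (hypothetical object type): the same pattern for a firewall
    neutron rbac-create --type firewall --action access_as_shared \
        --target-tenant <consumer-project-id> <firewall-id>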

[What is the enhancement?]
Share a firewall with specific tenants rather than with all tenants.

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524231

Title:
  [RFE] neutron fwaas should support sharing a firewall with specific tenants

Status in neutron:
  New

Bug description:
  [Existing problem]
  Today fwaas only has a boolean 'shared' field: when it is True, the firewall
  can be fetched by all tenants. There are stronger requirements now: an
  enterprise with a strong firewall (a large set of vetted fw rules/policies)
  may want to share or sell its fw service to selected tenants through our
  cloud system.

  [Proposal]
  Neutron could not fulfill this until the RBAC policy framework was
  introduced in the Liberty release. I think we could build on the existing
  RBAC policy mechanism and extend it to more resources that fit this use
  case, controlling firewall sharing the way network sharing is controlled
  today, or with finer granularity.

  [What is the enhancement?]
  Share a firewall with specific tenants rather than with all tenants.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-09 Thread hardik
** Also affects: gnocchi
   Importance: Undecided
   Status: New

** Changed in: gnocchi
 Assignee: (unassigned) => hardik (hardik-parekh047)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in
  the tree, such as deleted files or switched branches, can create
  spurious errors. This can be suppressed by setting
  PYTHONDONTWRITEBYTECODE=1 in tox.ini.
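  A minimal sketch of the suggested tox.ini change (projects that already
  have a setenv section would extend it instead):

      [testenv]
      setenv =
          PYTHONDONTWRITEBYTECODE=1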

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-09 Thread Julien Danjou
** Changed in: gnocchi
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in
  the tree, such as deleted files or switched branches, can create
  spurious errors. This can be suppressed by setting
  PYTHONDONTWRITEBYTECODE=1 in tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506213] Re: nova.cmd.baseproxy handles errors incorrectly

2015-12-09 Thread Markus Zoeller (markus_z)
We changed the release management from a "delayed-release" to a 
"direct-release" model [1]. It seems that the fix for this bug merged
in the timeframe where we made the transition to the new model and 
therefore wasn't closed with "Fix-Released" as it should have been. 
=> Manually closing this bug with "Fix-Released".

[1] "openstack-dev" ML, 2015-11-23, Doug Hellmann,
"[release] process change for closing bugs when patches merge"
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080288.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506213

Title:
  nova.cmd.baseproxy handles errors incorrectly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova from master.

  The module doesn't print the error message. If an error occurs in
  nova.cmd.baseproxy, the method exit_with_error is executed, which looks
  as follows:

  def exit_with_error(msg, errno=-1):
      print(msg) and sys.exit(errno)

  So in Python 2.7 this method terminates the application without
  printing anything (the message is never flushed in time), and in Python
  3.4 it does strange things because print() returns None, so the 'and'
  short-circuits before sys.exit() is reached.

  I noticed this bug when I was trying to run nova-novncproxy without
  novnc installed. nova-novncproxy was being terminated without printing
  anything. Then I debugged it and found out that it tries to emit an
  error message but fails.
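  One possible shape of a fix, shown only as an illustration (the patch
  that actually merged may differ): print and exit as two separate
  statements so the message is written out before the process terminates.

      from __future__ import print_function

      import sys


      def exit_with_error(msg, errno=-1):
          # Emit the message first, then exit; chaining them with 'and'
          # never reaches sys.exit() on Python 3 because print() returns None.
          print(msg, file=sys.stderr)
          sys.exit(errno)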

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522239] Re: Useless element in migrate_server schema

2015-12-09 Thread Markus Zoeller (markus_z)
We changed the release management from a "delayed-release" to a 
"direct-release" model [1]. It seems that the fix for this bug merged
in the timeframe where we made the transition to the new model and 
therefore wasn't closed with "Fix-Released" as it should have been. 
=> Manually closing this bug with "Fix-Released".

[1] "openstack-dev" ML, 2015-11-23, Doug Hellmann,
"[release] process change for closing bugs when patches merge"
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080288.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522239

Title:
  Useless element in migrate_server schema

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  -host = copy.deepcopy(parameter_types.hostname)
  -host['type'] = ['string', 'null']

  These two lines could be removed from the schema file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1522239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515637] Re: Double detach volume causes server fault

2015-12-09 Thread Markus Zoeller (markus_z)
We changed the release management from a "delayed-release" to a 
"direct-release" model [1]. It seems that the fix for this bug merged
in the timeframe where we made the transition to the new model and 
therefore wasn't closed with "Fix-Released" as it should have been. 
=> Manually closing this bug with "Fix-Released".

[1] "openstack-dev" ML, 2015-11-23, Doug Hellmann,
"[release] process change for closing bugs when patches merge"
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080288.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515637

Title:
  Double detach volume causes server fault

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If volume in 'detaching'  state and detach operation is called nova-
  api fails:

  2015-11-10 05:18:19.253 ERROR nova.api.openstack.extensions 
[req-05889195-e70d-4761-a5c6-a69ddfe05d62 
tempest-ServerActionsTestJSON-653602906 
tempest-ServerActionsTestJSON-743378399] Unexpected exception in API method
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/volumes.py", line 395, in delete
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions 
self.compute_api.detach_volume(context, instance, volume)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 235, in wrapped
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
func(self, context, target, *args, **kwargs)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 224, in inner
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
function(self, context, instance, *args, **kwargs)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 205, in inner
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
f(self, context, instance, *args, **kw)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 3098, in detach_volume
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions 
self._detach_volume(context, instance, volume)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 3080, in _detach_volume
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions 
self.volume_api.begin_detaching(context, volume['id'])
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/volume/cinder.py", line 235, in wrapper
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions 
six.reraise(exc_value, None, exc_trace)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/volume/cinder.py", line 224, in wrapper
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions res = 
method(self, ctx, volume_id, *args, **kwargs)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/volume/cinder.py", line 335, in begin_detaching
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions 
cinderclient(context).volumes.begin_detaching(volume_id)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/v2/volumes.py", line 454, 
in begin_detaching
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
self._action('os-begin_detaching', volume)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/v2/volumes.py", line 402, 
in _action
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
self.api.client.post(url, body=body)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 104, in 
post
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
self._cs_request(url, 'POST', **kwargs)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 98, in 
_cs_request
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions return 
self.request(url, method, **kwargs)
  2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1498926] Re: Unable to create subnet from IPv6 subnetpool

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/227167
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=1ce1bd4d4e83ab91388756978aab35c0a15ce4f5
Submitter: Jenkins
Branch: master

commit 1ce1bd4d4e83ab91388756978aab35c0a15ce4f5
Author: Frode Nordahl 
Date:   Thu Sep 24 10:05:00 2015 +0200

Remove disabled attribute from select fields on submit

Subsequently we get the value included in the POST-request.

Replaces custom logic implemented for subnetpools that has weaknesses and
does not work any more.

Change-Id: I5ce5ff0dcd7ba812a92e1aaa82c770064f0302c0
Closes-Bug: 1498926


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1498926

Title:
  Unable to create subnet from IPv6 subnetpool

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Running on devstack.

  Horizon error message:
  Error: Failed to create subnet "" for network "test6": Bad subnets request: 
Cannot allocate IPv4 subnet from IPv6 subnet pool

  Request to neutron:
  Request body: {u'subnet': {u'name': u'test6', u'enable_dhcp': True, 
u'network_id': u'6f39a9f3-b973-4b51-a93c-fb48ab7c0cff', u'tenant_id': 
u'0fb99ea304674ea9a8c25f41903e942c', u'ip_version': 4, u'prefixlen': u'64', 
u'subnetpool_id': u'b3587615-7e82-46a6-a7d0-eb634381cbbd'}}

  However this succeeds when testing the relevant patches on
  stable/kilo.

  There is either a weakness in the horizon.forms.js handling that
  creates a hidden field for the ip_version (because disabled fields are
  not posted), or something has changed elsewhere that prevents this
  from working.

  I will investigate further, please comment if you have any insights.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1498926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524264] [NEW] [RFE] should support sharing a VPN with specific tenants

2015-12-09 Thread zhaobo
Public bug reported:

[Application scene]
Tenant A has a VPN and does not want to share it with tenants it does not 
trust or that have not paid, so tenant A acts as a VPN supplier. Tenant B 
wants to use the VPN that A has shared with it. Sharing one's own VPN with a 
specific set of tenants is a normal use case.

[Proposal]
The VPN resources currently have no 'shared' field, so we should add one and 
implement sharing with specific tenants based on RBAC policies.

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524264

Title:
  [RFE] should support sharing a VPN with specific tenants

Status in neutron:
  New

Bug description:
  [Application scene]
  Tenant A has a VPN and does not want to share it with tenants it does not
  trust or that have not paid, so tenant A acts as a VPN supplier. Tenant B
  wants to use the VPN that A has shared with it. Sharing one's own VPN with
  a specific set of tenants is a normal use case.

  [Proposal]
  The VPN resources currently have no 'shared' field, so we should add one
  and implement sharing with specific tenants based on RBAC policies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524220] [NEW] can update the gateway of subnet with needless "0" in the ip address via cli

2015-12-09 Thread ibm-cloud-qa
Public bug reported:

[Summary]
can update the gateway of subnet with needless "0" in the ip address via cli

[Topo]
devstack all-in-one node

[Description and expected result]
If the subnet gateway is updated with an IP address containing a needless 
"0", the needless "0" should be stripped.

[Reproducible or not]
reproducible

[Recreate Steps]
1) create 1 subnet:
root@45-59:/opt/stack/devstack# neutron subnet-show sub-test
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "100.0.0.100", "end": "100.0.0.200"} |
| cidr              | 100.0.0.0/24                                   |
| dns_nameservers   |                                                |
| enable_dhcp       | True                                           |
| gateway_ip        | 100.0.0.1                                      |
| host_routes       |                                                |
| id                | 00dfe80b-911f-4cf1-8874-77639e6082c5           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | sub-test                                       |
| network_id        | 79292c3a-1c85-4014-b0d7-0f078f1a4ee8           |
| subnetpool_id     |                                                |
| tenant_id         | 71209fa21a7343e3b778ec5f4ff45252               |
+-------------------+------------------------------------------------+

2) update the gateway of the subnet with a needless "0" in the IP address:
root@45-59:/opt/stack/devstack# neutron subnet-update --gateway 100.0.0.001 sub-test
Updated subnet: sub-test
root@45-59:/opt/stack/devstack# neutron subnet-show sub-test
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "100.0.0.100", "end": "100.0.0.200"} |
| cidr              | 100.0.0.0/24                                   |
| dns_nameservers   |                                                |
| enable_dhcp       | True                                           |
| gateway_ip        | 100.0.0.001>>>ISSUE, should be 100.0.0.1       |
| host_routes       |                                                |
| id                | 00dfe80b-911f-4cf1-8874-77639e6082c5           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | sub-test                                       |
| network_id        | 79292c3a-1c85-4014-b0d7-0f078f1a4ee8           |
| subnetpool_id     |                                                |
| tenant_id         | 71209fa21a7343e3b778ec5f4ff45252               |
+-------------------+------------------------------------------------+
root@45-59:/opt/stack/devstack# 

3) if the subnet gateway is updated with a needless "0" in the IP address
via the dashboard, this issue does not occur


[Configuration]
reproducible bug, not needed

[logs]
reproducible bug, not needed

[Root cause analysis or debug info]
reproducible bug

[Attachment]
None

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524220

Title:
  can update the gateway of subnet with needless "0" in the ip address
  via cli

Status in neutron:
  New

Bug description:
  [Summary]
  can update the gateway of subnet with needless "0" in the ip address via cli

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  If the subnet gateway is updated with an IP address containing a needless
  "0", the needless "0" should be stripped.

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) create 1 subnet:
  root@45-59:/opt/stack/devstack# neutron subnet-show sub-test
  +-------------------+------------------------------------------------+
  | Field             | Value                                          |
  +-------------------+------------------------------------------------+
  | allocation_pools  | {"start": "100.0.0.100", "end": "100.0.0.200"} |
  | cidr              | 100.0.0.0/24                                   |
  | dns_nameservers   |                                                |
  | enable_dhcp       | True                                           |
  | gateway_ip        | 100.0.0.1                                      |
  | host_routes       |                                                |
  | id                | 

[Yahoo-eng-team] [Bug 1524256] [NEW] cloud-init-blocknet generates too many log files

2015-12-09 Thread Christian Schmidt
Public bug reported:

The /inits/upstart/cloud-init-blocknet.conf Upstart script logs to a
file that reflects the network interface name, e.g. cloud-init-blocknet-
network-interface_eth0.log for eth0.

When running Docker, a lot of interfaces are created on the fly with
randomly generated names, e.g. veth7679c8e.log. Every time a new Docker
container is started, an interface is created, and this triggers the
cloud-init-blocknet script. Every invocation generates a log file
containing a single line, "network-interface/veth7679c8e[1408819.97]:
/run/cloud-init/local-finished found".

If you start and stop your Docker containers a lot (this is not
unusual), this generates a huge number of files in /var/log/upstart (I
just deleted ~100,000 files from one server).

I suggest that the cloud-init-blocknet logs to a single log file or not
at all when these interfaces are created on the fly.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1524256

Title:
  cloud-init-blocknet generates too many log files

Status in cloud-init:
  New

Bug description:
  The /inits/upstart/cloud-init-blocknet.conf Upstart script logs to a
  file that reflects the network interface name, e.g. cloud-init-
  blocknet-network-interface_eth0.log for eth0.

  When running Docker, a lot of interfaces are created on the fly with
  randomly generated names, e.g. veth7679c8e.log. Every time a new
  Docker container is started, an interface is created, and this
  triggers the cloud-init-blocknet script. Every invocation generates a
  log file containing a single line, "network-
  interface/veth7679c8e[1408819.97]: /run/cloud-init/local-finished
  found".

  If you start and stop your Docker containers a lot (this is not
  unusual), this generates a huge number of files in /var/log/upstart (I
  just deleted ~100,000 files from one server).

  I suggest that the cloud-init-blocknet logs to a single log file or
  not at all when these interfaces are created on the fly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1524256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524276] [NEW] Inconsistent IP usage in help strings

2015-12-09 Thread Andreas Jaeger
Public bug reported:

During translation of nova for the Liberty release, we noticed many
inconsistent usages, especially in the capitalization of IP. It should
really be "IP", or "IPs" as the plural.

** Affects: nova
 Importance: High
 Assignee: Andreas Jaeger (jaegerandi)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Andreas Jaeger (jaegerandi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524276

Title:
  Inconsistent IP usage in help strings

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  During translation of nova for the Liberty release, we noticed many
  inconsistent usages, especially in the capitalization of IP. It should
  really be "IP", or "IPs" as the plural.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524301] [NEW] image_meta scsi model ignored

2015-12-09 Thread Mehdi Abaakouk
Public bug reported:

Hi,
We use virtio-scsi by adding the properties hw_scsi_model=virtio-scsi and 
hw_disk_bus=scsi to our glance images.
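For context, a sketch of how those properties are typically set on an image 
(the image id is a placeholder):

    glance image-update --property hw_scsi_model=virtio-scsi \
        --property hw_disk_bus=scsi <image-id>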

It works well most of the time, but when the instance has more than six
disks attached, the additional disks don't have virtio-scsi enabled.

I have dug into the issue: nova seems to generate a correct XML with "one" 
virtio-scsi controller and attaches the disks to it, but libvirt transforms 
the XML. It adds another SCSI controller (which uses the 53c895a driver 
instead of virtio-scsi) and attaches some disks to this controller.

Extract of the bugged libvirt xml built by libvirt:


  


  


Inside the vm, with lspci I see these controllers:

00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
00:05.0 SCSI storage controller: LSI Logic / Symbios Logic 53c895a

Some disks are attached to pci-00:04.0 and some other to pci-00:05.0 .

Our current workaround is:

 --- nova/virt/libvirt/driver.py.orig    2015-12-09 12:18:49.016279849 +0100
 +++ nova/virt/libvirt/driver.py         2015-12-09 12:19:47.042987865 +0100
 @@ -3247,10 +3269,12 @@
          if (image_meta and
                  image_meta.get('properties', {}).get('hw_scsi_model')):
              hw_scsi_model = image_meta['properties']['hw_scsi_model']
 -            scsi_controller = vconfig.LibvirtConfigGuestController()
 -            scsi_controller.type = 'scsi'
 -            scsi_controller.model = hw_scsi_model
 -            devices.append(scsi_controller)
 +            for i in range(0, 3):
 +                scsi_controller = vconfig.LibvirtConfigGuestController()
 +                scsi_controller.type = 'scsi'
 +                scsi_controller.index = i
 +                scsi_controller.model = hw_scsi_model
 +                devices.append(scsi_controller)

Cheers,

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Hi,
  We use virtio-scsi by adding into our glance image properties 
hw_scsi_model=virtio-scsi and hw_disk_bus=scsi
  
  It works well most of the times, but when the instance got more than six
  disks attached, additional disk doesn't have virtio-scsi enabled.
  
- I have dig into the issue, nova seems generated a correct xml with "one" 
virtio-scsi and attached disks to it but libvirt transforms the xml. It adds 
another scsi controller (that use 53c895a driver instead of virtio-scsi) and 
attach some disks to 
+ I have dig into the issue, nova seems generated a correct xml with "one" 
virtio-scsi and attached disks to it but libvirt transforms the xml. It adds 
another scsi controller (that use 53c895a driver instead of virtio-scsi) and 
attach some disks to
  this controllers.
  
  Extract of the bugged libvirt xml built by libvirt:
  
  
-   
+   
  
  
-   
+   
  
  
  Inside the vm, with lspci I see these controllers:
  
- 00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI 
+ 00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
  00:05.0 SCSI storage controller: LSI Logic / Symbios Logic 53c895a
  
  Some disks are attached to pci-00:04.0 and some other to pci-00:05.0 .
  
  Our current workaround is:
  
- --- nova/virt/libvirt/driver.py.orig2015-12-09 12:18:49.016279849 +0100
- +++ nova/virt/libvirt/driver.py 2015-12-09 12:19:47.042987865 +0100
- @@ -3247,10 +3269,12 @@
-  if (image_meta and
-  image_meta.get('properties', {}).get('hw_scsi_model')):
-  hw_scsi_model = image_meta['properties']['hw_scsi_model']
- -scsi_controller = vconfig.LibvirtConfigGuestController()
- -scsi_controller.type = 'scsi'
- -scsi_controller.model = hw_scsi_model
- -devices.append(scsi_controller)
- +for i in range(0, 3):
- +scsi_controller = vconfig.LibvirtConfigGuestController()
- +scsi_controller.type = 'scsi'
- +scsi_controller.index = i
- +scsi_controller.model = hw_scsi_model
- +devices.append(scsi_controller)
- 
+  --- nova/virt/libvirt/driver.py.orig2015-12-09 12:18:49.016279849 +0100
+  +++ nova/virt/libvirt/driver.py 2015-12-09 12:19:47.042987865 +0100
+  @@ -3247,10 +3269,12 @@
+   if (image_meta and
+   image_meta.get('properties', {}).get('hw_scsi_model')):
+   hw_scsi_model = image_meta['properties']['hw_scsi_model']
+  -scsi_controller = vconfig.LibvirtConfigGuestController()
+  -scsi_controller.type = 'scsi'
+  -scsi_controller.model = hw_scsi_model
+  -devices.append(scsi_controller)
+  +for i in range(0, 3):
+  +scsi_controller = vconfig.LibvirtConfigGuestController()
+  +scsi_controller.type = 'scsi'
+  +scsi_controller.index = i
+  +scsi_controller.model = hw_scsi_model
+  +devices.append(scsi_controller)
  
  Cheers,

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, 

[Yahoo-eng-team] [Bug 1472449] Re: download error when the image status is not active

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/199549
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=44d0b02c67ce7926f40377d9367a0f61124ed26d
Submitter: Jenkins
Branch: master

commit 44d0b02c67ce7926f40377d9367a0f61124ed26d
Author: Long Quan Sha 
Date:   Wed Jul 8 09:29:53 2015 +0800

Fix the download error when the image locations are blank

When the image locations are blank, glance client will get an http response
with no content; glance client should tell the user that no data could be
found, instead of processing the blank response body, which leads to an
exception.

Glance client will also get a 204 response when an image is in a queued
state (this is true for 'master' and liberty/kilo/juno based servers).

Closes-Bug: #1472449

Co-Authored-by: Stuart McLaren 
Change-Id: I5d3d02d6aa7c8dd054cd2933e15b4a26e91afea1


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1472449

Title:
  download error when the image status is not active

Status in Glance:
  Fix Released
Status in python-glanceclient:
  Fix Released

Bug description:
  
  When the locations field is blank, downloading the image shows a Python
  error, but the error message is not correct.

  [root@vm134 pe]# glance image-show 9be94a27-367f-4a26-ae7a-045db3cb7332
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | None                                 |
  | container_format | None                                 |
  | created_at       | 2015-07-02T09:09:22Z                 |
  | disk_format      | None                                 |
  | id               | 9be94a27-367f-4a26-ae7a-045db3cb7332 |
  | locations        | []                                   |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | test                                 |
  | owner            | e4b36a5b654942328943a835339a6289     |
  | protected        | False                                |
  | size             | None                                 |
  | status           | queued                               |
  | tags             | []                                   |
  | updated_at       | 2015-07-02T09:09:22Z                 |
  | virtual_size     | None                                 |
  | visibility       | private                              |
  +------------------+--------------------------------------+
  [root@vm134 pe]# glance image-download 9be94a27-367f-4a26-ae7a-045db3cb7332 --file myimg
  iter() returned non-iterator of type 'NoneType'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1472449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524356] [NEW] a level binding implement issue in _check_driver_to_bind

2015-12-09 Thread RaoFei
Public bug reported:

In the function _check_driver_to_bind, the condition on row 3 below will never 
be satisfied: level.segment_id is a string, but segments_to_bind is a list of 
dicts.

1    for level in binding_levels:
2        if (level.driver == driver and
3                level.segment_id in segments_to_bind):
4            return False
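For illustration only (not necessarily the fix that was adopted), the 
membership test would have to compare against the segment ids extracted from 
those dicts, along these lines:

    # segments_to_bind is a list of segment dicts, e.g.
    # [{'id': 'seg-uuid', 'network_type': 'vlan', ...}, ...]
    segment_ids = [segment['id'] for segment in segments_to_bind]
    for level in binding_levels:
        if level.driver == driver and level.segment_id in segment_ids:
            return False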

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "level binding bug.png"
   
https://bugs.launchpad.net/bugs/1524356/+attachment/4531970/+files/level%20binding%20bug.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524356

Title:
  a level binding implement issue in _check_driver_to_bind

Status in neutron:
  New

Bug description:
  In the function _check_driver_to_bind, the condition on row 3 below will
  never be satisfied: level.segment_id is a string, but segments_to_bind is a
  list of dicts.

  1    for level in binding_levels:
  2        if (level.driver == driver and
  3                level.segment_id in segments_to_bind):
  4            return False

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451943] Re: project list cache keeps growing

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/235609
Committed: 
https://git.openstack.org/cgit/openstack/django_openstack_auth/commit/?id=91dec7239da7eb7c6dad385b42aa8aa5a7efa422
Submitter: Jenkins
Branch: master

commit 91dec7239da7eb7c6dad385b42aa8aa5a7efa422
Author: lin-hua-cheng 
Date:   Thu Oct 15 16:09:05 2015 -0700

Revert - Cache the User's Project by Token ID

The caching is done only per process, so the cleanup during logout
does not really work, since the logout could be handled by another
process. So the cache will just keep on growing.

This reverts commit bd9fd598e6c2ff11f8c31098cd25c7a42a97d761.

Depends-On: I793fbee44eb5f9befc316efe6716971b0e32172b
Change-Id: If878d77533ea5fac86fbb73127f26908f1097091
Closes-Bug: #1451943


** Changed in: django-openstack-auth
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451943

Title:
  project list cache keeps growing

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The project list is cached in a dict per process; when running
  multi-process, a project switch or logout does not remove the project
  list from the cache.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1451943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524357] [NEW] get_ports cannot get all port for network owner

2015-12-09 Thread ZongKai LI
Public bug reported:

### env ###
upstream code
two demo tenants: demo1 and demo2
demo1 owns network net1, which has a subnet

### steps ###
1. create an rbac rule for demo2 to access net1 as shared (see the example
   commands below);
2. create a port (port-1) on net1 as demo2;
3. run "neutron port-list" as demo1, which calls get_ports;
expected: the returned result should contain port-1, since demo1 is the network owner;
observed: the returned result doesn't contain port-1
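A sketch of the reproduction from the CLI, assuming credentials for both 
tenants are available (the project id is a placeholder):

    # as demo1 (network owner)
    neutron rbac-create --type network --action access_as_shared \
        --target-tenant <demo2-project-id> net1

    # as demo2
    neutron port-create --name port-1 net1

    # as demo1: port-1 is expected to show up here, but it does not
    neutron port-list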

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524357

Title:
  get_ports cannot get all port for network owner

Status in neutron:
  New

Bug description:
  ### env ###
  upstream code
  two demo tenants: demo1 and demo2
  demo1 owns network net1, which has a subnet

  ### steps ###
  1. create an rbac rule for demo2 to access net1 as shared;
  2. create a port (port-1) on net1 as demo2;
  3. run "neutron port-list" as demo1, which calls get_ports;
  expected: the returned result should contain port-1, since demo1 is the network owner;
  observed: the returned result doesn't contain port-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524363] [NEW] Owner metadata is missing when regenerating libvirt XML after host reboot

2015-12-09 Thread Simon Pasquier
Public bug reported:

Environment:

devstack running OpenStack from master.

Steps to reproduce:

1. Make sure that resume_guests_state_on_host_boot=True in nova.conf.
2. Boot an instance.
3. Check that the libvirt XML contains a nova:owner element with the project 
and user ids [1].
4. Stop the nova-compute service.
5. Destroy the instance using virsh. This is to simulate the reboot of the host.
6. Restart the nova-compute service.
7. Check that the instance is respawned.

Expected result:

The project id and user id are still present in the libvirt XML.

Actual result:

The project id and user id are missing [2].

[1] http://paste.openstack.org/show/481314/
[2] http://paste.openstack.org/show/481315/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524363

Title:
  Owner metadata is missing when regenerating libvirt XML after host
  reboot

Status in OpenStack Compute (nova):
  New

Bug description:
  Environment:

  devstack running OpenStack from master.

  Steps to reproduce:

  1. Make sure that resume_guests_state_on_host_boot=True in nova.conf.
  2. Boot an instance.
  3. Check that the libvirt XML contains a nova:owner element with the project 
and user ids [1].
  4. Stop the nova-compute service.
  5. Destroy the instance using virsh. This is to simulate the reboot of the 
host.
  6. Restart the nova-compute service.
  7. Check that the instance is respawned.

  Expected result:

  The project id and user id are still present in the libvirt XML.

  Actual result:

  The project id and user id are missing [2].

  [1] http://paste.openstack.org/show/481314/
  [2] http://paste.openstack.org/show/481315/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506187] Re: [SRU] Azure: cloud-init should use VM unique ID

2015-12-09 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1149-0ubuntu5

---
cloud-init (0.7.7~bzr1149-0ubuntu5) wily; urgency=medium

  * Microsoft Azure: use stable VM instance ID over SharedConfig.xml
(LP: #1506187):
- d/patches/lp-1506187-azure_use_unique_vm_id.patch: use DMI data for the
  stable VM instance ID
- d/cloud-init.preinst: migrate existing instances to stable VM instance
  ID on upgrade from prior versions of cloud-init.

 -- Ben Howard   Fri, 20 Nov 2015 17:26:09 -0700

** Changed in: cloud-init (Ubuntu Wily)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1506187

Title:
  [SRU] Azure: cloud-init should use VM unique ID

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Vivid:
  New
Status in cloud-init source package in Wily:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released

Bug description:
  SRU JUSTIFICATION

  [IMPACT] On Azure, the InstanceID is currently detected via a fabric
  provided XML file. With the new CRP stack, this ID is not guaranteed
  to be stable. As a result instances may go re-provision upon reboot.

  [FIX] Use DMI data to detect the instance ID and migrate existing
  instances to the new ID.

  [REGRESSION POTENTIAL] The fix is both in the cloud-init code and in
  the packaging. If the instance ID is not properly migrated, then a
  reboot may trigger re-provisioning.

  [TEST CASES]
  1. Boot instance on Azure.
  2. Apply cloud-init from -proposed. A migration message should apply.
  3. Get the new instance ID:
 $ sudo cat /sys/class/dmi/id/product_uuid
  4. Confirm that /var/lib/cloud/instance is a symlink to 
/var/lib/cloud/instances/
  5. Re-install cloud-init and confirm that migration message is NOT displayed.

  [TEST CASE 2]
  1. Build new cloud-image from -proposed
  2. Boot up instance
  3. Confirm that /sys/class/dmi/id/product_uuid is used to get instance ID 
(see /var/log/cloud-init.log)

  
  [ORIGINAL REPORT]
  The Azure datasource currently uses the InstanceID from the SharedConfig.xml 
file.  On our new CRP stack, this ID is not guaranteed to be stable and could 
change if the VM is deallocated.  If the InstanceID changes then cloud-init 
will attempt to reprovision the VM, which could result in temporary loss of 
access to the VM.

  Instead cloud-init should switch to use the VM Unique ID, which is
  guaranteed to be stable everywhere for the lifetime of the VM.  The VM
  unique ID is explained here: https://azure.microsoft.com/en-us/blog
  /accessing-and-using-azure-vm-unique-id/

  In short, the unique ID is available via DMI, and can be accessed with
  the command 'dmidecode | grep UUID' or even easier via sysfs in the
  file "/sys/devices/virtual/dmi/id/product_uuid".

  Steve

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1506187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524394] [NEW] neutron-openvswitch-agent fails to start when root_helper operates in a different context

2015-12-09 Thread Tom Carroll
Public bug reported:

Version: Liberty
Compute hypervisor: XenServer 6.5
Compute vm: Ubuntu 14.04.3

This issue appears in liberty--and not before--when running XenServer
hypervisor. In this environment, root-helper is set to /usr/bin/neutron-
rootwrap-xen-dom0, which executes commands in the hypervisor's Dom0
context. This problem keeps the neutron-openvswitch-agent from starting
and thus breaks the networking on the compute nodes.

A backtrace will be appended. The gist of the problem is that
ip_lib.get_devices()  does not use root_helper to obtain a list of the
network interfaces when the network namespace is the global namespace.
Thus, it obtains the interfaces of the compute virtual machine
environment and not the Dom0 environment.

I've appended two patches, one for ip_lib that corrects the listing and
one for netwrap to allow find. There are security implications in
permitting the execution of `find' in netwrap.

Backtrace from openvswitch-agent.log:

 2015-12-09 07:44:10.274 11884 CRITICAL neutron [-] RuntimeError:
Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ip', 'addr', 'show', 'br-int', 'to', '192.168.1.26']
Exit code: 96
Stdin:
Stdout:
Stderr: Traceback (most recent call last):
  File "/usr/bin/neutron-rootwrap-xen-dom0", line 119, in run_command
{'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
  File "/usr/lib/python2.7/dist-packages/XenAPI.py", line 245, in __call__
return self.__send(self.__name, args)
  File "/usr/lib/python2.7/dist-packages/XenAPI.py", line 149, in xenapi_request
result = _parse_result(getattr(self, methodname)(*full_params))
  File "/usr/lib/python2.7/dist-packages/XenAPI.py", line 219, in _parse_result
raise Failure(result['ErrorDescription'])
Failure: ['XENAPI_PLUGIN_FAILURE', 'run_command', 'PluginError', 'Device 
"br-int" does not exist.\n']
2015-12-09 07:44:10.274 11884 ERROR neutron Traceback (most recent call last):
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/bin/neutron-openvswitch-agent", line 10, in 
2015-12-09 07:44:10.274 11884 ERROR neutron sys.exit(main())
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py",
 line 20, in main
2015-12-09 07:44:10.274 11884 ERROR neutron agent_main.main()
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/main.py",
 line 49, in main
2015-12-09 07:44:10.274 11884 ERROR neutron mod.main()
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/main.py",
 line 36, in main
2015-12-09 07:44:10.274 11884 ERROR neutron 
ovs_neutron_agent.main(bridge_classes)
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1899, in main2015-12-09 07:44:10.274 11884 ERROR neutron 
validate_local_ip(agent_config['local_ip'])
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1876, in validate_local_ip
2015-12-09 07:44:10.274 11884 ERROR neutron if not 
ip_lib.IPWrapper().get_device_by_ip(local_ip):
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 131, in 
get_device_by_ip
2015-12-09 07:44:10.274 11884 ERROR neutron if device.addr.list(to=ip):
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 514, in 
list
2015-12-09 07:44:10.274 11884 ERROR neutron for line in self._run(options, 
tuple(args)).split('\n'):
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 274, in 
_run
2015-12-09 07:44:10.274 11884 ERROR neutron return 
self._parent._run(options, self.COMMAND, args)
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 70, in 
_run
2015-12-09 07:44:10.274 11884 ERROR neutron log_fail_as_error=self.log_fail_as_error)
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 89, in 
_execute
2015-12-09 07:44:10.274 11884 ERROR neutron 
log_fail_as_error=log_fail_as_error)
2015-12-09 07:44:10.274 11884 ERROR neutron   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 159, in 
execute
2015-12-09 07:44:10.274 11884 ERROR neutron raise RuntimeError(m)
2015-12-09 07:44:10.274 11884 ERROR neutron RuntimeError:
2015-12-09 07:44:10.274 11884 ERROR neutron Command: 
['/usr/bin/neutron-rootwrap-xen-dom0', 

[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/253994
Committed: 
https://git.openstack.org/cgit/openstack/barbican/commit/?id=2acc1f491118570d713230cdd066ddd6956e2710
Submitter: Jenkins
Branch: master

commit 2acc1f491118570d713230cdd066ddd6956e2710
Author: Kenji Yasui 
Date:   Mon Dec 7 04:00:22 2015 +

Fix db error when running python34 Unit tests

If tests for py27 is executed before py34 tests, then
there is a chance that py34 related tests may fail.
The following patch fixes it.

Ref: https://review.openstack.org/#/q/status:merged++topic:bug/1489059,n,z

TrivialFix

Change-Id: I397fe9c6847f3e6d4640adbb2712f2197e72ae47
Closes-bug: #1489059


** Changed in: barbican
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in cloudkitty:
  Fix Committed
Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic:
  Fix Released
Status in ironic-lib:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Committed
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Committed
Status in Manila:
  Fix Released
Status in networking-midonet:
  In Progress
Status in networking-ofagent:
  New
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Committed
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Committed
Status in tap-as-a-service:
  New
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to be caused when the run of py27 precedes py34, and
  can be solved by erasing the .testrepository directory and running
  "tox -e py34" first of all.
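  The workaround described above, spelled out (run from the project root):

      rm -rf .testrepository
      tox -e py34
      tox -e py27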

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523646] Re: Nova/Cinder Key Manager for Barbican Uses Stale Cache

2015-12-09 Thread Dave McCowan
** Also affects: castellan
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523646

Title:
  Nova/Cinder Key Manager for Barbican Uses Stale Cache

Status in castellan:
  New
Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The Key Manager for Barbican, implemented in Nova and Cinder, caches a value
  of barbican_client to save extra calls to Keystone for authentication.
  However, the cached value of barbican_client is only valid for the current
  context. A check needs to be made to ensure the context has not changed
  before using the saved value.

  The symptoms of using a stale cache value include getting the following
  error message when creating an encrypted volume.

  From CLI:
  ---
  openstack volume create --size 1 --type LUKS encrypted_volume
  The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-aea6be92-020e-41ed-ba88-44a1f5235ab0)

  
  In cinder.log
  ---
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Traceback (most recent call 
last):
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 82, in _exe
  cute_task
  2015-12-03 09:09:03.648 TRACE cinder.volume.api result = 
task.execute(**arguments)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 409, in 
execute
  2015-12-03 09:09:03.648 TRACE cinder.volume.api source_volume)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 338, in 
_get_encryption_key_
  id
  2015-12-03 09:09:03.648 TRACE cinder.volume.api encryption_key_id = 
key_manager.create_key(context)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/keymgr/barbican.py", line 147, in create_key
  2015-12-03 09:09:03.648 TRACE cinder.volume.api LOG.exception(_LE("Error 
creating key."))
  ….
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 502, in post
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return self.request(url, 
'POST', **kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return func(*args, 
**kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 402, in 
request
  2015-12-03 09:09:03.648 TRACE cinder.volume.api raise 
exceptions.from_response(resp, method, url)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Unauthorized: The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.) (HTTP 401) (Request-ID: req-d2c52e0b-c16d-43ec-a7a0-763f1270)

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1523646/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524404] [NEW] grenade jobs on stable/liberty broken with oslo.middleware>=3

2015-12-09 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/11/246211/19/check/gate-grenade-
dsvm/06a815e/logs/new/screen-n-api.txt.gz?level=TRACE

2015-12-09 07:52:44.729 3354 ERROR nova Traceback (most recent call last):
2015-12-09 07:52:44.729 3354 ERROR nova   File "/usr/local/bin/nova-api", line 
10, in 
2015-12-09 07:52:44.729 3354 ERROR nova sys.exit(main())
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/cmd/api.py", line 55, in main
2015-12-09 07:52:44.729 3354 ERROR nova server = service.WSGIService(api, 
use_ssl=should_use_ssl)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/service.py", line 328, in __init__
2015-12-09 07:52:44.729 3354 ERROR nova self.app = 
self.loader.load_app(name)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/wsgi.py", line 543, in load_app
2015-12-09 07:52:44.729 3354 ERROR nova return deploy.loadapp("config:%s" % 
self.config_path, name=name)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
2015-12-09 07:52:44.729 3354 ERROR nova return loadobj(APP, uri, name=name, 
**kw)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
2015-12-09 07:52:44.729 3354 ERROR nova return context.create()
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
2015-12-09 07:52:44.729 3354 ERROR nova return self.object_type.invoke(self)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
2015-12-09 07:52:44.729 3354 ERROR nova **context.local_conf)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
2015-12-09 07:52:44.729 3354 ERROR nova val = callable(*args, **kw)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/api/openstack/urlmap.py", line 160, in urlmap_factory
2015-12-09 07:52:44.729 3354 ERROR nova app = loader.get_app(app_name, 
global_conf=global_conf)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
2015-12-09 07:52:44.729 3354 ERROR nova name=name, 
global_conf=global_conf).create()
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
2015-12-09 07:52:44.729 3354 ERROR nova return self.object_type.invoke(self)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
2015-12-09 07:52:44.729 3354 ERROR nova **context.local_conf)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
2015-12-09 07:52:44.729 3354 ERROR nova val = callable(*args, **kw)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/api/auth.py", line 73, in pipeline_factory
2015-12-09 07:52:44.729 3354 ERROR nova return _load_pipeline(loader, 
pipeline)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/api/auth.py", line 57, in _load_pipeline
2015-12-09 07:52:44.729 3354 ERROR nova filters = [loader.get_filter(n) for 
n in pipeline[:-1]]
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 354, in 
get_filter
2015-12-09 07:52:44.729 3354 ERROR nova name=name, 
global_conf=global_conf).create()
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 366, in 
filter_context
2015-12-09 07:52:44.729 3354 ERROR nova FILTER, name=name, 
global_conf=global_conf)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, in 
get_context
2015-12-09 07:52:44.729 3354 ERROR nova section)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 517, in 
_context_from_explicit
2015-12-09 07:52:44.729 3354 ERROR nova value = import_string(found_expr)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 22, in 
import_string
2015-12-09 07:52:44.729 3354 ERROR nova return 
pkg_resources.EntryPoint.parse("x=" + s).load(False)
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2380, 
in load
2015-12-09 07:52:44.729 3354 ERROR nova return self.resolve()
2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 

[Yahoo-eng-team] [Bug 1524444] [NEW] auto-select rpc version info logging is too chatty

2015-12-09 Thread Matt Riedemann
Public bug reported:

This shows up 392 times in the n-api log run here:

"Automatically selected compute RPC version 4.5 from minimum service
version 2"

http://logs.openstack.org/92/253192/6/check/gate-grenade-
dsvm/76ad561/logs/new/screen-n-api.txt.gz#_2015-12-09_16_50_42_905

About 1.3 million times in gate runs in 24 hours:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22Automatically%20selected%20compute%20RPC%20version%5C%22%20AND%20message:%5C%22RPC%20version%5C%22

We should be smarter about logging that, or it could indicate there is
something wrong here in how often we're looking that up.
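
One way to make this quieter (a minimal sketch under assumed names, not the actual nova patch Dan is working on) is to remember the last version that was logged and only emit the INFO line when the selected version changes:

```python
import logging

LOG = logging.getLogger(__name__)

# Hypothetical module-level cache; not how nova actually fixed this.
_LAST_REPORTED = None


def report_selected_rpc_version(version, service_version):
    """Log the auto-selected compute RPC version only when it changes."""
    global _LAST_REPORTED
    selected = (version, service_version)
    if selected != _LAST_REPORTED:
        LOG.info('Automatically selected compute RPC version %s '
                 'from minimum service version %s', version, service_version)
        _LAST_REPORTED = selected
    else:
        # Repeats go to DEBUG so they stop flooding the n-api log.
        LOG.debug('Compute RPC version %s already reported', version)
```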

** Affects: nova
 Importance: Undecided
 Assignee: Dan Smith (danms)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524444

Title:
  auto-select rpc version info logging is too chatty

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This shows up 392 times in the n-api log run here:

  "Automatically selected compute RPC version 4.5 from minimum service
  version 2"

  http://logs.openstack.org/92/253192/6/check/gate-grenade-
  dsvm/76ad561/logs/new/screen-n-api.txt.gz#_2015-12-09_16_50_42_905

  About 1.3 million times in gate runs in 24 hours:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22Automatically%20selected%20compute%20RPC%20version%5C%22%20AND%20message:%5C%22RPC%20version%5C%22

  We should be smarter about logging that, or it could indicate there is
  something wrong here in how often we're looking that up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524081] Re: rbac extension has a bad updated timestamp

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254982
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6b43ef8ef721b27be99dc479a90e96ec08234469
Submitter: Jenkins
Branch:master

commit 6b43ef8ef721b27be99dc479a90e96ec08234469
Author: Kevin Benton 
Date:   Tue Dec 8 14:16:49 2015 -0800

Fix timestamp in RBAC extension

The previous timestamp had an invalid TZ offset.
This patch just sets it to UTC like the others.

Change-Id: I58689d2ae88979a1119475267998c09e18915083
Closes-Bug: #1524081


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524081

Title:
  rbac extension has a bad updated timestamp

Status in neutron:
  Fix Released

Bug description:
  The timestamp the rbac extension provides has a bad field for the
  timezone:
  
https://github.com/openstack/neutron/blob/c51f56f57b5cf67fc5e174b2d7a219990b666809/neutron/extensions/rbac.py#L100

  This could cause issues for clients that try to parse that date.
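
  For illustration only (the literal broken value from rbac.py is not reproduced here), a quick client-side parse shows why a malformed offset matters, while a plain UTC offset parses cleanly:

  ```python
  from datetime import datetime

  # Well-formed: date, time and a numeric UTC offset.
  print(datetime.fromisoformat("2015-09-30T10:00:00+00:00"))

  # Malformed offset (illustrative only): clients raise instead of parsing.
  try:
      datetime.fromisoformat("2015-09-30T10:00:00+00-00")
  except ValueError as exc:
      print("parse error:", exc)
  ```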

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503415] Re: We should not bypass bytes decode/encode

2015-12-09 Thread Akihiro Motoki
** Also affects: python-neutronclient/liberty
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient/liberty
   Status: New => Fix Committed

** Changed in: python-neutronclient/liberty
Milestone: None => 3.1.1

** Changed in: python-neutronclient/liberty
   Importance: Undecided => High

** Changed in: python-neutronclient/liberty
   Importance: High => Medium

** Tags removed: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503415

Title:
  We should not bypass bytes decode/encode

Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in python-neutronclient liberty series:
  Fix Committed

Bug description:
  The commit fcf289797c063088f9003359dfd1c7d4f41ed5ef[1] introduces the
  pattern:

  if six.PY3:
      if isinstance(line, bytes):
          try:
              line = line.decode(encoding='utf-8')
          except UnicodeError:
              pass
  # concat line with a string

  which does not work if a UnicodeError is raised: in that case line is
  not decoded, so it is never converted to a string and the
  concatenation with a string fails. We should ensure that line is
  always converted to a string.

  [1] 
https://github.com/openstack/python-neutronclient/commit/fcf289797c063088f9003359dfd1c7d4f41ed5ef
  [2] 
https://github.com/openstack/python-neutronclient/blob/db7eb557403da7c0e1eca7e12f6ddab6bc0d1fc1/neutronclient/tests/unit/test_cli20.py#L77-L81
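
  A minimal sketch of the kind of fix being asked for here (illustrative only, not the committed neutronclient change): never leave line as bytes, and fall back to a lossy decode so the later concatenation cannot fail:

  ```python
  import six

  def to_text(line):
      """Always return text, even when strict UTF-8 decoding fails."""
      if six.PY3 and isinstance(line, bytes):
          try:
              line = line.decode('utf-8')
          except UnicodeError:
              # Degrade gracefully instead of silently keeping bytes.
              line = line.decode('utf-8', errors='replace')
      return line

  print(to_text(b'fixed_ips') + ' column')
  ```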

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517818] Re: filters broken for rbac policy retrieval

2015-12-09 Thread Akihiro Motoki
** Also affects: python-neutronclient/liberty
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient/liberty
Milestone: None => 3.1.1

** Changed in: python-neutronclient/liberty
   Status: New => Fix Committed

** Changed in: python-neutronclient/liberty
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517818

Title:
  filters broken for rbac policy retrieval

Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in python-neutronclient liberty series:
  Fix Committed

Bug description:
  There is a policy left in RBAC. This policy was created by the admin user.
  Still using the same user, running neutron rbac-update [any values]
  returns an error.

  
  repro
  --
  neutron rbac-list
  
  +--------------------------------------+--------------------------------------+
  | id                                   | object_id                            |
  +--------------------------------------+--------------------------------------+
  | d14a977d-c19f-4bf5-abe1-d5820456385e | a80d09eb-9ef2-47a4-baac-90133894366a |
  +--------------------------------------+--------------------------------------+

  neutron rbac-update 222
  
-
  Conflict: RBAC policy on object a80d09eb-9ef2-47a4-baac-90133894366a cannot 
be removed because other objects depend on it.
  Details: Callback 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change failed 
with "Unable to reconfigure sharing settings for network 
a80d09eb-9ef2-47a4-baac-90133894366a. Multiple tenants are using it."
  log
  ---
  2015-11-19 10:05:43.024 ERROR neutron.callbacks.manager 
[req-99ef207b-7422-4bb7-a257-4c7ee00ee114 admin 
5d73438ed76a4399b8d2996a699146c5] Error during notification for 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change 
rbac-policy, before_update
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager Traceback (most 
recent call last):
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/callbacks/manager.py", line 141, in _notify_loop
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 142, in 
validate_network_rbac_policy_change
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager tenant_to_check = 
None
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 190, in 
ensure_no_tenant_ports_on_network
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager raise 
n_exc.InvalidSharedSetting(network=network_id)
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager InvalidSharedSetting: 
Unable to reconfigure sharing settings for network 
a80d09eb-9ef2-47a4-baac-90133894366a. Multiple tenants are using it.
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1517818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522577] Re: Endpoint create status code differs from docs

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255409
Committed: 
https://git.openstack.org/cgit/openstack/keystone-specs/commit/?id=3ecbedb96c7b028b064472bdd69139910c68e237
Submitter: Jenkins
Branch:master

commit 3ecbedb96c7b028b064472bdd69139910c68e237
Author: Samuel de Medeiros Queiroz 
Date:   Wed Dec 9 14:58:23 2015 -0300

Fix Create Endpoint API Status Code

The v3 API returns 200 OK on endpoint creation. This patch fixes our
documentation to make it consistent.

Closes-Bug: #1522577

Change-Id: I66c053f633b7e7ac38715e9d62c3cb2ecad2cc43


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1522577

Title:
  Endpoint create status code differs from docs

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  /v3 API returns 200 OK on endpoint creation. API documentation says it
  returns 201.

  As we can't fix the status code (according to the API working group
  guidelines [1]), we need to fix our docs.

  [1] http://specs.openstack.org/openstack/api-
  wg/guidelines/evaluating_api_changes.html#evaluating-api-changes

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1522577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524404] Re: grenade jobs on stable/liberty broken with oslo.middleware>=3

2015-12-09 Thread Matt Riedemann
Actually this is the right thing happening.  The change that's failing
is the upper-constraints change for stable/liberty:

https://review.openstack.org/#/c/246211/

So if we just never fix that, upper-constraints on stable/liberty is
frozen, which is maybe acceptable.

** Changed in: nova/liberty
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524404

Title:
  grenade jobs on stable/liberty broken with oslo.middleware>=3

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) liberty series:
  Invalid

Bug description:
  Seen here:

  http://logs.openstack.org/11/246211/19/check/gate-grenade-
  dsvm/06a815e/logs/new/screen-n-api.txt.gz?level=TRACE

  2015-12-09 07:52:44.729 3354 ERROR nova Traceback (most recent call last):
  2015-12-09 07:52:44.729 3354 ERROR nova   File "/usr/local/bin/nova-api", 
line 10, in 
  2015-12-09 07:52:44.729 3354 ERROR nova sys.exit(main())
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/cmd/api.py", line 55, in main
  2015-12-09 07:52:44.729 3354 ERROR nova server = service.WSGIService(api, 
use_ssl=should_use_ssl)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/service.py", line 328, in __init__
  2015-12-09 07:52:44.729 3354 ERROR nova self.app = 
self.loader.load_app(name)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/wsgi.py", line 543, in load_app
  2015-12-09 07:52:44.729 3354 ERROR nova return deploy.loadapp("config:%s" 
% self.config_path, name=name)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2015-12-09 07:52:44.729 3354 ERROR nova return loadobj(APP, uri, 
name=name, **kw)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2015-12-09 07:52:44.729 3354 ERROR nova return context.create()
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2015-12-09 07:52:44.729 3354 ERROR nova return 
self.object_type.invoke(self)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2015-12-09 07:52:44.729 3354 ERROR nova **context.local_conf)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2015-12-09 07:52:44.729 3354 ERROR nova val = callable(*args, **kw)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/api/openstack/urlmap.py", line 160, in urlmap_factory
  2015-12-09 07:52:44.729 3354 ERROR nova app = loader.get_app(app_name, 
global_conf=global_conf)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2015-12-09 07:52:44.729 3354 ERROR nova name=name, 
global_conf=global_conf).create()
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2015-12-09 07:52:44.729 3354 ERROR nova return 
self.object_type.invoke(self)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2015-12-09 07:52:44.729 3354 ERROR nova **context.local_conf)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2015-12-09 07:52:44.729 3354 ERROR nova val = callable(*args, **kw)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/api/auth.py", line 73, in pipeline_factory
  2015-12-09 07:52:44.729 3354 ERROR nova return _load_pipeline(loader, 
pipeline)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/opt/stack/new/nova/nova/api/auth.py", line 57, in _load_pipeline
  2015-12-09 07:52:44.729 3354 ERROR nova filters = [loader.get_filter(n) 
for n in pipeline[:-1]]
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 354, in 
get_filter
  2015-12-09 07:52:44.729 3354 ERROR nova name=name, 
global_conf=global_conf).create()
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 366, in 
filter_context
  2015-12-09 07:52:44.729 3354 ERROR nova FILTER, name=name, 
global_conf=global_conf)
  2015-12-09 07:52:44.729 3354 ERROR nova   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, in 
get_context
  2015-12-09 07:52:44.729 3354 ERROR nova section)
  2015-12-09 

[Yahoo-eng-team] [Bug 1449492] Re: Cinder not working with IPv6 ISCSI

2015-12-09 Thread Walt Boring
the os-brick patch is here:

https://review.openstack.org/#/c/234425/

** Changed in: os-brick
   Status: New => In Progress

** Changed in: os-brick
 Assignee: (unassigned) => Lukas Bezdicka (social-b)

** Changed in: os-brick
   Importance: Undecided => Medium

** Changed in: cinder
   Status: New => In Progress

** No longer affects: cinder

** Changed in: nova
 Assignee: Tony Breeds (o-tony) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449492

Title:
  Cinder not working with IPv6 ISCSI

Status in OpenStack Compute (nova):
  In Progress
Status in os-brick:
  In Progress

Bug description:
  Testing configuring Openstack completely with IPv6

  Found that IP address parsing was thrown off in a lot of cases by the need
  to have '[]' encasing the address (or not) for use with URLs, and by the
  parsing done by some user-space 3rd party C binaries - iscsiadm for
  example. Most of the others are best handled by using a name mapped to the
  IPv6 address in the /etc/hosts file; for iSCSI, though, that is not possible.

  Got Cinder working by setting iscsi_ip_address
  (/etc/cinder/cinder.conf) to '[$my_ip]' where my_ip is an IPv6 address
  like 2001:db08::1 (not an RFC documentation address?) and changing one
  line of Python in the nova virt/libvirt/volume.py code:

  
  --- nova/virt/libvirt/volume.py.orig2015-04-27 23:00:00.208075644 +1200
  +++ nova/virt/libvirt/volume.py 2015-04-27 22:38:08.938643636 +1200
  @@ -833,7 +833,7 @@
   def _get_host_device(self, transport_properties):
   """Find device path in devtemfs."""
   device = ("ip-%s-iscsi-%s-lun-%s" %
  -  (transport_properties['target_portal'],
  +  (transport_properties['target_portal'].replace('[','').replace(']',''),
  transport_properties['target_iqn'],
  transport_properties.get('target_lun', 0)))
   if self._get_transport() == "default":

  Nova-compute was looking for '/dev/disk/by-path/ip-[2001:db08::1]:3260
  -iscsi-iqn.2010-10.org.openstack:*' when there were no '[]' in the
  udev generated path

  This one can't be worked around by using the /etc/hosts file. iscsiadm
  and tgt need the IPv6 address wrapped in '[]', and iscsiadm uses it in
  output. The above patch could be matched with a bit in the cinder code
  that puts '[]' around iscsi_ip_address if it is not supplied.

  More work is obviously needed on a convention for writing IPv6 addresses
  in the Openstack configuration files, and there will be a lot of
  places where code will need to be tweaked.

  Let's start by fixing this blooper/low-hanging one first though, as it
  makes it possible to get Cinder working in a pure IPv6 environment.
  The above may be a bit of a hack, but only one code path needs
  adjustment...
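
  A small illustration of the mismatch (a hypothetical helper, not the nova code): iscsiadm reports the portal with brackets, but the udev by-path entry has none, so the brackets have to be stripped when building the device path:

  ```python
  def iscsi_host_device(target_portal, target_iqn, target_lun=0):
      # udev writes /dev/disk/by-path entries without '[]' around IPv6
      # portals, so strip them before composing the path.
      portal = target_portal.replace('[', '').replace(']', '')
      return ('/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s'
              % (portal, target_iqn, target_lun))

  print(iscsi_host_device('[2001:db08::1]:3260',
                          'iqn.2010-10.org.openstack:volume-1'))
  ```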

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518402] Re: Make MetadataProxyHandler configurable

2015-12-09 Thread Shih-Hao Li
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518402

Title:
  Make MetadataProxyHandler configurable

Status in neutron:
  Invalid

Bug description:
  Vendors may need additional processing when handling metadata
  requests.

  For example, a metadata request going through a router may not carry
  the router_id in the header. If the network_id in the request header is
  the routed-to network instead of the original network, metadata agent
  can not find the corresponding port via the _get_port function.

  This problem can be solved by providing vendor's own _get_port function
  that does additional lookups. To achieve this goal, MetadataProxyHandler
  class needs to be configurable.
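
  A rough sketch of what "configurable" could look like (purely illustrative: the option name is invented, and this is not what neutron did, since the request was closed as Invalid). The idea is to load the handler class from a config option so a vendor subclass can override _get_port:

  ```python
  from oslo_config import cfg
  from oslo_utils import importutils

  OPTS = [
      cfg.StrOpt('metadata_proxy_handler_class',
                 default='neutron.agent.metadata.agent.MetadataProxyHandler',
                 help='Handler class for metadata requests; a vendor may '
                      'point this at a subclass overriding _get_port().'),
  ]
  cfg.CONF.register_opts(OPTS)

  def load_metadata_handler(conf):
      handler_cls = importutils.import_class(conf.metadata_proxy_handler_class)
      return handler_cls(conf)
  ```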

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410494] Re: sqlalchemy-migrate>0.9.1 and <0.9.5 causes glance test failures

2015-12-09 Thread Kevin Carter
** No longer affects: openstack-ansible

** No longer affects: openstack-ansible/juno

** No longer affects: openstack-ansible/icehouse

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1410494

Title:
  sqlalchemy-migrate>0.9.1 and <0.9.5 causes glance test failures

Status in Glance:
  Fix Released
Status in sqlalchemy-migrate:
  Fix Committed

Bug description:
  The error is seen as

  OperationalError: (OperationalError) cannot start a transaction within
  a transaction u'/*\n * This is necessary because SQLite does not
  support\n * RENAME INDEX or ALTER TABLE CHANGE COLUMN.\n */\nBEGIN
  TRANSACTION;' ()

  More info at:
  http://paste.openstack.org/show/157276/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1410494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524515] [NEW] get sql-based Domain-specific driver configuration with incorrect group in URL, expected response 404, actual 403

2015-12-09 Thread Thomas Hsiao
Public bug reported:

get sql-based Domain-specific driver configuration with incorrect group
in URL, expected response 404, actual 403:

With an sql-based domain-specific driver configuration set up to connect to an
OpenLDAP or AD backend for a domain, if an invalid/typo group name (e.g.
[identity2] instead of [identity]) is provided in the request URL for this
domain, we expect the response code 404 (Not Found), but the actual response is
403 (Forbidden). The user actually has permission to access the configuration,
so 403 Forbidden seems misleading.

Example:
~$ curl -k -H "X-Auth-Token:ADMIN" -XDELETE 
http://localhost:35357/v3/domains/6a006689702640ba92d5e536b238e893/config/invalidgroup

Actual:
{"error": {"message": "Invalid domain specific configuration: Group identity2 
is not supported for domain specific configurations", "code": 403, "title": 
"Forbidden"}}

Expected:
~$ curl -k -H "X-Auth-Token:ADMIN" -XDELETE 
http://localhost:35357/v3/domains/6a006689702640ba92d5e536b238e893/config/identity2
{"error": {"message": "Invalid domain specific configuration: Group identity2 
is not supported for domain specific configurations", "code": 404, "title": 
"Not Found"}}

** Affects: keystone
 Importance: Undecided
 Assignee: Thomas Hsiao (thomas-hsiao)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Thomas Hsiao (thomas-hsiao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1524515

Title:
  get sql-based Domain-specific driver configuration with incorrect
  group in URL, expected response 404, actual 403

Status in OpenStack Identity (keystone):
  New

Bug description:
  get sql-based Domain-specific driver configuration with incorrect
  group in URL, expected response 404, actual 403:

  With an sql-based domain-specific driver configuration set up to connect
  to an OpenLDAP or AD backend for a domain, if an invalid/typo group name
  (e.g. [identity2] instead of [identity]) is provided in the request URL
  for this domain, we expect the response code 404 (Not Found), but the
  actual response is 403 (Forbidden). The user actually has permission to
  access the configuration, so 403 Forbidden seems misleading.

  Example:
  ~$ curl -k -H "X-Auth-Token:ADMIN" -XDELETE 
http://localhost:35357/v3/domains/6a006689702640ba92d5e536b238e893/config/invalidgroup

  Actual:
  {"error": {"message": "Invalid domain specific configuration: Group identity2 
is not supported for domain specific configurations", "code": 403, "title": 
"Forbidden"}}

  Expected:
  ~$ curl -k -H "X-Auth-Token:ADMIN" -XDELETE 
http://localhost:35357/v3/domains/6a006689702640ba92d5e536b238e893/config/identity2
  {"error": {"message": "Invalid domain specific configuration: Group identity2 
is not supported for domain specific configurations", "code": 404, "title": 
"Not Found"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1524515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524510] [NEW] Remove Neutron FWaaS static example configuration files

2015-12-09 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/254698
commit 6713d0ac33a8a6355b4263bde01d55c01636039b
Author: Martin Hickey 
Date:   Tue Dec 8 11:18:04 2015 +

Remove Neutron FWaaS static example configuration files

Oslo config generator was introduced in patch [1] to
automatically generate the sample Neutron FWaaS configuration
files.

This patch removes the static example configuration files from
the repository as they are now redundant.

[1] https://review.openstack.org/#/c/251974/

DocImpact: Update the docs that FWaaS no longer includes static example
configuration files. Instead, use tools/generate_config_file_samples.sh
to generate them and the files generated now end with .sample extension.

Change-Id: I31be3295606ba25929e9af9f40a035ff2b615234
Partially-Implements: blueprint autogen-neutron-conf-file
Partial-bug: #1199963
Depends-On: Ic8208850a27408c8fbeed80ecdb43345aa7dfaa4

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524510

Title:
  Remove Neutron FWaaS static example configuration files

Status in neutron:
  New

Bug description:
  https://review.openstack.org/254698
  commit 6713d0ac33a8a6355b4263bde01d55c01636039b
  Author: Martin Hickey 
  Date:   Tue Dec 8 11:18:04 2015 +

  Remove Neutron FWaaS static example configuration files
  
  Oslo config generator was introduced in patch [1] to
  automatically generate the sample Neutron FWaaS configuration
  files.
  
  This patch removes the static example configuration files from
  the repository as they are now redundant.
  
  [1] https://review.openstack.org/#/c/251974/
  
  DocImpact: Update the docs that FWaaS no longer includes static example
  configuration files. Instead, use tools/generate_config_file_samples.sh
  to generate them and the files generated now end with .sample extension.
  
  Change-Id: I31be3295606ba25929e9af9f40a035ff2b615234
  Partially-Implements: blueprint autogen-neutron-conf-file
  Partial-bug: #1199963
  Depends-On: Ic8208850a27408c8fbeed80ecdb43345aa7dfaa4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512744] Re: Unable to retrieve LDAP domain user and group list on Horizon.

2015-12-09 Thread Max Yatsenko
The bug was not reproduced; for this reason the status will be set to
"Invalid".

** Changed in: fuel-plugins
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1512744

Title:
   Unable to retrieve LDAP domain user and group list on Horizon.

Status in Fuel Plugins:
  Invalid
Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  I'm using openstack 7.0 with LDAP plugin "
  ldap-1.0-1.0.0-1.noarch.rpm"

  I need to add an LDAP user to a new project in the keystone.tld domain.
  Project creation on this domain is working fine, but when I tried to add
  LDAP users to this project I see an error: "Unable to retrieve LDAP domain
  user/group list"

  https://screencloud.net/v/xS09

  I cannot use a user unless I add them to the project.

  With version 1.0.0 of the LDAP plugin this was working fine, without
  critical problems.

  When I use the CLI, I see the error:
  openstack --os-auth-url http://172.16.0.3:5000/v3 --os-username Administrator 
--os-password Pass1234 --os-user-domain-name keystone.tld  user list
  ERROR: openstack Expecting to find domain in project - the server could not 
comply with the request since it is either malformed or otherwise incorrect. 
The client is assumed to be in error. (HTTP 400) (Request-ID: 
req-8f456d5d-afba-4289-957a-4eed91ee75cc)

  Log message in the Fuel UI (get all available users on the project):
  GET 
http://192.168.0.2:35357/v3/users?domain_id=19bca8582eae47b891e6b9d45fd6225b_project_id=ae96f8daec6c405a9e3b5d509a39db83
 HTTP/1.1" 500 143 keystoneclient.session: DEBUG: RESP: keystoneclient.session: 
DEBUG: Request returned failure status: 500

  The Mirantis LDAP server 172.16.57.146 is working fine.

  My LDAP settings:

  [ldap]
  suffix=dc=keystone,dc=tld
  query_scope=sub
  user_id_attribute=cn
  user=cn=Administrator,cn=Users,dc=keystone,dc=tld
  user_objectclass=person
  user_name_attribute=cn
  password=Pass1234
  user_allow_delete=False
  user_tree_dn=dc=keystone,dc=tld
  user_pass_attribute=userPassword
  user_enabled_attribute=enabled
  user_allow_create=False
  user_allow_update=False
  user_filter=
  url=ldap://172.16.57.146

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel-plugins/+bug/1512744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489627] Re: Incorrect use of os.path.join() in nova/api/openstack/common.py

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/218309
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=7a09f727f3027e4f0d54d99f1579f6595720d094
Submitter: Jenkins
Branch:master

commit 7a09f727f3027e4f0d54d99f1579f6595720d094
Author: EdLeafe 
Date:   Thu Aug 27 21:54:12 2015 +

Replace os.path.join() for URLs

Since os.path.join() is OS-dependent, it should not be used for creating
URLs. This patch replaces the use of os.path.join() in
nova/api/openstack with common.url_join(), which uses the more correct
"/".join(), while preserving the behavior of removing duplicate slashes
inside the URL and adding a trailing slash with an empty final element.
It also adds some tests to ensure that the generate_href() method in
nova/api/openstack/compute/views/versions.py works after the refactoring
to use common.url_join()

Closes-Bug: 1489627
Change-Id: I32948dd1fcf0839b34e446d9e4b08f9c39d17c8f


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489627

Title:
  Incorrect use of os.path.join() in nova/api/openstack/common.py

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Three of the link manipulation methods in nova/api/openstack/common.py
  rejoin the URL parts by using os.path.join(). This is incorrect, as it
  is OS-dependent, and can result in invalid URLs under Windows.
  Generally the urlparse module would be the best choice, but since
  these URL fragments aren't created with urlparse.urlparse() or
  urlparse.urlsplit(), the equivalent reconstruction methods in that
  module won't work. It is simpler and cleaner to just use "/".join().
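
  A minimal sketch of such a helper (illustrative; it follows the intent described above rather than reproducing the committed nova code): join with "/", strip slashes between parts so duplicates disappear, and keep a trailing slash when the final element is empty:

  ```python
  def url_join(*parts):
      """OS-independent URL join, unlike os.path.join()."""
      parts = parts or ['']
      # Strip leading/trailing slashes on each part to avoid duplicates.
      cleaned = [p.strip('/') for p in parts if p]
      if not parts[-1]:
          cleaned.append('')  # empty final element => trailing slash
      return '/'.join(cleaned)

  print(url_join('http://server/', '/v2/', 'servers'))   # http://server/v2/servers
  print(url_join('http://server', 'v2', 'servers', ''))  # http://server/v2/servers/
  ```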

  Additionally, there are no unit tests for these methods, so tests will
  have to be added first before we can fix the methods, so that we have
  some assurance that we are not breaking anything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524164] Re: Decompose OFAgent mechanism driver from neutron tree completely

2015-12-09 Thread fumihiko kakuma
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524164

Title:
  Decompose OFAgent mechanism driver from neutron tree completely

Status in networking-ofagent:
  In Progress
Status in neutron:
  New

Bug description:
  All 3rd-party code is required to be removed from the neutron tree [1].
  We move the definition of the ofagent mechanism driver from the neutron
  tree to networking-ofagent.

  [1] http://docs.openstack.org/developer/neutron/devref/contribute.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1524164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523224] Re: nova.api unexpected exception glanceclient.exc.HTTPInternalServerError

2015-12-09 Thread aginwala
Cool.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523224

Title:
  nova.api unexpected exception glanceclient.exc.HTTPInternalServerError

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  damo@controller01:~$ dpkg -l | grep nova
  ii  nova-api2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - API frontend
  ii  nova-cert   2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - certificate management
  ii  nova-common 2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - common files
  ii  nova-conductor  2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - conductor service
  ii  nova-consoleauth2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy 2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler  2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute - virtual machine scheduler
  ii  python-nova 2:12.0.0-0ubuntu2~cloud0  
all  OpenStack Compute Python libraries
  ii  python-novaclient   2:2.30.1-1~cloud0 
all  client library for OpenStack Compute API
  damo@controller01:~$

  Installing OpenStack as per the OpenStack Documentation for Ubuntu and
  page http://docs.openstack.org/liberty/install-guide-ubuntu/nova-
  verify.html when the last command is executed the following occurs:

  damo@controller01:~$ nova image-list
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-2294aff0-9d05-448c-80ae-daaa9d24785f)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524568] [NEW] Automatically generate neutron LBaaS configuration files

2015-12-09 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/252981
commit e719861c00ab1e50e271c3bcdbc0b9130353d2d4
Author: Martin Hickey 
Date:   Thu Dec 3 14:39:29 2015 +

Automatically generate neutron LBaaS configuration files

This adds a new tox environment, genconfig, which generates sample
neutron LBaaS configuration file using oslo-config-generator.

DocImpact: Update the docs that LBaaS no longer includes static example
configuration files. Instead, use tools/generate_config_file_samples.sh
to generate them and the files generated now end with .sample extension.

Partially-Implements: blueprint autogen-neutron-conf-file

Change-Id: I25507f3bc6e995580aa91a912c2cf4110757df15
Partial-bug: #1199963

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524568

Title:
  Automatically generate neutron LBaaS configuration files

Status in neutron:
  New

Bug description:
  https://review.openstack.org/252981
  commit e719861c00ab1e50e271c3bcdbc0b9130353d2d4
  Author: Martin Hickey 
  Date:   Thu Dec 3 14:39:29 2015 +

  Automatically generate neutron LBaaS configuration files
  
  This adds a new tox environment, genconfig, which generates sample
  neutron LBaaS configuration file using oslo-config-generator.
  
  DocImpact: Update the docs that LBaaS no longer includes static example
  configuration files. Instead, use tools/generate_config_file_samples.sh
  to generate them and the files generated now end with .sample extension.
  
  Partially-Implements: blueprint autogen-neutron-conf-file
  
  Change-Id: I25507f3bc6e995580aa91a912c2cf4110757df15
  Partial-bug: #1199963

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516469] Re: endpoints not show correctly when using "endpoint_filter.sql" as catalog's backend driver

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/250032
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=f86448a3113fc594e78d3d9410f44c1f64a9ad58
Submitter: Jenkins
Branch:master

commit f86448a3113fc594e78d3d9410f44c1f64a9ad58
Author: Dave Chen 
Date:   Thu Nov 26 05:39:59 2015 +0800

Ensure endpoints returned is filtered correctly

This patch moves some logic to the manager layer, so that endpoints
filtered by endpoint_group project association will be included
in the catalog when a project scoped token is issued and
`endpoint_filter.sql` is used as the catalog's backend driver.

This makes sure that calling the `list_endpoints_for_project` API
returns the same endpoints as the catalog returned for a project
scoped token.

Change-Id: I56f4eb6fc524650677b627295dd4338d55164c39
Closes-Bug: #1516469


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1516469

Title:
  endpoints not show correctly when using "endpoint_filter.sql" as
  catalog's backend driver

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  If an endpoint group project association has been created and
  "endpoint_filter.sql" is set as the catalog's backend driver, all of the
  endpoints that are associated with the project and match the criteria
  defined in the "endpoint group" should be returned once a project scoped
  token is issued.

  But currently, those endpoints can *only* be shown by calling the
  `list_endpoints_for_project` API explicitly via curl; they are not
  returned when the project scoped token is issued.

  Steps to reproduce this issue.

  -Create endpoint group.

  $curl -g -i -X POST http://10.239.48.36:5000/v3/OS-EP-
  FILTER/endpoint_groups -H "X-Auth-
  Token:a85e07129aa54f61a46395543a3146af" -H "Content-Type:
  application/json" -d '{"endpoint_group": {"description": "endpoint
  group description", "filters": {"interface": "admin"}, "name":
  "endpoint_group_name"}}'

  - Create endpoint_group project association

  $curl -g -i -X PUT http://10.239.48.36:5000/v3/OS-EP-
  
FILTER/endpoint_groups/ea1af6e153bf4b87a88b5962de8cdae8/projects/927e252fb44d4b5cac9d4fb24d85be41
  -H "X-Auth-Token:a85e07129aa54f61a46395543a3146af" -H "Content-Type:
  application/json"

  - Get endpoint for the project, this will return all of the endpoints
  matched the rule defined in the endpoint group.

  $curl -g -i -X GET 
http://10.239.48.36:5000/v3/OS-EP-FILTER/projects/927e252fb44d4b5cac9d4fb24d85be41/endpoints
 -H "X-Auth-Token:a85e07129aa54f61a46395543a3146af" -H "Content-Type: 
application/json"
  ...
  {
  "endpoints": [
  {
  "region_id": "RegionOne",
  "links": {
  "self": 
"http://10.239.48.36:5000/v3/endpoints/3f6fb8738db8427a997dbcc791b7901d;
  },
  "url": "http://10.239.48.36:8773/;,
  "region": "RegionOne",
  "enabled": true,
  "interface": "admin",
  "service_id": "a3338a6847e94766831ea7d9d43598cc",
  "id": "3f6fb8738db8427a997dbcc791b7901d"
  },
  {
  "region_id": "RegionOne",
  "links": {
  "self": 
"http://10.239.48.36:5000/v3/endpoints/dd69f161f8a24612a7ffe796b45b8cd2;
  },
  "url": "http://10.239.48.36:8774/v2.1/$(tenant_id)s",
  "region": "RegionOne",
  "enabled": true,
  "interface": "admin",
  "service_id": "a147aa8896c4429aacf0f2eefd39098e",
  "id": "dd69f161f8a24612a7ffe796b45b8cd2"
  },
  {
  "region_id": "RegionOne",
  "links": {
  "self": 
"http://10.239.48.36:5000/v3/endpoints/0d70f9fd5a85446c99fee79388adf9dc;
  },
  "url": "http://10.239.48.36:9292;,
  "region": "RegionOne",
  "enabled": true,
  "interface": "admin",
  "service_id": "4c367805e2a147589a14310d1486ab01",
  "id": "0d70f9fd5a85446c99fee79388adf9dc"
  },
  {
  "region_id": null,
  "links": {
  "self": 
"http://10.239.48.36:5000/v3/endpoints/5be3023ddf984fcf942b2a396eb0167b;
  },
  "url": "http://127.0.0.0:20;,
  "region": null,
  "enabled": true,
  "interface": "internal",
  "service_id": "69da5bbf65aa4565b9833655075e7a8a",
  "id": "5be3023ddf984fcf942b2a396eb0167b"
  },
  {
  "region_id": "RegionOne",
  "links": {
  "self": 
"http://10.239.48.36:5000/v3/endpoints/9393be9c7eda41d89a28f2ffb486dc7c;
  },
  "url": "http://10.239.48.36:35357/v2.0;,
  

[Yahoo-eng-team] [Bug 1524576] [NEW] Enhance ML2 Type Driver interface to include network ID

2015-12-09 Thread Sukhdev Kapur
Public bug reported:

An enhancement is needed to the ML2 Type Driver interface to include
network ID. This allows the vendors the flexibility on their back-end
systems to group or associate resources based upon the network ID.

** Affects: neutron
 Importance: Undecided
 Assignee: Sukhdev Kapur (sukhdev-8)
 Status: In Progress


** Tags: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524576

Title:
  Enhance ML2 Type Driver interface to include network ID

Status in neutron:
  In Progress

Bug description:
  An enhancement is needed to the ML2 Type Driver interface to include
  network ID. This allows the vendors the flexibility on their back-end
  systems to group or associate resources based upon the network ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460150] Re: no way to get v3 openrc file

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/186846
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=46a739531f9c2eb55c65000eca6b5737b16145ab
Submitter: Jenkins
Branch:master

commit 46a739531f9c2eb55c65000eca6b5737b16145ab
Author: David Lyle 
Date:   Fri May 29 11:10:12 2015 -0600

Adding download for openrc file for keystone v3

The existing openrc file download only works for keystone v2,
regardless of whether v3 is enabled in Horizon.

This adds support for both: a v2.0 and a v3 compatible openrc file
download. A couple of different situations are covered.

1) support for keystone v2 only: OPENSTACK_API_VERSION={'identity': 2.0}
In this case only the v2 option is shown.

2) Use of keystone v3 in a potentially mixed environment. Since it is
possible to use keystone v2 and v3 in the same environment, having
OPENSTACK_API_VERSION={'identity': 3} displays options for downloading
v2 or v3 compatible openrc files.

Rationale for making the existing methods and urls support v3+. By
moving the v2.0 functionality to new version specific methods, they can
be more easily excised when v2 is obsolete and we're left with the newer
version support.

Change-Id: I29c62dc7436cc39adc1a4af9d90ceb6767e7a177
Closes-Bug: #1460150


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460150

Title:
  no way to get v3 openrc file

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The is currently no way to get a keystone v3 ready version of the
  openrc file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524567] Re: nova-manage vm list error

2015-12-09 Thread aginwala
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524567

Title:
  nova-manage vm list error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  
  ==> /var/log/nova/nova-manage.log <==
  2015-12-09 11:57:04.173 10154 CRITICAL nova 
[req-95df183e-c4b7-4a01-974e-a5a7ca236c34 - - - - -] AttributeError: 'NoneType' 
object has no attribute 'name'
  2015-12-09 11:57:04.173 10154 TRACE nova Traceback (most recent call last):
  2015-12-09 11:57:04.173 10154 TRACE nova   File "/usr/bin/nova-manage", line 
10, in 
  2015-12-09 11:57:04.173 10154 TRACE nova sys.exit(main())
  2015-12-09 11:57:04.173 10154 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1382, in main
  2015-12-09 11:57:04.173 10154 TRACE nova ret = fn(*fn_args, **fn_kwargs)
  2015-12-09 11:57:04.173 10154 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 688, in list
  2015-12-09 11:57:04.173 10154 TRACE nova instance_type.name,
  2015-12-09 11:57:04.173 10154 TRACE nova AttributeError: 'NoneType' object 
has no attribute 'name'
  2015-12-09 11:57:04.173 10154 TRACE nova

  ==> /var/log/nova/nova-scheduler.log <==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524531] [NEW] ip_lib_force_root doesn't force root, breaking neutron on XenServer compute nodes

2015-12-09 Thread Tom Carroll
Public bug reported:

Version: Liberty
Compute hypervisor: XenServer 6.5
Compute vm: Ubuntu 14.04.3

With this option set, it is documented that all ip_lib commands should
be executed with the assistance of the defined root_helper. This does
not occur. This is necessary as root_helper executes commands in the
Dom0 context.
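
Purely as an illustration of the intended semantics (the names below are invented; the attached ip_lib.patch2 carries the real fix): when ip_lib_force_root is set, every ip command should be wrapped with the configured root helper, because on XenServer the helper is what reaches Dom0:

```python
def build_ip_command(args, force_root, namespace=None,
                     root_helper='neutron-rootwrap /etc/neutron/rootwrap.conf'):
    """Illustrative only: always prepend the root helper when forced."""
    cmd = ['ip'] + list(args)
    if namespace:
        cmd = ['ip', 'netns', 'exec', namespace] + cmd
    if force_root:
        cmd = root_helper.split() + cmd
    return cmd


print(build_ip_command(['link', 'show'], force_root=True))
```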

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: xenserver

** Patch added: "ip_lib.patch2"
   
https://bugs.launchpad.net/bugs/1524531/+attachment/4532189/+files/ip_lib.patch2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524531

Title:
  ip_lib_force_root doesn't force root, breaking neutron on XenServer
  compute nodes

Status in neutron:
  New

Bug description:
  Version: Liberty
  Compute hypervisor: XenServer 6.5
  Compute vm: Ubuntu 14.04.3

  With this option set, it is documented that all ip_lib commands should
  be executed with the assistance of the defined root_helper. This does
  not occur. This is necessary as root_helper executes commands in the
  Dom0 context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506356] Re: There is no "[vnc]" option group in nova.conf.sample

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/235396
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=68200d7585c384adb8a688376cc8e5e013395a34
Submitter: Jenkins
Branch:master

commit 68200d7585c384adb8a688376cc8e5e013395a34
Author: Shunya Kitada 
Date:   Thu Oct 15 23:33:32 2015 +0900

Add "vnc" option group for sample nova.conf file

There is no "[vnc]" section in nova.conf.sample generated by
command "tox -egenconfig".
In addition, the "[default]" section has vnc options.

This patch moves vnc options from "[default]" section to
"[vnc]" section.

Change-Id: I5cf69729aa9e2bb868f26b82eaaa28187ce7a7a3
Closes-Bug: #1506356


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506356

Title:
  There is no "[vnc]" option group in nova.conf.sample

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I tried to generate the sample nova.conf file by running the following.
  $ tox -egenconfig

  But there is no "[vnc]" option group in nova.conf.sample.

  The "[vnc]" option group is defined in "vnc/__init__.py",
  but the "nova.vnc" namespace is not defined in
  "etc/nova/nova-config-generator.conf".

  vnc/__init__.py
  ```
  vnc_opts = [
   cfg.StrOpt('novncproxy_base_url',
  default='http://127.0.0.1:6080/vnc_auto.html',
  help='Location of VNC console proxy, in the form '
   '"http://127.0.0.1:6080/vnc_auto.html;',
  deprecated_group='DEFAULT',
  deprecated_name='novncproxy_base_url'),
  ...
  ]

  CONF = cfg.CONF
  CONF.register_opts(vnc_opts, group='vnc')
  ```

  
  I resolved this, following 3 steps.
  Not sure if this is the correct fix or not.

  1. Define "nova.vnc" namespace in "etc/nova/nova-config-generator.conf",
  ```
 [DEFAULT]
 output_file = etc/nova/nova.conf.sample
 ...
 namespace = nova.virt
   > namespace = nova.vnc
 namespace = nova.openstack.common.memorycache
 ...
  ```

  
  2. Define "nova.vnc" entry_point in setup.cfg.

  ```
 [entry_points]
 oslo.config.opts =
 nova = nova.opts:list_opts
 nova.api = nova.api.opts:list_opts
 nova.cells = nova.cells.opts:list_opts
 nova.compute = nova.compute.opts:list_opts
 nova.network = nova.network.opts:list_opts
 nova.network.neutronv2 = nova.network.neutronv2.api:list_opts
 nova.scheduler = nova.scheduler.opts:list_opts
 nova.virt = nova.virt.opts:list_opts
   > nova.vnc = nova.vnc.opts:list_opts
   ...
  ```

  
  3. Create "nova/vnc/opts.py".

  ```
  # Licensed under the Apache License, Version 2.0 (the "License"); you may not
  # use this file except in compliance with the License. You may obtain a copy
  # of the License at
  #
  # http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  import nova.vnc

  
  def list_opts():
  return [
  ('vnc', nova.vnc.vnc_opts),
  ]
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524567] [NEW] nova-manage vm list error

2015-12-09 Thread Kevin Fox
Public bug reported:


==> /var/log/nova/nova-manage.log <==
2015-12-09 11:57:04.173 10154 CRITICAL nova 
[req-95df183e-c4b7-4a01-974e-a5a7ca236c34 - - - - -] AttributeError: 'NoneType' 
object has no attribute 'name'
2015-12-09 11:57:04.173 10154 TRACE nova Traceback (most recent call last):
2015-12-09 11:57:04.173 10154 TRACE nova   File "/usr/bin/nova-manage", line 
10, in 
2015-12-09 11:57:04.173 10154 TRACE nova sys.exit(main())
2015-12-09 11:57:04.173 10154 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1382, in main
2015-12-09 11:57:04.173 10154 TRACE nova ret = fn(*fn_args, **fn_kwargs)
2015-12-09 11:57:04.173 10154 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 688, in list
2015-12-09 11:57:04.173 10154 TRACE nova instance_type.name,
2015-12-09 11:57:04.173 10154 TRACE nova AttributeError: 'NoneType' object has 
no attribute 'name'
2015-12-09 11:57:04.173 10154 TRACE nova

==> /var/log/nova/nova-scheduler.log <==

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524567

Title:
  nova-manage vm list error

Status in OpenStack Compute (nova):
  New

Bug description:
  
  ==> /var/log/nova/nova-manage.log <==
  2015-12-09 11:57:04.173 10154 CRITICAL nova 
[req-95df183e-c4b7-4a01-974e-a5a7ca236c34 - - - - -] AttributeError: 'NoneType' 
object has no attribute 'name'
  2015-12-09 11:57:04.173 10154 TRACE nova Traceback (most recent call last):
  2015-12-09 11:57:04.173 10154 TRACE nova   File "/usr/bin/nova-manage", line 
10, in 
  2015-12-09 11:57:04.173 10154 TRACE nova sys.exit(main())
  2015-12-09 11:57:04.173 10154 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1382, in main
  2015-12-09 11:57:04.173 10154 TRACE nova ret = fn(*fn_args, **fn_kwargs)
  2015-12-09 11:57:04.173 10154 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 688, in list
  2015-12-09 11:57:04.173 10154 TRACE nova instance_type.name,
  2015-12-09 11:57:04.173 10154 TRACE nova AttributeError: 'NoneType' object 
has no attribute 'name'
  2015-12-09 11:57:04.173 10154 TRACE nova

  ==> /var/log/nova/nova-scheduler.log <==
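
  The trace shows the listing code reading instance_type.name for an instance
  whose flavor record could not be loaded. A minimal sketch of the kind of
  guard that would avoid the crash (the helper name and placeholder text are
  illustrative, not the actual nova-manage code):

  ```
  # Hypothetical guard: tolerate a missing flavor instead of raising
  # AttributeError when printing the flavor column.
  def flavor_name_or_placeholder(instance_type):
      """Return the flavor name, or a placeholder when the flavor is missing."""
      if instance_type is None:
          return '(flavor not found)'
      return instance_type.name


  class FakeFlavor(object):
      name = 'm1.small'


  print(flavor_name_or_placeholder(FakeFlavor()))  # m1.small
  print(flavor_name_or_placeholder(None))          # (flavor not found)
  ```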

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521793] Re: l3ha with L2pop disabled breaks neutron

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252100
Committed: 
https://git.openstack.org/cgit/openstack/openstack-ansible/commit/?id=00609ea56e09f5dd0488062b64a03420d296e614
Submitter: Jenkins
Branch:master

commit 00609ea56e09f5dd0488062b64a03420d296e614
Author: Kevin Carter 
Date:   Tue Dec 1 17:35:06 2015 -0600

Fix neutron issue w/ l2pop

This change resolves an issue where neutron is not able to bind to
a given port because l2 population is disabled and no vxlan multicast
group has been defined. To resolve this the `neutron_l2_population`
variable is being defined and set to "False" in the os_neutron defaults
and the vxlan multicast group will now contain a default value instead
of an empty string.

The change also removes the neutron_l2_population checks in the tasks
and templates because the variable is now being defined.

Change-Id: Ic2973626d88781bfc67a4275afcf9feffeb63f36
Closes-Bug: #1521793
Co-Authored-by: Ville Vuorinen 
Signed-off-by: Kevin Carter 


** Changed in: openstack-ansible
   Status: In Progress => Fix Released

** Changed in: openstack-ansible/liberty
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521793

Title:
  l3ha with L2pop disabled breaks neutron

Status in neutron:
  New
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible liberty series:
  In Progress
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  When using L3HA, the system will fail to build a VM under most
  circumstances if L2 population is disabled. To resolve this issue the
  variable `neutron_l2_population` should be set to "true" by default. The
  current train of thought was that we'd use L3HA by default; however, due
  to current differences in the neutron Linux bridge agent that seems
  impossible and will require additional upstream work within neutron. In
  the near term we should re-enable l2pop by default and effectively
  disable the built-in L3HA.

  This issue was reported in the channel by @Ville Vuorinen (IRC: kysse),
  see 
http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html
 from 18:47 onwards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491120] Re: L3 agent failover does not work

2015-12-09 Thread Miguel Lavalle
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491120

Title:
  L3 agent failover does not work

Status in neutron:
  Fix Released

Bug description:
  Based on Juno GA release bits, testing the L3 agent failover feature by
  adding the following line to the neutron.conf file:
  allow_automatic_l3agent_failover = True

  If we create a couple of routers and networks, and shut down the network
  node with a router namespace on it, the router namespace will fail over
  to another network node (3 in total in my case) after 10 minutes.

  If we create 10+ routers and networks associated with each other,
  failover does not happen at all when we shut down any network node. No
  routers get migrated to another network node, and there are no errors in
  the neutron logs.

  We have tested this on 3 different systems, none of them configured with
  DVR.

  BTW, is there a way to configure how long failover takes? Sometimes 10
  minutes is a little too long.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524562] [NEW] No error raised if PUT/GET/PATCH/DELETE domain-specific driver configuration database store with an invalid domain id

2015-12-09 Thread Thomas Hsiao
Public bug reported:

No error is raised on PUT/GET/PATCH/DELETE of the SQL-based domain driver
configuration with an invalid domain id:

For the domain-specific driver configuration database store, the Identity API
creates the configuration options in the database even though the domain id
provided in the request URL is invalid.
For example, a user can create config options using an invalid domain id
(123456789) as shown below:

~$ curl -s \
>   -H "X-Auth-Token: ADMIN" \
>   -H "Content-Type: application/json" \
>   -d '
> {
>"config":{
>   "identity":{
>  "driver":"ldap"
>   },
>   "ldap":{
>  .
>  "tls_req_cert":"demand",
>  "user_tree_dn":"ou=Users50,dc=cdl,dc=hp,dc=com",
>  "group_allow_update":"False"
>   }
>}
> } ' \
>   -XPUT "http://localhost:35357/v3/domains/123456789/config/;

{"config": {"identity": {"driver":
"keystone.identity.backends.ldap.Identity"}, "ldap":
{"user_allow_update": "False", "user_name_attribute": "cn",
"use_pool": "True", "user_objectclass": "posixAccount",
"group_id_attribute": "gidNumber", "user_allow_create": "False",
"tls_req_cert": "demand"...}}}

Once the config options are created in the database, the user can even use
this invalid domain id to get/update/delete the config options, as shown in
the example below:

~$ curl -k -H "X-Auth-Token:ADMIN"
http://localhost:35357/v3/domains/123456789/config/

{"config": {"identity": {"driver":
"keystone.identity.backends.ldap.Identity"}, "ldap":
{"user_allow_update": "False", "group_allow_delete": "False",
"group_name_attribute": "cn", "suffix": "dc=cdl,dc=hp,dc=com", ..,
"group_allow_update": "False"...}}}

** Affects: keystone
 Importance: Undecided
 Assignee: Thomas Hsiao (thomas-hsiao)
 Status: New

** Summary changed:

- No error raised if PUT/GET/PATCH/DELETE sql-based domain driver configuration 
with a invalid domain id
+ No error raised if PUT/GET/PATCH/DELETE  domain-specific driver configuration 
database store with an invalid domain id

** Description changed:

  No error raised if PUT/GET/PATCH/DELETE sql-based domain driver
  configuration with a invalid domain id:
  
- For domain-specific driver configuration database store, Identity API creates 
the configuration options into the database even when the provided domain id is 
the url is invalid.
+ For domain-specific driver configuration database store, Identity API creates 
the configuration options into the database even though the provided domain id 
is the the request url is invalid.
  For example, a user can create config options using an invalid domain id 
(123456789) as shown below:
  
  ~$ curl -s \
  >   -H "X-Auth-Token: ADMIN" \
  >   -H "Content-Type: application/json" \
  >   -d '
  > {
  >"config":{
  >   "identity":{
  >  "driver":"ldap"
  >   },
  >   "ldap":{
  >  .
  >  "tls_req_cert":"demand",
  >  "user_tree_dn":"ou=Users50,dc=cdl,dc=hp,dc=com",
  >  "group_allow_update":"False"
  >   }
  >}
  > } ' \
  >   -XPUT "http://localhost:35357/v3/domains/123456789/config/;
  
  {"config": {"identity": {"driver":
  "keystone.identity.backends.ldap.Identity"}, "ldap":
  {"user_allow_update": "False", "user_name_attribute": "cn",
  "use_pool": "True", "user_objectclass": "posixAccount",
  "group_id_attribute": "gidNumber", "user_allow_create": "False",
  "tls_req_cert": "demand"...}}}
  
  Once the config options created in the database, the user can even use
  this invalid domain id to get/update/delete the config options, an
  example as shown below:
  
  ~$ curl -k -H "X-Auth-Token:ADMIN"
  http://localhost:35357/v3/domains/123456789/config/
  
  {"config": {"identity": {"driver":
  "keystone.identity.backends.ldap.Identity"}, "ldap":
  {"user_allow_update": "False", "group_allow_delete": "False",
  "group_name_attribute": "cn", "suffix": "dc=cdl,dc=hp,dc=com", ..,
  "group_allow_update": "False"...}}}

** Changed in: keystone
 Assignee: (unassigned) => Thomas Hsiao (thomas-hsiao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1524562

Title:
  No error raised if PUT/GET/PATCH/DELETE  domain-specific driver
  configuration database store with an invalid domain id

Status in OpenStack Identity (keystone):
  New

Bug description:
  No error is raised on PUT/GET/PATCH/DELETE of the SQL-based domain driver
  configuration with an invalid domain id:

  For the domain-specific driver configuration database store, the Identity
  API creates the configuration options in the database even though the
  domain id provided in the request URL is invalid.
  For example, a user can create config options using an invalid domain id 
(123456789) as shown below:

  ~$ curl -s \
  >   -H "X-Auth-Token: ADMIN" \
  >   -H "Content-Type: application/json" \
  >   -d '
  > {
 

[Yahoo-eng-team] [Bug 1524006] Re: NotImplementedError raised while create network using Nova REST API

2015-12-09 Thread Atsushi SAKAI
Would you ask your questions at the following URL?
https://ask.openstack.org/en/questions/


** Changed in: openstack-api-site
   Status: New => Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524006

Title:
  NotImplementedError raised while create network using Nova REST API

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-api-site:
  Invalid

Bug description:
  Hi,

  I'm stuck with this error:
  {"computeFault":
  {"message": "Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n
  ", "code": 500}}

  It happens when I try to create a network for a non-admin project using the
  Nova REST API.
  Source: http://developer.openstack.org/api-ref-compute-v2.1.html

  POST
  http://10.1.244.10:8774/v2.1/090aee2684a04e8397193a118a6e91b0/os-networks

  POST data:
  {
  "network": {
  "label": "scale-1-net",
  "cidr": "172.1.0.0/24",
  "mtu": 9000,
  "dhcp_server": "172.1.0.2",
  "enable_dhcp": false,
  "share_address": true,
  "allowed_start": "172.1.0.10",
  "allowed_end": "172.1.0.200"
  }
  }

  What's wrong?
  or
  What's the solution?

  Thanks a lot.
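
  A likely cause is that the os-networks extension is only backed by
  nova-network; with neutron as the network service the create call ends in
  NotImplementedError, and the network should be created through the neutron
  API instead. A minimal sketch of that request (the endpoint port and token
  are assumptions, not taken from this report):

  ```
  # Hypothetical equivalent call against the neutron API (default port 9696).
  import json
  import urllib.request

  NEUTRON_URL = 'http://10.1.244.10:9696'  # assumed neutron endpoint
  TOKEN = 'REPLACE_WITH_A_VALID_TOKEN'     # assumed keystone token

  body = json.dumps({'network': {'name': 'scale-1-net'}}).encode()
  req = urllib.request.Request(
      NEUTRON_URL + '/v2.0/networks', data=body,
      headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'})
  # urllib.request.urlopen(req)  # uncomment to run against a live deployment
  ```
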
  ---
  Release: 2.1.0 on 2015-12-08 06:12
  SHA: 60c5c2798004984738d171055dbc2a6fd37a85fe
  Source: 
http://git.openstack.org/cgit/openstack/nova/tree/api-guide/source/index.rst
  URL: http://developer.openstack.org/api-guide/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524418] Re: gate-grenade-dsvm-multinode fails with "AttributeError: 'LocalManager' object has no attribute 'l3driver'"

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255435
Committed: 
https://git.openstack.org/cgit/openstack/oslo.messaging/commit/?id=3ee86964fa460882d8fcac8686edd0e6bfb12008
Submitter: Jenkins
Branch:master

commit 3ee86964fa460882d8fcac8686edd0e6bfb12008
Author: Mehdi Abaakouk 
Date:   Wed Dec 9 19:37:40 2015 +0100

Revert "default of kombu_missing_consumer_retry_timeout"

This reverts commit 8c03a6db6c0396099e7425834998da5478a1df7c.

Closes-bug: #1524418
Change-Id: I35538a6c15d6402272e4513bc1beaa537b0dd7b9


** Changed in: oslo.messaging
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524418

Title:
  gate-grenade-dsvm-multinode fails with "AttributeError: 'LocalManager'
  object has no attribute 'l3driver'"

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.messaging:
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/38/249138/9/check/gate-grenade-dsvm-
  multinode/5b991dc/logs/new/screen-n-api.txt.gz?level=TRACE

  2015-12-09 12:44:56.998 ERROR nova.api.openstack.extensions 
[req-e2625ff8-3f5c-494c-8d84-a91d6d9c862b cinder_grenade cinder_grenade] 
Unexpected exception in API method
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 291, in 
_remove_floating_ip
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
disassociate_floating_ip(self, context, instance, address)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 79, in 
disassociate_floating_ip
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
self.network_api.disassociate_floating_ip(context, instance, address)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 49, in wrapped
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(self, context, *args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/base_api.py", line 77, in wrapper
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions res = 
f(self, context, *args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 240, in disassociate_floating_ip
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
affect_auto_assigned)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/utils.py", line 1100, in wrapper
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
150, in inner
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 456, in 
disassociate_floating_ip
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
fixed_ip.instance_uuid)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 490, in 
_disassociate_floating_ip
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
do_disassociate()
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 483, in do_disassociate
  2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
self.l3driver.remove_floating_ip(address, fixed.address,
  2015-12-09 12:44:56.998 15804 ERROR 

[Yahoo-eng-team] [Bug 1520722] Re: Automatically generate neutron core configuration files

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254984
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=39670ef35ab37d8e7a4db483f0f8e2a453c7a07b
Submitter: Jenkins
Branch:master

commit 39670ef35ab37d8e7a4db483f0f8e2a453c7a07b
Author: Matthew Kassawara 
Date:   Tue Dec 8 15:02:50 2015 -0700

[config-ref] Include neutron config files

For Mitaka, neutron implements automatic generation of sample
configuration files and removes static sample configuration
files from the neutron source tree. Therefore, the
configuration reference must include local versions of
sample configuration files similar to other projects that
implement automatic generation of sample configuration files.

Change-Id: If92d048a837ffbd9ba8664559ddabe886f448a32
Closes-Bug: #1520722


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520722

Title:
  Automatically generate neutron core configuration files

Status in neutron:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/204206
  Dear bug triager. This bug was created since a commit was marked with
  DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 71190773e14260fab96e78e65a290356cdc08581
  Author: Martin Hickey 
  Date:   Mon Nov 9 23:37:37 2015 +

  Automatically generate neutron core configuration files
  
  This adds a new tox environment, genconfig, which generates sample
  neutron core configuration file using oslo-config-generator.
  
  Updates to some configuration option help messages to reflect useful
  details that were missing in the code but were present in config files.
  
  It also adds details to devref on how to update config files.
  
  Partially-Implements: blueprint autogen-neutron-conf-file
  
  DocImpact
  
  Change-Id: I1c6dc4e7d479f1b7c755597caded24a0f018c712
  Closes-bug: #1199963
  Co-Authored-By: Louis Taylor 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1520722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524568] Re: Automatically generate neutron LBaaS configuration files

2015-12-09 Thread Henry Gessau
This bug seems to have been erroneously created by some automation.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524568

Title:
  Automatically generate neutron LBaaS configuration files

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/252981
  commit e719861c00ab1e50e271c3bcdbc0b9130353d2d4
  Author: Martin Hickey 
  Date:   Thu Dec 3 14:39:29 2015 +

  Automatically generate neutron LBaaS configuration files
  
  This adds a new tox environment, genconfig, which generates sample
  neutron LBaaS configuration file using oslo-config-generator.
  
  DocImpact: Update the docs that LBaaS no longer includes static example
  configuration files. Instead, use tools/generate_config_file_samples.sh
  to generate them and the files generated now end with .sample extension.
  
  Partially-Implements: blueprint autogen-neutron-conf-file
  
  Change-Id: I25507f3bc6e995580aa91a912c2cf4110757df15
  Partial-bug: #1199963

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499142] Re: test coverage for toast service error

2015-12-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/231180
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2b547305e8d8d036d4ddf936c072f2fb3dd3e0cc
Submitter: Jenkins
Branch:master

commit 2b547305e8d8d036d4ddf936c072f2fb3dd3e0cc
Author: matthewjsloane 
Date:   Mon Oct 5 13:54:48 2015 -0700

Added test coverage for toast.service.js

Test coverage added to framework/widgets/toast/toast.service.js and
modified multiple tests to handle type of 'error', which was not tested
before.

Closes-Bug: #1499142

Change-Id: Ib34a8334e984b9600e9c2de606b43a2a3486a9d0
Co-Authored-By: Peter Tang 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1499142

Title:
  test coverage for toast service error

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  ../toast.service.js

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1499142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524602] [NEW] Return availability_zone_hints as string when net-create

2015-12-09 Thread Hirofumi Ichihara
Public bug reported:

In neutron with the availability zone extension, the availability_zone_hints
value is returned as a string although we expect a list.


   $ neutron net-create --availability-zone-hint zone-1 --availability-zone-hint zone-2 net1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   | ["zone-1", "zone-2"]                 |
| id                        | 0ef0597c-4aab-4235-8513-bf5d8304fe64 |
| mtu                       | 0                                    |
| name                      | net1                                 |
| port_security_enabled     | True                                 |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1054                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 32f5512c7b3f47fb8924588ff9ad603b     |
+---------------------------+--------------------------------------+
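
The hints appear to be returned exactly as stored (a JSON-encoded string)
instead of being decoded back into a list. A small illustration of the
mismatch and the expected conversion (the values are taken from the output
above):

```
# Illustration: the stored value is a JSON string; API callers expect a list.
import json

stored = '["zone-1", "zone-2"]'    # what the API currently returns (a str)
expected = json.loads(stored)      # what callers expect (a list)

print(type(stored).__name__, stored)      # str ["zone-1", "zone-2"]
print(type(expected).__name__, expected)  # list ['zone-1', 'zone-2']
```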

** Affects: neutron
 Importance: Undecided
 Assignee: Hirofumi Ichihara (ichihara-hirofumi)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hirofumi Ichihara (ichihara-hirofumi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524602

Title:
  Return availability_zone_hints as string when net-create

Status in neutron:
  New

Bug description:
  In neutron with the availability zone extension, the availability_zone_hints
  value is returned as a string although we expect a list.

  
   $ neutron net-create --availability-zone-hint zone-1 --availability-zone-hint zone-2 net1
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   | ["zone-1", "zone-2"]                 |
  | id                        | 0ef0597c-4aab-4235-8513-bf5d8304fe64 |
  | mtu                       | 0                                    |
  | name                      | net1                                 |
  | port_security_enabled     | True                                 |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 1054                                 |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tenant_id                 | 32f5512c7b3f47fb8924588ff9ad603b     |
  +---------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524627] [NEW] nova libvirt xml generate wrong metadata

2015-12-09 Thread Tardis Xu
Public bug reported:

Environment:

devstack running OpenStack from master.

Steps to reproduce:

1. login as demo/demo
2. boot an instance
3. virsh dumpxml the instance, view the metadata:

demo
demo

4. login as admin/admin
5. hard reboot the instance
6. virsh dumpxml the instance, view the metadata:

admin
admin


Expected result:

The project and user metadata should not be taken from the current request
context.

Actual result:

The project and user metadata are all taken from the current request context,
so a hard reboot as admin rewrites them to admin/admin.
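
A minimal sketch of the expected behaviour, taking the owner from the instance
record rather than from the context of the request that triggered the reboot
(the field names are illustrative, not nova's actual driver code):

```
# Hypothetical sketch: prefer the instance's stored owner over the caller's
# request context when building the libvirt metadata.
def owner_metadata(instance, context):
    return {
        'user': instance.get('user_id') or context.get('user_id'),
        'project': instance.get('project_id') or context.get('project_id'),
    }


instance = {'user_id': 'demo', 'project_id': 'demo'}
admin_context = {'user_id': 'admin', 'project_id': 'admin'}
print(owner_metadata(instance, admin_context))  # demo/demo, not admin/admin
```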

** Affects: nova
 Importance: Medium
 Assignee: Tardis Xu (xiaoxubeii)
 Status: In Progress


** Tags: compute libvirt xml

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Tardis Xu (xiaoxubeii)

** Tags added: compute

** Tags added: libvirt xml

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524627

Title:
  nova libvirt xml generate wrong metadata

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Environment:

  devstack running OpenStack from master.

  Steps to reproduce:

  1. login as demo/demo
  2. boot an instance
  3. virsh dumpxml the instance, view the metadata:
  
  demo
  demo
  
  4. login as admin/admin
  5. hard reboot the instance
  6. virsh dumpxml the instance, view the metadata:
  
  admin
  admin
  

  Expected result:

  The project and user metadata should not be taken from the current request
  context.

  Actual result:

  The project and user metadata are all taken from the current request
  context, so a hard reboot as admin rewrites them to admin/admin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524421] [NEW] Host Manager reads deleted instance info on startup

2015-12-09 Thread Ed Leafe
Public bug reported:

In Kilo we added the ability of the HostManager to track information
about the instances on compute nodes, so that filters that needed this
information didn't have to constantly make database calls to get it.
However, the call that is used by HostManager will return deleted
instance records, which is not useful information and degrades the
performance. In a large deployment, the overhead of loading the records
for thousands of deleted instances is causing serious performance
issues.
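
A minimal illustration of the intended behaviour, filtering out soft-deleted
rows before they are cached (the row layout is illustrative, not the actual
HostManager code):

```
# Hypothetical filter: only non-deleted instances should populate the
# HostManager's per-host instance cache at startup.
def active_instances(rows):
    return [row for row in rows if not row.get('deleted')]


rows = [
    {'uuid': 'a', 'deleted': False},
    {'uuid': 'b', 'deleted': True},   # stale row that should be skipped
]
print([r['uuid'] for r in active_instances(rows)])  # ['a']
```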

** Affects: nova
 Importance: Undecided
 Assignee: Ed Leafe (ed-leafe)
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524421

Title:
  Host Manager reads deleted instance info on startup

Status in OpenStack Compute (nova):
  New

Bug description:
  In Kilo we added the ability of the HostManager to track information
  about the instances on compute nodes, so that filters that needed this
  information didn't have to constantly make database calls to get it.
  However, the call that is used by HostManager will return deleted
  instance records, which is not useful information and degrades the
  performance. In a large deployment, the overhead of loading the
  records for thousands of deleted instances is causing serious
  performance issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524418] [NEW] gate-grenade-dsvm-multinode fails with "AttributeError: 'LocalManager' object has no attribute 'l3driver'"

2015-12-09 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/38/249138/9/check/gate-grenade-dsvm-
multinode/5b991dc/logs/new/screen-n-api.txt.gz?level=TRACE

2015-12-09 12:44:56.998 ERROR nova.api.openstack.extensions 
[req-e2625ff8-3f5c-494c-8d84-a91d6d9c862b cinder_grenade cinder_grenade] 
Unexpected exception in API method
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 291, in 
_remove_floating_ip
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
disassociate_floating_ip(self, context, instance, address)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 79, in 
disassociate_floating_ip
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
self.network_api.disassociate_floating_ip(context, instance, address)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 49, in wrapped
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(self, context, *args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/base_api.py", line 77, in wrapper
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions res = 
f(self, context, *args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 240, in disassociate_floating_ip
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
affect_auto_assigned)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/utils.py", line 1100, in wrapper
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
150, in inner
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 456, in 
disassociate_floating_ip
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
fixed_ip.instance_uuid)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 490, in 
_disassociate_floating_ip
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
do_disassociate()
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 483, in do_disassociate
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
self.l3driver.remove_floating_ip(address, fixed.address,
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 
AttributeError: 'LocalManager' object has no attribute 'l3driver'
2015-12-09 12:44:56.998 15804 ERROR nova.api.openstack.extensions 


Started in the last 24 hours:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22AttributeError:%20'LocalManager'%20object%20has%20no%20attribute%20'l3driver'%5C%22%20AND%20tags:%5C%22screen-n-api.txt%5C%22

Thinking it's something in oslo.messaging 3.1.0 that was merged into
upper-constraints on 12/8:

https://review.openstack.org/#/c/254571/

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo.messaging
 Importance: Undecided
 Status: New

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524418

Title:
  gate-grenade-dsvm-multinode fails with "AttributeError: 'LocalManager'
  object has no attribute 'l3driver'"

Status in OpenStack Compute (nova):