[Yahoo-eng-team] [Bug 1534198] [NEW] neutron-lbaas minimal gate jobs should not build amphora image

2016-01-14 Thread Michael Johnson
Public bug reported:

The neutron-lbaas minimal gate jobs are building the amphora image using
diskimage-builder.  They should not build the image for these checks, as
they are API tests.

** Affects: neutron
 Importance: Undecided
 Assignee: Michael Johnson (johnsom)
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534198

Title:
  neutron-lbaas minimal gate jobs should not build amphora image

Status in neutron:
  New

Bug description:
  The neutron-lbaas minimal gate jobs are building the amphora image
  using diskimage-builder.  They should not build the image for these
  checks, as they are API tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523930] Re: Javascript message catalogs need to be retrieved from plugins

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255590
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b56d278582b5df6da4a260f08656a668e166ab8b
Submitter: Jenkins
Branch: master

commit b56d278582b5df6da4a260f08656a668e166ab8b
Author: Thai Tran 
Date:   Mon Jan 11 13:46:05 2016 -0800

Support javascript translation for plugin

It's not possible for plugins to contribute translations to the javascript
message catalog. Right now our files are hardcoded to allow only
contributions from horizon and openstack_dashboard. This patch fixes
the issue.

Change-Id: Idde2fc6ac0bf7f762a595cf139ed5184dad64540
Closes-Bug: #1523930


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523930

Title:
  Javascript message catalogs need to be retrieved from plugins

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  It's not possible for plugins to contribute translations to the
  javascript message catalog. Right now our files are hardcoded to allow
  only contributions from the horizon and openstack_dashboard applications:

  openstack_dashboard/templates/horizon/_script_i18n.html

  I believe the solution will be to look through all of the applications, 
possibly filtering by a setting in the enabled file in order to dynamically 
build and use this page:
  horizon/templates/horizon/_script_i18n.html
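
  For illustration, a minimal sketch of that idea (hypothetical view code,
  not the actual Horizon patch), assuming Django's javascript_catalog view
  and that plugin apps appear in INSTALLED_APPS:

  from django.conf import settings
  from django.views.i18n import javascript_catalog

  def jsi18n(request):
      # Build the package list dynamically from the installed apps instead
      # of hardcoding 'horizon' and 'openstack_dashboard', so plugin apps
      # contribute their own javascript message catalogs too.
      packages = [app for app in settings.INSTALLED_APPS
                  if not app.startswith('django')]
      return javascript_catalog(request, domain='djangojs', packages=packages)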

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267299
Committed: 
https://git.openstack.org/cgit/openstack/kolla/commit/?id=e09e20cdbe0e956765b3ced712d26e5f7a00e75e
Submitter: Jenkins
Branch: master

commit e09e20cdbe0e956765b3ced712d26e5f7a00e75e
Author: MD NADEEM 
Date:   Thu Jan 14 10:09:12 2016 +0530

Put py34 first in the env order of tox

To solve the problem of "db type could not
be determined" on py34, the py34 env has to
be run first, and py27 afterwards.

This patch puts py34 first in the tox.ini list
of envs to prevent this problem from happening.
Closes-Bug: #1489059

Change-Id: I4f791dfa620eacdd76cd46f193e190071ab64b6c


** Changed in: kolla
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Bareon:
  Fix Released
Status in cloudkitty:
  Fix Committed
Status in Fuel for OpenStack:
  In Progress
Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Committed
Status in hacking:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in kolla:
  Fix Released
Status in Manila:
  Fix Released
Status in Murano:
  Fix Committed
Status in networking-midonet:
  Fix Released
Status in networking-ofagent:
  New
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-muranoclient:
  Fix Released
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in tap-as-a-service:
  New
Status in tempest:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to be caused when the py27 run precedes py34. It can
  be solved by erasing the .testrepository directory and running "tox -e
  py34" first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524274] Re: [OSSA 2016-001] Unprivileged api user can access host data using instance snapshot (CVE-2015-7548)

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/264814
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=915fdbbfb82272b87cd80210943372b09351cf88
Submitter: Jenkins
Branch: master

commit 915fdbbfb82272b87cd80210943372b09351cf88
Author: Matthew Booth 
Date:   Fri Dec 11 13:40:54 2015 +

Fix backing file detection in libvirt live snapshot

When doing a live snapshot, the libvirt driver creates an intermediate
qcow2 file with the same backing file as the original disk. However,
it calls qemu-img info without specifying the input format explicitly.
An authenticated user can write data to a raw disk which will cause
this code to misinterpret the disk as a qcow2 file with a
user-specified backing file on the host, and return an arbitrary host
file as the backing file.

This bug does not appear to result in a data leak in this case, but
this is hard to verify. It certainly results in corrupt output.

Closes-Bug: #1524274

Change-Id: I11485f077d28f4e97529a691e55e3e3c0bea8872


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524274

Title:
  [OSSA 2016-001] Unprivileged api user can access host data using
  instance snapshot (CVE-2015-7548)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  There is a qcow2 format vulnerability in LibvirtDriver.snapshot. The
  impact is that on an affected system, an unprivileged api user can
  retrieve any file on the host readable by the nova user. This includes
  guest data of other instances on the same host, and credentials used
  by nova to access other services externally.

  LibvirtDriver.snapshot does:

  source_format = libvirt_utils.get_disk_type(disk_path)
  ...
  snapshot_backend = self.image_backend.snapshot(instance,
                                                 disk_path,
                                                 image_type=source_format)
  ...
  snapshot_backend.snapshot_extract(out_path, image_format)

  libvirt_utils.get_disk_type falls back to image inspection for disks
  which are not lvm, rbd or ploop, which means raw and qcow2 images.
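
  For illustration, a minimal sketch (not the actual nova fix) of how
  probing a disk with an explicitly declared format avoids trusting
  guest-controlled content; it assumes qemu-img is available on the PATH:

  import json
  import subprocess

  def probe_disk(path, declared_format):
      # Without '-f', qemu-img guesses the format from the file contents,
      # so a raw disk that a guest filled with a qcow2 header is treated
      # as qcow2 and its user-chosen backing file is followed.  Passing
      # the format nova already knows the disk to have avoids the guess.
      out = subprocess.check_output(
          ['qemu-img', 'info', '--output=json', '-f', declared_format, path],
          universal_newlines=True)
      return json.loads(out)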

  The vulnerability only exists when a user can write to a raw volume
  which is later erroneously detected as qcow2. This means that the
  vulnerability is only present on systems using the libvirt driver
  which have defined use_cow_images=False in nova.conf. This is not the
  default, so by default nova is not vulnerable.

  libvirt.utils.extract_snapshot() expects to be reading from an
  instance disk and writing to a temporary directory created by nova for
  storing snapshots before transferring them to glance. As nova directly
  creates this directory and its contents, the 'qemu-img convert'
  process does not need to run privileged. This means that the exposure
  is limited to files directly readable by the nova user.

  Unfortunately, as is clear from the context this includes all instance
  data which, despite being owned by the qemu user, is world readable.
  Additionally, because the qemu-img process is executed by nova
  directly, it does not benefit from any confinement by libvirt.
  Specifically, SELinux is not likely to be a defence on a typical
  deployment.

  I have tested this exploit on a Fedora 23 system running devstack as
  of 8th Dec 2015:

  Ensure nova.conf contains use_cow_images = False in the DEFAULT
  section.

  As an unprivileged api user, do:
  $ nova boot --image cirros --flavor m1.tiny foo

  Somewhere, run:
  $ qemu-img create -f qcow2 -o backing_file=/etc/passwd bad.qcow2
  Ensure bad.qcow2 is available in the foo instance.

  Log into foo, and execute as root:
  # dd if=bad.qcow2 of=/dev/vda conv=fsync

  As an unprivileged api user, do:
  $ nova image-create foo passwd
  $ glance image-download  --file passwd

  The unprivileged api now has the contents of /etc/passwd from the host
  locally.

  Mitigations:

  Nova is not vulnerable by default. The user must have configured 
use_cow_images=False.

[Yahoo-eng-team] [Bug 1530027] Re: Bad request when create_image in Citrix XenServer CI

2016-01-14 Thread Kairat Kushaev
Marking this as Invalid until there is a reproducible case.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1530027

Title:
  Bad request when create_image in Citrix XenServer CI

Status in Glance:
  Invalid
Status in tempest:
  Invalid

Bug description:
  In this gate test [2], a Bad Request error was raised when the Citrix
  XenServer CI failed.

  Details are shown in [1].

  
  [1] 
http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/34/252234/4/21189/run_tests.log
  [2] https://review.openstack.org/#/c/252234/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1530027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529836] Re: Fix deprecated library function (os.popen()).

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/266849
Committed: 
https://git.openstack.org/cgit/openstack/senlin/commit/?id=4ca80795b006f4e0320590bd4f3a072e1532572b
Submitter: Jenkins
Branch: master

commit 4ca80795b006f4e0320590bd4f3a072e1532572b
Author: caoyue 
Date:   Wed Jan 13 19:30:56 2016 +0800

Replace deprecated library function os.popen() with subprocess

os.popen() is deprecated since python 2.6. Resolved with use of
subprocess module.

Change-Id: Ifa5f6bbfabf2d5ebfdbee21920aa12c0a8e30517
Closes-Bug: #1529836


** Changed in: senlin
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1529836

Title:
  Fix deprecated library function (os.popen()).

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in devstack:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  In Progress
Status in keystonemiddleware:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  In Progress
Status in python-keystoneclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress

Bug description:
  The deprecated library function os.popen() is still in use in some
  places. It needs to be replaced with the subprocess module.
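
  For illustration, a typical replacement (a generic sketch, not a patch
  from any particular affected project):

  import subprocess

  # Before: deprecated and shell-based, with no error checking
  #   output = os.popen('uname -r').read()

  # After: no shell is spawned and failures raise CalledProcessError
  output = subprocess.check_output(['uname', '-r'], universal_newlines=True)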

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1529836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531428] Re: couldn't find availability_zones list

2016-01-14 Thread Fahri Cihan Demirci
Yes, that's the way I see that file on my installation. So this problem
was related to an error in starting the nova-api service. Since you have
found a solution for your problem and it didn't have to do with nova code
per se, I am marking this bug as invalid. Thank you for working on your
problem and reporting your findings.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531428

Title:
  couldn't find availability_zones list

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I logged in to my devstack environment and clicked "Servers" in the
  "System Panel", I got this error: couldn't find availability_zones list.
  Then I had a look at n-api.log and got this:
  1-06 16:00:11.755 DEBUG nova.osapi_compute.wsgi.server 
[req-3fbf163d-9ee9-4b82-b583-2133eeadb3c8 admin demo] 
(32324) accepted ('192.168.1.109', 48960) from 
(pid=32324) server 
/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:826
  2016-01-06 16:00:11.759 DEBUG keystoneauth.session 
[req-3fbf163d-9ee9-4b82-b583-2133eeadb3c8 admin demo] 
REQ: curl -g -i --cacert "/opt/stack/data/ca-bundle.pem" -X GET 
http://192.168.1.109:35357/v3/auth/tokens -H "X-Subject-Token: 
{SHA1}cece495a5656aebe24c73b55512f2971fdf9b9a4" -H "User-Agent: 
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}c2ccc47b4b64ccefde16fd185e2dc22d51b9fe0e" from (pid=32324) 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:206
  2016-01-06 16:00:11.817 DEBUG keystoneauth.session 
[req-3fbf163d-9ee9-4b82-b583-2133eeadb3c8 admin demo] 
RESP: [200] Content-Length: 4856 X-Subject-Token: 
{SHA1}cece495a5656aebe24c73b55512f2971fdf9b9a4 Vary: X-Auth-Token Keep-Alive: 
timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) Connection: Keep-Alive Date: 
Wed, 06 Jan 2016 08:00:11 GMT Content-Type: application/json 
x-openstack-request-id: req-ffd54554-5e13-4e64-a891-3a537234bc11 
  RESP BODY: {"token": {"methods": ["token", "password"], "roles": [{"id": 
"0b334f71f26742c5b18163e379b86439", "name": "admin"}], "expires_at": 
"2016-01-06T08:54:13.065141Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "0e585c25ef3e41aeb69321672a3bd047", "name": "demo"}, 
"catalog": "", "user": {"domain": {"id": "default", "name": 
"Default"}, "id": "dac7f9e6ea9f4e2a8716a40e76497c13", "name": "admin"}, 
"audit_ids": ["tNyvTREyQeuxmnJDdBP8WA", "tKPffHVyQ7-c9mXsXo4NJA"], "issued_at": 
"2016-01-06T07:54:16.341202Z"}}
   from (pid=32324) _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:235
  2016-01-06 16:00:11.821 DEBUG nova.api.openstack.wsgi 
[req-4a2f6f5e-c5b8-403d-9336-a1a3becd4c29 admin demo] 
Calling method '>' from (pid=32324) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:798
  2016-01-06 16:00:11.837 INFO nova.osapi_compute.wsgi.server 
[req-4a2f6f5e-c5b8-403d-9336-a1a3becd4c29 admin demo] 
192.168.1.109 "GET 
/v2.1/0e585c25ef3e41aeb69321672a3bd047/os-aggregates HTTP/1.1" status: 200 len: 
285 time: 0.0796518
  2016-01-06 16:00:11.842 DEBUG nova.osapi_compute.wsgi.server 
[req-4a2f6f5e-c5b8-403d-9336-a1a3becd4c29 admin demo] 
(32324) accepted ('192.168.1.109', 48962) from 
(pid=32324) server 
/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:826
  2016-01-06 16:00:11.846 DEBUG nova.api.openstack.wsgi 
[req-136fd612-bb9e-45e2-a8fa-423e5322c64c admin demo] 
Calling method '>' from (pid=32324) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:798
  2016-01-06 16:00:11.848 ERROR nova.api.openstack.extensions 
[req-136fd612-bb9e-45e2-a8fa-423e5322c64c admin demo] 
Unexpected exception in API method
  2016-01-06 16:00:11.848 TRACE nova.api.openstack.extensions 
Traceback (most recent call last):
  2016-01-06 16:00:11.848 TRACE nova.api.openstack.extensions 
  File "/opt/stack/nova/nova/api/openstack/extensions.py", line 
478, in wrapped
  2016-01-06 16:00:11.848 TRACE nova.api.openstack.extensions 
return f(*args, **kwargs)
  2016-01-06 16:00:11.848 TRACE nova.api.openstack.extensions 
  File 
"/opt/stack/nova/nova/api/openstack/compute/availability_zone.py", line 120, in 
detail
  2016-01-06 16:00:11.848 TRACE nova.api.openstack.extensions 
return 

[Yahoo-eng-team] [Bug 1303802] Re: qemu image convert fails in snapshot

2016-01-14 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303802

Title:
  qemu image convert fails in snapshot

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Periodically in the gate we see a failure by qemu image convert in
  snapshot:

  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/processutils.py", line 193, in 
execute
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher cmd=' 
'.join(cmd))
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher 
ProcessExecutionError: Unexpected error while running command.
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Command: 
qemu-img convert -f qcow2 -O qcow2 
/opt/stack/data/nova/instances/4ff6dc10-eac8-41d2-a645-3a0e0ba07c8a/disk 
/opt/stack/data/nova/instances/snapshots/tmpcVpCxJ/33eb0bb2b49648c69770b47db3211a86
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Exit code: 1
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Stdout: ''
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Stderr: 
'qemu-img: error while reading sector 0: Input/output error\n'

  qemu-img is very obtuse on what the actual issue is, so it's unclear
  if this is a corrupt disk, or a totally missing disk.

  The user-visible face of this is on operations like shelve, where the
  instance will believe that it is still in the active state, even though
  everything is broken:
  http://logs.openstack.org/02/85602/1/gate/gate-tempest-dsvm-full/20ed964/console.html#_2014-04-07_01_44_29_309

  Logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcInFlbXUtaW1nOiBlcnJvclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk2ODc2MTQ4NDc3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466360] Re: check edit qos spec form has new value

2016-01-14 Thread Masco
** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Masco (masco) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1466360

Title:
  check edit qos spec form has new value

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In Edit QoS Spec, the form is submitted without checking for a new value.
  This results in an unnecessary API call. To avoid it, the form should be
  submitted only if the value has changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1466360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533181] Re: Typo in network security group comment

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/266336
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=783bb16d865cbb5afea03490635baaca4a0757d2
Submitter: Jenkins
Branch: master

commit 783bb16d865cbb5afea03490635baaca4a0757d2
Author: Karthik Suresh 
Date:   Tue Jan 12 17:58:33 2016 +0530

Corrected typo in fetch security groups comment

Edited comment from returns image to security group

Change-Id: Ia41c1bbaa7e3291eb010e7441f9f60dea03339e7
Closes-Bug: #1533181


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533181

Title:
  Typo in network security group comment

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Typo in the comment for fetching a list of security groups under the
  rest/api/network folder. The comment says it fetches an image list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534066] [NEW] Resize failed when instance gets large

2016-01-14 Thread leehom
Public bug reported:

I'm using the OpenStack Juno release:
$ rpm -qa |grep nova
python-nova-2014.2.2-1.el7.noarch
openstack-nova-scheduler-2014.2.2-1.el7.noarch
python-novaclient-2.20.0-1.el7.centos.noarch
openstack-nova-common-2014.2.2-1.el7.noarch
openstack-nova-api-2014.2.2-1.el7.noarch
openstack-nova-cert-2014.2.2-1.el7.noarch
openstack-nova-console-2014.2.2-1.el7.noarch
openstack-nova-conductor-2014.2.2-1.el7.noarch
openstack-nova-novncproxy-2014.2.2-1.el7.noarch

Description of the problem:
1. At first,
/usr/lib/python2.7/site-packages/neutronclient/client.py:do_request() fails
with exceptions.Unauthorized
-> The token is not expired, but after checking keystone's log, it has
somehow been deleted.
2. When trying to authenticate(), it fails with
exceptions.NoAuthURLProvided()


Here is the log found in nova-compute:

2016-01-07 16:08:42.396 1334 DEBUG neutronclient.client 
[req-7d95c82d-b26a-4584-851b-95de5c99f17c ]
REQ: curl -i https://hf1-neutron.qa.webex.com:443/v2.0/extensions.json -X GET 
-H "X-Auth-Token: 119b7628442f4e19b4bd1041d05c2afa" -H "User-Agent: 
python-neutronclient"
 http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:140
2016-01-07 16:08:42.473 1334 DEBUG neutronclient.client 
[req-7d95c82d-b26a-4584-851b-95de5c99f17c ] RESP:401 {'content-length': '23', 
'www-authenticate': "Keystone 
uri='https://hf1-keystone-srv.qa.webex.com:443/v2.0'", 'connection': 
'keep-alive', 'date': 'Thu, 07 Jan 2016 16:08:42 GMT', 'content-type': 
'text/plain', 'x-openstack-request-id': 
'req-36035958-9e5d-44db-a12a-0ed0764a7d0e'} Authentication required
 http_log_resp 
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:149
2016-01-07 16:08:42.474 1334 ERROR nova.compute.manager 
[req-7d95c82d-b26a-4584-851b-95de5c99f17c None] [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] Setting instance vm_state to ERROR
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] Traceback (most recent call last):
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3904, in 
finish_resize
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] disk_info, image)
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3843, in 
_finish_resize
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] migration_p)
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1241, in 
migrate_instance_finish
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] if not 
self._has_port_binding_extension(context, refresh_cache=True):
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 471, in 
_has_port_binding_extension
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] 
self._refresh_neutron_extensions_cache(context)
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 463, in 
_refresh_neutron_extensions_cache
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] extensions_list = 
neutron.list_extensions()['extensions']
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 98, in 
with_params
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] ret = self.function(instance, *args, 
**kwargs)
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 300, in 
list_extensions
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] return self.get(self.extensions_path, 
params=_params)
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1320, in 
get
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 
fc8d35d5-874d-4c09-9f23-9bf7587b2127] headers=headers, params=params)
2016-01-07 16:08:42.474 1334 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1508421] Re: Projects dropdown fails due to incomplete Keystone endpoint URL

2016-01-14 Thread Matthias Runge
** Changed in: django-openstack-auth
Milestone: None => 2.1.1

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1508421

Title:
  Projects dropdown fails due to incomplete Keystone endpoint URL

Status in django-openstack-auth:
  Fix Released

Bug description:
  Problem Description
  ===

  The 'Projects' dropdown in our Horizon dashboard (the one on the top
  left) is empty since we switched to Openstack Kilo. We investigated the
  issue and found the following error message in horizon.log:

AuthorizationFailure: Authorization Failed: The resource could not
  be found. (HTTP 404) http://10.0.81.10:5000

  [You will find a full stack trace in stacktrace.txt in the attached
  tarball.]

  Further investigation (see keystone.pcap in the attached tarball for a
  packet trace) revealed that horizon is trying to access
  http://10.0.81.10:5000/tokens (as opposed to the correct URL,
  http://10.0.81.10:5000/v2.0/tokens). We found the problem could be
  worked around by appending missing versioning information in backend.py
  (see below), but it should be possible to fix this in a cleaner manner.

  
  Environment
  ===

  We are running Openstack Kilo with the Ubuntu Cloud packages, some of
  them modified locally with backported bugfixes. You will find these
  packages at https://launchpad.net/~syseleven-platform/+archive/ubuntu/kilo. 
  In particular, we are running the following Horizon package:

  https://launchpad.net/~syseleven-platform/+archive/ubuntu/kilo

  
  Configuration
  =

  You will find our full Horizon configuration in local_settings.py in the
  attached tarball. Relevant points:

  * OPENSTACK_KEYSTONE_URL is http://10.0.81.10:5000/v2.0
  * OPENSTACK_API_VERSIONS is configured for Keystone 2.0
  * The identity endpoints as reported by Keystone itself do not contain
versioning information (the way it is supposed to be as of Kilo).

  
  Steps to reproduce
  ==

  * Run Horizon/Kilo (with the Ubuntu Cloud packages or our modified
packages; both should exhibit this problem)
  * Configure the end points and OPENSTACK_KEYSTONE_URL as described under
"Configuration"
  * Log into the web interface.

  This should yield an empty Projects dropdown list and the stacktrace in
  stacktrace.txt (and in /var/log/horizon/horizon.log).

  
  Workaround
  ==

  I modified /usr/lib/python2.7/dist-packages/openstack_auth/backend.py to
  append versioning information to the endpoint URL if it is missing. This
  can be used to work around the problem in a pinch, but I do not consider
  it a clean fix.
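
  For illustration, a minimal sketch of such a workaround (a hypothetical
  helper, not the actual backend.py change), assuming the Keystone v2.0 API
  described above:

  def ensure_versioned(auth_url, version='v2.0'):
      # Append the version segment only when the endpoint URL lacks it,
      # e.g. http://10.0.81.10:5000 -> http://10.0.81.10:5000/v2.0
      url = auth_url.rstrip('/')
      if not url.endswith('/' + version):
          url += '/' + version
      return url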

  
  Files
  =

  I attached a couple of files to illustrate the problem (you will find
  all of these in horizon-projects-dropdown.tar):

  backend.py  The modified backend.py described under "Workaround"
  endpoints.txt   A list of identity endpoints as reported by Keystone
  keystone.pcap   A packet capture of Horizon's interactions with
  Keystone
  local_settings.py   Our Horizon configuration
  stacktrace.txt  The stack trace that appears in 
  /var/log/horizon/horizon.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1508421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526630] Re: Unexpected API Error returned when booting an instance with a wrong octal IP address as v4-fixed-ip

2016-01-14 Thread Fahri Cihan Demirci
Yep, the patch seems to have been merged on January 4th, 2016. Novaclient
version 3.2.0, released on January 16th, 2016, contains it. Therefore
I am marking this report as invalid since a patch related to another
issue fixes the problem you observed. Thank you for verifying that the
fix was successful.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526630

Title:
  Unexpected API Error returned when booting an instance with a wrong
  octal IP address as v4-fixed-ip

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  [Summary]
  Unexpected API Error returned when booting an instance with a wrong octal
  IP address as v4-fixed-ip

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  No Unexpected API Error; it should return an error saying "Invalid input
  for field/attribute fixed_ip"

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) create a network/subnet:
  root@45-59:/opt/stack/devstack# neutron net-list | grep net2
  | 2de63c95-f645-492c-9197-5d4d5244a8ba | net2 | 47eb5e03-c16a-4303-923c-21a061f2909e 1.0.0.0/24 |
  root@45-59:/opt/stack/devstack# 

  
  2) Launch an instance with a wrong octal IP address as v4-fixed-ip;
  an Unexpected API Error is returned:   ISSUE
  root@45-59:/opt/stack/devstack# nova boot --flavor 1 --image cirros-0.3.4-x86_64-uec --availability-zone nova --nic net-id=2de63c95-f645-492c-9197-5d4d5244a8ba,v4-fixed-ip=1.0.0.087 inst2
  ERROR (ClientException): Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: req-17cc4e5a-af9f-44b9-9c4a-b87706682bf3)
  root@45-59:/opt/stack/devstack# 

  Note: 1.0.0.087 is an IP address in octal format, but it is an invalid
  one; an octal octet cannot contain a digit greater than 7.
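
  For illustration, a minimal validation sketch (generic, not nova's actual
  check) that would reject such an address up front, so the API could
  return a clean validation error instead of a 500:

  def is_valid_ipv4(address):
      # Require four dot-separated decimal octets in the 0-255 range and
      # reject leading zeros such as '087', which are ambiguous octal
      # notation (and invalid octal in this case anyway).
      octets = address.split('.')
      if len(octets) != 4:
          return False
      for octet in octets:
          if not octet.isdigit():
              return False
          if len(octet) > 1 and octet.startswith('0'):
              return False
          if int(octet) > 255:
              return False
      return True

  assert not is_valid_ipv4('1.0.0.087')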

  [Configuration]
  reproducible bug, no need

  [logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534083] [NEW] Glance api config file lost the configuration item "filesystem_store_datadir" default value

2016-01-14 Thread YaoZheng_ZTE
Public bug reported:

The config item "filesystem_store_datadir" default value is lost, so
after installing Glance, users have to configure it manually.

** Affects: glance
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1534083

Title:
  Glance api config file lost the configuration item
  "filesystem_store_datadir" default value

Status in Glance:
  New

Bug description:
  The config item "filesystem_store_datadir" default value is lost, so
  after installing Glance, users have to configure it manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1534083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534083] Re: Glance api config file lost the configuration item "filesystem_store_datadir" default value

2016-01-14 Thread wangxiyuan
/var/lib/glance/images seems good. It's what is in the official guideline.

** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1534083

Title:
  Glance api config file lost the configuration item
  "filesystem_store_datadir" default value

Status in glance_store:
  New

Bug description:
  The config item "filesystem_store_datadir" default value is lost, so
  after installing Glance, users have to configure it manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1534083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534140] [NEW] keystone-manage bootstrap should not create user/project if it fails

2016-01-14 Thread Dave Chen
Public bug reported:

If `keystone-manage bootstrap` fails because the role already exists (this
may happen if someone used the OSC CLI to create a role, but someone else
wants to bootstrap a set of `user`, `project` and `role` without being
aware that the role has already been created), the project and user can
still be created successfully.

And then, if the role is changed, `keystone-manage bootstrap` will still
fail, since the `user` and `project` have already been created and
`keystone-manage bootstrap` cannot handle this.

See the example:
dave@shldeOTCopen005:~$ keystone-manage bootstrap --bootstrap-username 
bootstrap_user --bootstrap-project-name bootstrap_project --bootstrap-role-name 
admin --bootstrap-password abc123
25784 TRACE keystone details=_('Duplicate Entry'))
25784 TRACE keystone Conflict: Conflict occurred attempting to store role - 
Duplicate Entry
25784 TRACE keystone


change the role to `bootstrap_role` ...

dave@shldeOTCopen005:~$ keystone-manage bootstrap --bootstrap-username 
bootstrap_user --bootstrap-project-name bootstrap_project --bootstrap-role-name 
bootstrap_role --bootstrap-password abc123
25813 TRACE keystone details=_('Duplicate Entry'))
25813 TRACE keystone Conflict: Conflict occurred attempting to store project - 
Duplicate Entry
25813 TRACE keystone

So, if we want to bootstrap again, we need to delete the project and user
manually, which is not friendly to the end user.
`keystone-manage bootstrap` should not create any `user` or `project` if
the command does not execute successfully.
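
For illustration, a generic get-or-create sketch (hypothetical names, not
keystone's actual code) of the kind of idempotent behaviour being asked
for, reusing what already exists instead of failing part-way through:

  def ensure_resource(manager, name, **kwargs):
      # Reuse an existing resource rather than failing with a Conflict,
      # so rerunning bootstrap never leaves half-created users/projects.
      existing = [r for r in manager.list() if r.name == name]
      if existing:
          return existing[0]
      return manager.create(name=name, **kwargs)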

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Dave Chen (wei-d-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1534140

Title:
  keystone-manage bootstrap should not create user/project if it fails

Status in OpenStack Identity (keystone):
  New

Bug description:
  If `keystone-manage bootstrap` fails because the role already exists
  (this may happen if someone used the OSC CLI to create a role, but
  someone else wants to bootstrap a set of `user`, `project` and `role`
  without being aware that the role has already been created), the project
  and user can still be created successfully.

  And then, if the role is changed, `keystone-manage bootstrap` will still
  fail, since the `user` and `project` have already been created and
  `keystone-manage bootstrap` cannot handle this.

  See the example:
  dave@shldeOTCopen005:~$ keystone-manage bootstrap --bootstrap-username 
bootstrap_user --bootstrap-project-name bootstrap_project --bootstrap-role-name 
admin --bootstrap-password abc123
  25784 TRACE keystone details=_('Duplicate Entry'))
  25784 TRACE keystone Conflict: Conflict occurred attempting to store role - 
Duplicate Entry
  25784 TRACE keystone

  
  change the role to `bootstrap_role` ...

  dave@shldeOTCopen005:~$ keystone-manage bootstrap --bootstrap-username 
bootstrap_user --bootstrap-project-name bootstrap_project --bootstrap-role-name 
bootstrap_role --bootstrap-password abc123
  25813 TRACE keystone details=_('Duplicate Entry'))
  25813 TRACE keystone Conflict: Conflict occurred attempting to store project 
- Duplicate Entry
  25813 TRACE keystone

  So, if we want to bootstrap again, we need to delete the project and user
  manually, which is not friendly to the end user.
  `keystone-manage bootstrap` should not create any `user` or `project` if
  the command does not execute successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1534140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533160] Re: It's not possible to click on a panel within Admin dashboard which has a homonym at Project dashboard

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/266306
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=14ce2292597294567729cc7f7bc351e9ebeb8fc6
Submitter: Jenkins
Branch: master

commit 14ce2292597294567729cc7f7bc351e9ebeb8fc6
Author: Timur Sufiev 
Date:   Tue Jan 12 14:35:06 2016 +0300

Eliminate ambiguity when matching panel in i9n tests

To do so, an argument `src_elem` is added to _click_menu_item()
method. Whenever dashboard or a panel group is clicked, its wrapper is
returned to be used as `src_elem` in a subsequent call for clicking
third-level item.  This way the set of panel labels being matched is
restricted to the descendants of that particular dashboard or panel
group.

Change-Id: I54f3febed645b6bf2faddfbe27690ceb0944cd12
Closes-Bug: #1533160


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533160

Title:
  It's not possible to click on a panel within Admin dashboard which has
  a homonym at Project dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  XPATH selectors for matching a panel in integration tests are written in
  such a way that the first panel with a given name, say 'Images', is
  matched. Since both the Project and Admin dashboards have a panel named
  'Images', when we test Admin->System->Images the wrong panel is matched
  and the test fails due to the inability to click a hidden panel label.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534110] [NEW] OF native connection sometimes goes away and agent exits

2016-01-14 Thread Miguel Angel Ajo
Public bug reported:

Probably we should provide a reconnection mechanism when something on
the OpenFlow connection goes wrong.
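
For illustration, a generic reconnect-with-backoff sketch (not
Ryu-specific; the connect callable is hypothetical) of the kind of
mechanism suggested here:

  import time

  def connect_with_retry(connect, max_attempts=5, base_delay=1):
      # Retry the switch connection with exponential backoff instead of
      # terminating the agent on the first connection timeout.
      for attempt in range(max_attempts):
          try:
              return connect()
          except Exception:
              if attempt == max_attempts - 1:
                  raise
              time.sleep(base_delay * 2 ** attempt)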


2016-01-06 08:23:45.031 11755 DEBUG OfctlService [-] dpid 231386065181514 -> 
datapath None _handle_get_datapath 
/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/ryu/app/ofctl/service.py:106
2016-01-06 08:23:45.032 11755 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [-] 
Switch connection timeout
2016-01-06 08:23:45.033 11755 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline', '--format=json', 
'--', '--columns=datapath_id', 'list', 'Bridge', 'br-int261889006'] 
create_process /opt/stack/new/neutron/neutron/agent/linux/utils.py:84
2016-01-06 08:23:45.057 11755 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:142
2016-01-06 08:23:45.058 11755 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Switch 
connection timeout Agent terminated!
2016-01-06 08:23:45.060 11755 ERROR ryu.lib.hub [-] hub: uncaught exception: 
Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/ryu/lib/hub.py",
 line 52, in _launch
func(*args, **kwargs)
  File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1991, in main
sys.exit(1)
SystemExit: 1


http://logs.openstack.org/77/240577/6/check/gate-neutron-dsvm-
fullstack/ec699c7/logs/TestConnectivitySameNetwork.test_connectivity_VLANs,Native_
/neutron-openvswitch-agent--2016-01-06--
08-23-13-672140.log.txt.gz#_2016-01-06_08_23_45_032

http://logs.openstack.org/77/240577/6/check/gate-neutron-dsvm-
fullstack/ec699c7/testr_results.html.gz

** Affects: neutron
 Importance: Medium
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: New


** Tags: ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534110

Title:
  OF native connection sometimes goes away and agent exits

Status in neutron:
  New

Bug description:
  Probably we should provide a reconnection mechanism when something on
  the OpenFlow connection goes wrong.

  
  2016-01-06 08:23:45.031 11755 DEBUG OfctlService [-] dpid 231386065181514 -> 
datapath None _handle_get_datapath 
/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/ryu/app/ofctl/service.py:106
  2016-01-06 08:23:45.032 11755 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [-] 
Switch connection timeout
  2016-01-06 08:23:45.033 11755 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline', '--format=json', 
'--', '--columns=datapath_id', 'list', 'Bridge', 'br-int261889006'] 
create_process /opt/stack/new/neutron/neutron/agent/linux/utils.py:84
  2016-01-06 08:23:45.057 11755 DEBUG neutron.agent.linux.utils [-] Exit code: 
0 execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:142
  2016-01-06 08:23:45.058 11755 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Switch 
connection timeout Agent terminated!
  2016-01-06 08:23:45.060 11755 ERROR ryu.lib.hub [-] hub: uncaught exception: 
Traceback (most recent call last):
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/ryu/lib/hub.py",
 line 52, in _launch
  func(*args, **kwargs)
File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1991, in main
  sys.exit(1)
  SystemExit: 1


  http://logs.openstack.org/77/240577/6/check/gate-neutron-dsvm-
  
fullstack/ec699c7/logs/TestConnectivitySameNetwork.test_connectivity_VLANs,Native_
  /neutron-openvswitch-agent--2016-01-06--
  08-23-13-672140.log.txt.gz#_2016-01-06_08_23_45_032

  http://logs.openstack.org/77/240577/6/check/gate-neutron-dsvm-
  fullstack/ec699c7/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534113] [NEW] default sg could add same rule as original egress ipv4 rule

2016-01-14 Thread yujie
Public bug reported:

In the default security group, we can add a rule that is the same as the
original egress IPv4 rule.

Reproduce step: 
# neutron security-group-rule-create --direction egress --remote-ip-prefix 
0.0.0.0/0 default

It returns:
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | egress                               |
| ethertype         | IPv4                                 |
| id                | d8f968e2-270b-4d6e-a2d0-a408726b7edc |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          |                                      |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 9a2c0d86-4a36-46d4-a4da-43a239003eef |
| tenant_id         | 52953da91c0e47528d5317867391aaec     |
+-------------------+--------------------------------------+

Actually, we expect "Security group rule already exists. Rule id is x".

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534113

Title:
  default sg could add same rule as original egress ipv4 rule

Status in neutron:
  New

Bug description:
  In the default security group, we can add a rule that is the same as the
  original egress IPv4 rule.

  Reproduce step: 
  # neutron security-group-rule-create --direction egress --remote-ip-prefix 
0.0.0.0/0 default

  It returns:
  Created a new security_group_rule:
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | direction         | egress                               |
  | ethertype         | IPv4                                 |
  | id                | d8f968e2-270b-4d6e-a2d0-a408726b7edc |
  | port_range_max    |                                      |
  | port_range_min    |                                      |
  | protocol          |                                      |
  | remote_group_id   |                                      |
  | remote_ip_prefix  | 0.0.0.0/0                            |
  | security_group_id | 9a2c0d86-4a36-46d4-a4da-43a239003eef |
  | tenant_id         | 52953da91c0e47528d5317867391aaec     |
  +-------------------+--------------------------------------+

  Actually, we expect "Security group rule already exists. Rule id is x".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533638] Re: test_bash_completion fails due to deprecation warning generated by neutronclient

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/266885
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=feced76488ea99355c605e0bc719723931621693
Submitter: Jenkins
Branch: master

commit feced76488ea99355c605e0bc719723931621693
Author: Ihar Hrachyshka 
Date:   Wed Jan 13 13:32:15 2016 +0100

tests: stop validating neutronclient in neutron-debug tests

In neutronclient 4.0.0, any command executed triggers DeprecationWarning
on stderr (to be fixed by I77f168af92ae51ce16bed4988bbcaf7c18557727 and
a new client release including it).

The test cases assumed that if a command is successful, it never writes to
stderr, making the tests fail when using the latest client.

Instead of fixing the test class not to assume there is no output on
stderr, remove it because we are not meant to validate neutronclient in
neutron gate at all and should rely on the library as shipped. Client
should already have reasonable coverage for its CLI.

Change-Id: I6440445b80637a5a9f4de052cf5ea1fbd8dcf7d1
Closes-Bug: #1533638


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533638

Title:
  test_bash_completion fails due to deprecation warning generated by
  neutronclient

Status in neutron:
  Fix Released

Bug description:
  neutron.tests.unit.debug.test_shell.ShellTest.test_bash_completion
  --

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/unit/debug/test_shell.py", line 62, in 
test_bash_completion
  self.assertFalse(stderr)
File 
"/home/vagrant/git/neutron/.tox/py27-constraints/lib/python2.7/site-packages/unittest2/case.py",
 line 696, in assertFalse
  raise self.failureException(msg)
  AssertionError: 
"/home/vagrant/git/neutron/.tox/py27-constraints/lib/python2.7/site-packages/neutronclient/neutron/v2_0/availability_zone.py:21:
 DeprecationWarning: Function 'neutronclient.i18n._()' has moved to 
'oslo_i18n._factory.f()': moved to neutronclient._i18n; please migrate to local 
oslo_i18n usage, as defined at 
http://docs.openstack.org/developer/oslo.i18n/usage.html\n  
help=_('Availability Zone for the %s '\n" is not false

  We see the warning because we enable warnings in the base test class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534163] [NEW] nova boot Unexpected API Error

2016-01-14 Thread Gennaro Oliva
Public bug reported:

I'm following the Installation guide on a CentOS 7 box.

When i try to launch an instance with nova using:

nova boot --flavor m1.tiny --image cirros --nic net-id=2a3bd890-1afd-4d22-9cfb-e9e9415f1a03 --security-group default --key-name oliva public-instance

I get the following error:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-fa00f28f-ab95-441d-91cb-68f9353ae16f)

Output of nova flavor-list, nova image-list, nova secgroup-list, neutron
net-list follows:

[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@controller ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 704c03bc-19af-4546-b995-da8751016642 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

[root@controller ~]# neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| 2a3bd890-1afd-4d22-9cfb-e9e9415f1a03 | public  | fe905412-865c-48aa-b1b6-992496ff8c3b 192.168.8.0/24 |
| 07fab7de-6e98-48d5-a132-3494c2bf55a0 | private | 37bdd457-17dc-4f24-b928-d13d278ba9f3 172.16.1.0/24  |
+--------------------------------------+---------+-----------------------------------------------------+

[root@controller ~]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id                                   | Name    | Description            |
+--------------------------------------+---------+------------------------+
| 893815d5-5d8e-42aa-9681-b98681c72e13 | default | Default security group |
+--------------------------------------+---------+------------------------+

This is the list of openstack packages installed on controller:

openstack-swift-proxy-2.5.0-1.el7.noarch
openstack-neutron-7.0.0-2.el7.noarch
openstack-selinux-0.6.41-1.el7.noarch
openstack-dashboard-8.0.0-1.el7.noarch
openstack-swift-2.5.0-1.el7.noarch
openstack-heat-engine-5.0.0-1.el7.noarch
openstack-nova-scheduler-12.0.0-3.94d6b69git.el7.noarch
openstack-keystone-8.0.1-1.el7.noarch
python2-django-openstack-auth-2.0.1-1.el7.noarch
openstack-swift-plugin-swift3-1.7-4.el7.noarch
openstack-heat-common-5.0.0-1.el7.noarch
openstack-utils-2014.2-1.el7.noarch
openstack-nova-common-12.0.0-3.94d6b69git.el7.noarch
openstack-neutron-ml2-7.0.0-2.el7.noarch
openstack-nova-novncproxy-12.0.0-3.94d6b69git.el7.noarch
openstack-nova-api-12.0.0-3.94d6b69git.el7.noarch
openstack-swift-container-2.5.0-1.el7.noarch
openstack-heat-api-5.0.0-1.el7.noarch
python-openstackclient-1.7.1-1.el7.noarch
openstack-neutron-linuxbridge-7.0.0-2.el7.noarch
openstack-nova-console-12.0.0-3.94d6b69git.el7.noarch
openstack-cinder-7.0.1-1.el7.noarch
openstack-glance-11.0.1-1.el7.noarch
openstack-heat-api-cfn-5.0.0-1.el7.noarch
centos-release-openstack-liberty-1-4.el7.noarch
openstack-nova-cert-12.0.0-3.94d6b69git.el7.noarch
openstack-neutron-common-7.0.0-2.el7.noarch
openstack-nova-conductor-12.0.0-3.94d6b69git.el7.noarch

nova-api.log follows:

2016-01-14 14:32:18.158 3136 INFO nova.osapi_compute.wsgi.server 
[req-674ec488-9823-455b-9b0e-9c8e3ef57933 07be303336004eafb9751f8bf0e10f1f 
afb0d6294ea6426b8d5bdb109c4c4d1e - - -] 192.168.8.114 "GET /v2/ HTTP/1.1" 
status: 200 len: 572 time: 0.0707300
2016-01-14 14:32:18.704 3136 INFO nova.osapi_compute.wsgi.server 
[req-c95e2855-be73-490a-b83b-dea9e015b676 07be303336004eafb9751f8bf0e10f1f 
afb0d6294ea6426b8d5bdb109c4c4d1e - - -] 192.168.8.114 "GET 
/v2/afb0d6294ea6426b8d5bdb109c4c4d1e/images HTTP/1.1" status: 200 len: 692 
time: 0.2755518
2016-01-14 14:32:18.894 3136 INFO nova.osapi_compute.wsgi.server 
[req-1f8c9407-9417-46e7-a40a-26100aa7067c 07be303336004eafb9751f8bf0e10f1f 

[Yahoo-eng-team] [Bug 1534168] [NEW] TenantAbsoluteLimits rest call do not return dict

2016-01-14 Thread Marcos Lobo
Public bug reported:

In the cinder REST API, all the get() methods always return a
dictionary or a list of dictionaries.

But there is one method that does NOT: TenantAbsoluteLimits. You
can check here:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/rest/cinder.py#L204

My question is: why? I did some checks and I think this is a bug.
TenantAbsoluteLimits should return a list of dictionaries.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534168

Title:
  TenantAbsoluteLimits rest call do not return dict

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the cinder REST API, all the get() methods always return a
  dictionary or a list of dictionaries.

  But there is one method that does NOT: TenantAbsoluteLimits.
  You can check here:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/rest/cinder.py#L204

  My question is: why? I did some checks and I think this is a bug.
  TenantAbsoluteLimits should return a list of dictionaries.
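
  A minimal sketch (not the actual horizon code) of how the endpoint could
  normalize whatever the cinder wrapper returns into a plain dict, like the
  other GET views do; the helper name and the attribute-based fallback are
  assumptions for illustration only:

  def limits_to_dict(limits):
      """Normalize a limits result into a plain dict for JSON output."""
      if isinstance(limits, dict):
          return limits
      # Fall back to the object's public attributes.
      return {k: v for k, v in vars(limits).items()
              if not k.startswith('_')}

  class FakeLimits(object):
      """Stand-in for the object the cinder API wrapper might return."""
      def __init__(self):
          self.maxTotalVolumes = 10
          self.totalVolumesUsed = 3

  print(limits_to_dict(FakeLimits()))
  # {'maxTotalVolumes': 10, 'totalVolumesUsed': 3}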

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2016-01-14 Thread Rob Cresswell
** Changed in: horizon
Milestone: None => mitaka-2

** Changed in: horizon/icehouse
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in Cinder icehouse series:
  Won't Fix
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  Won't Fix
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in Sahara:
  Fix Released
Status in Trove:
  Fix Released
Status in WSME:
  Fix Released

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
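
  A toy illustration (not project code) of the failure mode: a test that
  depends on dict/set iteration order only passes for some hash seeds, while
  an order-insensitive comparison passes for all of them.

  data = {'net-a': 1, 'net-b': 2, 'net-c': 3}

  # Fragile: the joined string depends on iteration order, which varies
  # with PYTHONHASHSEED on interpreters with hash randomization.
  serialized = ','.join(data)

  # Robust: sort before comparing, or compare as sets/dicts.
  assert sorted(data) == ['net-a', 'net-b', 'net-c']
  assert set(serialized.split(',')) == set(data)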

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518200] Re: instance is not destroyed on source host after a successful evacuate

2016-01-14 Thread Matt Riedemann
** Tags added: liberty-backport-potential

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518200

Title:
  instance is not destroyed on source host after a successful evacuate

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  After evacuating an instance to a new host successfully and then starting
the old host's nova-compute, the old instance is not destroyed as expected.
  See the following code:
  https://github.com/openstack/nova/blob/stable/liberty/nova/compute/manager.py#L817
  nova-compute reads migration records from the DB to get the evacuated
instances and then destroys them. It filters migrations with status 'accepted'.
  https://github.com/openstack/nova/blob/stable/liberty/nova/compute/manager.py#L2715
  After a successful evacuation, the status of the migration changes from
'accepted' to 'done'.
  So I think we should use 'done' instead of 'accepted' when filtering
migration records.
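
  A toy illustration (not nova code) of the proposed filtering change: the
  cleanup on the source host should pick up evacuations whose migration
  status is already 'done', not only the ones still marked 'accepted'.

  def evacuated_migrations(migrations, host, statuses=('accepted', 'done')):
      """Return migrations that left `host` and are in one of `statuses`."""
      return [m for m in migrations
              if m['source_compute'] == host and m['status'] in statuses]

  migrations = [
      {'source_compute': 'node1', 'status': 'done'},
      {'source_compute': 'node2', 'status': 'accepted'},
  ]
  # Filtering on 'accepted' alone would miss the first record, so the
  # evacuated instance would never be destroyed on node1.
  print(evacuated_migrations(migrations, 'node1'))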

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520180] Re: Pecan: no authZ check on DELETE operations

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/234457
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=293c3e01efce74d110ff34703a9e68ce2cd782e6
Submitter: Jenkins
Branch:master

commit 293c3e01efce74d110ff34703a9e68ce2cd782e6
Author: Salvatore Orlando 
Date:   Tue Oct 13 15:08:47 2015 -0700

Pecan: Fixes and tests for the policy enforcement hook

As PolicyNotAuthorizedException is raised in a hook, the
ExceptionTranslationHook is not invoked for it; therefore a 500
response is returned whereas a 403 was expected. This patch
explicitly handles the exception in the hook in order to ensure
the appropriate response code is returned.

Moreover, the structure of the 'before' hook prevented checks
on DELETE requests from being performed. As a result the check
was not performed at all (checks on the 'after' hook only pertain
GET requests). This patch changes the logic of the 'before' hook
by ensuring the item to authorize access to is loaded both on PUT
and DELETE requests.

This patch also adds functional tests specific for the policy
enforcement hook.

Change-Id: I8c76cb05568df47648cff71a107cfe701b286bb7
Closes-Bug: #1520180
Closes-Bug: #1505831


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520180

Title:
  Pecan: no authZ check on DELETE operations

Status in neutron:
  Fix Released

Bug description:
  Authorization checks are completely skipped on DELETE operations both in the 
'before' and in the 'after' hooks.
  This does not look great, and should be fixed.
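
  A hedged sketch (not neutron's actual hook) of the kind of change the fix
  describes: the 'before' hook must load the target item for DELETE as well
  as PUT, otherwise there is nothing to run the authorization check against.
  The fetch_resource() helper is hypothetical.

  from pecan import hooks

  def fetch_resource(state):
      """Hypothetical helper: look up the object targeted by the URL."""
      resource_id = state.request.path.rstrip('/').rsplit('/', 1)[-1]
      return {'id': resource_id}

  class PolicyEnforcementHook(hooks.PecanHook):

      def before(self, state):
          if state.request.method in ('PUT', 'DELETE'):
              # Previously only PUT loaded the item, so DELETE requests
              # were never checked against the policy rules.
              state.request.environ['target_resource'] = \
                  fetch_resource(state)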

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1520180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505843] Re: Pecan: quota management API broken

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/234466
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5fe6f8015ac3e532c4cf95201209f49e6b69955f
Submitter: Jenkins
Branch:master

commit 5fe6f8015ac3e532c4cf95201209f49e6b69955f
Author: Salvatore Orlando 
Date:   Fri Sep 18 14:10:26 2015 -0700

Pecan: fix quota management

This patch fixes quota management APIs in the Pecan framework.
To this aim:

1) an ad-hoc pair of collection/item controllers are introduced
   for the quota resource; as the new controllers have been added
   in a separate module, the neutron.pecan_wsgi.controllers.utils
   module has been added as well for helpers, routines and classes
   used by all pecan controllers;
2) the quota API extension is made pecan-aware, meaning that it
   simply returns a Pecan controller instance rather than deferring
   the task to the startup process that builds controllers using the
   home-grown WSGI framework ext manager;
3) the quota resource is now "almost" a standard neutron resource;
   unfortunately since it does not yet have its own service plugin a
   special provision is made in the attribute population hook in
   order to ensure the object is loaded for allowing correct
   policy enforcement.
4) Functional tests for the quota controller have been added.

Closes-Bug: #1505843

Change-Id: I44a1fd73f678e493d5b1163e5f183d9efdc678ac


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505843

Title:
  Pecan: quota management API broken

Status in neutron:
  Fix Released

Bug description:
  The quota management APIs in Pecan simply do not work.

  The pecan controller framework tries to treat quota as a resource, and
  even creates resource and collection controllers for this resource.
  However, this fails as the plugins do not implement a quota interface.

  In the current WSGI framework, quota management is instead performed by
  a special controller which interacts directly with the driver and
  implements its own authZ logic.

  The pecan framework should implement quota management correctly,
  possibly avoiding carrying over "special" behaviours from the current
  WSGI framework.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505831] Re: Pecan: policy evaluation error can trigger 500 response

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/234457
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=293c3e01efce74d110ff34703a9e68ce2cd782e6
Submitter: Jenkins
Branch:master

commit 293c3e01efce74d110ff34703a9e68ce2cd782e6
Author: Salvatore Orlando 
Date:   Tue Oct 13 15:08:47 2015 -0700

Pecan: Fixes and tests for the policy enforcement hook

As PolicyNotAuthorizedException is raised in a hook, the
ExceptionTranslationHook is not invoked for it; therefore a 500
response is returned whereas a 403 was expected. This patch
explicitly handles the exception in the hook in order to ensure
the appropriate response code is returned.

Moreover, the structure of the 'before' hook prevented checks
on DELETE requests from being performed. As a result the check
was not performed at all (checks on the 'after' hook only pertain
GET requests). This patch changes the logic of the 'before' hook
by ensuring the item to authorize access to is loaded both on PUT
and DELETE requests.

This patch also adds functional tests specific for the policy
enforcement hook.

Change-Id: I8c76cb05568df47648cff71a107cfe701b286bb7
Closes-Bug: #1520180
Closes-Bug: #1505831


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505831

Title:
  Pecan: policy evaluation error can trigger 500 response

Status in neutron:
  Fix Released

Bug description:
  In [1], if policy_method == enforce, a PolicyNotAuthorizedException is
triggered.
  However, the exception translation hook is not called, most likely because
the on_error hook is not installed on the other policy hooks.
  This might be logical and should therefore not be considered a pecan bug.

  The policy hook should take this into account and handle the
  exception.

  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/policy_enforcement.py#n94
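
  A hedged sketch of handling the failure inside the hook itself, since the
  on_error translation hook never sees exceptions raised by other hooks;
  names besides pecan.abort() and oslo_policy are illustrative only.

  import pecan
  from oslo_policy import policy as oslo_policy

  def enforce_or_403(enforcer, action, target, creds):
      """Run the policy check and map failures to a 403 response instead
      of letting the exception bubble up as a 500."""
      try:
          enforcer.enforce(action, target, creds, do_raise=True)
      except oslo_policy.PolicyNotAuthorized:
          pecan.abort(403, detail='Not authorized to perform %s' % action)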

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534409] [NEW] Permissions is written with fixed string on Admin dashboard

2016-01-14 Thread Kenji Ishii
Public bug reported:

In openstack_dashboard/dashboards/admin/dashboard.py, the permissions of the
Admin dashboard are specified.
However, the value is a hard-coded string: the 'xxx' in 'openstack.role.xxx'
is a literal role name.
So at the moment it cannot reflect changes to 'OPENSTACK_KEYSTONE_ADMIN_ROLES'.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534409

Title:
  Permissions is written with fixed string on Admin dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In openstack_dashboard/dashboards/admin/dashboard.py, the permissions of the
Admin dashboard are specified.
  However, the value is a hard-coded string: the 'xxx' in 'openstack.role.xxx'
is a literal role name.
  So at the moment it cannot reflect changes to
'OPENSTACK_KEYSTONE_ADMIN_ROLES'.
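
  A hedged sketch (not the actual fix) of deriving the dashboard permissions
  from the configured admin roles instead of hard-coding one role name; the
  'openstack.roles.' prefix and the setting name are assumptions taken from
  the report.

  from django.conf import settings

  import horizon

  class Admin(horizon.Dashboard):
      name = "Admin"
      slug = "admin"
      # Build one permission string per configured admin role.
      permissions = tuple(
          'openstack.roles.%s' % role.lower()
          for role in getattr(settings,
                              'OPENSTACK_KEYSTONE_ADMIN_ROLES', ['admin']))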

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518110] Re: Launch Instance Wizard - Security Groups Available table count not working

2016-01-14 Thread Rajat Vig
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518110

Title:
  Launch Instance Wizard - Security Groups Available table count not
  working

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Angular Launch Instance Wizard > Security Group Step:

  The Available table is acting strangely.  Please take a look at the
  Available table in the attached screenshot.

  The default security group is selected by default, but it is still showing
  up in the Available table, along with a 'No available items' row. So there
  are 2 rows.

  Also, if I have more than one security group, the Available item count
  is incorrect.  If I try to allocate multiple, they don't show up in
  the Allocated table.  Opening up browser console shows me these
  errors:

  Duplicates in a repeater are not allowed. Use 'track by' expression to
  specify unique keys. Repeater: row in
  ctrl.tableData.displayedAllocated track by row.id, Duplicate key: 1,
  Duplicate value:
  
{"description":"default","id":1,"name":"default","rules":[],"tenant_id":"485eee44635643f0a60fe38d4e0f9044","security_group_rules":[null]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417791] Re: Neutron allows non-admin user to circumvent port security via port-update device_owner

2016-01-14 Thread Kevin Benton
*** This bug is a duplicate of bug 1489111 ***
https://bugs.launchpad.net/bugs/1489111

Thanks for filing this. This was actually a vulnerability fixed later in
bug 1489111. I think the part that was overlooked was that this could be
done on shared networks.

** This bug has been marked a duplicate of bug 1489111
   [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can by bypassed by changing 
device_owner (CVE-2015-5240)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417791

Title:
  Neutron allows non-admin user to circumvent port security via port-
  update device_owner

Status in neutron:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Neutron allows a non-admin tenant to circumvent port security and spoof
  traffic by updating the device-owner to 'network:None' and rebooting
  the instance.

  How to reproduce:

  1. Create a new tenant: `keystone tenant-create --name demo --enable=true`
  2. Create a new user in that tenant: `keystone user-create --name demo 
--tenant $TENANT_ID --pass $PASSWORD --enabled true`
  3. Switch to that new user: `export OS_USERNAME=demo; export 
OS_TENANT_NAME=demo; export OS_PASSWORD=$PASSWORD`
  4. Create a keypair: `nova keypair-add demo-key --pub-key 
~/.ssh/authorized_keys`
  5. Create a security group: `neutron security-group-create demo-secgroup`
  6. Add a permit rule to that secuirty group: `neutron 
security-group-rule-create demo-secgroup --remote-ip-prefix 0.0.0.0/0`
  7. Deploy a new instance: `nova boot --flavor m1.tiny --image ubuntu-14.04 
--nic net-id=$NETWORK_ID --key-name demo-key --security-groups demo-secgroup 
--poll demo-instance`
  8. Find the instance's neutron port: `neutron port-list`
  9. Update neutron port device owner: `neutron port-update $PORT_ID 
--device_owner network:None`
  10. Verify neutron port device owner updated: `neutron port-show $PORT_ID`
  11. Reboot instance: `nova reboot $INSTANCE_ID`

  When the instance comes back up, it will not have anti-spoofing port
  security rules present and can source traffic from any IP and MAC
  combination.

  It doesn't appear that this was intended; in Juno the stock
neutron/policy.conf includes:
  ```
  "update_port": "rule:admin_or_owner",
  "update_port:port_security_enabled": "rule:admin_or_network_owner",
  ```

  But the port owner is permitted to modify the device-owner attribute
  of the port which allows circumventing the port security.

  I would recommend protecting the device_owner and device_id port
  attributes so they can only be modified by an admin user.
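
  For example, a pair of policy.json entries along the lines of the ones
  quoted above could restrict those attributes (whether the Juno policy
  engine honours these exact keys is an assumption):

  ```
  "update_port:device_owner": "rule:context_is_admin",
  "update_port:device_id": "rule:context_is_admin",
  ```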

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible for python 3

2016-01-14 Thread Catherine Diep
Fixed by https://review.openstack.org/#/c/261201/

** Changed in: refstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280105

Title:
  urllib/urllib2  is incompatible for python 3

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in Fuel for OpenStack:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  In Progress
Status in Murano:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-troveclient:
  In Progress
Status in refstack:
  Fix Released
Status in Sahara:
  Fix Released
Status in tacker:
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  In Progress
Status in Zuul:
  In Progress

Bug description:
  urllib/urllib2 is incompatible with Python 3.
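
  A minimal sketch of the usual porting approach: route the imports through
  six.moves so a single code path works on both Python 2 and 3.

  from six.moves.urllib import parse as urlparse
  from six.moves.urllib import request as urlrequest

  query = urlparse.urlencode({'field.searchtext': 'urllib'})
  req = urlrequest.Request('https://bugs.launchpad.net/?' + query,
                           headers={'User-Agent': 'example'})
  # urlrequest.urlopen(req) works unchanged under py2 and py3.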

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532688] Re: Testing volume encryption fails

2016-01-14 Thread Nguyen Truong Son
** Changed in: openstack-manuals
   Status: New => Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532688

Title:
  Testing volume encryption fails

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  Hi

  I deployed OpenStack Liberty with NFS Cinder and the Barbican key manager.
  When attaching an encrypted volume to an instance, the compute host runs
  the command:

  sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup --batch-mode
  luksFormat --key-file=- --cipher aes-xts-plain64 --key-size 512
  /home/openstack/deployment/lib/nova/mnt/014350d8bf61a4224293d8dd521b6438
  /volume-ac170625-e126-4f01-b123-55f864125821

  After that, it runs the command:

  sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup luksOpen --key-
  file=-
  /home/openstack/deployment/lib/nova/mnt/014350d8bf61a4224293d8dd521b6438
  /volume-ac170625-e126-4f01-b123-55f864125821 volume-
  ac170625-e126-4f01-b123-55f864125821

  The luksOpen does the following: the original Cinder volume file is deleted
and replaced by a link pointing to the encrypted device.
  See: https://bugs.launchpad.net/nova/+bug/1511255

  The compute host is where cryptsetup is run, so it can read data from the
  volume.

  When running the command to test: strings
  /home/openstack/deployment/lib/nova/mnt/014350d8bf61a4224293d8dd521b6438
  /volume-ac170625-e126-4f01-b123-55f864125821 | grep "Hello"

  Result is:

  Hello, world (unencrypted /dev/vdb)
  Hello, world (encrypted /dev/vdc)

  ---
  Built: 2016-01-10T11:13:36 00:00
  git SHA: 2e180b474baadea9df8d9ae5f73a0cf8e150a417
  URL: 
http://docs.openstack.org/liberty/config-reference/content/section_testing_encryption.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/block-storage/section_volume-encryption.xml
  xml:id: section_testing_encryption

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531484] Re: Adopt oslotest in DietTestCase

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/264158
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7a2824afc46342a8cef1937384b1475e68862b27
Submitter: Jenkins
Branch:master

commit 7a2824afc46342a8cef1937384b1475e68862b27
Author: Ihar Hrachyshka 
Date:   Wed Jan 6 13:54:00 2016 +0100

Adopt oslotest BaseTestCase as a base class for DietTestCase

This will make us more in line with other projects in terms of testing
API. It also allows to remove some duplicate code from base test classes
for Neutron, leaving just Neutron specific fixture setup there.

Note: we don't add a new dependency because the library is already used
in some of database functional tests through oslo.db base test classes.

Change-Id: Ifec6cce386d8b024605496026c8469200f3c002b
Closes-Bug: #1531484


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531484

Title:
  Adopt oslotest in DietTestCase

Status in neutron:
  Fix Released

Bug description:
  This will allow us to stay more in line with other projects in how we
  handle tests, and kill some code from the base test classes that
  duplicates what's already available in oslotest.
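
  A hedged sketch of the adoption: DietTestCase inherits from oslotest's
  BaseTestCase so the common cleanup/fixture handling comes from the library,
  leaving only Neutron-specific setup in the subclass.

  from oslotest import base

  class DietTestCase(base.BaseTestCase):
      """Keep only Neutron-specific fixture setup here."""

      def setUp(self):
          super(DietTestCase, self).setUp()
          # e.g. register Neutron-only fixtures, warning filters, etc.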

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534354] [NEW] ServerGroupsV213SampleJsonTest is not actually running tests against the v2.13 microversion

2016-01-14 Thread Matt Riedemann
Public bug reported:

There are a few issues here:

https://github.com/openstack/nova/blob/master/nova/tests/functional/api_sample_tests/test_server_groups.py#L81

class ServerGroupsV213SampleJsonTest(api_sample_base.ApiSampleTestBaseV21):
extension_name = "os-server-groups"
request_api_version = '2.13'
scenarios = [('v2_13', {})]

1. It is not extending the ServerGroupsSampleJsonTest class so it's not
actually running any tests.

2. The request_api_version variable isn't used, and the scenarios
variable is not defined correctly, so it's only running against v2 API,
not the v2.13 API.

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: api testing

** Changed in: nova
   Status: New => Confirmed

** Tags added: api testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1534354

Title:
  ServerGroupsV213SampleJsonTest is not actually running tests against
  the v2.13 microversion

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  There are a few issues here:

  
https://github.com/openstack/nova/blob/master/nova/tests/functional/api_sample_tests/test_server_groups.py#L81

  class ServerGroupsV213SampleJsonTest(api_sample_base.ApiSampleTestBaseV21):
  extension_name = "os-server-groups"
  request_api_version = '2.13'
  scenarios = [('v2_13', {})]

  1. It is not extending the ServerGroupsSampleJsonTest class so it's
  not actually running any tests.

  2. The request_api_version variable isn't used, and the scenarios
  variable is not defined correctly, so it's only running against v2
  API, not the v2.13 API.
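
  A hedged sketch of what a fixed class might look like; the exact attribute
  names expected by the api sample test framework (microversion, the scenario
  keys) are assumptions, not the merged fix.

  class ServerGroupsV213SampleJsonTest(ServerGroupsSampleJsonTest):
      # Inherit the actual tests and run them with the 2.13 microversion.
      microversion = '2.13'
      scenarios = [('v2_13', {'api_major_version': 'v2.1'})]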

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1534354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499785] Re: Static routes are not added to the qrouter namespace for DVR routers

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/228026
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=158f9eabe20824b2c91eaac795dad8b8a773611d
Submitter: Jenkins
Branch:master

commit 158f9eabe20824b2c91eaac795dad8b8a773611d
Author: Swaminathan Vasudevan 
Date:   Fri Sep 25 09:54:44 2015 -0700

Static routes not added to qrouter namespace for DVR

Today static routes are added to the SNAT namespace
for DVR routers. But they are not added to the qrouter
namespace.

Also while configuring the static routes to SNAT
namespace, the router is not checked for the existence
of the gateway.

When routes are added to a router without a gateway the
routes are only configured in the router namespace, but
when a gateway is set later, those routes have to be
populated in the snat_namespace as well.

This patch addresses the above mentioned issues.

Closes-Bug: #1499785
Closes-Bug: #1499787

Change-Id: I37e0d0d723fcc727faa09028045b776957c75a82


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499785

Title:
  Static routes are not added to the qrouter namespace for DVR routers

Status in neutron:
  Fix Released

Bug description:
  Static routes are not added to the qrouter namespace when routers are
  added.

  Initially the routes were configured in the qrouter namespace but
not in the SNAT namespace.
  A recent patch caused this regression by moving the routes from the qrouter
namespace to the SNAT namespace.

  2bb48eb58ad28a629dd12c434b83680aa3f240a4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499787] Re: Static routes are attempted to add to SNAT Namespace of DVR routers without checking for Router Gateway.

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/228026
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=158f9eabe20824b2c91eaac795dad8b8a773611d
Submitter: Jenkins
Branch:master

commit 158f9eabe20824b2c91eaac795dad8b8a773611d
Author: Swaminathan Vasudevan 
Date:   Fri Sep 25 09:54:44 2015 -0700

Static routes not added to qrouter namespace for DVR

Today static routes are added to the SNAT namespace
for DVR routers. But they are not added to the qrouter
namespace.

Also while configuring the static routes to SNAT
namespace, the router is not checked for the existence
of the gateway.

When routes are added to a router without a gateway the
routes are only configured in the router namespace, but
when a gateway is set later, those routes have to be
populated in the snat_namespace as well.

This patch addresses the above mentioned issues.

Closes-Bug: #1499785
Closes-Bug: #1499787

Change-Id: I37e0d0d723fcc727faa09028045b776957c75a82


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499787

Title:
  Static routes are attempted to add to SNAT Namespace of DVR routers
  without checking for Router Gateway.

Status in neutron:
  Fix Released

Bug description:
  In DVR routers, static routes are now only added to the SNAT namespace.
  But before adding them to the SNAT namespace, the routers are not checked
for the existence of a gateway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534273] [NEW] Nova - Unexpected API Error

2016-01-14 Thread Ahmed
Public bug reported:

Hi,

Launching a new/first instance fails with an error. This is a new OpenStack
Liberty two-node deployment, following the official documentation:
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-public.html


Command used::
[root@xepcloud ~]# nova boot --flavor m1.tiny --image cirros --nic 
net-id=8fb32974-8dcf-47c8-a42b-a890e47725f4 --security-group default --key-name 
mykey public-instance

Error::
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API 
log if possible.
 (HTTP 500) (Request-ID: 
req-a2d56513-0c19-46e8-9a45-d63fcee17224)

Note:
Could this be related to the networking config? I was not sure what IP to use
for OVERLAY_INTERFACE_IP_ADDRESS, so I used the mgmt IP, but I have a second
physical interface for public access with no IP assigned.


nova API logs:


2016-01-14 11:00:13.292 11528 INFO nova.osapi_compute.wsgi.server 
[req-c3f11e04-e83e-46c8-9f3b-7a1709f5cc5d 7d4aa0f0645248f3b49ec2a4956a2535 
ee13d79dc9954b458a8d0f173bd63ccb - - -] 192.168.178.90 "GET 
/v2/ee13d79dc9954b458a8d0f173bd63ccb/flavors?is_public=None HTTP/1.1" status: 
200 len: 1477 time: 0.0188160
2016-01-14 11:00:13.311 11528 INFO nova.osapi_compute.wsgi.server 
[req-696093c4-6513-4209-9f56-4a0ae9488a25 7d4aa0f0645248f3b49ec2a4956a2535 
ee13d79dc9954b458a8d0f173bd63ccb - - -] 192.168.178.90 "GET 
/v2/ee13d79dc9954b458a8d0f173bd63ccb/flavors/1 HTTP/1.1" status: 200 len: 629 
time: 0.0156209
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
[req-a2d56513-0c19-46e8-9a45-d63fcee17224 7d4aa0f0645248f3b49ec2a4956a2535 
ee13d79dc9954b458a8d0f173bd63ccb - - -] Unexpected exception in API method
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
611, in create
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1581, in create
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1181, in 
_create_instance
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 955, in 
_validate_and_build_base_options
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
pci_request_info, requested_networks)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1059, in 
create_pci_requests_for_sriov_ports
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions neutron = 
get_client(context, admin=True)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 237, in 
get_client
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
auth_token = _ADMIN_AUTH.get_token(_SESSION)
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 
200, in get_token
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
self.get_access(session).auth_token
2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 
240, in get_access
2016-01-14 

[Yahoo-eng-team] [Bug 1534281] [NEW] Linux bridge unit test test_report_state_revived fails on OSX

2016-01-14 Thread Brian Haley
Public bug reported:

Linux bridge unit test test_report_state_revived fails on OSX because
bridge_lib tries to use a Linux-specific check to find the list of
current bridges.  Mocking-out the method to just return a list of bridge
names fixes the issue.

This is the tox output:

neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent.TestLinuxBridgeAgent.test_report_state_revived
--

Captured pythonlogging:
~~~
2016-01-14 13:05:38,700  WARNING [neutron.agent.securitygroups_rpc] Driver 
configuration doesn't match with enable_security_group
2016-01-14 13:05:38,700 INFO 
[neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent] RPC 
agent_id: lb0001
2016-01-14 13:05:38,702 INFO [neutron.agent.l2.extensions.manager] 
Loaded agent extensions: []
2016-01-14 13:05:38,703ERROR 
[neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent] 
Failed reporting state!
Traceback (most recent call last):
  File 
"neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", 
line 857, in _report_state
devices = len(self.br_mgr.get_tap_devices())
  File 
"neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", 
line 531, in get_tap_devices
for device in bridge_lib.get_bridge_names():
  File "neutron/agent/linux/bridge_lib.py", line 44, in get_bridge_names
return os.listdir(BRIDGE_FS)
OSError: [Errno 2] No such file or directory: '/sys/class/net/'


Captured traceback:
~~~
Traceback (most recent call last):
  File 
"neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py",
 line 467, in test_report_state_revived
self.assertTrue(self.agent.fullsync)
  File 
"/Users/haley/neutron/.tox/py27/lib/python2.7/site-packages/unittest2/case.py", 
line 702, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534281

Title:
  Linux bridge unit test test_report_state_revived fails on OSX

Status in neutron:
  In Progress

Bug description:
  Linux bridge unit test test_report_state_revived fails on OSX because
  bridge_lib tries to use a Linux-specific check to find the list of
  current bridges.  Mocking-out the method to just return a list of
  bridge names fixes the issue.

  This is the tox output:

  
neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent.TestLinuxBridgeAgent.test_report_state_revived
  
--

  Captured pythonlogging:
  ~~~
  2016-01-14 13:05:38,700  WARNING [neutron.agent.securitygroups_rpc] 
Driver configuration doesn't match with enable_security_group
  2016-01-14 13:05:38,700 INFO 
[neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent] RPC 
agent_id: lb0001
  2016-01-14 13:05:38,702 INFO [neutron.agent.l2.extensions.manager] 
Loaded agent extensions: []
  2016-01-14 13:05:38,703ERROR 
[neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent] 
Failed reporting state!
  Traceback (most recent call last):
File 
"neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", 
line 857, in _report_state
  devices = len(self.br_mgr.get_tap_devices())
File 
"neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", 
line 531, in get_tap_devices
  for device in bridge_lib.get_bridge_names():
File "neutron/agent/linux/bridge_lib.py", line 44, in get_bridge_names
  return os.listdir(BRIDGE_FS)
  OSError: [Errno 2] No such file or directory: '/sys/class/net/'
  

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py",
 line 467, in test_report_state_revived
  self.assertTrue(self.agent.fullsync)
File 
"/Users/haley/neutron/.tox/py27/lib/python2.7/site-packages/unittest2/case.py", 
line 702, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true
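
  A minimal sketch of the mocking approach described above: keep the test off
  the Linux-only /sys/class/net lookup by stubbing get_bridge_names() from
  setUp(), so the agent code never touches the filesystem.

  import mock

  from neutron.agent.linux import bridge_lib

  def patch_bridge_names(test_case, names=('brqfake-bridge',)):
      """Call from setUp(); works on any platform, not just Linux."""
      patcher = mock.patch.object(
          bridge_lib, 'get_bridge_names', return_value=list(names))
      patcher.start()
      test_case.addCleanup(patcher.stop)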

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534281/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1534447] [NEW] ip allocation can be taken after dup check before commit

2016-01-14 Thread Kevin Benton
Public bug reported:

A concurrent thread/server can use an IP allocation before the port
creation that attempts to insert it on the local server gets a chance to
commit its transaction to the database. So even though we have a dup
check, it may return that the IP is not in use right before something
else steals it.

http://logs.openstack.org/38/257938/9/gate/gate-neutron-dsvm-
api/d98d247/logs/screen-q-svc.txt.gz?level=ERROR

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534447

Title:
  ip allocation can be taken after dup check before commit

Status in neutron:
  New

Bug description:
  A concurrent thread/server can use an IP allocation before the port
  creation that attempts to insert it on the local server gets a chance
  to commit its transaction to the database. So even though we have a
  dup check, it may return that the IP is not in use right before
  something else steals it.

  http://logs.openstack.org/38/257938/9/gate/gate-neutron-dsvm-
  api/d98d247/logs/screen-q-svc.txt.gz?level=ERROR
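
  A hedged sketch (not the committed fix) of the usual way to cope with this
  race: rely on a unique constraint so the loser of the race gets a
  DBDuplicateEntry at commit time, and retry with another address instead of
  trusting the earlier duplicate check.

  from oslo_db import exception as db_exc

  def allocate_with_retry(try_allocate, candidates):
      """Try candidate IPs until one commits without a duplicate-row error."""
      for ip in candidates:
          try:
              return try_allocate(ip)  # performs the INSERT and commit
          except db_exc.DBDuplicateEntry:
              continue  # another server grabbed it between check and commit
      raise RuntimeError('No free IP addresses left to allocate')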

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1202797] Re: Scheduler exception during tempest test_network_basic_ops

2016-01-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1202797

Title:
  Scheduler exception during tempest test_network_basic_ops

Status in neutron:
  Expired

Bug description:
  While investigating a gating problem, I found this exception.

  http://paste.openstack.org/show/40832/

  What I did is run
  nosetests tempest/scenario/test_network_basic_ops.py

  several times.

  I'm still not sure what caused this one, but I'll keep investigating
  it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1202797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496787] Re: If qos service_plugin is enabled, but ml2 extension driver is not, api requests attaching policies to ports or nets will fail with an ugly exception

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/253853
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c615c6f3a7a05dac8684366dde78080f347964dd
Submitter: Jenkins
Branch:master

commit c615c6f3a7a05dac8684366dde78080f347964dd
Author: Sławek Kapłoński 
Date:   Sun Dec 6 00:11:46 2015 +0100

ML2: verify if required extension drivers are loaded

This change ensures extension drivers required by service plugins are loaded
when using ML2 plugin: we check that ML2 loads QoS extension driver when QoS
service plugin is enabled.

Change-Id: Ibf19e77b88ce34c58519ae157c852c9e2b30e31f
Closes-bug: #1496787


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496787

Title:
  If qos service_plugin is enabled, but ml2 extension driver is not, api
  requests attaching policies to ports or nets will fail with an ugly
  exception

Status in neutron:
  Fix Released

Bug description:
  $ neutron port-update b0885ae1-487b-40bc-8fc0-32432a21e39d --qos-policy 
bw-limiter
  Request Failed: internal server error while processing your request.

  Neutron Exception:

  DEBUG neutron.api.v2.base [req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] Request body: {u'port': {u'qos_policy_id': 
u'0ee1c673-5671-40ca-b55f-4cd4bbd999c7'}} from (pid=18237) prepare_request_body 
/opt/stack/neutron/neutron/api/v2/base.py:645
  2015-09-15 01:05:26.022 ERROR neutron.api.v2.resource 
[req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] update failed
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 613, in update
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1158, in update_port
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource 
original_port[qos_consts.QOS_POLICY_ID] !=
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource KeyError: 
'qos_policy_id'
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource
  2015-09-15 01:05:26.026 INFO neutron.wsgi 
[req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] 172.16.175.128 - - [15/Sep/2015 01:05:26] 
"PUT /v2.0/ports/b0885ae1-487b-40bc-8fc0-32432a21e39d.json HTTP/1.1" 500 383 
0.084317
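
  A hedged sketch of the kind of startup check the fix introduces: if the QoS
  service plugin is enabled, ML2 must also load the 'qos' extension driver,
  otherwise fail fast at startup instead of returning 500s later. The exact
  option lookups are assumptions.

  from oslo_config import cfg

  def check_qos_extension_driver():
      service_plugins = cfg.CONF.service_plugins
      extension_drivers = cfg.CONF.ml2.extension_drivers
      if 'qos' in service_plugins and 'qos' not in extension_drivers:
          raise SystemExit(
              "The 'qos' service plugin requires the 'qos' ML2 extension "
              "driver (extension_drivers in ml2_conf.ini).")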

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493026] Re: location-add return error when add new location to 'queued' image

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/242535
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=cea67763c9f8037f47844e3e057166d6874d801d
Submitter: Jenkins
Branch:master

commit cea67763c9f8037f47844e3e057166d6874d801d
Author: kairat_kushaev 
Date:   Fri Nov 6 18:16:30 2015 +0300

Remove location check from V2 client

Glance client has a custom check that generates exception if
location has not been returned by image-get request.
This check should on server side and it should be managed by
policy rules when do location-add action.
That also allows to increase possibility of migrating Heat
to v2[1].

NOTE: After this patch, we'll raise a HTTPBadRequest from
server side instead of HTTPConflict when a user adds a
duplicate location.

[1]: https://review.openstack.org/#/c/240450/

Co-Authored-By: wangxiyuan 

Change-Id: I778ad2a97805b4d85eb0430c603c27a0a1c148e0
Closes-bug: #1493026


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1493026

Title:
  location-add return error when add new location to 'queued' image

Status in Glance:
  Opinion
Status in python-glanceclient:
  Fix Released

Bug description:
  Reproduce:

  1. create a new image:
  glance image-create --disk-format qcow2 --container-format bare --name test

  suppose the image'id is 1

  2.add location to the image:

  glance location-add 1 --url 

  Result: the client raises an error: 'The administrator has disabled
  API access to image locations'.

  3. Set show_multiple_locations = True in glance-api.conf, then repeat
  steps 1-2. It works now.

  But when using the REST API to reproduce it, no matter whether
  show_multiple_locations is False or True, it works in both cases and the
  image's status is changed to 'active'.

  So there is one thing to discuss: should the location check
  (show_multiple_locations) be done in glance-client, or should the check
  that glance-client does be added to the Glance server instead?

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1493026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534445] [NEW] Multiple floating IPs from the same external network are associated to one port when commands are executed at the same time

2016-01-14 Thread Lujin Luo
Public bug reported:

I have three controller nodes and the Neutron servers on these
controllers are set behind Pacemaker and HAProxy to realize
active/active HA using DevStack. MariaDB Galera cluster is used as my
database backend.  I am using the latest codes.

If I run multiple commands to create floating IPs and associate them to
the same port at the same time, all of the commands return success
and I end up with multiple floating IPs from the same external network
associated to the same port.

How to reproduce:

Step 1: Create a network
$ neutron net-create net1

Step 2: Create a subnet on the network
$ neutron subnet-create --name subnet1 net1 192.168.100.0/24

Step 3: Create a port on the network
$ neutron port-create net1

Step 4: Create a router
$ neutron router-create router-floatingip-test

Step 5: Add the subnet as its interface
$ neutron router-interface-add router-floatingip-test subnet1

Step 6: Create an external network
$ neutron net-create ext-net --router:external True

Step 7: Add a subnet on the external network
$ neutron subnet-create --name ext-subnet ext-net 192.168.122.0/24

Step 8: Set the external network as the router's default gateway
$ neutron router-gateway-set router-floatingip-test ext-net

Step 9: Run the three commands at the same time to create floating IPs
On controller1:
$ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

On controller2:
$ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

On controller3:
$ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

where, port_id b53d0826-53c4-427b-81b2-3ab6cb0f4511 is the port we
created in Step 3.

The result would be three floating IPs associated to the same port, as
shown in http://paste.openstack.org/show/483691/

The expected error message (say, we run the second command after the first one 
succeeds) would be
Cannot associate floating IP 192.168.122.20 
(bd4d47a5-45c1-48e1-a48a-aef08039a955) with port 
b53d0826-53c4-427b-81b2-3ab6cb0f4511 using fixed IP 192.168.100.3, as that 
fixed IP already has a floating IP on external network 
920ee0f3-3db8-4005-8d29-0be474947186.
This is because one port with one fixed_ip is not allowed to have multiple
floating IPs from the same external network.

In the above procedure, I set port_id when creating these three floating
IPs. The same bug occurred when I updated three existing floating IPs to be
associated with the same port at the same time.

I assume this bug happens because multiple APIs are executed
concurrently and the validation check on every API succeeds [1].

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L915

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534445

Title:
  Multiple floating IPs from the same external network are associated to
  one port when commands are executed at the same time

Status in neutron:
  New

Bug description:
  I have three controller nodes and the Neutron servers on these
  controllers are set behind Pacemaker and HAProxy to realize
  active/active HA using DevStack. MariaDB Galera cluster is used as my
  database backend.  I am using the latest codes.

  If I run multiple commands to create floating IPs and associate them
  to the same port at the same time, all of the commands return
  success and I end up with multiple floating IPs from the same external
  network associated to the same port.

  How to reproduce:

  Step 1: Create a network
  $ neutron net-create net1

  Step 2: Create a subnet on the network
  $ neutron subnet-create --name subnet1 net1 192.168.100.0/24

  Step 3: Create a port on the network
  $ neutron port-create net1

  Step 4: Create a router
  $ neutron router-create router-floatingip-test

  Step 5: Add the subnet as its interface
  $ neutron router-interface-add router-floatingip-test subnet1

  Step 6: Create an external network
  $ neutron net-create ext-net --router:external True

  Step 7: Add a subnet on the external network
  $ neutron subnet-create --name ext-subnet ext-net 192.168.122.0/24

  Step 8: Set the external network as the router's default gateway
  $ neutron router-gateway-set router-floatingip-test ext-net

  Step 9: Run the three commands at the same time to create floating IPs
  On controller1:
  $ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

  On controller2:
  $ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

  On controller3:
  $ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

  where, port_id b53d0826-53c4-427b-81b2-3ab6cb0f4511 is the port we
  created in Step 3.

  The 

[Yahoo-eng-team] [Bug 1482092] Re: oslo_versionedobjects raise exception when boot instance with nova-network

2016-01-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482092

Title:
  oslo_versionedobjects raise exception when boot instance with nova-
  network

Status in OpenStack Compute (nova):
  Expired

Bug description:
  oslo_versionedobjects raises a TypeError exception when booting an
  instance with nova-network.

  I'm using devstack with
  nova:  00af05e13f5f0a2d8d10baf238dad553a86bc6e0
  oslo_versionedobjects: 5.2

  Nova removed VirtualInterface's superclass base.NovaObjectDictCompat:
  https://github.com/openstack/nova/commit/91f8cc9c153b61a5aed081c2d1b44b21f35d3311
  It works with oslo_versionedobjects above 6.0.

  But oslo_versionedobjects 5.2 still uses dict-style item assignment:
  https://github.com/openstack/oslo.versionedobjects/blob/0.5.2/oslo_versionedobjects/base.py#L205

  Maybe we should update the oslo_versionedobjects version in
  global-requirements.

  Following is traceback in n-net:
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/network/floating_ips.py", line 113, in 
allocate_for_instance
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher **kwargs)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/network/manager.py", line 496, in allocate_for_instance
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher context, 
instance_uuid)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/network/manager.py", line 490, in allocate_for_instance
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher networks, 
macs)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/network/manager.py", line 755, in _allocate_mac_addresses
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher network['id'])
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/network/manager.py", line 774, in _add_virtual_interface
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher vif.create()
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
205, in wrapper
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher self[key] = 
field.from_primitive(self, key, value)
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher TypeError: 
'VirtualInterface' object does not support item assignment
  2015-08-06 05:08:31.264 TRACE oslo_messaging.rpc.dispatcher
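
  The failure is easy to see outside nova once the dict-compat mixin is
  gone from the class hierarchy: the 0.5.2 wrapper performs item
  assignment, which plain objects do not support. The sketch below is
  illustrative only; the classes stand in for NovaObjectDictCompat and
  VirtualInterface and are not the real nova or oslo implementations.

  class DictCompatMixin(object):
      """Stand-in for NovaObjectDictCompat: adds dict-style access."""
      def __setitem__(self, key, value):
          setattr(self, key, value)

  class OldVirtualInterface(DictCompatMixin):
      pass

  class NewVirtualInterface(object):
      """After commit 91f8cc9c the dict-compat mixin is gone."""
      pass

  def assign_field(obj, key, value):
      # oslo_versionedobjects 0.5.2 base.py:205 does roughly this:
      obj[key] = value

  assign_field(OldVirtualInterface(), "address", "fa:16:3e:00:00:01")
  try:
      assign_field(NewVirtualInterface(), "address", "fa:16:3e:00:00:01")
  except TypeError as exc:
      # 'NewVirtualInterface' object does not support item assignment
      print(exc)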

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534458] [NEW] [RFE] Multi-region Security Group

2016-01-14 Thread Takao Indoh
Public bug reported:

This RFE requests a "Multi-region Security Group" feature so that
security groups can be configured across multiple regions.

[Background]
OpenStack 'regions' are used to build more than one OpenStack environment
across geographically distributed places. Each region is an independent
OpenStack environment placed in a datacenter that is geographically distant
from the others, for example in a different country. This is important for
availability: even if one region stops due to problems, we can continue our
work in the other regions.

[Existing problem]
In a multi-region environment, one of the inconvenient points is configuring
security groups. For example, there are two regions, 'region 1' and
'region 2'. Each region has a web server and its db server.
Region 1: web server (W1) and db server (D1)
Region 2: web server (W2) and db server (D2)
Say that the regions are connected at the L3 layer (IPs are reachable from
each other).

In such a case, we want to set up security groups so that both W1 and W2
can access D1 and D2. But each region is independent, and we have to set up
the security group separately in each region.

[Proposal]
A multi-region security group enables us to create a security group across
regions. Once it is introduced, we can add security groups that are shared
between regions. In the case above:
- Create two multi-region security groups, SG1 and SG2
- Add W1 and W2 to SG1
- Add D1 and D2 to SG2
Then, by adding a rule to SG2 that allows access from SG1, W1 and W2 can
access D1 and D2.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534458

Title:
  [RFE] Multi-region Security Group

Status in neutron:
  New

Bug description:
  This RFE requests a "Multi-region Security Group" feature so that
  security groups can be configured across multiple regions.

  [Background]
  OpenStack 'regions' are used to build more than one OpenStack environment
  across geographically distributed places. Each region is an independent
  OpenStack environment placed in a datacenter that is geographically
  distant from the others, for example in a different country. This is
  important for availability: even if one region stops due to problems, we
  can continue our work in the other regions.

  [Existing problem]
  In a multi-region environment, one of the inconvenient points is
  configuring security groups. For example, there are two regions,
  'region 1' and 'region 2'. Each region has a web server and its db server.
  Region 1: web server (W1) and db server (D1)
  Region 2: web server (W2) and db server (D2)
  Say that the regions are connected at the L3 layer (IPs are reachable
  from each other).

  In such a case, we want to set up security groups so that both W1 and W2
  can access D1 and D2. But each region is independent, and we have to set
  up the security group separately in each region.

  [Proposal]
  A multi-region security group enables us to create a security group
  across regions. Once it is introduced, we can add security groups that
  are shared between regions. In the case above:
  - Create two multi-region security groups, SG1 and SG2
  - Add W1 and W2 to SG1
  - Add D1 and D2 to SG2
  Then, by adding a rule to SG2 that allows access from SG1, W1 and W2 can
  access D1 and D2.
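
  To make the current duplication concrete, the sketch below shows the
  status quo this RFE wants to remove: the same "allow web tier -> db
  tier" rule has to be created once per region. Region names, group names
  and the port number are placeholders; it assumes OS_* credentials are
  exported and the neutron CLI is installed.

  import subprocess

  REGIONS = ["region1", "region2"]  # hypothetical region names

  for region in REGIONS:
      # Security groups are per-region objects today, so the identical
      # rule must be created in every region.
      subprocess.check_call([
          "neutron", "--os-region-name", region,
          "security-group-rule-create",
          "--direction", "ingress",
          "--protocol", "tcp",
          "--port-range-min", "3306",
          "--port-range-max", "3306",
          "--remote-group-id", "SG1",  # web-tier group in that region
          "SG2",                       # db-tier group in that region
      ])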

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424096] Re: DVR routers attached to shared networks aren't being unscheduled from a compute node after deleting the VMs using the shared net

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257938
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=96ba199d733944e5b8aa3664a04d9204fd66c878
Submitter: Jenkins
Branch:master

commit 96ba199d733944e5b8aa3664a04d9204fd66c878
Author: Oleg Bondarev 
Date:   Tue Dec 15 17:58:51 2015 +0300

Use admin context when removing DVR router on vm port deletion

In case non-admin tenant removes last VM on a shared network (owned
by admin) connected to a DVR router (also owned by admin) we need
to remove the router from the host where there are no more dvr
serviceable ports. Commit edbade486102a219810137d1c6b916e87475d477
fixed logic that determines routers that should be removed from host.
However in order to actually remove the router we also need admin
context.

    This was not caught by unit tests, and one reason for that is the
    so-called 'mock everything' approach, which is evil and generally
    useless. This patch replaces the unit tests with functional tests
    that are able to catch the bug.

Closes-Bug: #1424096
Change-Id: Ia6cdf2294562c2a2727350c78eeab155097e0c33


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424096

Title:
  DVR routers attached to shared networks aren't being unscheduled from
  a compute node after deleting the VMs using the shared net

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  As the administrator, a DVR router is created and attached to a shared
  network. The administrator also created the shared network.

  As a non-admin tenant, a VM is created with a port on the shared
  network. The only VM using the shared network is scheduled to a compute
  node. When the VM is deleted, the qrouter namespace of the DVR router
  is expected to be removed, but it is not. This problem does not occur
  with routers attached to networks that are not shared.

  The environment consists of 1 controller node and 1 compute node.

  Routers having the problem are created by the administrator attached
  to shared networks that are also owned by the admin:

  As the administrator, do the following commands on a setup having 1
  compute node and 1 controller node:

  1. neutron net-create shared-net -- --shared True
 Shared net's uuid is f9ccf1f9-aea9-4f72-accc-8a03170fa242.

  2. neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16

  3. neutron router-create shared-router
  Router's UUID is ab78428a-9653-4a7b-98ec-22e1f956f44f.

  4. neutron router-interface-add shared-router shared-subnet
  5. neutron router-gateway-set  shared-router public

  
  As a non-admin tenant (tenant-id: 95cd5d9c61cf45c7bdd4e9ee52659d13),
  boot a VM using the shared-net network:

  1. neutron net-show shared-net
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | id  | f9ccf1f9-aea9-4f72-accc-8a03170fa242 |
  | name| shared-net   |
  | router:external | False|
  | shared  | True |
  | status  | ACTIVE   |
  | subnets | c4fd4279-81a7-40d6-a80b-01e8238c1c2d |
  | tenant_id   | 2a54d6758fab47f4a2508b06284b5104 |
  +-+--+

  At this point, there are no VMs using the shared-net network running
  in the environment.

  2. Boot a VM that uses the shared-net network:
     nova boot ... --nic net-id=f9ccf1f9-aea9-4f72-accc-8a03170fa242 ... vm_sharednet
  3. Assign a floating IP to the VM "vm_sharednet"
  4. Delete "vm_sharednet". On the compute node, the qrouter namespace of the
     shared router (qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f) is left behind

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f
   ...

  
  This is consistent with the output of the "neutron
  l3-agent-list-hosting-router" command. It shows the router is still
  being hosted on the compute node.

  
  $ neutron l3-agent-list-hosting-router ab78428a-9653-4a7b-98ec-22e1f956f44f
  +--------------------------------------+----------------+----------------+-------+
  | id                                   | host           | admin_state_up | alive |
  +--------------------------------------+----------------+----------------+-------+
  | 42f12eb0-51bc-4861-928a-48de51ba7ae1 | DVR-Controller | True           | :-)   |
  | ff869dc5-d39c-464d-86f3-112b55ec1c08 | DVR-CN2        | True           | :-)   |
  

[Yahoo-eng-team] [Bug 1505406] Re: Queries for fetching quotas are not scoped

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/233855
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=24b482ac15b5fa99edd2c3438318a41f9af06bcf
Submitter: Jenkins
Branch:master

commit 24b482ac15b5fa99edd2c3438318a41f9af06bcf
Author: Salvatore Orlando 
Date:   Mon Oct 12 15:47:03 2015 -0700

Scope get_tenant_quotas by tenant_id

Using model_query in the operation for retrieving tenant limits
will spare the need for explicit authorization check in the
quota controller. This is particularly relevant for the pecan
framework where every Neutron API call undergoes authZ checks
in the same pecan hook.

    This patch will automatically adapt to eventual changes
    introducing "un-scoped" contexts.

Closes-bug: #1505406

Change-Id: I6952f5c85cd7fb0263789f768d23de3fe80b8183


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505406

Title:
  Queries for fetching quotas are not scoped

Status in neutron:
  Fix Released

Bug description:
  get_tenant_quotas retrieves quotas for a tenant without scoping the
  query by the tenant_id issuing the request [1]; even though the API
  extension has an explicit authorisation check [2], it is advisable to
  scope the query so that this problem is avoided.

  This is particularly relevant because, with the pecan framework, the
  quota management APIs are no longer "special" from an authZ perspective
  but use the same authorization hook as any other API.

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/quota/driver.py#n50
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/quotasv2.py#n87
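
  A self-contained sketch of the scoping idea is below, written with
  plain SQLAlchemy; the merged fix uses neutron's model_query helper and
  the real Quota model, so the names here are only stand-ins.

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class Quota(Base):  # stand-in for neutron.db.quota.models.Quota
      __tablename__ = "quotas"
      id = sa.Column(sa.Integer, primary_key=True)
      tenant_id = sa.Column(sa.String(255), index=True)
      resource = sa.Column(sa.String(255))
      limit = sa.Column(sa.Integer)

  def get_tenant_quotas(session, context_tenant_id, is_admin, tenant_id):
      query = session.query(Quota).filter_by(tenant_id=tenant_id)
      if not is_admin:
          # Emulate model_query(): non-admin callers only ever see their
          # own rows, even if the extension-level authZ check is bypassed.
          query = query.filter(Quota.tenant_id == context_tenant_id)
      return dict((row.resource, row.limit) for row in query)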

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529836] Re: Fix deprecated library function (os.popen()).

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/266953
Committed: 
https://git.openstack.org/cgit/openstack/keystonemiddleware/commit/?id=5dba16bc13c59e30fb05bace86779acd454f9dfa
Submitter: Jenkins
Branch:master

commit 5dba16bc13c59e30fb05bace86779acd454f9dfa
Author: LiuNanke 
Date:   Wed Jan 13 22:47:14 2016 +0800

Replace deprecated library function os.popen() with subprocess

    os.popen() has been deprecated since Python 2.6. Resolved by using
    the subprocess module.
Closes-bug: #1529836

Change-Id: I3f78fff64f100aa7d435c830a2a913a521af698e


** Changed in: keystonemiddleware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1529836

Title:
  Fix deprecated library function (os.popen()).

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in devstack:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  In Progress
Status in python-keystoneclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress

Bug description:
  The deprecated library function os.popen() is still in use in some
  places. It needs to be replaced with the subprocess module.
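
  The replacement is mostly mechanical; a minimal before/after sketch
  with an illustrative command:

  import os
  import subprocess

  # Deprecated since Python 2.6:
  out_old = os.popen("uptime").read()

  # Preferred: no shell involved, and failures raise CalledProcessError.
  out_new = subprocess.check_output(["uptime"]).decode("utf-8")

  print(out_old.strip())
  print(out_new.strip())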

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1529836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534252] [NEW] fernet tokens don't support oauth1 authentication

2016-01-14 Thread Lance Bragstad
Public bug reported:

The fernet token provider doesn't issue or validate oauth1 token types.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: fernet

** Tags added: fernet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1534252

Title:
  fernet tokens don't support oauth1 authentication

Status in OpenStack Identity (keystone):
  New

Bug description:
  The fernet token provider doesn't issue or validate oauth1 token
  types.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1534252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534186] [NEW] Nova remove-fixed-ip doesn't return an error message when incorrect fixed IP is removed

2016-01-14 Thread Anna Babich
Public bug reported:

Reproduce steps:
1. Create net01: net01__subnet, 192.168.1.0/24:
neutron net-create net01
neutron subnet-create net01 192.168.1.0/24 --enable-dhcp --name net01__subnet
2. Boot instance vm1 in net01:
NET_ID=$(neutron net-list | grep 'net01' | awk '{print $2}')
nova boot --flavor m1.micro --image TestVM --nic net-id=$NET_ID --security-groups default vm1
3. Note the fixed IP of vm1:
nova show vm1 | grep network
| net01 network| 192.168.1.36 |
4. Try to remove an incorrect fixed IP from vm1:
nova remove-fixed-ip vm1 192.168.1.37

Expected result:
An error message appears indicating that the operation was not valid

Actual result:
Nothing happens and nothing is displayed


Nova version - http://paste.openstack.org/show/483603/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1534186

Title:
  Nova remove-fixed-ip doesn't return an error message when incorrect
  fixed IP is removed

Status in OpenStack Compute (nova):
  New

Bug description:
  Reproduce steps:
  1. Create net01: net01__subnet, 192.168.1.0/24:
  neutron net-create net01
  neutron subnet-create net01 192.168.1.0/24 --enable-dhcp --name net01__subnet
  2. Boot instance vm1 in net01:
  NET_ID=$(neutron net-list | grep 'net01' | awk '{print $2}')
  nova boot --flavor m1.micro --image TestVM --nic net-id=$NET_ID --security-groups default vm1
  3. Note the fixed IP of vm1:
  nova show vm1 | grep network
  | net01 network| 192.168.1.36 |
  4. Try to remove an incorrect fixed IP from vm1:
  nova remove-fixed-ip vm1 192.168.1.37

  Expected result:
  An error message appears indicating that the operation was not valid

  Actual result:
  Nothing happens and nothing is displayed

  
  Nova version - http://paste.openstack.org/show/483603/
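
  Until the API returns a proper error, a client can guard itself by
  checking that the address really belongs to the server before calling
  remove-fixed-ip. The novaclient sketch below is only an illustration;
  credentials, endpoint, instance name and address are placeholders.

  from novaclient import client

  nova = client.Client("2", "admin", "secret", "admin",
                       "http://controller:5000/v2.0")

  server = nova.servers.find(name="vm1")
  address = "192.168.1.37"

  # server.addresses maps network name -> list of {'addr': ...} dicts.
  attached = set(ip["addr"]
                 for ips in server.addresses.values()
                 for ip in ips)

  if address in attached:
      server.remove_fixed_ip(address)
  else:
      raise ValueError("%s is not a fixed IP of %s" % (address, server.name))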

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1534186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441054] Re: live-migration --block-migrate fails with default libvirt flags

2016-01-14 Thread Mathieu Rohon
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441054

Title:
  live-migration --block-migrate fails with default libvirt flags

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  while trying to live-migrate an instance with the --block-migrate
  option, I've got an error on the host which hosts the VM :

  
  2015-04-07 11:01:32.554 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Starting monitoring of live migration 
from (pid=5202) _live_migration /opt/stack/nova/nova/virt/libvirt/driver.py:5642
  2015-04-07 11:01:32.556 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Operation thread is still running from 
(pid=5202) _live_migration_monitor 
/opt/stack/nova/nova/virt/libvirt/driver.py:5494
  2015-04-07 11:01:32.557 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Migration not running yet from (pid=5202) 
_live_migration_monitor /opt/stack/nova/nova/virt/libvirt/driver.py:5525
  2015-04-07 11:01:33.142 INFO nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Migration running for 0 secs, memory 0% 
remaining; (bytes processed=0, remaining=0, total=0)
  2015-04-07 11:01:33.277 ERROR nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Live Migration failure: End of file while 
reading data: Input/output error
  2015-04-07 11:01:33.278 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Migration operation thread notification 
from (pid=5202) thread_finished /opt/stack/nova/nova/virt/libvirt/driver.py:5633
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
457, in fire_timers
  timer()
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
  cb(*args, **kw)
File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, 
in _do_send
  waiter.switch(result)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5428, in 
_live_migration_operation
  instance=instance)
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
85, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5397, in 
_live_migration_operation
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1734, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: End of file while reading data: Input/output error
  2015-04-07 11:01:33.644 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] VM running on src, migration failed from 
(pid=5202) _live_migration_monitor 
/opt/stack/nova/nova/virt/libvirt/driver.py:5500
  2015-04-07 11:01:33.645 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Fixed incorrect job type to be 4 from 
(pid=5202) _live_migration_monitor 
/opt/stack/nova/nova/virt/libvirt/driver.py:5520
  2015-04-07 11:01:33.645 ERROR nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Migration operation has aborted
  2015-04-07 11:01:33.733 DEBUG nova.virt.libvirt.driver [-] [instance: 
31b63d63-b392-4197-8864-b6d85dae438f] Live migration monitoring is all done 
from (pid=5202) _live_migration /opt/stack/nova/nova/virt/libvirt/driver.py:5653

  
  Live migration with --block-migrate works fine when I remove the
  VIR_MIGRATE_TUNNELLED flag from the block_migration_flag option.

  Indeed, live block migration cannot occur in tunnelled mode, as
  reported here:
  https://wiki.openstack.org/wiki/OSSN/OSSN-0007
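
  For deployments hitting this, the workaround boils down to dropping
  VIR_MIGRATE_TUNNELLED from the libvirt block-migration flags in
  nova.conf. The snippet below is only a sketch: keep whatever other
  flags your deployment already uses; the list shown is illustrative,
  not an authoritative default.

  [libvirt]
  block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC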

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534197] [NEW] advertise_mtu=True for DHCP agent when plugin does not pass .mtu attribute for networks results in agent failure

2016-01-14 Thread Ihar Hrachyshka
Public bug reported:

In that case, you would see a traceback with the following lines:

  File "neutron/agent/linux/dhcp.py", line 371, in _build_cmdline_callback
mtu = self.network.mtu
AttributeError: 'FakeV4Network' object has no attribute 'mtu'

We should check whether the plugin passed the mtu attribute before accessing it.

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: liberty-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: New => Confirmed

** Tags added: liberty-backport-potential

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534197

Title:
  advertise_mtu=True for DHCP agent when plugin does not pass .mtu
  attribute for networks results in agent failure

Status in neutron:
  In Progress

Bug description:
  In that case, you would see a traceback with the following lines:

File "neutron/agent/linux/dhcp.py", line 371, in _build_cmdline_callback
  mtu = self.network.mtu
  AttributeError: 'FakeV4Network' object has no attribute 'mtu'

  We should check whether the plugin passed the mtu attribute before
  accessing it.
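
  The guard amounts to one getattr call; the helper below is only a
  sketch of the shape of the fix, not necessarily the merged patch.

  def _mtu_dnsmasq_option(network, advertise_mtu):
      # Tolerate plugins that never set network.mtu.
      mtu = getattr(network, 'mtu', 0) or 0
      if advertise_mtu and mtu > 0:
          return '--dhcp-option-force=option:mtu,%d' % mtu
      return None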

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500920] Re: SameHostFilter should fail if no instances on host

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/229030
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b6198c834cd22264815efa2299fce1059ba5c085
Submitter: Jenkins
Branch:master

commit b6198c834cd22264815efa2299fce1059ba5c085
Author: Alvaro Lopez Garcia 
Date:   Tue Sep 29 17:20:44 2015 +0200

SameHostFilter should fail if host does not have instances

The SameHostFilter should pass only if the host is executing an instance
from the set of uuids passed in the scheduler hint 'same_host'. However,
    it also passes if the host does not have any instances.

Fixes-Bug: #1500920
Change-Id: I9e65fd1153a5e33c676890ab3e562e464f9ff625


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500920

Title:
  SameHostFilter should fail if no instances on host

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  According to the docs, the SameHostFilter "schedules the instance on
  the same host as another instance in a set of instances", so it should
  only pass if the host is executing any of the instances passed as the
  scheduler hint. However, the filter also passes if the host does not
  have any instances.
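
  A minimal sketch of the intended predicate (not the nova source): pass
  only when the host's instance set actually intersects the hinted set,
  so an empty host fails the filter.

  def same_host_passes(host_instance_uuids, hinted_uuids):
      """True only if the host already runs one of the hinted instances."""
      if not hinted_uuids:
          return True  # no 'same_host' hint given: nothing to enforce
      # An empty host can never satisfy the affinity hint.
      return bool(set(host_instance_uuids) & set(hinted_uuids))

  assert same_host_passes([], ["uuid-1"]) is False
  assert same_host_passes(["uuid-1", "uuid-2"], ["uuid-1"]) is True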

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436166] Re: Problems with images bubble up as a simple "There are not enough hosts available"

2016-01-14 Thread Matt Riedemann
** Tags added: liberty-backport-potential

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436166

Title:
  Problems with images bubble up as a simple "There are not enough hosts
  available"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  When starting a new instance, I received the generic "There are not
  enough hosts available" error, but the real reason, buried in the
  logs, was that the image I was trying to use was corrupt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533405] Re: Release networking-powervm 1.0.0

2016-01-14 Thread Kyle Mestery
1.0.0 is on PyPI now:

https://pypi.python.org/pypi/networking-powervm/1.0.0

** Changed in: neutron
Milestone: None => mitaka-2

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533405

Title:
  Release networking-powervm 1.0.0

Status in networking-powervm:
  New
Status in neutron:
  Fix Released

Bug description:
  We are requesting that networking-powervm release 1.0.0 be created. It
  should contain everything up to the current tip of the stable/liberty
  branch, commit 68e8db46ffbd06e680b63236f32a80185258 from Nov 18,
  2015 (http://git.openstack.org/cgit/openstack/networking-
  powervm/commit/?h=stable/liberty).

  Creating this release should publish both a source package and a
  python wheel to pypi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-powervm/+bug/1533405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519402] Re: Add docker container format to defaults

2016-01-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252806
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=f7b50c48efbb2d34a95b187dfaf5ca70f77c67bc
Submitter: Jenkins
Branch:master

commit f7b50c48efbb2d34a95b187dfaf5ca70f77c67bc
Author: Atsushi SAKAI 
Date:   Thu Dec 3 16:51:30 2015 +0900

Add docker to image_schema on glance v2 cli

Add docker to v2 image_schema
Add docker to v2 unit tests

    This is related to the following glance API extension:
  https://review.openstack.org/#/c/249282/

Co-Authored-By: Kairat Kushaev 

Closes-Bug: #1519402
Change-Id: Ia015f027788b49c1b0002fb3e3a93ac825854596


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1519402

Title:
  Add docker container format to defaults

Status in Glance:
  Fix Committed
Status in python-glanceclient:
  Fix Released

Bug description:
  An image with the 'docker' container format is a tar archive of the
  container file system. In order to use the nova-docker compute driver
  in nova and boot docker instances, glance support for the docker
  container format is required. Rather than having to specifically
  configure glance to allow the docker container format, I would like to
  add it to the default list of container_formats.
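
  With docker in the default container_formats, registering an image is
  just the normal v2 create/upload flow. The glanceclient sketch below is
  illustrative; the endpoint, token, image name and tarball path are
  placeholders.

  import glanceclient

  glance = glanceclient.Client('2', endpoint='http://controller:9292',
                               token='ADMIN_TOKEN')

  # The image body is a tar archive of the container file system.
  image = glance.images.create(name='busybox',
                               disk_format='raw',
                               container_format='docker')
  with open('busybox.tar', 'rb') as data:
      glance.images.upload(image.id, data)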

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1519402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp