[Yahoo-eng-team] [Bug 1493122] Re: There is no quota check for instance snapshot

2016-02-17 Thread OpenStack Infra
** Changed in: horizon
   Status: Invalid => In Progress

** Changed in: horizon
 Assignee: (unassigned) => zhaozhilong (zhaozhilong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493122

Title:
  There is no quota check for instance snapshot

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is no quota check on snapshots taken from instances, either via
  the API or via Horizon. A normal user can therefore fill the entire
  Cinder (Ceph) storage space simply by repeatedly calling the
  get_instance_snapshot() API. The number of instance snapshots needs to
  be controlled by an instance-snapshot quota.

  How to reproduce:
  1. In a specific project, launch a new instance.
  2. Set the project's quotas all the way down (e.g. instances: 1,
     volume_snapshots: 0, ...).
  3. Take snapshots of the running instance as many times as you can
     (a scripted example follows below).

  As a result there is no quota check, and the user can fill the whole
  of the storage space.
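
  A quick way to exercise step 3 from the command line (server and
  snapshot names below are only placeholders) is to create server
  snapshots in a loop and watch the backing storage grow:

      for i in $(seq 1 20); do
          openstack server image create --name snap-$i my-instance
      done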

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541090] Re: Integration password config should match local_conf

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275395
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=dc37a024938a5138c8816a10211a27d69f314a46
Submitter: Jenkins
Branch:master

commit dc37a024938a5138c8816a10211a27d69f314a46
Author: Thai Tran 
Date:   Tue Feb 16 09:44:24 2016 -0800

Updating password in local_conf

In our recommended local_conf.rst, we use pass for password.
We should change pass to match what we have in devstack and i9n tests.

Change-Id: Ia019eadf91bc6195f40b99d6b1f8478982b76404
Closes-Bug: #1541090


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541090

Title:
  Integration password config should match local_conf

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In our recommended local_conf.rst, we use "pass" as the password.
  We should change it to match the password used in devstack and in the
  integration (i9n) tests.

  Link to local_conf.rst
  https://github.com/openstack/horizon/blob/master/doc/source/ref/local_conf.rst
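
  For context, the passwords in question are the ones a DevStack
  local.conf sets for all services; the exact value the fix settles on is
  in the review above, but the relevant block has this shape (the value
  below is only an example):

      [[local|localrc]]
      ADMIN_PASSWORD=secretadmin
      DATABASE_PASSWORD=$ADMIN_PASSWORD
      RABBIT_PASSWORD=$ADMIN_PASSWORD
      SERVICE_PASSWORD=$ADMIN_PASSWORD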

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541090/+subscriptions



[Yahoo-eng-team] [Bug 1545655] Re: A close icon in the upper right don't show in Update Metadata

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280133
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=de30e3c7d3c6ac1287bfd88af7abc58d13c1c1d9
Submitter: Jenkins
Branch:master

commit de30e3c7d3c6ac1287bfd88af7abc58d13c1c1d9
Author: Kenji Ishii 
Date:   Mon Feb 15 19:10:04 2016 +0900

Add a close icon in the upper right in Update Metadata modal

Other modal dialog have this icon but Update Metadata dialog
don't have it. This patch will fix it.

Change-Id: I6f7bc243d3ef5e48fc68fbdfa1dcc39b51df
Closes-Bug: #1545655


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1545655

Title:
  A close icon in the upper right don't show in Update Metadata

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  A close icon in the upper right corner can close a modal dialog.
  Other modal dialogs have this icon, but the Update Metadata dialog does not.
  When the browser window is not maximized, closing the dialog often requires
  scrolling. It is therefore better to add a close icon in the upper right to
  cancel the modal, the same as in the other modals.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1545655/+subscriptions



[Yahoo-eng-team] [Bug 1546883] [NEW] poll_rebooting_instances in nova libvirt driver is not implemented

2016-02-17 Thread Sanjay Kumar Singh
Public bug reported:

In nova.virt.libvirt.driver, poll_rebooting_instances has no
implementation.
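
The hook is declared on the base compute driver and the libvirt driver never
overrides it, so calls fall through to the base implementation, which just
raises NotImplementedError. A minimal sketch (the signature below is from
nova/virt/driver.py; anything beyond it is a placeholder):

    # nova/virt/driver.py -- the hook the libvirt driver leaves unimplemented
    def poll_rebooting_instances(self, timeout, instances):
        raise NotImplementedError()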

** Affects: nova
 Importance: Undecided
 Assignee: Sanjay Kumar Singh (sanjay6-singh)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sanjay Kumar Singh (sanjay6-singh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546883

Title:
  poll_rebooting_instances in nova libvirt driver is not implemented

Status in OpenStack Compute (nova):
  New

Bug description:
  In nova.virt.libvirt.driver, poll_rebooting_instances has no
  implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546883/+subscriptions



[Yahoo-eng-team] [Bug 1493122] Re: There is no quota check for instance snapshot

2016-02-17 Thread zhaozhilong
** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: zhaozhilong (zhaozhilong) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493122

Title:
  There is no quota check for instance snapshot

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  There is no quota check on snapshots taken from instances, either via
  the API or via Horizon. A normal user can therefore fill the entire
  Cinder (Ceph) storage space simply by repeatedly calling the
  get_instance_snapshot() API. The number of instance snapshots needs to
  be controlled by an instance-snapshot quota.

  How to reproduce:
  1. In a specific project, launch a new instance.
  2. Set the project's quotas all the way down (e.g. instances: 1,
     volume_snapshots: 0, ...).
  3. Take snapshots of the running instance as many times as you can.

  As a result there is no quota check, and the user can fill the whole
  of the storage space.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493122/+subscriptions



[Yahoo-eng-team] [Bug 1546879] [NEW] heat miss some snapshot api

2016-02-17 Thread zhaozhilong
Public bug reported:

There is no API for managing Heat stack snapshots in heat.py.
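
A minimal sketch of the kind of wrapper that is missing from
openstack_dashboard/api/heat.py (assuming the python-heatclient stack manager
exposes snapshot_list(), as recent releases do; heatclient() stands for the
module's existing client factory):

    def snapshot_list(request, stack_id):
        # List the snapshots taken of a given Heat stack.
        return heatclient(request).stacks.snapshot_list(stack_id)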

** Affects: horizon
 Importance: Undecided
 Assignee: zhaozhilong (zhaozhilong)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => zhaozhilong (zhaozhilong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546879

Title:
  heat miss some snapshot api

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is no API for managing Heat stack snapshots in heat.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546879/+subscriptions



[Yahoo-eng-team] [Bug 1493122] Re: There is no quota check for instance snapshot

2016-02-17 Thread OpenStack Infra
Change abandoned by zhaozhilong (zhaozhil...@unitedstack.com) on branch: master
Review: https://review.openstack.org/281642

** Changed in: horizon
   Status: Invalid => In Progress

** Changed in: horizon
 Assignee: (unassigned) => zhaozhilong (zhaozhilong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493122

Title:
  There is no quota check for instance snapshot

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is no quota check on snapshots taken from instances, either via
  the API or via Horizon. A normal user can therefore fill the entire
  Cinder (Ceph) storage space simply by repeatedly calling the
  get_instance_snapshot() API. The number of instance snapshots needs to
  be controlled by an instance-snapshot quota.

  How to reproduce:
  1. In a specific project, launch a new instance.
  2. Set the project's quotas all the way down (e.g. instances: 1,
     volume_snapshots: 0, ...).
  3. Take snapshots of the running instance as many times as you can.

  As a result there is no quota check, and the user can fill the whole
  of the storage space.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493122/+subscriptions



[Yahoo-eng-team] [Bug 1544835] Re: Using scope to clear table selections

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279383
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=c28b8267de69d099c8a97f6bb5b796208c5575ca
Submitter: Jenkins
Branch:master

commit c28b8267de69d099c8a97f6bb5b796208c5575ca
Author: Thai Tran 
Date:   Wed Feb 17 11:01:42 2016 -0800

Using events to clear table selections instead of scope

We currently use scope to clear table selections. This is not ideal because 
it
breaks encapsulation and encourages the use of scope over ctrl. This patch
adds a clear method and uses event propagation to invoke it.

Change-Id: I6115047298d5fa673eabb707a358c84a4e6d9eb6
Closes-Bug: #1544835


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544835

Title:
  Using scope to clear table selections

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We currently use scope to clear table selections.  This is not ideal
  because it breaks encapsulation and encourages the use of scope over
  ctrl. We should provide a method that can clear instead.

  Reference:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/images/table/images.controller.js#L101

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544835/+subscriptions



[Yahoo-eng-team] [Bug 1546736] Re: Running eslint in quiet mode

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280842
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=8123d9ce34aa6e6bf034b0c27b2afc49bc0dc19b
Submitter: Jenkins
Branch:master

commit 8123d9ce34aa6e6bf034b0c27b2afc49bc0dc19b
Author: Thai Tran 
Date:   Tue Feb 16 09:39:10 2016 -0800

Running eslint in quiet mode with color

We have a ton of warnings. This makes it difficult to locate linting errors.
This patch adds an npm script for developers to run eslint in quiet mode
with color.

Change-Id: Ie1ecc201d025c428d15b310b78e9c343a341aed3
Closes-Bug: #1546736


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546736

Title:
  Running eslint in quiet mode

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We have a ton of warnings. This makes it difficult to locate linting
  errors. I think we should enable quiet mode so that warnings do not
  show up.
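
  For reference, eslint already supports this directly via its --quiet and
  --color flags, so the npm script the patch adds boils down to an invocation
  like the following (the target paths here are assumptions):

      eslint --quiet --color openstack_dashboard/static horizon/static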

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546736/+subscriptions



[Yahoo-eng-team] [Bug 1546834] [NEW] The deletion of an LDAP domain in keystone when write enabled should not clear the LDAP database

2016-02-17 Thread Adam Young
Public bug reported:

Description of problem:
While testing multi-domain support in RHOS, deleting the domain with writes
enabled cleared the LDAP database entirely. Thankfully this was done in a lab,
because the LDAP directory was a total loss.

Version-Release number of selected component (if applicable):

# rpm -qa | grep packstack
openstack-packstack-puppet-2015.1-0.14.dev1589.g1d6372f.el7ost.noarch
openstack-packstack-2015.1-0.14.dev1589.g1d6372f.el7ost.noarch

# rpm -qa | grep keystone
python-keystoneclient-1.3.0-2.el7ost.noarch
python-keystone-2015.1.2-2.el7ost.noarch
openstack-keystone-2015.1.2-2.el7ost.noarch
python-keystonemiddleware-1.5.1-1.el7ost.noarch

How reproducible:
Assuming always? I was only able to do this once. 


Steps to Reproduce:
1. Enable multi domain support in keystone, ensure the following is in 
/etc/keystone.conf

[identity]
domain_specific_drivers_enabled = true 
domain_config_dir = /etc/keystone/domains
#default_domain_id = 7d9bed61b1564f2289296a4e9241482d

2. Then add an LDAP domain and ensure that writes are permitted.

vim /etc/keystone/domains/keystone.laboratory.conf

[ldap]
url=ldap://auth.lab.runlevelone.lan
user=uid=keystone,cn=users,cn=accounts,dc=lab,dc=runlevelone,dc=lan
password=xxx
suffix=ccn=accounts,dc=lab,dc=runlevelone,dc=lan
user_tree_dn=cn=users,cn=accounts,dc=lab,dc=runlevelone,dc=lan
user_objectclass=person
user_id_attribute=uid
user_name_attribute=uid
user_mail_attribute=mail
user_allow_create=true
user_allow_update=true
user_allow_delete=true
group_tree_dn=cn=groups,cn=accounts,dc=lab,dc=runlevelone,dc=lan
group_objectclass=groupOfNames
group_id_attribute=cn
group_name_attribute=cn
group_member_attribute=member
group_desc_attribute=description
group_allow_create=true
group_allow_update=true
group_allow_delete=true
user_enabled_attribute=nsAccountLock
user_enabled_default=false
user_enabled_invert=true

[identity]
driver = keystone.identity.backends.ldap.Identity


3. Remove the domain, using 'openstack domain delete #domain_id' 


Actual results:
Clears LDAP database, cn=users/groups,cn=accounts,dc=lab,dc=runlevelone,dc=lan 
was completely empty


Expected results:
Does not delete users on removal or prompt "THIS WILL DELETE ALL USERS, DO YOU 
WANT TO PROCEED"
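
A possible stop-gap until this is fixed (my suggestion, not something from the
report) is to make the domain's LDAP backend read-only, so that deleting the
domain in Keystone cannot cascade writes into the directory. In the same
domain config file that means flipping the allow flags shown above:

    [ldap]
    user_allow_create=false
    user_allow_update=false
    user_allow_delete=false
    group_allow_create=false
    group_allow_update=false
    group_allow_delete=false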

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546834

Title:
   The deletion of an LDAP domain in keystone when write enabled should
  not clear the LDAP database

Status in OpenStack Identity (keystone):
  New

Bug description:
  Description of problem:
  While testing multi-domain support in RHOS, deleting the domain with writes
  enabled cleared the LDAP database entirely. Thankfully this was done in a
  lab, because the LDAP directory was a total loss.

  Version-Release number of selected component (if applicable):

  # rpm -qa | grep packstack
  openstack-packstack-puppet-2015.1-0.14.dev1589.g1d6372f.el7ost.noarch
  openstack-packstack-2015.1-0.14.dev1589.g1d6372f.el7ost.noarch

  # rpm -qa | grep keystone
  python-keystoneclient-1.3.0-2.el7ost.noarch
  python-keystone-2015.1.2-2.el7ost.noarch
  openstack-keystone-2015.1.2-2.el7ost.noarch
  python-keystonemiddleware-1.5.1-1.el7ost.noarch

  How reproducible:
  Assuming always? I was only able to do this once. 

  
  Steps to Reproduce:
  1. Enable multi domain support in keystone, ensure the following is in 
/etc/keystone.conf

  [identity]
  domain_specific_drivers_enabled = true 
  domain_config_dir = /etc/keystone/domains
  #default_domain_id = 7d9bed61b1564f2289296a4e9241482d

  2. Then add an LDAP domain and ensure that writes are permitted.

  vim /etc/keystone/domains/keystone.laboratory.conf

  [ldap]
  url=ldap://auth.lab.runlevelone.lan
  user=uid=keystone,cn=users,cn=accounts,dc=lab,dc=runlevelone,dc=lan
  password=xxx
  suffix=ccn=accounts,dc=lab,dc=runlevelone,dc=lan
  user_tree_dn=cn=users,cn=accounts,dc=lab,dc=runlevelone,dc=lan
  user_objectclass=person
  user_id_attribute=uid
  user_name_attribute=uid
  user_mail_attribute=mail
  user_allow_create=true
  user_allow_update=true
  user_allow_delete=true
  group_tree_dn=cn=groups,cn=accounts,dc=lab,dc=runlevelone,dc=lan
  group_objectclass=groupOfNames
  group_id_attribute=cn
  group_name_attribute=cn
  group_member_attribute=member
  group_desc_attribute=description
  group_allow_create=true
  group_allow_update=true
  group_allow_delete=true
  user_enabled_attribute=nsAccountLock
  user_enabled_default=false
  user_enabled_invert=true

  [identity]
  driver = keystone.identity.backends.ldap.Identity

  
  3. Remove the domain, using 'openstack domain delete #domain_id' 


  Actual results:
  Clears LDAP database, 
cn=users/groups,cn=accounts,dc=lab,dc=runlevelone,dc=lan was completely empty

  
  Expected results:
  Does not delete users on removal, or prompts "THIS WILL DELETE ALL USERS, DO
  YOU WANT TO PROCEED"

[Yahoo-eng-team] [Bug 1546832] [NEW] Typo error of wrong msg format in _registry_notify of SecurityGroupDbMixin

2016-02-17 Thread yalei wang
Public bug reported:

The message format arguments are not applied correctly.

original
reason = _('cannot perform %(event)s due to %(reason)s'), {
   'event': event, 'reason': e}

should be

reason = _('cannot perform %(event)s due to %(reason)s') % {
   'event': event, 'reason': e}
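
The difference is easy to miss: with the trailing comma the translated string
and the dict are packed into a tuple and the substitution never happens, while
the % operator actually fills in the placeholders. A quick illustration:

    >>> msg = 'cannot perform %(event)s due to %(reason)s'
    >>> msg, {'event': 'create', 'reason': 'quota'}    # comma builds a 2-tuple
    ('cannot perform %(event)s due to %(reason)s', {'event': 'create', 'reason': 'quota'})
    >>> msg % {'event': 'create', 'reason': 'quota'}   # % performs the substitution
    'cannot perform create due to quota'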

** Affects: neutron
 Importance: Undecided
 Assignee: yalei wang (yalei-wang)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yalei wang (yalei-wang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546832

Title:
  Typo error of wrong msg format in _registry_notify of
  SecurityGroupDbMixin

Status in neutron:
  In Progress

Bug description:
  The message format arguments are not applied correctly.

  original
  reason = _('cannot perform %(event)s due to %(reason)s'), {
     'event': event, 'reason': e}

  should be

  reason = _('cannot perform %(event)s due to %(reason)s') % {
     'event': event, 'reason': e}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546832/+subscriptions



[Yahoo-eng-team] [Bug 1350608] Re: Request ID header is lost between nova.virt.ironic and ironic-api service

2016-02-17 Thread Jim Rollenhagen
Going to close this one on the Ironic side in favor of the RFE
https://bugs.launchpad.net/ironic/+bug/1505119

** Changed in: ironic
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350608

Title:
  Request ID header is lost between nova.virt.ironic and ironic-api
  service

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Services pass request-id headers around to assist with operator
  interpretation of log files.

  This "req-XXX" header is being logged at the nova.virt.ironic layer,
  but does not seem to be passed to ironic's API service (or is not
  received / logged there).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1350608/+subscriptions



[Yahoo-eng-team] [Bug 1545101] Re: "TypeError: __init__() takes exactly 3 arguments (2 given)" in n-api logs for nova metadata api request

2016-02-17 Thread Sean M. Collins
+Neutron and myself since it's the grenade multinode job that is being
hit

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sean M. Collins (scollins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1545101

Title:
  "TypeError: __init__() takes exactly 3 arguments (2 given)" in n-api
  logs for nova metadata api request

Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  http://logs.openstack.org/59/265759/24/experimental/gate-grenade-dsvm-
  neutron-
  
multinode/8f1deec/logs/new/screen-n-api.txt.gz?level=INFO#_2016-02-12_16_28_16_860

  2016-02-12 16:28:16.860 20168 INFO nova.metadata.wsgi.server [-] Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 470, in handle_one_response
      result = self.application(self.environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 216, in __call__
      return app(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
      resp = self.call_func(req, *args, **self.kwargs)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
      return self.func(req, *args, **kwargs)
    File "/opt/stack/new/nova/nova/api/ec2/__init__.py", line 32, in __call__
      return webob.exc.HTTPException(message=_DEPRECATION_MESSAGE)
  TypeError: __init__() takes exactly 3 arguments (2 given)

  This only shows up in the gate-grenade-dsvm-neutron-multinode job
  which is not running the n-api-meta service but is running the neutron
  metadata service, which has a bunch of warnings because it's not
  getting valid responses back from the nova metadata API (b/c it's not
  running):

  http://logs.openstack.org/59/265759/24/experimental/gate-grenade-dsvm-
  neutron-multinode/8f1deec/logs/new/screen-q-meta.txt.gz?level=TRACE
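
  The TypeError itself comes from instantiating the bare webob.exc.HTTPException
  base class, whose constructor takes both a message and a WSGI response
  object, instead of one of its concrete subclasses. A minimal illustration
  (the choice of HTTPGone below is only an example, not the actual fix):

      import webob.exc

      webob.exc.HTTPException(message='EC2 API removed')  # TypeError: __init__() takes exactly 3 arguments (2 given)
      webob.exc.HTTPGone(explanation='EC2 API removed')   # builds a proper 410 response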

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1545101/+subscriptions



[Yahoo-eng-team] [Bug 1545960] Re: authenticating with ldap user fails due to notification

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280542
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=90c95049a3e0a9ceadbd45b1aa5a7de50a8ea1d0
Submitter: Jenkins
Branch:master

commit 90c95049a3e0a9ceadbd45b1aa5a7de50a8ea1d0
Author: Steve Martinelli 
Date:   Tue Feb 16 03:12:08 2016 -0500

encode user id for notifications

local user ids that are returned from the mapping_id backend are
in unicode. this causes an issue when attempting to transform
the value into uuid5.

Change-Id: I87745944a3eb606fdd435ae983e5de602d08bd0d
closes-bug: 1545960
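
The failure is reproducible in isolation under Python 2: uuid.uuid5()
concatenates the namespace bytes with the name, so passing a unicode name
forces an implicit ascii decode and raises a UnicodeDecodeError like the one
in the traceback below. A minimal illustration (namespace and name are
arbitrary):

    import uuid

    name = u'steve\xe9@example.com'
    uuid.uuid5(uuid.NAMESPACE_DNS, name)                  # UnicodeDecodeError on Python 2
    uuid.uuid5(uuid.NAMESPACE_DNS, name.encode('utf-8'))  # works once the name is encoded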


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1545960

Title:
  authenticating with ldap user fails due to notification

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  I setup a non-default domain with an LDAP backend, with emails as
  usernames. This caused ldap user authentication to fail:

  2016-02-16 02:49:48.311 18101 DEBUG keystone.common.ldap.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] LDAP init: 
url=ldap://bluepages.ibm.com 2016-02-16 02:49:48.311 
_common_ldap_initialization /opt/stack/keystone/keystone/common/ldap/core.py:579
  2016-02-16 02:49:48.311 18101 DEBUG keystone.common.ldap.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] LDAP init: use_tls=False 
tls_cacertfile=None tls_cacertdir=None tls_req_cert=2 tls_avail=1 2016-02-16 
02:49:48.311 _common_ldap_initialization 
/opt/stack/keystone/keystone/common/ldap/core.py:583
  2016-02-16 02:49:48.311 18101 DEBUG keystone.common.ldap.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] LDAP search: 
base=ou=bluepages,o=ibm.com scope=2 
filterstr=(&(mail=steve...@ca.ibm.com)(objectClass=ibmPerson)(uid=*)) 
attrs=['mail', 'userPassword', 'enabled', 'uid'] attrsonly=0 2016-02-16 
02:49:48.311 search_s /opt/stack/keystone/keystone/common/ldap/core.py:938
  2016-02-16 02:49:48.418 18101 DEBUG keystone.common.ldap.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] LDAP unbind 2016-02-16 
02:49:48.418 unbind_s /opt/stack/keystone/keystone/common/ldap/core.py:911
  2016-02-16 02:49:48.420 18101 DEBUG keystone.identity.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] ID Mapping - Domain ID: 
f661d8c0c14848f5909cf5229a473377, Default Driver: False, Domains: False, UUIDs: 
False, Compatible IDs: True 2016-02-16 02:49:48.420 _set_domain_id_and_mapping 
/opt/stack/keystone/keystone/identity/core.py:577
  2016-02-16 02:49:48.420 18101 DEBUG keystone.identity.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] Local ID: 011918649 
2016-02-16 02:49:48.420 _set_domain_id_and_mapping_for_single_ref 
/opt/stack/keystone/keystone/identity/core.py:595
  2016-02-16 02:49:48.425 18101 DEBUG keystone.identity.core 
[req-d086b3ca-bddc-4927-b4d5-205913f4187e - - - - -] Found existing mapping to 
public ID: 2165702f085e15ff59308d8723df016d75fdd07e9af527a881b87812278e5068 
2016-02-16 02:49:48.425 _set_domain_id_and_mapping_for_single_ref 
/opt/stack/keystone/keystone/identity/core.py:608

  2016-02-16 02:32:22.650 17136 ERROR keystone.common.wsgi 
[req-0fb5bb7b-2ba1-4ced-a814-71bd53939d46 - - - - -] 'ascii' codec can't decode 
byte 0xec in position 2: ordinal not in range(128)
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 247, in __call__
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi result = 
method(context, **params)
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 396, in 
authenticate_for_token
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 520, in authenticate
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi auth_context)
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/plugins/password.py", line 36, in 
authenticate
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi 
password=user_info.password)
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/manager.py", line 124, in wrapped
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/notifications.py", line 555, in wrapper
  2016-02-16 02:32:22.650 17136 TRACE keystone.common.wsgi initiator = 
_get_request_audit_info(context, user

[Yahoo-eng-team] [Bug 1538619] Re: Fix up argument order in remove_volume_connection()

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280923
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=603e7db9a8091191ed2605c04c5b72593f0b0094
Submitter: Jenkins
Branch:master

commit 603e7db9a8091191ed2605c04c5b72593f0b0094
Author: gh159m 
Date:   Tue Feb 16 14:42:25 2016 -0600

Fixed arguement order in remove_volume_connection

RPC API and ComputeManager both contain a function named
remove_volume_connection with the same arguments, but ordered
differently.  This causes problems when called by
_rollback_live_migration.

This fix is more for future consistency, as this was affecting the
_ComputeV4Proxy class, which is present in stable/kilo but
no longer exists.

Change-Id: Iacadd5f015888c4181b8a332625ec746f991e239
Closes-Bug: #1538619


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538619

Title:
  Fix up argument order in remove_volume_connection()

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The RPC API function for remove_volume_connection() uses a different argument 
order than the ComputeManager function of the same name.
  
  The normal RPC code uses named arguments, but the _ComputeV4Proxy version 
doesn't, and it has the order wrong.  This causes problems when called by 
_rollback_live_migration().

  The fix seems to be trivial:
  diff --git a/nova/compute/manager.py b/nova/compute/manager.py
  index d6efd18..65c1b75 100644
  --- a/nova/compute/manager.py
  +++ b/nova/compute/manager.py
  @@ -6870,7 +6870,8 @@ class _ComputeV4Proxy(object):
                                                instance)
  
       def remove_volume_connection(self, ctxt, instance, volume_id):
  -        return self.manager.remove_volume_connection(ctxt, instance, volume_id)
  +        # The RPC API uses different argument order than the local API.
  +        return self.manager.remove_volume_connection(ctxt, volume_id, instance)
  
       def rescue_instance(self, ctxt, instance, rescue_password,
                           rescue_image_ref, clean_shutdown):

  Given that this only applies to stable/kilo I'm guessing there's no
  point in trying to push a patch, but I thought I'd include this here
  in case anyone else runs into it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538619/+subscriptions



[Yahoo-eng-team] [Bug 1311401] Re: nova.virt.ironic tries to remove vif_port_id unnecessarily

2016-02-17 Thread Jim Rollenhagen
Looks like this is fixed already:
https://github.com/openstack/nova/commit/d3acac0f5bffca59441d9a4a12c89db1d45ec4cf

** Changed in: nova
 Assignee: Aniruddha Singh Gautam (aniruddha-gautam) => (unassigned)

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311401

Title:
  nova.virt.ironic tries to remove vif_port_id unnecessarily

Status in Ironic:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  While spawning an instance, Ironic nova driver logs the following
  warning every time:

  2014-04-22 17:23:21.967 15379 WARNING wsme.api [-] Client-side error:
  Couldn't apply patch '[{'path': '/extra/vif_port_id', 'op':
  'remove'}]'. Reason: u'vif_port_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1311401/+subscriptions



[Yahoo-eng-team] [Bug 1526462] Re: Need support for OpenDirectory in LDAP driver

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/258528
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=449f1f2bdee5fe8026239667838cf2ab976806fd
Submitter: Jenkins
Branch:master

commit 449f1f2bdee5fe8026239667838cf2ab976806fd
Author: Alexander Makarov 
Date:   Wed Dec 16 17:11:36 2015 +0300

Enable support for posixGroups in LDAP

Support LDAP backends using POSIX goups

Change-Id: Iaaf022bfdcbd26b3a29c84ff60a033f65a60302b
Closes-Bug: 1526462


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1526462

Title:
  Need support for OpenDirectory in LDAP driver

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  It is necessary to support Apple OpenDirectory as the backend for
  Keystone Identity.

  OpenDirectory uses POSIX groups, in which group members are
  represented by UIDs rather than full DNs:

  dn: cn=group1, cn=groups,dc=domain,dc=com
  
  memberUid: user1
  memberUid: user2
  

  while the LDAP driver hardcodes the assumption that group members can
  only be full DNs, like:

  dn: cn=group1, cn=groups,dc=domain,dc=com
  
  memberUid: uid=user1,cn=users,dc=domain,dc=com
  memberUid: uid=user2,cn=users,dc=domain,dc=com

  For this reason it is impossible to use groups in Keystone, and we
  cannot assign roles to Keystone groups - Keystone does not recognize
  any user as a member of any group. When it checks roles, it searches
  for the user's direct assignments and then for any groups the user may
  belong to, so by default the search returns nothing.

  We need an additional parameter in the config to specify how group
  members are represented - as a DN or as an ID.
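
  For reference, the resulting knob in the keystone [ldap] configuration is a
  boolean saying that group members are IDs rather than DNs (option name per
  my reading of the merged change; double-check against the commit above):

      [ldap]
      group_objectclass = posixGroup
      group_member_attribute = memberUid
      group_members_are_ids = true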

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1526462/+subscriptions



[Yahoo-eng-team] [Bug 1470797] Re: Ironic: Improve logs

2016-02-17 Thread Michael Davies
The logging has been improved over time, so we'll close this for now.
If you still think there's a lack of logging here, please re-open the
bug.  Thanks.

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470797

Title:
  Ironic: Improve logs

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We need to improve the Ironic driver logs, currently we have very
  little. E.g there's only one INFO log in the whole driver code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470797/+subscriptions



[Yahoo-eng-team] [Bug 1326639] Re: Ironic nova driver fails to setup initial state correctly

2016-02-17 Thread Jim Rollenhagen
This has been fixed for a while; we only expose resources available to a
node in AVAILABLE/NONE provision state and with no instance uuid:
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L301-L318

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326639

Title:
  Ironic nova driver fails to setup initial state correctly

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  2014-06-05 04:04:54.552 28915 ERROR ironic.nova.virt.ironic.driver
  [req-66403d15-5f7e-4a59-8d3d-ba9d6e654fb5 None] Failed to request
  Ironic to provision instance ef3421ef-e7b3-4203-811c-dad052b9badf: RPC
  do_node_deploy called for cfa5c267-3a7c-4973-bdcf-80a139a947ea, but
  provision state is already deploy failed. (HTTP 500)

  
  This happened because the node wasn't 'properly' cleaned after the last
  instance_uuid was removed from it. It seems to me that the Ironic nova
  driver should not make any assumptions - it should just set its
  instance_uuid atomically, then reset all the state, and finally proceed
  to set the state it wants for deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1326639/+subscriptions



[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible for python 3

2016-02-17 Thread Tim Burke
Looks like this was addressed for Swift in
https://review.openstack.org/#/c/232536/ - this was committed as
https://git.openstack.org/cgit/openstack/swift/commit/?id=c0af385173658fa149bddf155aeb1ae0bbd4eb7e
and released in 2.6.0.

** Changed in: swift
   Status: In Progress => Fix Released

** Changed in: swift
 Assignee: Bill Huber (wbhuber) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279611

Title:
   urlparse is incompatible for python 3

Status in Astara:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gce-api:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  In Progress
Status in openstack-doc-tools:
  In Progress
Status in python-barbicanclient:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-designateclient:
  In Progress
Status in python-neutronclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-rackclient:
  In Progress
Status in Sahara:
  Fix Released
Status in Solar:
  In Progress
Status in storyboard:
  Fix Committed
Status in surveil:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in swift-bench:
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in vmware-nsx:
  In Progress
Status in zaqar:
  Fix Released
Status in Zuul:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:

  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.
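
  The point of the six.moves import is that it is a drop-in replacement, so
  existing call sites keep working on both Python 2 and Python 3, e.g.:

      import six.moves.urllib.parse as urlparse

      parts = urlparse.urlparse('http://example.com/v2/images?limit=10')
      print(parts.netloc, urlparse.parse_qs(parts.query))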

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1279611/+subscriptions



[Yahoo-eng-team] [Bug 1341347] Re: failed Ironic deploys can have incorrect hypervisor attribute in Nova

2016-02-17 Thread Jim Rollenhagen
I tend to think the instance should always be tagged with a "hypervisor"
for a record of where it was built. In the past this could cause
problems with the resource tracker, but those are long solved.

There's also the part of this where the logs are likely gone by now,
tripleo has changed its architecture up, etc. This is likely to be hard
to reproduce, even if we think it is a bug.

Going to close this as WONTFIX, feel free to reopen if you think I'm a
terrible person :)

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341347

Title:
  failed Ironic deploys can have incorrect hypervisor attribute in Nova

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  I just booted 46 nodes at once from a single Ironic
  conductor/Nova/keystone etc all in one cloud.

  After this, according to Ironic:

   - 1 node was in maintenance mode (see bug 1326279) 5 have
  instance_uuid None and the rest are active.

  But according to Nova:

   - 8 are in ERROR spawning:
  (in nova) | eb0e1255-4da5-46cb-b8e4-d3e1059e1087 | 
hw-test-eb0e1255-4da5-46cb-b8e4-d3e1059e1087 | ERROR  | spawning   | NOSTATE
 |   |
  (in ironic) | ebd0e2c1-7630-4067-94c1-81771c1680b6 | 
eb0e1255-4da5-46cb-b8e4-d3e1059e1087 | power on| active | False 
  |
  (see bug 1341346)

   - 5 are in ERROR NOSTATE:
  (nova)| c389bb7b-1760-4e69-a4ea-0aea07ccd4d8 | 
hw-test-c389bb7b-1760-4e69-a4ea-0aea07ccd4d8 | ERROR  | -  | NOSTATE
 | ctlplane=10.10.16.146 |
  nova show shows us that it has a hypervisor 
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | 8bc4357a-6b32-47de-b3ee-cec5b41e72d2 
  
  but in ironic there is no instance uuid (nor a deployment dict..):
  | 8bc4357a-6b32-47de-b3ee-cec5b41e72d2 | None 
| power off   | None   | False   |

  This bug is about the Nova instance having a hypervisor attribute that
  is wrong :)

  I have logs for this copied inside the DC, but a) its a production
  environment, so only tripleo-cd-admins can look (due to me being
  concened about passwords being in the logs) and b) they are 2.6GB in
  size, so its not all that feasible to attach them to the bug anyhow
  :).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1341347/+subscriptions



[Yahoo-eng-team] [Bug 1546793] [NEW] Fix neutron-fwaas cover tests

2016-02-17 Thread James Arendt
Public bug reported:

The tox.ini command for 'tox -e cover' breaks with error:
cover runtests: commands[0] | python setup.py testr --coverage 
--coverage-package-name=neutron_fwaas --testr-args=
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help

error: option --coverage-package-name not recognized
ERROR: InvocationError: '/opt/stack/neutron-fwaas/.tox/cover/bin/python 
setup.py testr --coverage --coverage-package-name=neutron_fwaas --testr-args='
___ summary 
ERROR:   cover: commands failed

Appears to be same issue as found in neutron-vpnaas and fixed there
under https://review.openstack.org/#/c/217847/

** Affects: neutron
 Importance: Undecided
 Assignee: James Arendt (james-arendt-7)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => James Arendt (james-arendt-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546793

Title:
  Fix neutron-fwaas cover tests

Status in neutron:
  In Progress

Bug description:
  The tox.ini command for 'tox -e cover' breaks with error:
  cover runtests: commands[0] | python setup.py testr --coverage 
--coverage-package-name=neutron_fwaas --testr-args=
  usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
 or: setup.py --help [cmd1 cmd2 ...]
 or: setup.py --help-commands
 or: setup.py cmd --help

  error: option --coverage-package-name not recognized
  ERROR: InvocationError: '/opt/stack/neutron-fwaas/.tox/cover/bin/python 
setup.py testr --coverage --coverage-package-name=neutron_fwaas --testr-args='
  ___ summary 

  ERROR:   cover: commands failed

  Appears to be same issue as found in neutron-vpnaas and fixed there
  under https://review.openstack.org/#/c/217847/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546793/+subscriptions



[Yahoo-eng-team] [Bug 1546786] [NEW] Removed vpn tempest mappings breaks vpnaas tests

2016-02-17 Thread James Arendt
Public bug reported:

The 'tox -e api' tests for neutron-vpnaas fail.  The code
relies on the neutron tempest network client and the relevant
service_resource_prefix_map, and the mappings for resources
like:
'vpnservices': 'vpn',
were removed.  This causes the tests to fail with 404 errors
using an incorrect uri.
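
A sketch of the kind of mapping the tests rely on (only the 'vpnservices'
entry is quoted from this report; the other vpnaas resources would need the
same 'vpn' URI prefix):

    # neutron tempest network client: resource name -> URI prefix
    service_resource_prefix_map = {
        'vpnservices': 'vpn',
        # ... other vpnaas resources (ikepolicies, ipsecpolicies, ...) likewise
    }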

** Affects: neutron
 Importance: Undecided
 Assignee: James Arendt (james-arendt-7)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => James Arendt (james-arendt-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546786

Title:
   Removed vpn tempest mappings breaks vpnaas tests

Status in neutron:
  In Progress

Bug description:
  The 'tox -e api' tests for neutron-vpnaas fail.  The code
  relies on the neutron tempest network client and the relevant
  service_resource_prefix_map, and the mappings for resources
  like:
  'vpnservices': 'vpn',
  were removed.  This causes the tests to fail with 404 errors
  using an incorrect uri.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546786/+subscriptions



[Yahoo-eng-team] [Bug 1479569] Re: Output from "role assignment list" is not useful

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255363
Committed: 
https://git.openstack.org/cgit/openstack/python-openstackclient/commit/?id=3a48989eb02187f384cfbf7bb7cd55502741fc68
Submitter: Jenkins
Branch:master

commit 3a48989eb02187f384cfbf7bb7cd55502741fc68
Author: Tom Cocozzello 
Date:   Wed Dec 9 10:08:16 2015 -0600

Return names in list role assignments

Utilize the new include names functionality added to
list role assignments (GET /role_assignments?include_names=True).
Which will return the names of the entities instead of their
IDs.

Change-Id: I6dc03baf61ef9354a8a259a9f17ff47ce1665ce7
Depends-On: I4aa77c08660a0cbd021502155938a46121ca76ef
Closes-Bug: #1479569
Implements: blueprint list-assignment-with-names


** Changed in: python-openstackclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1479569

Title:
  Output from "role assignment list" is not useful

Status in OpenStack Identity (keystone):
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-openstackclient:
  Fix Released

Bug description:
  It's showing the internal IDs of all the objects, which is really
  unhelpful. It would be much nicer if it showed the actual names of
  users, groups, projects and domains.

  Example:

  $ openstack role assignment list
  
+--+--+--+--+--+
  | Role | User | Group 
   | Project  | Domain  
 |
  
+--+--+--+--+--+
  | 83e948848b7842c9a15e01cfd9db6e1e | 0fa9633d884a42448bbd386778ca6b87 |   
   | 4404002027374bfe878501259b02a5d5 | 
 |
  | 83e948848b7842c9a15e01cfd9db6e1e | 0fa9633d884a42448bbd386778ca6b87 |   
   | 5568fe0e2ca74a5aae35b01f035cdee8 | 
 |
  | f25338bd4a1f4d74927375507d555fa5 | 339e327397d4437d8d1123d63098de76 |   
   | 67e30450f1c44010960aa7e1a457f9b3 | 
 |
  | ecea53c035034e93912428789e8272f6 | 35a3b6d9cb324661b5f144fd60a62964 |   
   | 9b5b2ef803514898b4a3a90ef09dcf66 | 
 |
  | 83e948848b7842c9a15e01cfd9db6e1e | 4644b913eb77414db8f344d37e3da2c2 |   
   | 9b5b2ef803514898b4a3a90ef09dcf66 | 
 |
  | ecea53c035034e93912428789e8272f6 | 50e99a8a5d6c40b2bd973fe55f2cb38b |   
   | 9b5b2ef803514898b4a3a90ef09dcf66 | 
 |
  | f1f56af00ee942a5b24d73dbfe2364cb | 54b9ac936fd04293981b828580a9a3e1 |   
   | 4404002027374bfe878501259b02a5d5 | 
 |
  | f25338bd4a1f4d74927375507d555fa5 | 54b9ac936fd04293981b828580a9a3e1 |   
   | 4404002027374bfe878501259b02a5d5 | 
 |
  | f25338bd4a1f4d74927375507d555fa5 | 54b9ac936fd04293981b828580a9a3e1 |   
   | c02e1e2d94584805a7445b6d31cab364 | 
 |
  | f25338bd4a1f4d74927375507d555fa5 |  | 
96a35e9d12544ee8aa3cfbf05f2fb649 | 4404002027374bfe878501259b02a5d5 |   
   |
  | f25338bd4a1f4d74927375507d555fa5 | 0fa9633d884a42448bbd386778ca6b87 |   
   |  | 
88fc45635a134ef084866fe0fa94e7f3 |
  
+--+--+--+--+--+
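
  With the fix, names can be requested instead of the raw IDs, either through
  the API (GET /role_assignments?include_names=True, as the commit message
  notes) or from the client, for example (flag as added by the change above):

      openstack role assignment list --names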

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479569/+subscriptions



[Yahoo-eng-team] [Bug 1546778] [NEW] libvirt: resize with deleted backing image fails

2016-02-17 Thread Chris St. Pierre
Public bug reported:

Once the Glance image from which an instance was spawned is deleted,
resizes of that instance fail if they would take place across more than
one compute node. Migration and live block migration both succeed.

Resize fails, I believe, because 'qemu-img resize' is called
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7218-L7221)
before the backing image has been transferred from the source compute
node
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7230-L7233).

Replication requires two compute nodes. To replicate:

1. Boot an instance from an image or snapshot.
2. Delete the image from Glance.
3. Resize the instance. It will fail with an error similar to:

Stderr: u"qemu-img: Could not open '/var/lib/nova/instances/f77f1c5c-
71f7-4645-afa1-dd30bacef874/disk': Could not open backing file: Could
not open
'/var/lib/nova/instances/_base/ca94b18d94077894f4ccbaafb1881a90225f1224':
No such file or directory\n"
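
Spelled out with client commands, the reproduction might look like this
(image, flavor and server names are placeholders, and the resize must land on
a different compute node for the failure to appear):

    openstack server create --image cirros --flavor m1.tiny demo-vm
    openstack image delete cirros
    openstack server resize --flavor m1.small demo-vm   # fails with the qemu-img backing file error above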

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546778

Title:
  libvirt: resize with deleted backing image fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Once the Glance image from which an instance was spawned is deleted,
  resizes of that instance fail if they would take place across more
  than one compute node. Migration and live block migration both succeed.

  Resize fails, I believe, because 'qemu-img resize' is called
  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7218-L7221)
  before the backing image has been transferred from the source compute
  node
  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7230-L7233).

  Replication requires two compute nodes. To replicate:

  1. Boot an instance from an image or snapshot.
  2. Delete the image from Glance.
  3. Resize the instance. It will fail with an error similar to:

  Stderr: u"qemu-img: Could not open '/var/lib/nova/instances/f77f1c5c-
  71f7-4645-afa1-dd30bacef874/disk': Could not open backing file: Could
  not open
  '/var/lib/nova/instances/_base/ca94b18d94077894f4ccbaafb1881a90225f1224':
  No such file or directory\n"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546778/+subscriptions



[Yahoo-eng-team] [Bug 1461000] Re: [rfe] openvswitch based firewall driver

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/249337
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ef29f7eb9a2a37133eacdb7f019b48ec3f9a42c3
Submitter: Jenkins
Branch:master

commit ef29f7eb9a2a37133eacdb7f019b48ec3f9a42c3
Author: Jakub Libosvar 
Date:   Tue Sep 1 15:50:48 2015 +

Open vSwitch conntrack based firewall driver

This firewall requires OVS 2.5+ version supporting conntrack and kernel
conntrack datapath support (kernel>=4.3). For more information, see
https://github.com/openvswitch/ovs/blob/master/FAQ.md

As part of this new entry points for current reference firewalls were
added.

Configuration:
in openvswitch_agent.ini:
- in securitygroup section set firewall_driver to openvswitch

DocImpact
Closes-bug: #1461000

Co-Authored-By: Miguel Angel Ajo Pelayo 
Co-Authored-By: Amir Sadoughi 

Change-Id: I13e5cda8b5f3a13a60b14d80e54f198f32d7a529


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461000

Title:
  [rfe] openvswitch based firewall driver

Status in neutron:
  Fix Released

Bug description:
  Nowadays, when using the openvswitch-agent with security groups we must
  use hybrid bridging, i.e. per instance we have both an openvswitch
  bridge and a linux bridge. The rationale behind this approach is to set
  filtering rules that match on the given linux bridge.

  We can get rid of the linux bridge if filtering is done directly in
  openvswitch via OpenFlow rules. The benefits of this approach are
  better throughput in the data plane due to removal of the linux bridge
  and faster rule filtering due to not using the physdev extension in
  iptables. Another improvement is in the control plane, because setting
  rules via the iptables firewall driver currently doesn't scale well.

  This RFE requests a new firewall driver that is capable of filtering
  packets based on specified security groups using openvswitch only.
  Requirement for OVS is to have conntrack support which is planned to
  be released with OVS 2.4.

  UPDATE (2015-06-02 jlibosva): What we want to achieve with this RFE is
  to use security groups with the openvswitch-agent without needing a
  linux bridge. The reasons for this include performance and easier
  debugging.
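
  As the commit message above notes, enabling the new driver is a single
  setting in openvswitch_agent.ini (with OVS >= 2.5 and conntrack support
  in the kernel, >= 4.3):

      [securitygroup]
      firewall_driver = openvswitch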

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461000/+subscriptions



[Yahoo-eng-team] [Bug 1546780] [NEW] Open vSwitch conntrack based firewall driver

2016-02-17 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/249337
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit ef29f7eb9a2a37133eacdb7f019b48ec3f9a42c3
Author: Jakub Libosvar 
Date:   Tue Sep 1 15:50:48 2015 +

Open vSwitch conntrack based firewall driver

This firewall requires OVS 2.5+ version supporting conntrack and kernel
conntrack datapath support (kernel>=4.3). For more information, see
https://github.com/openvswitch/ovs/blob/master/FAQ.md

As part of this new entry points for current reference firewalls were
added.

Configuration:
in openvswitch_agent.ini:
- in securitygroup section set firewall_driver to openvswitch

DocImpact
Closes-bug: #1461000

Co-Authored-By: Miguel Angel Ajo Pelayo 
Co-Authored-By: Amir Sadoughi 

Change-Id: I13e5cda8b5f3a13a60b14d80e54f198f32d7a529

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546780

Title:
  Open vSwitch conntrack based firewall driver

Status in neutron:
  New

Bug description:
  https://review.openstack.org/249337
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ef29f7eb9a2a37133eacdb7f019b48ec3f9a42c3
  Author: Jakub Libosvar 
  Date:   Tue Sep 1 15:50:48 2015 +

  Open vSwitch conntrack based firewall driver
  
  This firewall requires OVS 2.5+ version supporting conntrack and kernel
  conntrack datapath support (kernel>=4.3). For more information, see
  https://github.com/openvswitch/ovs/blob/master/FAQ.md
  
  As part of this new entry points for current reference firewalls were
  added.
  
  Configuration:
  in openvswitch_agent.ini:
  - in securitygroup section set firewall_driver to openvswitch
  
  DocImpact
  Closes-bug: #1461000
  
  Co-Authored-By: Miguel Angel Ajo Pelayo 
  Co-Authored-By: Amir Sadoughi 
  
  Change-Id: I13e5cda8b5f3a13a60b14d80e54f198f32d7a529

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546762] [NEW] 8a6d8bdae39_migrate_neutron_resources_table is not postgres compliant

2016-02-17 Thread Cedric Brandily
Public bug reported:

8a6d8bdae39_migrate_neutron_resources_table.py[1] is not postgres-
compliant[3] and may not work with non-empty tables, because
generate_records_for_existing assumes that
session.execute(...).inserted_primary_key is a scalar value, but it is
actually a list[2].
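
A minimal, self-contained sketch of the semantics (hypothetical stand-in
table and values; only the inserted_primary_key handling matters here):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    # Hypothetical stand-in for the standardattributes table used by the
    # migration.
    standardattributes = sa.Table(
        'standardattributes', metadata,
        sa.Column('id', sa.Integer, primary_key=True, autoincrement=True),
        sa.Column('resource_type', sa.String(255), nullable=False))

    engine = sa.create_engine('sqlite://')  # any backend behaves the same
    metadata.create_all(engine)

    with engine.connect() as conn:
        result = conn.execute(
            standardattributes.insert().values(resource_type='ports'))
        # inserted_primary_key is a list (one entry per primary key column),
        # not a scalar, so the first element must be taken explicitly.
        new_id = result.inserted_primary_key[0]
        print(new_id)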

[1] in package neutron.db.migration.alembic_migrations.versions.mitaka.contract
[2] 
http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html?highlight=inserted_primary_key#sqlalchemy.engine.ResultProxy.inserted_primary_key
[3]
# Starting with a liberty neutron db
#$ neutron-db-manage upgrade head


No handlers could be found for logger "oslo_config.cfg"
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, 
Add availability zone
INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, 
add is_default to subnetpool
INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, 
Add standard attribute table
INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, 
Drop embrane plugin table
INFO  [alembic.runtime.migration] Running upgrade 1b294093239c, 32e5974ada25 -> 
8a6d8bdae39, standardattributes migration
Traceback (most recent call last):
  File "/home/user/projects/os/neutron/.tox/py27/bin/neutron-db-manage", line 
10, in 
sys.exit(main())
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 744, 
in main
return_val |= bool(CONF.command.func(config, CONF.command.name)) 
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 220, 
in do_upgrade
desc=branch, sql=CONF.command.sql)
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 127, 
in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/command.py",
 line 174, in upgrade
script.run_env()
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/script/base.py",
 line 397, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util/pyfiles.py",
 line 81, in load_python_file
module = load_module_py(module_id, path)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util/compat.py",
 line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/env.py",
 line 126, in 
run_migrations_online()
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/env.py",
 line 120, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 797, in run_migrations
self.get_context().run_migrations(**kw)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/migration.py",
 line 312, in run_migrations
step.migration_fn(**kw)
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/versions/mitaka/contract/8a6d8bdae39_migrate_neutron_resources_table.py",
 line 50, in upgrade
generate_records_for_existing()
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/versions/mitaka/contract/8a6d8bdae39_migrate_neutron_resources_table.py",
 line 83, in generate_records_for_existing
model.c.id == row[0]))
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1034, in execute
bind, close_with_result=True).execute(clause, params or {})
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
return meth(self, multiparams, params)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1146, in _execute_context
context)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1337, in _handle_dbapi_exception
util.raise_from_cause

[Yahoo-eng-team] [Bug 1546758] [NEW] Inconsistent ordering for angular table actions

2016-02-17 Thread Justin Pomeroy
Public bug reported:

The horizon angular actions service uses $qExtensions.allSettled when
resolving permitted actions.  The allSettled method does not enforce
that the order of the pass and fail promise arrays is the same as the
original list of promises, and this can cause the order of the actions
to be inconsistent.  The order of the actions is actually determined by
the order in which they are resolved.  This causes actions I want to be
last in the menu (Delete) to sometimes show up as the default button
action.

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546758

Title:
  Inconsistent ordering for angular table actions

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The horizon angular actions service uses $qExtensions.allSettled when
  resolving permitted actions.  The allSettled method does not enforce
  that the order of the pass and fail promise arrays is the same as the
  original list of promises, and this can cause the order of the actions
  to be inconsistent.  The order of the actions is actually determined
  by the order in which they are resolved.  This causes actions I want
  to be last in the menu (Delete) to sometimes show up as the default
  button action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546742] [NEW] Unable to create an instance

2016-02-17 Thread Vladislav
Public bug reported:

I am trying to create an instance from command line.

System requirements:
OS: CentOS 7
Openstack Liberty

Steps to reproduce:
openstack server create --debug --flavor m1.tiny --image 
97836f02-2059-40a8-99ba-1730e97aa101 --nic 
net-id=256eea0b-06fe-49a0-880d-6ecc8afeff5a --security-group default --key-name 
vladf public-instance

Program output:
Instantiating network client: 
Instantiating network api: 
REQ: curl -g -i -X GET 
http://controller:9696/v2.0/networks.json?fields=id&name=256eea0b-06fe-49a0-880d-6ecc8afeff5a
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}c153a4df19fb8e5fb095ea656a8dabbe26d88e13"
RESP: [503] date: Wed, 17 Feb 2016 22:04:50 GMT connection: keep-alive 
content-type: text/plain; charset=UTF-8 content-length: 100 
x-openstack-request-id: req-72933065-8db0-4dd4-af82-4a529cd08e90
RESP BODY: 503 Service Unavailable

The server is currently unavailable. Please try again at a later time.


Error message: 503 Service Unavailable

The server is currently unavailable. Please try again at a later time.


503 Service Unavailable

The server is currently unavailable. Please try again at a later time.


Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 374, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/display.py", line 92, in run
column_names, data = self.take_action(parsed_args)
  File "/usr/lib/python2.7/site-packages/openstackclient/common/utils.py", line 
45, in wrapper
return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/openstackclient/compute/v2/server.py", 
line 452, in take_action
nic_info["net-id"])
  File "/usr/lib/python2.7/site-packages/openstackclient/network/common.py", 
line 32, in find
data = list_method(**kwargs)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
ret = self.function(instance, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
574, in list_networks
**_params)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
307, in list
for r in self._pagination(collection, path, **params):
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
320, in _pagination
res = self.get(path, params=params)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
293, in get
headers=headers, params=params)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
headers=headers, params=params)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
self._handle_fault_response(status_code, replybody)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
185, in _handle_fault_response
exception_handler_v20(status_code, des_error_body)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
83, in exception_handler_v20
message=message)
NeutronClientException: 503 Service Unavailable

The server is currently unavailable. Please try again at a later time.


clean_up CreateServer: 503 Service Unavailable

The server is currently unavailable. Please try again at a later time.


Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 112, 
in run
ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 255, in run
result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 374, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/display.py", line 92, in run
column_names, data = self.take_action(parsed_args)
  File "/usr/lib/python2.7/site-packages/openstackclient/common/utils.py", line 
45, in wrapper
return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/openstackclient/compute/v2/server.py", 
line 452, in take_action
nic_info["net-id"])
  File "/usr/lib/python2.7/site-packages/openstackclient/network/common.py", 
line 32, in find
data = list_method(**kwargs)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
ret = self.function(instance, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
574, in list_networks
**_params)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
307, in list
for r in self._pagination(collection, path, **params):
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
320, in _pagination
res = self.get(path, params=params)
  File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
293, in get
headers=headers, params=params)
  File "/

[Yahoo-eng-team] [Bug 1546731] [NEW] 1df244e556f5_add_unique_ha_router_agent_port_bindings revision is not postgres compliant

2016-02-17 Thread Cedric Brandily
Public bug reported:

1df244e556f5_add_unique_ha_router_agent_port_bindings.py[1] is not
postgres-compliant[2] because it uses GROUP BY incorrectly:

column "ha_router_agent_port_bindings.port_id" must appear in the GROUP
BY clause or be used in an aggregate function
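
For illustration, a hedged sketch of a postgres-compliant version of the
duplicate check (column names follow the ha_router_agent_port_bindings schema
but are assumptions here; this is not the actual fix): every selected
non-aggregate column also appears in the GROUP BY clause.

    import sqlalchemy as sa
    from sqlalchemy.sql import column, table

    # Lightweight table/column constructs, enough to build the statement.
    bindings = table(
        'ha_router_agent_port_bindings',
        column('router_id'),
        column('l3_agent_id'))

    duplicates = (
        sa.select([bindings.c.router_id, bindings.c.l3_agent_id])
        .group_by(bindings.c.router_id, bindings.c.l3_agent_id)
        .having(sa.func.count() > 1))

    print(duplicates)  # prints the compiled SQL with a valid GROUP BY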


[1] in package neutron.db.migration.alembic_migrations.versions.mitaka.expand
[2]
# Starting with a liberty neutron db
#$ neutron-db-manage upgrade head


INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
  File "/home/user/projects/os/neutron/.tox/py27/bin/neutron-db-manage", line 
10, in 
sys.exit(main())
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 744, 
in main
return_val |= bool(CONF.command.func(config, CONF.command.name))
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 218, 
in do_upgrade
run_sanity_checks(config, revision)
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 726, 
in run_sanity_checks
script_dir.run_env()
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/script/base.py",
 line 397, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util/pyfiles.py",
 line 81, in load_python_file
module = load_module_py(module_id, path)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util/compat.py",
 line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/env.py",
 line 126, in 
run_migrations_online()
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/env.py",
 line 120, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 797, in run_migrations
self.get_context().run_migrations(**kw)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/migration.py",
 line 303, in run_migrations
for step in self._migrations_fn(heads, self):
  File "/home/user/projects/os/neutron/neutron/db/migration/cli.py", line 719, 
in check_sanity
script.module.check_sanity(context.connection)
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/1df244e556f5_add_unique_ha_router_agent_port_bindings.py",
 line 57, in check_sanity
res = get_duplicate_l3_ha_port_bindings(connection)
  File 
"/home/user/projects/os/neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/1df244e556f5_add_unique_ha_router_agent_port_bindings.py",
 line 70, in get_duplicate_l3_ha_port_bindings
.having(sa.func.count() > 1)).all()
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2588, in all
return list(self)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2736, in __iter__
return self._execute_and_instances(context)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2751, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
return meth(self, multiparams, params)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1146, in _execute_context
context)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 200, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1139, in _execute_context
context)
  File 
"/home/user/projects/os/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 450, in do_execute
cu

[Yahoo-eng-team] [Bug 1546736] [NEW] Running eslint in quiet mode

2016-02-17 Thread Thai Tran
Public bug reported:

We have a ton of warnings. This makes it difficult to locate linting
errors. I think we should enable quiet mode so that warnings do not show
up.
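
For reference, a minimal sketch of running ESLint with warnings suppressed
(the --quiet flag reports errors only; the paths and any npm script wiring
below are assumptions, not Horizon's actual configuration):

    eslint --quiet horizon/static openstack_dashboard/static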

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress

** Description changed:

  We have a ton of warnings. This makes it difficult to locate linting
- errors. This is my personal preference, but I think we should enable
- quiet mode so that warnings do not show up.
+ errors. I think we should enable quiet mode so that warnings do not show
+ up.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546736

Title:
  Running eslint in quiet mode

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We have a ton of warnings. This makes it difficult to locate linting
  errors. I think we should enable quiet mode so that warnings do not
  show up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546723] [NEW] dnsmasq processes inherit system mounts that should not be inherited

2016-02-17 Thread Valeriy Ponomaryov
Public bug reported:

See paste [1] - it shows the list of mounts that each dnsmasq process
holds. The ones that have "alpha", "betta" and "gamma" in their names are
ZFS filesystems, and it is impossible to unmount them. For ZFS this means
we cannot "destroy" the ZFS filesystems that appear in that list because
they are "busy". To be able to destroy a ZFS dataset we need to either
terminate the dnsmasq processes or hack them into unmounting those mounts.

This happens when we create the dataset first and then spawn the dnsmasq
process.

The problem was found in the Manila project with its new ZFSonLinux share
driver [2], running Neutron on the same host.

So it is expected that this bug affects lots of filesystems.

Expected behaviour: each dnsmasq process should hold only the mounts it
requires and not block all the others while it is alive.

[1] http://paste.openstack.org/show/487325/

[2] https://review.openstack.org/#/c/277192/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dnsmasq

** Tags added: dnsmasq

** Description changed:

  See paste [1] - there is list of mounts that each dnsmasq process holds.
  The ones that have "alpha", "betta" and "gamma" words in names are ZFS
  filesystems. And it is impossible to unmount them. In case of ZFS it
  means we cannot "destroy" ZFS filesystems that are in that list. To be
  able to destroy ZFS dataset we need either terminate dnsmasq processes
  or hack them to unmount those mounts.
  
  It happens when we create dataset first then spawn dnsmasq process.
  
- Problem was found in Manila project with its new share driver ZFSonLinux
- [2] running neutron on same host.
+ Problem was found in Manila project with its new ZFSonLinux share driver
+ [2] running Neutron on same host.
  
  Expected behaviour: each dnsmasq process should hold only required for
  them mounts not blocking all other while it is alive.
  
  [1] http://paste.openstack.org/show/487325/
  
  [2] https://review.openstack.org/#/c/277192/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546723

Title:
  dnsmasq processes inherit system mounts that should not be inherited

Status in neutron:
  New

Bug description:
  See paste [1] - it shows the list of mounts that each dnsmasq process
  holds. The ones that have "alpha", "betta" and "gamma" in their names
  are ZFS filesystems, and it is impossible to unmount them. For ZFS this
  means we cannot "destroy" the ZFS filesystems that appear in that list
  because they are "busy". To be able to destroy a ZFS dataset we need to
  either terminate the dnsmasq processes or hack them into unmounting
  those mounts.

  This happens when we create the dataset first and then spawn the
  dnsmasq process.

  The problem was found in the Manila project with its new ZFSonLinux
  share driver [2], running Neutron on the same host.

  So it is expected that this bug affects lots of filesystems.

  Expected behaviour: each dnsmasq process should hold only the mounts it
  requires and not block all the others while it is alive.

  [1] http://paste.openstack.org/show/487325/

  [2] https://review.openstack.org/#/c/277192/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546708] [NEW] ng flavor table missing column

2016-02-17 Thread Cindy Lu
Public bug reported:

A missing column, the rx-tx factor, was added to the Flavors table in the
review below. We should add it to the ng flavors table for consistency.

https://review.openstack.org/#/c/247673/

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546708

Title:
  ng flavor table missing column

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A missing column, the rx-tx factor, was added to the Flavors table in
  the review below. We should add it to the ng flavors table for
  consistency.

  https://review.openstack.org/#/c/247673/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546668] [NEW] Auto allocate topology masks error when executing without a default subnetpool

2016-02-17 Thread Assaf Muller
Public bug reported:

How to reproduce:

Create an external network with is_default = True
Run 'neutron auto-allocated-topology-show', actual output:
Deployment error: Unable to provide tenant private network.

From neutron-server.log:
No default pools available
Unable to auto allocate topology for tenant 760c003239354f45a41caccd0af7ab42 
due to missing requirements, e.g. default or shared subnetpools

Looking at the code, when auto allocating a tenant network it catches a
bunch of different errors when creating a subnet, logs the error above,
then raises another exception, losing the reason for the failure.

The expected output would be something like:
'Cannot auto allocate a topology without a default subnetpool configured'

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: auto-allocated-topology

** Tags added: auto-allocated-topology

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546668

Title:
  Auto allocate topology masks error when executing without a default
  subnetpool

Status in neutron:
  New

Bug description:
  How to reproduce:

  Create an external network with is_default = True
  Run 'neutron auto-allocated-topology-show', actual output:
  Deployment error: Unable to provide tenant private network.

  From neutron-server.log:
  No default pools available
  Unable to auto allocate topology for tenant 760c003239354f45a41caccd0af7ab42 
due to missing requirements, e.g. default or shared subnetpools

  Looking at the code, when auto allocating a tenant network it catches
  a bunch of different errors when creating a subnet, logs the error
  above, then raises another exception, losing the reason for the
  failure.

  The expected output would be something like:
  'Cannot auto allocate a topology without a default subnetpool configured'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible for python 3

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281068
Committed: 
https://git.openstack.org/cgit/openstack/python-barbicanclient/commit/?id=84fc9dc40d8a77fdc52dbf38bc24e8f66c4c958d
Submitter: Jenkins
Branch:master

commit 84fc9dc40d8a77fdc52dbf38bc24e8f66c4c958d
Author: Tin Lam 
Date:   Wed Feb 17 00:27:37 2016 -0600

Use six.moves.urllib.parse to replace urlparse

Import six.moves.urllib.parse as urlparse for python3 compatible.

Change-Id: I0f28f01a54daaa690cd890540fd4edc3b32411d1
Closes-Bug: #1279611


** Changed in: python-barbicanclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279611

Title:
   urlparse is incompatible for python 3

Status in Astara:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gce-api:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  In Progress
Status in openstack-doc-tools:
  In Progress
Status in python-barbicanclient:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-designateclient:
  In Progress
Status in python-neutronclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-rackclient:
  In Progress
Status in Sahara:
  Fix Released
Status in Solar:
  In Progress
Status in storyboard:
  Fix Committed
Status in surveil:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in swift-bench:
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in vmware-nsx:
  In Progress
Status in zaqar:
  Fix Released
Status in Zuul:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:
  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.
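
  A minimal usage sketch of the replacement import (the URL is just an
  example):

      import six.moves.urllib.parse as urlparse

      parts = urlparse.urlparse(
          'http://controller:9696/v2.0/networks.json?fields=id')
      print(parts.netloc)                    # controller:9696
      print(urlparse.parse_qs(parts.query))  # {'fields': ['id']}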

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1279611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546664] [NEW] Unable to run tox tests, install_command error

2016-02-17 Thread Dane Fichter
Public bug reported:

I'm starting from a fresh install of Ubuntu 14.04. I'm able to run tests
via the ./run_tests.sh script, but I am unable to run them via tox.
Whenever I attempt to, I get the following error:

Traceback (most recent call last):
  File "/usr/bin/tox", line 9, in 
load_entry_point('tox==1.6.0', 'console_scripts', 'tox')()
  File "/usr/lib/python2.7/dist-packages/tox/_cmdline.py", line 25, in main
config = parseconfig(args, 'tox')
  File "/usr/lib/python2.7/dist-packages/tox/_config.py", line 44, in 
parseconfig
parseini(config, inipath)
  File "/usr/lib/python2.7/dist-packages/tox/_config.py", line 236, in __init__
config)
  File "/usr/lib/python2.7/dist-packages/tox/_config.py", line 335, in 
_makeenvconfig
"'install_command' must contain '{packages}' substitution")
tox.ConfigError: ConfigError: 'install_command' must contain '{packages}' 
substitution

I believe it's an error in tox.ini, but after checking on IRC, I don't
see anyone else experiencing the same error.
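
For reference, tox requires the {packages} substitution in any custom
install_command. A minimal sketch of a setting that satisfies that
requirement (the exact flags Glance uses may differ; this only shows the
shape tox expects):

    [testenv]
    install_command = pip install -U {opts} {packages}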

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1546664

Title:
  Unable to run tox tests, install_command error

Status in Glance:
  New

Bug description:
  I'm starting from a fresh install of Ubuntu 14.04. I'm able to run
  tests via the ./run_tests.sh script, but I am unable to run them via
  tox. Whenever I attempt to, I get the following error:

  Traceback (most recent call last):
File "/usr/bin/tox", line 9, in 
  load_entry_point('tox==1.6.0', 'console_scripts', 'tox')()
File "/usr/lib/python2.7/dist-packages/tox/_cmdline.py", line 25, in main
  config = parseconfig(args, 'tox')
File "/usr/lib/python2.7/dist-packages/tox/_config.py", line 44, in 
parseconfig
  parseini(config, inipath)
File "/usr/lib/python2.7/dist-packages/tox/_config.py", line 236, in 
__init__
  config)
File "/usr/lib/python2.7/dist-packages/tox/_config.py", line 335, in 
_makeenvconfig
  "'install_command' must contain '{packages}' substitution")
  tox.ConfigError: ConfigError: 'install_command' must contain '{packages}' 
substitution

  I believe it's an error in tox.ini, but after checking on IRC, I don't
  see anyone else experiencing the same error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1546664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546506] Re: spawn_n fails in functional tests

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281278
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=21d139d441f9e43b35b2b96d05e962dff9a690a1
Submitter: Jenkins
Branch:master

commit 21d139d441f9e43b35b2b96d05e962dff9a690a1
Author: Jakub Libosvar 
Date:   Wed Feb 17 13:35:25 2016 +

Don't disable Nagle algorithm in HttpProtocol

0.18.3 evenlet by default disables Nagle algorithm for sockets using
http protocol [1]. It brought a regression to eventlet [2] and this
patch adds workaround that doesn't disable the algorithm.

[1] 
https://github.com/eventlet/eventlet/commit/40714b1ffadd47b315ca07f9b85009448f0fe63d
[2] https://github.com/eventlet/eventlet/issues/301

Change-Id: I79a8583a5fe9812b6609bd4df5623f13c3b81df5
Closes-bug: 1546506


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546506

Title:
  spawn_n fails in functional tests

Status in neutron:
  Fix Released

Bug description:
  The gate seems broken due to a spawn_n failure; it doesn't seem like
  Neutron's fault but a library issue. I haven't tracked down which
  library was updated yet. It started occurring at about Feb 16/17
  midnight UTC and also influences the fullstack tests.

  
  e-s: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%5C%22gate-neutron-dsvm-functional%5C%22%20AND%20build_status%3A%5C%22FAILURE%5C%22%20AND%20message%3A%5C%22greenpool.py%5C%5C%5C%22%2C%20line%2082%2C%20in%20_spawn_n_impl%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535346] Re: Direct Snapshot in Ceph patch causes nova live-migration to break

2016-02-17 Thread Sean Dague
Given the clearly invalid python in the stack trace that mriedem pointed
out, marking as Invalid.

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
   Importance: High => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1535346

Title:
  Direct Snapshot in Ceph patch causes nova live-migration to break

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Greetings,

  When trying to use nova live-migration we get the following trace in the
  compute node logs (nova-compute.log).
  The VM stays in the "MIGRATING" state; it is still up on the
  originating node and never moves.

  VERSION IN USE: ( KILO v1 )

  root@compute2:~# dpkg -l | grep nova
  ii  nova-common 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - common files
  ii  nova-compute1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
  ii  python-nova 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
  ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

  CMD:

  nova migrate-live 6b91ffb3-bf84-48e9-b07d-99603c89055c compute4

  LOGS (/var/log/nova/nova-compute.log):

  2016-01-18 09:55:46.160 18571 ERROR oslo_messaging.rpc.dispatcher 
[req-fae37d19-0867-4796-9df3-b08be2120d15 ad8ac2392f41482e887c6b44402a641d 
dfcd0863f170402689d89771da0ea3ff - - -] Exception during message handling: 
'module' object has no attribute 'spawn'
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6668, in 
live_migration
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher 
migrate_data=migrate_data)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher payload)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 355, in 
decorated_function
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 343, in 
decorated_function
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-01-18 09:55:46.160 18571 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5237, in 
live_migration
  2016-01-18 09:55:46.160 18571 TRACE oslo_me

[Yahoo-eng-team] [Bug 1526642] Re: Simultaneous live migrations break anti-affinity policy

2016-02-17 Thread Sean Dague
This is definitely a feature rather than a bug, given the complexity. We
should track it as a spec or blueprint instead.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526642

Title:
  Simultaneous live migrations break anti-affinity policy

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Let's say we have a setup with 3 compute nodes (CN1, CN2 and CN3) and
  3 controllers (in HA mode). There are 2 VMs with anti-affinity policy
  (the same server group) running in the environment:

  * CN1 - VM A (anti-affinity)
  * CN2 - VM B (anti-affinity)
  * CN3 - empty

  If we trigger live migration of VM A and then trigger live migration
  of VM B without waiting for scheduling phase of VM A to complete we
  will end up with anti-affinity policy violated:

  * CN1 - empty
  * CN2 - empty
  * CN3 - VM A, VM B (both with anti-affinity policy)

  Workaround is to wait few seconds and let scheduler finish the job for
  the first VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526818] Re: Incorrect and excess ARP responses in tenant subnets

2016-02-17 Thread Sean Dague
This feels like it needs neutron experts to weigh in because under this
kind of environment the network setup is basically done by neutron.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526818

Title:
  Incorrect and excess ARP responses in tenant subnets

Status in neutron:
  New
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  We are facing a very strange behaviour of ARP in tenant networks,
  causing Windows guests to incorrectly decline DHCP addresses. These
  VMs apparently do an ARP request for the address they have been
  offered, discarding them in case a different MAC is reporting to own
  that IP already.

  We are using openvswitch-agent with ml2 plugin.

  Investigating this issue using Linux guests. Please look at the
  following example. A VM with the fixed-ip 192.168.1.15 reports the
  following ARP cache:

 root@michael-test2:~# arp
 Address  HWtype  HWaddress   Flags Mask
Iface
 host-192-168-1-2.openst  ether   fa:16:3e:de:ab:ea   C 
eth0
 192.168.1.13 ether   a6:b2:dc:d8:39:c1   C 
eth0
 192.168.1.119(incomplete)  
eth0
 host-192-168-1-20.opens  ether   fa:16:3e:76:43:ce   C 
eth0
 host-192-168-1-19.opens  ether   fa:16:3e:0d:a6:0b   C 
eth0
 host-192-168-1-1.openst  ether   fa:16:3e:2a:81:ff   C 
eth0
 192.168.1.14 ether   0e:bf:04:b7:ed:52   C 
eth0
 
  Both 192.168.1.13 and 192.168.1.14 do not exist in this subnet, and their MAC 
addresses a6:b2:dc:d8:39:c1 and 0e:bf:04:b7:ed:52 actually belong to other 
instance qbr* and qvb* devices, living on their respective hypervisor hosts!

  Looking at 0e:bf:04:b7:ed:52, for example, yields

 # ip link list | grep -C1 -e 0e:bf:04:b7:ed:52
 59: qbr9ac24ac1-e1:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default 
 link/ether 0e:bf:04:b7:ed:52 brd ff:ff:ff:ff:ff:ff
 60: qvo9ac24ac1-e1:  mtu 1500 
qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000
 --
 61: qvb9ac24ac1-e1:  mtu 1500 
qdisc pfifo_fast master qbr9ac24ac1-e1 state UP mode DEFAULT group default qlen 
1000
 link/ether 0e:bf:04:b7:ed:52 brd ff:ff:ff:ff:ff:ff
 62: tap9ac24ac1-e1:  mtu 1500 qdisc 
pfifo_fast master qbr9ac24ac1-e1 state UNKNOWN mode DEFAULT group default qlen 
500

  on the compute node. Using tcpdump on qbr9ac24ac1-e1 on the host and
  triggering a fresh ARM lookup from the guest results in

 # tcpdump -i qbr9ac24ac1-e1 -vv -l | grep ARP
 tcpdump: WARNING: qbr9ac24ac1-e1: no IPv4 address assigned
 tcpdump: listening on qbr9ac24ac1-e1, link-type EN10MB (Ethernet), capture 
size 65535 bytes
 14:00:32.089726 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
192.168.1.14 tell 192.168.1.15, length 28
 14:00:32.089740 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 
is-at 0e:bf:04:b7:ed:52 (oui Unknown), length 28
 14:00:32.090141 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 
is-at 7a:a5:71:63:47:94 (oui Unknown), length 28
 14:00:32.090160 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 
is-at 02:f9:33:d5:04:0d (oui Unknown), length 28
 14:00:32.090168 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 
is-at 9a:a0:46:e4:03:06 (oui Unknown), length 28

  Four different devices are claiming to own the non-existing IP
  address! Looking them up in neutron shows they are all related to
  existing ports on the subnet, but different ones:

 # neutron port-list | grep -e 47fbb8b5-55 -e 46647cca-32 -e e9e2d7c3-7e -e 
9ac24ac1-e1
 | 46647cca-3293-42ea-8ec2-0834e19422fa |   
| fa:16:3e:7d:9c:45 | {"subnet_id": 
"25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.8"}   |
 | 47fbb8b5-5549-46e4-850e-bd382375e0f8 |   
| fa:16:3e:fa:df:32 | {"subnet_id": 
"25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.7"}   |
 | 9ac24ac1-e157-484e-b6a2-a1dded4731ac |   
| fa:16:3e:2a:80:6b | {"subnet_id": 
"25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.15"}  |
 | e9e2d7c3-7e58-4bc2-a25f-d48e658b2d56 |   
| fa:16:3e:0d:a6:0b | {"subnet_id": 
"25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.19"}  |

  Environment:

  Host: Ubuntu server 14.04
  Kernel: linux-image-generic-lts-vivid, 3.19.0-39-generic #44~14.04.1-Ubuntu 
SMP Wed Dec 2 10:00:35 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  OpenStack Kilo:
  # dpkg -l |

[Yahoo-eng-team] [Bug 1532076] Re: Nova intermittently fails test_volume_boot_patters with db error

2016-02-17 Thread Sean Dague
This isn't really enough to go on; there aren't even links to logs in
the gate.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532076

Title:
  Nova intermittently fails test_volume_boot_patters with db error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This test seems randomly problematic, but I noticed 3 failures today
  with the following error logged in nova.api:

  2016-01-08 03:04:42.603 ERROR oslo_db.api 
[req-9fb82769-155d-4f50-87db-c912c8ad34a6 
tempest-TestVolumeBootPattern-388230709 
tempest-TestVolumeBootPattern-1026177222] DB error.
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api Traceback (most recent call 
last):
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api return f(*args, **kwargs)
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 1717, in instance_destroy
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api raise 
exception.ConstraintNotMet()
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api ConstraintNotMet: Constraint 
not met.
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536442] Re: swap volume is not available because the parameters are opposite

2016-02-17 Thread Sean Dague
*** This bug is a duplicate of bug 1451860 ***
https://bugs.launchpad.net/bugs/1451860

** This bug has been marked a duplicate of bug 1451860
   Attached volume migration failed, due to incorrect arguments  order passed 
to swap_volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536442

Title:
  swap volume is not available because the parameters are opposite

Status in OpenStack Compute (nova):
  New

Bug description:
  version: kilo

  description:

  In the file nova/compute/manager.py, the functions look like this:

  class _ComputeV4Proxy(object):
  
  def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
  return self.manager.swap_volume(ctxt, instance, old_volume_id,
  new_volume_id)

  
  class ComputeManager(manager.Manager):
  
 @wrap_exception()
  @reverts_task_state
  @wrap_instance_fault
  def swap_volume(self, context, old_volume_id, new_volume_id, instance):

  
  As you can see, the instance parameter needs to be passed in the
  correct position.
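
  One way to make the call robust to this kind of positional mismatch (a
  sketch only, not the actual upstream fix) is to pass the arguments by
  keyword in the proxy:

      class _ComputeV4Proxy(object):
          def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
              # Keyword arguments keep the proxy and the manager in agreement
              # regardless of the manager's positional parameter order.
              return self.manager.swap_volume(ctxt,
                                              old_volume_id=old_volume_id,
                                              new_volume_id=new_volume_id,
                                              instance=instance)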

  
  Steps to reproduce:

  1 create a volume
  2 create a vm
  3 attach the volume to the vm
  4 migrate the volume to another backend.

  
  error logs:

  2016-01-21 10:57:24.916 24271 ERROR root 
[req-15d4d656-06ee-4c02-8da0-af779944 9134a1a11e5441c29e37757231f36450 
32bfe3124bb2478aad3e6aa1cee09f14 - - -] Original exception being dropped: 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 392, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5628, in 
swap_volume\ninstance.uuid)\n', "AttributeError: 'unicode' object has no 
attribute 'uuid'\n"]
  2016-01-21 10:57:24.916 24271 ERROR root 
[req-15d4d656-06ee-4c02-8da0-af779944 9134a1a11e5441c29e37757231f36450 
32bfe3124bb2478aad3e6aa1cee09f14 - - -] Original exception being dropped: 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 347, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 404, in 
decorated_function\nkwargs[\'instance\'], e, sys.exc_info())\n', '  File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 89, in 
add_instance_fault_from_exc\nfault_obj.instance_uuid = instance.uuid\n', 
"AttributeError: 'unicode' object has no attribute 'uuid'\n"]
  2016-01-21 10:57:24.917 24271 ERROR oslo_messaging.rpc.dispatcher 
[req-15d4d656-06ee-4c02-8da0-af779944 9134a1a11e5441c29e37757231f36450 
32bfe3124bb2478aad3e6aa1cee09f14 - - -] Exception during message handling: 
string indices must be integers
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8262, in 
swap_volume
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher payload)
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-01-21 10:57:24.917 24271 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py"

[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281252
Committed: 
https://git.openstack.org/cgit/openstack/rally/commit/?id=a924151906eab93d7455688337cf08415c56058e
Submitter: Jenkins
Branch:master

commit a924151906eab93d7455688337cf08415c56058e
Author: Chaozhe.Chen 
Date:   Wed Feb 17 20:54:46 2016 +0800

Test: Stop using non-existent method of Mock

There is no method called_once_with() in Mock object.
Use assert_called_once_with() instead.
And called_once_with() does nothing because it's a mock object.

In case 'test_setup_with_no_lbaas', method iterate_per_tenants() will
not be called when setup with no lbass, so use assert_not_called()
instead.

Change-Id: Ib25b325b8764e6f0e0928f46f4789fce0f04b9e1
Closes-Bug: #1544522


** Changed in: rally
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in python-designateclient:
  In Progress
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  The mock.Mock class has no method "called_once_with"; it only has
  "assert_called_once_with". Currently there are still some places where
  we use the called_once_with method, and we should correct them.

  NOTE: called_once_with() does nothing because it's a mock object.
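
  A minimal sketch of the difference (any attribute access on a Mock simply
  returns another Mock, so the bogus call never asserts anything):

      import mock

      m = mock.Mock()
      m.do_work(1, 2)

      # Real assertion helper: raises AssertionError if the call differs.
      m.do_work.assert_called_once_with(1, 2)

      # Not an assertion at all: Mock auto-creates "called_once_with" and
      # just returns a new Mock, so this silently "passes" even though the
      # arguments are wrong.
      m.do_work.called_once_with(999)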

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545922] Re: Nova API allow DB index as the server_id parem

2016-02-17 Thread Anne Gentle
We need the EC2 API devs to let us know if this is required before
documenting this capability, as it may be removed.

** Changed in: openstack-api-site
   Importance: Undecided => Medium

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545922

Title:
  Nova API allow DB index as the server_id parem

Status in OpenStack Compute (nova):
  New
Status in openstack-api-site:
  New

Bug description:
  I read the nova API doc, for example this API:
  http://developer.openstack.org/api-ref-compute-v2.1.html#showServer

  GET /v2.1/{tenant_id}/servers/{server_id}    Show server details

  
  Request parameters:

  Parameter   Style   Type         Description
  tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
  server_id   URI     csapi:UUID   The UUID of the server.

  
  But I can get the server by DB index: 

  curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6 
http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
  {
  "server": {
  "OS-DCF:diskConfig": "MANUAL",
  "OS-EXT-AZ:availability_zone": "nova",
  "OS-EXT-SRV-ATTR:host": "shaohe1",
  "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
  "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
  "OS-EXT-STS:power_state": 1,
  "OS-EXT-STS:task_state": "migrating",
  "OS-EXT-STS:vm_state": "error",
  "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
  "OS-SRV-USG:terminated_at": null,
  ..
  }
  }

  and the code really does allow using the DB index:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1545922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536610] Re: nova instance actions periodically have no start and finish time (empty events[])

2016-02-17 Thread Sean Dague
** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536610

Title:
  nova instance actions periodically have no start and finish time
  (empty events[])

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Nova instance actions periodically have no start and finish time, I
  get empty events[]:

  root@controller1:/var/log# nova instance-action-list 
5f844d05-405f-4997-b542-0c85d1b6c8ed
  
++--+-++
  | Action | Request_ID   | Message | Start_Time
 |
  
++--+-++
  | create | req-58d31051-ca3b-4a1c-ab75-0335ff3b28ae | -   | 
2016-01-21T11:24:07.00 |
  | reboot | req-eba0435f-610a-47cd-841f-f3500c7b66b7 | -   | 
2016-01-21T11:25:47.00 |
  | reboot | req-71688344-840e-4cb4-ba3e-21e4c0a85db7 | -   | 
2016-01-21T11:26:17.00 |
  
++--+-++

  root@controller1:/var/log# nova instance-action 
5f844d05-405f-4997-b542-0c85d1b6c8ed req-eba0435f-610a-47cd-841f-f3500c7b66b7
  +---+--+
  | Property  | Value|
  +---+--+
  | action| reboot   |
  | events| [{u'event': u'compute_reboot_instance',  |
  |   |   u'finish_time': u'2016-01-21T11:25:50.00', |
  |   |   u'result': u'Success', |
  |   |   u'start_time': u'2016-01-21T11:25:48.00',  |
  |   |   u'traceback': None}]   |
  | instance_uuid | 5f844d05-405f-4997-b542-0c85d1b6c8ed |
  | message   | -|
  | project_id| 73eb606e175249049987ec6a5774f282 |
  | request_id| req-eba0435f-610a-47cd-841f-f3500c7b66b7 |
  | start_time| 2016-01-21T11:25:47.00   |
  | user_id   | 8855a4b15321469c8b44bbc1e0ea5320 |
  +---+--+

  root@controller1:/var/log# nova instance-action 
5f844d05-405f-4997-b542-0c85d1b6c8ed req-71688344-840e-4cb4-ba3e-21e4c0a85db7
  +---+--+
  | Property  | Value|
  +---+--+
  | action| reboot   |
  | events| []   |
  | instance_uuid | 5f844d05-405f-4997-b542-0c85d1b6c8ed |
  | message   | -|
  | project_id| 73eb606e175249049987ec6a5774f282 |
  | request_id| req-71688344-840e-4cb4-ba3e-21e4c0a85db7 |
  | start_time| 2016-01-21T11:26:17.00   |
  | user_id   | 8855a4b15321469c8b44bbc1e0ea5320 |
  +---+--+

  root@controller1:/var/log# nova show 5f844d05-405f-4997-b542-0c85d1b6c8ed
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | compute3 
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute3.serverel.net
|
  | OS-EXT-SRV-ATTR:instance_name| instance-010e
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   

[Yahoo-eng-team] [Bug 1543012] Re: Routers: attaching a router to an external network without a subnet leads to exceptions

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277340
Committed: 
https://git.openstack.org/cgit/openstack/vmware-nsx/commit/?id=971f420657eb7fe716a2c68842954216bfe5060c
Submitter: Jenkins
Branch:master

commit 971f420657eb7fe716a2c68842954216bfe5060c
Author: Gary Kotton 
Date:   Mon Feb 8 02:52:55 2016 -0800

NSX|V: ensure that gateway network has a subnet

Leverage neutron callbacks to validate that the network being
attached to the router has a configured subnet.

Change-Id: I9ab76ca698a093eab56498b23f533f34420b2dfa
Closes-bug: #1543012
Depends-on: 06af05e0a7fd0201e138dbbefce60432e51c0c71
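
As a rough sketch of that approach (callback payload keys and the exception
used are assumptions, not the actual patch), a plugin can subscribe a
validation callback for the router gateway resource like this:

    from neutron.callbacks import events
    from neutron.callbacks import registry
    from neutron.callbacks import resources
    from neutron.common import exceptions as n_exc


    def _ensure_gateway_network_has_subnet(resource, event, trigger, **kwargs):
        # 'context' and 'network_id' are assumed payload keys; the notifier
        # decides what it actually passes.
        context = kwargs.get('context')
        network_id = kwargs.get('network_id')
        plugin = trigger
        subnets = plugin.get_subnets(context,
                                     filters={'network_id': [network_id]})
        if not subnets:
            raise n_exc.BadRequest(
                resource='router',
                msg='External network %s has no subnet' % network_id)


    registry.subscribe(_ensure_gateway_network_has_subnet,
                       resources.ROUTER_GATEWAY,
                       events.BEFORE_CREATE)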


** Changed in: vmware-nsx
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543012

Title:
  Routers: attaching a router to an external network without a subnet
  leads to exceptions

Status in neutron:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  2016-01-29 06:45:03.920 18776 ERROR neutron.api.v2.resource 
[req-c2074082-a6d0-4e5a-8657-41fecb82dacc ] add_router_interface failed
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 207, in 
_handle_action
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v.py",
 line 1672, in add_router_interface
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, 
router_id, interface_info)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 723, in add_router_interface
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, 
router_id, router_db.admin_state_up)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 468, in _bind_router_on_available_edge
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource 
self._get_available_and_conflicting_ids(context, router_id))
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 273, in _get_available_and_conflicting_ids
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource 
gwp['fixed_ips'][0]['subnet_id'])
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource IndexError: list 
index out of range

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546605] [NEW] Default domain's description references keystone v2

2016-02-17 Thread Anna Sortland
Public bug reported:

Default domain's description references keystone v2:

# openstack domain list
+--+-+-+--+
| ID   | Name| Enabled | Description
  |
+--+-+-+--+
| default  | Default | True| Owns users and tenants 
(i.e. projects) available on Identity API v2. |

Remove reference to v2.
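
For reference, once better wording is agreed on, the description can be
updated with a single command (the wording here is only an example):

    $ openstack domain set --description "The default domain" default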

** Affects: keystone
 Importance: Undecided
 Assignee: Anna Sortland (annasort)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Anna Sortland (annasort)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546605

Title:
  Default domain's description references keystone v2

Status in OpenStack Identity (keystone):
  New

Bug description:
  Default domain's description references keystone v2:

  # openstack domain list
  
+--+-+-+--+
  | ID   | Name| Enabled | Description  
|
  
+--+-+-+--+
  | default  | Default | True| Owns users and 
tenants (i.e. projects) available on Identity API v2. |

  Remove reference to v2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540764] Re: build instance fail with sriov port on child cell

2016-02-17 Thread Sean Dague
I believe the issue is just that Cells does not support PCI pass
through. Cells v1 is frozen, and only regressions will be fixed. So this
is getting marked as Opinion as a future feature which will likely not
ever be done on cells v1.

** Tags added: pci

** Tags added: cells

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540764

Title:
  build instance fail with sriov port on child cell

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Version:
  Kilo
  Liberty

  When I was trying to build an instance with an sriov port on my child
cell, I used the following command:
  nova boot --image 6df3b3d4-3c9e-4772-9f18-b6f42c6a9c77 --flavor 3 --nic 
port-id=61c20d21-43fe-487d-9296-893172cb725f --hint 
target_cell='api_cell!child_cell' sr-vxlan_vm30

  But spawning failed on the child_cell compute node. Here is the error
  information from nova-compute.log:

  2016-01-14 11:00:17.246 2908 ERROR nova.compute.manager 
[req-9d7fafab-ea2a-43c3-a201-4234db29c572 8a9704e00c44452590c6c8d014ed028a 
452b30922b3f42b59743158f695264c9 - - -] [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] Instance failed to spawn
  016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] Traceback (most recent call last):
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2660, in 
_build_resources
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] yield resources
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2532, in 
_build_and_run_instance
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] block_device_info=block_device_info)
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2879, in 
spawn
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] write_to_disk=True)
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4844, in 
_get_guest_xml
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] memory_backup_file=memory_backup_file)
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4675, in 
_get_guest_config
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] network_info, virt_type)
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4818, in 
_get_guest_vif_config
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] instance, vif, image_meta, flavor, 
virt_type)
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 376, in 
get_config
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] inst_type, virt_type)
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 292, in 
get_config_hw_veb
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] conf, net_type, profile["pci_slot"],
  2016-01-14 11:00:17.246 2908 TRACE nova.compute.manager [instance: 
b91305de-5501-4083-9a0e-0f4bfc942f5c] KeyError: 'pci_slot'


  In function _create_instances_here() in nova/cells/scheduler.py of
  Kilo and Liberty, I found this:

  # FIXME(danms): The instance was brutally serialized before being
  # sent over RPC to us. Thus, the pci_requests value wasn't really
  # sent in a useful form. Since it was getting ignored for cells
  # before it was part of the Instance, skip it now until cells RPC
  # is sending proper instance objects.
  instance_values.pop('pci_requests', None)

  The "pci_requests" is discarded by cell scheduler,

[Yahoo-eng-team] [Bug 1542421] Re: Split-network-plane-for-live-migration

2016-02-17 Thread Sean Dague
The manual update needed here is for a new configuration option called:
live_migration_inbound_addr

Setting this allows compute hosts to advertise the address on which live
migrations should come in. This allows fine-grained control so that
compute-to-compute live migration traffic can run on a dedicated network.
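
A minimal nova.conf sketch for a compute host (the section placement and the
address are assumptions; check the configuration reference for your release):

    [libvirt]
    # Address on the dedicated migration network that other compute hosts
    # should connect to for incoming live migrations to this host.
    live_migration_inbound_addr = 192.0.2.10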

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542421

Title:
  Split-network-plane-for-live-migration

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/245005
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit af41accff9456748a3106bc1206cfc22d10a8cf4
  Author: Kevin_Zheng 
  Date:   Fri Nov 13 14:14:28 2015 +0800

  Split-network-plane-for-live-migration
  
  When we do live migration with QEMU/KVM driver,
  we use hostname of target compute node as the
  target of live migration. So the RPC call and live
  migration traffic will be in same network plane.
  
  This patch adds a new option live_migration_inbound_addr
  in configuration file, set None as default value.
  When pre_live_migration() executes on destination host, set
  the option into pre_migration_data, if it's not None.
  When driver.live_migration() executes on source host,
  if this option is present in pre_migration_data, the ip/hostname
  address is used instead of CONF.libvirt.live_migration_uri
  as the uri for live migration, if it's None, then the
  mechanism remains as it is now.
  
  This patch (BP) focuses only on the QEMU/KVM driver,
  the implementations for other drivers should be done
  in a separate blueprint.
  
  DocImpact:new config option "live_migration_inbound_addr" will be added.
  
  Change-Id: I81c783886497a844fb4b38d0f2a3d6c18a99831c
  Co-Authored-By: Rui Chen 
  Implements: blueprint split-network-plane-for-live-migration

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540254] Re: "#flake8: noqa" is used incorrectly

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277924
Committed: 
https://git.openstack.org/cgit/openstack/designate/commit/?id=3873383df0347cbf27864733728462f74c7c46f0
Submitter: Jenkins
Branch:master

commit 3873383df0347cbf27864733728462f74c7c46f0
Author: Chaozhe.Chen 
Date:   Wed Feb 10 00:10:03 2016 +0800

Use "# noqa" instead of "#flake8: noqa"

"# flake8: noqa" option disables all checks for the whole file.
To disable one line we should use "# noqa".

This patch uses "# noqa" instead of "#flake8: noqa" and fixes some
flake8 violations.

Change-Id: Ic9f7c82428728582cecf0fa40f288e9f20f5d5ca
Closes-bug: #1540254


** Changed in: designate
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540254

Title:
  "#flake8: noqa" is using incorrectly

Status in Designate:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-novaclient:
  Fix Released

Bug description:
  "# flake8: noqa" option disables all checks for the whole file. To
  disable one line we should use "# noqa".
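
  A small illustration of the difference (a sketch, not code from the patch):

      # Whole-file suppression (avoid): a comment reading "flake8: noqa"
      # (with a leading '#') anywhere in a module makes flake8 skip every
      # check in that module.

      # Per-line suppression (preferred): append the marker to the single
      # line that needs it.
      import os, sys  # noqa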

  Refer to: https://pypi.python.org/pypi/flake8
  
https://github.com/openstack/python-keystoneclient/commit/3b766c51438396a0ab0032de309c9d56e275e0cb

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1540254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546562] [NEW] deleting role with implied role fails

2016-02-17 Thread Adam Young
Public bug reported:

Create two roles. Make one imply the other (this currently requires curl).


$ openstack role delete identity_policy_manager
ERROR: openstack An unexpected error prevented the server from fulfilling your 
request. (HTTP 500) (Request-ID: req-a2b89f42-ad24-4985-a599-33cc182d8f80)


Looking in the log

Feb 17 14:05:44 ayoung541 admin: 2016-02-17 14:05:44.042 31 ERROR
keystone.common.wsgi [req-a2b89f42-ad24-4985-a599-33cc182d8f80
6259462f07a940f19b1ad8d36ee42612 b0fa955539c442cc838067a55605102d -
default default] (pymysql.err.IntegrityError) (1451, u'Cannot delete or
update a parent row: a foreign key constraint fails
(`keystone`.`implied_role`, CONSTRAINT `implied_role_prior_role_id_fkey`
FOREIGN KEY (`prior_role_id`) REFERENCES `role` (`id`))') [SQL: u'DELETE
FROM role WHERE role.id = %(id)s'] [parameters: {'id':
u'142340f53b624665a86641cf13135615'}]

This is supposed to be a cascading delete.
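
A minimal sketch (table and column names taken from the error above, the
rest assumed) of how the schema could let the database cascade the delete
instead of rejecting it:

    import sqlalchemy as sa

    metadata = sa.MetaData()

    role = sa.Table(
        'role', metadata,
        sa.Column('id', sa.String(64), primary_key=True))

    # With ON DELETE CASCADE on both foreign keys, deleting a role also
    # removes the implied_role rows that reference it, so the prior_role_id
    # constraint no longer blocks the delete.
    implied_role = sa.Table(
        'implied_role', metadata,
        sa.Column('prior_role_id', sa.String(64),
                  sa.ForeignKey('role.id', ondelete='CASCADE'),
                  primary_key=True),
        sa.Column('implied_role_id', sa.String(64),
                  sa.ForeignKey('role.id', ondelete='CASCADE'),
                  primary_key=True))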

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546562

Title:
  deleting role with implied role fails

Status in OpenStack Identity (keystone):
  New

Bug description:
  Create two roles. Make one imply the other (this currently requires curl).

  
  $ openstack role delete identity_policy_manager
  ERROR: openstack An unexpected error prevented the server from fulfilling 
your request. (HTTP 500) (Request-ID: req-a2b89f42-ad24-4985-a599-33cc182d8f80)

  
  Looking in the log

  Feb 17 14:05:44 ayoung541 admin: 2016-02-17 14:05:44.042 31 ERROR
  keystone.common.wsgi [req-a2b89f42-ad24-4985-a599-33cc182d8f80
  6259462f07a940f19b1ad8d36ee42612 b0fa955539c442cc838067a55605102d -
  default default] (pymysql.err.IntegrityError) (1451, u'Cannot delete
  or update a parent row: a foreign key constraint fails
  (`keystone`.`implied_role`, CONSTRAINT
  `implied_role_prior_role_id_fkey` FOREIGN KEY (`prior_role_id`)
  REFERENCES `role` (`id`))') [SQL: u'DELETE FROM role WHERE role.id =
  %(id)s'] [parameters: {'id': u'142340f53b624665a86641cf13135615'}]

  This is supposed to be a cascading delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544964] Re: [glare] glance-glare fails with default paste config

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279525
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=f880351fd846b9050a735c7ffbcf1d35be949eb8
Submitter: Jenkins
Branch:master

commit f880351fd846b9050a735c7ffbcf1d35be949eb8
Author: Kirill Zaitsev 
Date:   Fri Feb 12 16:09:55 2016 +0300

Include version number into glare factory path in paste

Change-Id: I7cdfa81fdf29a26f510bce6804678a343c1fe428
Closes-Bug: #1544964


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1544964

Title:
  [glare] glance-glare fails with default paste config

Status in Glance:
  Fix Released

Bug description:
  $ tox -e venv -- glance-glare --config-file ./etc/glance-glare.conf

  ...

  2016-02-12 16:02:20.318 16830 ERROR glance.common.config [-] Unable to load 
glare-api from configuration file 
/Users/teferi/openstack/glance/etc/glance-glare-paste.ini.
  Got: ImportError('No module named router',)
  ERROR: Unable to load glare-api from configuration file 
/Users/teferi/openstack/glance/etc/glance-glare-paste.ini.
  Got: ImportError('No module named router',)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1544964/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-17 Thread Chaozhe Chen
** Also affects: rally
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in python-designateclient:
  In Progress
Status in Rally:
  New
Status in Sahara:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  class mock.Mock does not have a method "called_once_with"; it only has
  "assert_called_once_with". Currently there are still some places where
  we use the called_once_with method, and we should correct them.

  NOTE: called_once_with() does nothing because it's a mock object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-17 Thread Chaozhe Chen
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
 Assignee: (unassigned) => Chaozhe Chen (chaozhe-chen)

** Also affects: designate
   Importance: Undecided
   Status: New

** No longer affects: designate

** Also affects: python-designateclient
   Importance: Undecided
   Status: New

** Changed in: python-designateclient
 Assignee: (unassigned) => Chaozhe Chen (chaozhe-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in python-designateclient:
  In Progress
Status in Sahara:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  class mock.Mock does not have a method "called_once_with"; it only has
  "assert_called_once_with". Currently there are still some places where
  we use the called_once_with method, and we should correct them.

  NOTE: called_once_with() does nothing because it's a mock object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546423] Re: delete volume action link is shown even if it shouldn't

2016-02-17 Thread Matthias Runge
In the volume details page, there is still a delete action.

** Changed in: horizon
   Status: Invalid => Confirmed

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546423

Title:
  delete volume action link is shown even if it shouldn't

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  copying an end-user bug report here:

  Description of problem:
  If a volume has snapshots, it can't be deleted (they need to be removed
  first). If you navigate to the volume's detail page, there is a link to
  Delete Volume - and it shouldn't be there.

  
  Version-Release number of selected component (if applicable):
  liberty

  How reproducible:
  100%

  Steps to Reproduce:
  1. log in as the demo user
  2. go to Project - Compute - Volumes
  3. create a volume with default values, name it "test"
  4. verify that the drop-down menu of the "test" volume row contains
     Delete Volume
  5. click Create Snapshot, name it "test_snap"
  6. navigate back to the list of volumes (not their snapshots)
  7. verify that the drop-down menu of the "test" volume row does NOT
     contain Delete Volume
  8. click the volume name to open the detail page of volume "test"
  9. at the top right of the page there is an action button with a
     drop-down menu; it contains a Delete Volume item that is not available
     from the volume list

  
  Actual results:
  The Delete Volume item is shown on the detail page even though a snapshot
  of the volume exists.

  Expected results:
  Delete Volume should not be offered on the detail page while the volume
  has snapshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546506] [NEW] spawn_n fails in functional tests

2016-02-17 Thread Jakub Libosvar
Public bug reported:

The gate seems broken due to a spawn_n failure; it doesn't seem like
Neutron's fault but a library issue. I haven't tracked down which library
was updated yet. It started occurring around midnight UTC on Feb 16/17 and
also affects fullstack tests.


e-s: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%5C%22gate-neutron-dsvm-functional%5C%22%20AND%20build_status%3A%5C%22FAILURE%5C%22%20AND%20message%3A%5C%22greenpool.py%5C%5C%5C%22%2C%20line%2082%2C%20in%20_spawn_n_impl%5C%22

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546506

Title:
  spawn_n fails in functional tests

Status in neutron:
  New

Bug description:
  The gate seems broken due to a spawn_n failure; it doesn't seem like
  Neutron's fault but a library issue. I haven't tracked down which library
  was updated yet. It started occurring around midnight UTC on Feb 16/17
  and also affects fullstack tests.

  
  e-s: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%5C%22gate-neutron-dsvm-functional%5C%22%20AND%20build_status%3A%5C%22FAILURE%5C%22%20AND%20message%3A%5C%22greenpool.py%5C%5C%5C%22%2C%20line%2082%2C%20in%20_spawn_n_impl%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398329] Re: Wrong filtering during "nova list --tenant "

2016-02-17 Thread Sean Dague
This requires a change to the Nova API, so this is being moved to a feature request.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Medium => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398329

Title:
  Wrong filtering during "nova list  --tenant "

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I am using a DevStack development environment.
  I have sourced the admin user and admin tenant using the command "source
openrc admin admin".

  I have also booted two servers:
  test-server-1
  test-server-2

  During "nova list", I have used tenant filtering with:
  1. the admin tenant
  2. the demo tenant

  For the admin tenant, both servers get listed (correct behavior).
  For the demo tenant, ideally no instance should appear in the list, but
here all servers get listed irrespective of the tenant filter.

  
  Please see below operations  -

  [raies@localhost devstack]$ nova list
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State 
| Power State | Networks |
  
+--+---+++-+--+
  | 299e99f7-ed33-4a17-8755-18be1cbe46b9 | test-server-1 | ACTIVE | -  
| Running | private=10.0.0.2 |
  | 0f9c1b84-0d5d-474a-9705-c9defbb8ec2b | test-server-2 | ACTIVE | -  
| Running | private=10.0.0.5 |
  
+--+---+++-+--+

  
  [raies@localhost devstack]$ nova list --all-tenant
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State 
| Power State | Networks |
  
+--+---+++-+--+
  | 299e99f7-ed33-4a17-8755-18be1cbe46b9 | test-server-1 | ACTIVE | -  
| Running | private=10.0.0.2 |
  | 0f9c1b84-0d5d-474a-9705-c9defbb8ec2b | test-server-2 | ACTIVE | -  
| Running | private=10.0.0.5 |
  
+--+---+++-+--+

  
  [raies@localhost devstack]$ nova list --tenant admin
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State 
| Power State | Networks |
  
+--+---+++-+--+
  | 299e99f7-ed33-4a17-8755-18be1cbe46b9 | test-server-1 | ACTIVE | -  
| Running | private=10.0.0.2 |
  | 0f9c1b84-0d5d-474a-9705-c9defbb8ec2b | test-server-2 | ACTIVE | -  
| Running | private=10.0.0.5 |
  
+--+---+++-+--+


  
  

  
  [raies@localhost devstack]$ nova list --tenant demo
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State 
| Power State | Networks |
  
+--+---+++-+--+
  | 299e99f7-ed33-4a17-8755-18be1cbe46b9 | test-server-1 | ACTIVE | -  
| Running | private=10.0.0.2 |
  | 0f9c1b84-0d5d-474a-9705-c9defbb8ec2b | test-server-2 | ACTIVE | -  
| Running | private=10.0.0.5 |
  
+--+---+++-+--+

  
  [raies@localhost devstack]$ keystone tenant-list
  +--++-+
  |id|name| enabled |
  +--++-+
  | 7ada46b6530147daa4c2138d03ea75ba |   admin|   True  |
  | 3861ef986db14c888a6d0167b0bb3cee |  alt_demo  |   True  |
  | 4931442604ef4368b5d9134e79c00c27 |demo|   True  |
  | 8cd01cd392ed441298a80240024f2cd2 | invisible_to_admin |   True  |
  | f509d7c57bef4554bcdd2322697cd3cd |  service   |   True  |
  +--++-+

  
  [raies@localhost devstack]$ nova list --tenant 
4931442604ef4368b5d9134e79c00c27
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State 
| Power S

[Yahoo-eng-team] [Bug 1546423] Re: delete volume action link is shown even if it shouldn't

2016-02-17 Thread Matthias Runge
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546423

Title:
  delete volume action link is shown even if it shouldn't

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  copying an end-user bug report here:

  Description of problem:
  If a volume has snapshots, it can't be deleted (they need to be removed
  first). If you navigate to the volume's detail page, there is a link to
  Delete Volume - and it shouldn't be there.

  
  Version-Release number of selected component (if applicable):
  liberty

  How reproducible:
  100%

  Steps to Reproduce:
  1. log in as the demo user
  2. go to Project - Compute - Volumes
  3. create a volume with default values, name it "test"
  4. verify that the drop-down menu of the "test" volume row contains
     Delete Volume
  5. click Create Snapshot, name it "test_snap"
  6. navigate back to the list of volumes (not their snapshots)
  7. verify that the drop-down menu of the "test" volume row does NOT
     contain Delete Volume
  8. click the volume name to open the detail page of volume "test"
  9. at the top right of the page there is an action button with a
     drop-down menu; it contains a Delete Volume item that is not available
     from the volume list

  
  Actual results:
  The Delete Volume item is shown on the detail page even though a snapshot
  of the volume exists.

  Expected results:
  Delete Volume should not be offered on the detail page while the volume
  has snapshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546490] [NEW] Security groups don't work with fullstack

2016-02-17 Thread John Schwarz
Public bug reported:

Iptables doesn't work properly with fullstack, as can be observed in
[1].

The gist is that since all ovs-agents run in the same namespace, they try
to overwrite each other's iptables rules, causing the failures. This will
obviously cause security groups to fail.
Also, Assaf Muller mentioned that since FakeMachines are directly connected
to br-int, security groups will also not work properly on them. Instead,
they should be connected through an intermediary linuxbridge.

[1]: http://logs.openstack.org/71/270971/3/check/gate-neutron-dsvm-
fullstack/c913b51/logs/TestConnectivitySameNetwork.test_connectivity_VLANs,Ofctl_
/neutron-openvswitch-agent--2016-02-14--
11-40-19-078390.log.txt.gz#_2016-02-14_11_41_03_165

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546490

Title:
  Security groups don't work with fullstack

Status in neutron:
  Confirmed

Bug description:
  Iptables doesn't work properly with fullstack, as can be observed in
  [1].

  The gist is that since all ovs-agents run in the same namespace, they try
to overwrite each other's iptables rules, causing the failures. This will
obviously cause security groups to fail.
  Also, Assaf Muller mentioned that since FakeMachines are directly
connected to br-int, security groups will also not work properly on them.
Instead, they should be connected through an intermediary linuxbridge.

  [1]: http://logs.openstack.org/71/270971/3/check/gate-neutron-dsvm-
  
fullstack/c913b51/logs/TestConnectivitySameNetwork.test_connectivity_VLANs,Ofctl_
  /neutron-openvswitch-agent--2016-02-14--
  11-40-19-078390.log.txt.gz#_2016-02-14_11_41_03_165

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541742] Re: fullstack tests break when tearing down database

2016-02-17 Thread Miguel Angel Ajo
** Changed in: neutron
   Status: Fix Released => Confirmed

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541742

Title:
  fullstack tests break when tearing down database

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/41/265041/7/check/gate-neutron-dsvm-
  fullstack/8ac64cd/testr_results.html.gz

  Late runs fail with the same errors:

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 125, in cleanUp
  return self._cleanups(raise_errors=raise_first)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 88, in __call__
  reraise(error[0], error[1], error[2])
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 82, in __call__
  cleanup(*args, **kwargs)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/testresources/__init__.py",
 line 797, in tearDownResources
  resource[1].finishedWith(getattr(test, resource[0]), result)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/testresources/__init__.py",
 line 509, in finishedWith
  self._clean_all(resource, result)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/testresources/__init__.py",
 line 478, in _clean_all
  self.clean(resource)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/provision.py",
 line 127, in clean
  resource.database.engine)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/provision.py",
 line 263, in drop_all_objects
  self.impl.drop_all_objects(engine)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/provision.py",
 line 415, in drop_all_objects
  conn.execute(schema.DropConstraint(fkc))
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
  return meth(self, multiparams, params)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 68, in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 968, in _execute_ddl
  compiled
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1146, in _execute_context
  context)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1337, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 200, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1139, in _execute_context
  context)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 450, in do_execute
  cursor.execute(statement, parameters)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/cursors.py",
 line 146, in execute
  result = self._query(query)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/cursors.py",
 line 296, in _query
  conn.query(q)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 819, in query
  self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 1001, in _read_query_result
  result.read()
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 1285, in read
  first_packet = self.connection._read_packet()
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 945, in _read_pa

[Yahoo-eng-team] [Bug 1540755] Re: The update image should be changed to edit image

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275029
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b06d5a5db602638fafea2f50b944de6843173903
Submitter: Jenkins
Branch:master

commit b06d5a5db602638fafea2f50b944de6843173903
Author: space 
Date:   Tue Feb 2 14:23:59 2016 +0800

The update image should be changed to edit image

"Edit image" and "update image" are not used uniformly in Horizon.
Changed "update image" to "edit image" because "update image" is used
in only a few places.

Change-Id: I6ccca2e9da0d770d92a281cc4cb681c0507469dc
Closes-Bug: #1540755


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540755

Title:
  The update image should be changed to edit image

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  "Update image" should be changed to "edit image" because "update image"
  is used in only one place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546462] [NEW] can not reload dnsmasq after update dhcp port

2016-02-17 Thread magic0704
Public bug reported:

After create_dhcp_port [1], the port is not put into the cache, so if we
then update the DHCP port [2], there is a NoneType error, as shown below:

2016-02-17 18:10:53.121 60074 ERROR oslo_messaging.rpc.dispatcher 
[req-0e99287a-7a91-46e2-9c36-d8ecb096de58 ] Exception during message handling: 
'NoneType' object has no attribute '__getitem__'
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 331, in 
port_update_end
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher old_ips = 
{i['ip_address'] for i in orig['fixed_ips'] or []}
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher TypeError: 
'NoneType' object has no attribute '__getitem__'
2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L449
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L332

in neutron.conf:
dhcp_agents_per_network = 1
 
the bug can be reproduced with the following steps:
1. create network1
2. create subnet1 (10.0.0.0/24) with DHCP enabled; a DHCP port such as
   10.0.0.2 is created
3. delete the DHCP port 10.0.0.2
4. delete the DHCP agent, e.g. DhcpAgent1
5. re-add network1 to dhcp agent1; a new port such as 10.0.0.3 is created
6. update the new port from 10.0.0.3 to 10.0.0.2
then the error above occurs.
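
A sketch of the kind of guard that avoids the crash (the real fix may
instead put the port into the cache in create_dhcp_port, or trigger a
resync on a cache miss):

    def _old_ips(cache, updated_port):
        # cache.get_port_by_id() returns None when the port was never cached,
        # which is exactly the situation described above.
        orig = cache.get_port_by_id(updated_port.id)
        if orig is None:
            return set()
        return {i['ip_address'] for i in orig['fixed_ips'] or []}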

** Affects: neutron
 Importance: Undecided
 Assignee: magic0704 (wbaoping0704)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => magic0704 (wbaoping0704)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546462

Title:
  can not reload dnsmasq after update dhcp port

Status in neutron:
  New

Bug description:
  After create_dhcp_port [1], the port is not put into the cache, so if we
  then update the DHCP port [2], there is a NoneType error, as shown below:

  2016-02-17 18:10:53.121 60074 ERROR oslo_messaging.rpc.dispatcher 
[req-0e99287a-7a91-46e2-9c36-d8ecb096de58 ] Exception during message handling: 
'NoneType' object has no attribute '__getitem__'
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 331, in 
port_update_end
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher old_ips 
= {i['ip_address'] for i in orig['fixed_ips'] or []}
  2016-02-17 18:10:53.121 60074 TRACE oslo_messaging.rpc.dispatcher TypeE

[Yahoo-eng-team] [Bug 1546237] Re: Typo in alembic_migrations.rst

2016-02-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280868
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=887316ae43848ba26facb01d39d60c57111ec75d
Submitter: Jenkins
Branch:master

commit 887316ae43848ba26facb01d39d60c57111ec75d
Author: James Arendt 
Date:   Fri Feb 12 01:33:43 2016 -0800

Fix typo 'indepedent' in alembic_migration.rst

Change to 'independent'.

Closes-Bug: #1546237
Change-Id: I31acc5ae6d88ea56c9aded94b46573126f557fce


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546237

Title:
  Typo in alembic_migrations.rst

Status in neutron:
  Fix Released

Bug description:
  "Indepedent" should be "Independent" in header "1. Indepedent Sub-
  Project Tables"

  Externally visible at
  http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546454] [NEW] VMware: NFC lease has to be updated when transferring streamOpt images

2016-02-17 Thread Radoslav Gerganov
Public bug reported:

Booting large streamOptimized images (>2GB) fails because the NFC lease
is not updated. This causes the lease to time out and kill the image
transfer. The fix is to call the update_progress() method every 60 seconds.
This is also an opportunity to refactor the image transfer code and make it
simpler.
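
A minimal sketch of the idea, not the actual patch (the HttpNfcLeaseProgress
call and the fixed percentage are assumptions):

    from oslo_service import loopingcall


    def keep_lease_alive(session, lease, interval=60):
        """Report progress periodically so the NFC lease does not time out."""

        def _update_progress():
            # A real implementation would compute the percentage from the
            # bytes already transferred instead of using a placeholder.
            session.invoke_api(session.vim, 'HttpNfcLeaseProgress',
                               lease, percent=50)

        timer = loopingcall.FixedIntervalLoopingCall(_update_progress)
        timer.start(interval=interval)
        return timer  # the caller stops it once the transfer finishes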

** Affects: nova
 Importance: High
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546454

Title:
  VMware: NFC lease has to be updated when transferring streamOpt images

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Booting large streamOptimized images (>2GB) fails because the NFC
  lease is not updated. This causes the lease to time out and kill the
  image transfer. The fix is to call the update_progress() method every
  60 seconds. This is also an opportunity to refactor the image transfer
  code and make it simpler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546441] [NEW] db sync command should give user friendly message for invalid 'version' specified

2016-02-17 Thread Dinesh Bhor
Public bug reported:

The db sync command should give a user-friendly message when an invalid
'version' is specified.

The command:

$ nova-manage db sync 11

LOG:

2016-02-16 01:54:53.908 CRITICAL nova [-] OverflowError: range() result
has too many items

2016-02-16 01:54:53.908 TRACE nova Traceback (most recent call last):
2016-02-16 01:54:53.908 TRACE nova   File "/usr/local/bin/nova-manage", line 
10, in 
2016-02-16 01:54:53.908 TRACE nova sys.exit(main())
2016-02-16 01:54:53.908 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", 
line 1448, in main
2016-02-16 01:54:53.908 TRACE nova ret = fn(*fn_args, **fn_kwargs)
2016-02-16 01:54:53.908 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", 
line 932, in sync
2016-02-16 01:54:53.908 TRACE nova return migration.db_sync(version)
2016-02-16 01:54:53.908 TRACE nova   File 
"/opt/stack/nova/nova/db/migration.py", line 26, in db_sync
2016-02-16 01:54:53.908 TRACE nova return IMPL.db_sync(version=version, 
database=database)
2016-02-16 01:54:53.908 TRACE nova   File 
"/opt/stack/nova/nova/db/sqlalchemy/migration.py", line 57, in db_sync
2016-02-16 01:54:53.908 TRACE nova version)
2016-02-16 01:54:53.908 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, 
in upgrade
2016-02-16 01:54:53.908 TRACE nova return _migrate(url, repository, 
version, upgrade=True, err=err, **opts)
2016-02-16 01:54:53.908 TRACE nova   File "", line 2, in 
_migrate
2016-02-16 01:54:53.908 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", 
line 160, in with_engine
2016-02-16 01:54:53.908 TRACE nova return f(*a, **kw)
2016-02-16 01:54:53.908 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 345, 
in _migrate
2016-02-16 01:54:53.908 TRACE nova changeset = schema.changeset(version)
2016-02-16 01:54:53.908 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 82, 
in changeset
2016-02-16 01:54:53.908 TRACE nova changeset = 
self.repository.changeset(database, start_ver, version)
2016-02-16 01:54:53.908 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/repository.py", line 
224, in changeset
2016-02-16 01:54:53.908 TRACE nova versions = range(int(start) + range_mod, 
int(end) + range_mod, step)
2016-02-16 01:54:53.908 TRACE nova OverflowError: range() result has too many 
items
2016-02-16 01:54:53.908 TRACE nova


The command:
$ nova-manage db sync 2147483

LOG:
CRITICAL nova [-] KeyError: 

2016-02-16 02:06:15.045 TRACE nova Traceback (most recent call last):
2016-02-16 02:06:15.045 TRACE nova   File "/usr/local/bin/nova-manage", line 
10, in 
2016-02-16 02:06:15.045 TRACE nova sys.exit(main())
2016-02-16 02:06:15.045 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", 
line 1448, in main
2016-02-16 02:06:15.045 TRACE nova ret = fn(*fn_args, **fn_kwargs)
2016-02-16 02:06:15.045 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", 
line 932, in sync
2016-02-16 02:06:15.045 TRACE nova return migration.db_sync(version)
2016-02-16 02:06:15.045 TRACE nova   File 
"/opt/stack/nova/nova/db/migration.py", line 26, in db_sync
2016-02-16 02:06:15.045 TRACE nova return IMPL.db_sync(version=version, 
database=database)
2016-02-16 02:06:15.045 TRACE nova   File 
"/opt/stack/nova/nova/db/sqlalchemy/migration.py", line 57, in db_sync
2016-02-16 02:06:15.045 TRACE nova version)
2016-02-16 02:06:15.045 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, 
in upgrade
2016-02-16 02:06:15.045 TRACE nova return _migrate(url, repository, 
version, upgrade=True, err=err, **opts)
2016-02-16 02:06:15.045 TRACE nova   File "", line 2, in 
_migrate
2016-02-16 02:06:15.045 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", 
line 160, in with_engine
2016-02-16 02:06:15.045 TRACE nova return f(*a, **kw)
2016-02-16 02:06:15.045 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 345, 
in _migrate
2016-02-16 02:06:15.045 TRACE nova changeset = schema.changeset(version)
2016-02-16 02:06:15.045 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 82, 
in changeset
2016-02-16 02:06:15.045 TRACE nova changeset = 
self.repository.changeset(database, start_ver, version)
2016-02-16 02:06:15.045 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/repository.py", line 
225, in changeset
2016-02-16 02:06:15.045 TRACE nova changes = 
[self.version(v).script(database, op) for v in versions]
2016-02-16 02:06:15.045 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/repository.py", line 
189, in version
2016-02-16 02:06:15.045 TRACE nova return self.versions.version(*p, **k)
2016-02-16

[Yahoo-eng-team] [Bug 1546433] [NEW] nova.service.Service.kill() is unused and orphaned

2016-02-17 Thread Roman Dobosz
Public bug reported:

oslo.service.Service doesn't provide a kill method in its interface [1].
Nova implements it (it removes the service record from the DB), but it is
never actually called. This was probably orphaned a long time ago (last
changes in 2011).

I think the method should go away.

[1]
https://github.com/openstack/oslo.service/blob/master/oslo_service/service.py#L88-L109

** Affects: nova
 Importance: Undecided
 Assignee: Roman Dobosz (roman-dobosz)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Dobosz (roman-dobosz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546433

Title:
  nova.service.Service.kill() is unused and orphaned

Status in OpenStack Compute (nova):
  New

Bug description:
  oslo.service.Service doesn't provide a kill method in its interface [1].
  Nova implements it (it removes the service record from the DB), but it is
  never actually called. This was probably orphaned a long time ago (last
  changes in 2011).

  I think the method should go away.

  [1]
  
https://github.com/openstack/oslo.service/blob/master/oslo_service/service.py#L88-L109

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546422] Re: secgroup in v1 driver could not work in lithium

2016-02-17 Thread yalei wang
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => yalei wang (yalei-wang)

** Description changed:

  sync_from_callback will reraise the error return from sendjson, but
  because notify in ./neutron/callbacks/manager.py ingnored this
- error(just work on BEFORE/PRECOMMIT), no error will returned to user.
+ error(just work on BEFORE/PRECOMMIT), so no error will be returned to
+ user like tempest test case.
  
- I think we need update the neutron code and also find the real reason in
- opendaylight.
+ 
+ I think we need update the neutron code and also find the real reason in 
opendaylight/networking-odl.

** Description changed:

- sync_from_callback will reraise the error return from sendjson, but
- because notify in ./neutron/callbacks/manager.py ingnored this
- error(just work on BEFORE/PRECOMMIT), so no error will be returned to
- user like tempest test case.
+ sync_from_callback will reraise the error return from sendjson, but because 
notify in ./neutron/callbacks/manager.py ingnored this error(just work on 
BEFORE/PRECOMMIT), so no error will be returned to user like tempest test case.
+ Then CI also could not report the error.
  
- 
- I think we need update the neutron code and also find the real reason in 
opendaylight/networking-odl.
+ I think we need update the neutron code and also find the real reason in
+ opendaylight/networking-odl.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546422

Title:
  secgroup in v1 driver could not work in lithium

Status in networking-odl:
  New
Status in neutron:
  New

Bug description:
  sync_from_callback will re-raise the error returned from sendjson, but
  because notify in ./neutron/callbacks/manager.py ignores this error (it
  only propagates failures for BEFORE/PRECOMMIT events), no error is
  returned to the user, e.g. in a tempest test case.
  Then the CI also could not report the error.

  I think we need to update the neutron code and also find the real reason
  in opendaylight/networking-odl.
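
  A minimal sketch of the behaviour being described (assumed names, not the
  actual neutron/callbacks/manager.py code): subscriber exceptions are caught
  and logged, and only BEFORE_*/PRECOMMIT_* events propagate a failure back to
  the caller, so an error re-raised from an AFTER_CREATE handler such as the
  ODL sync_from_callback never reaches the API request or the tempest test.

# Minimal sketch of the error-swallowing behaviour described above; class
# and method names are assumptions, not the real neutron callbacks manager.
import logging

LOG = logging.getLogger(__name__)


class CallbackFailure(Exception):
    pass


class CallbacksManager(object):
    def __init__(self):
        self._callbacks = {}  # (resource, event) -> list of callables

    def subscribe(self, callback, resource, event):
        self._callbacks.setdefault((resource, event), []).append(callback)

    def notify(self, resource, event, trigger, **kwargs):
        errors = []
        for callback in self._callbacks.get((resource, event), []):
            try:
                callback(resource, event, trigger, **kwargs)
            except Exception as exc:
                LOG.exception("Callback failed for %s %s", resource, event)
                errors.append(exc)
        # Only "abortable" events surface callback failures to the caller;
        # for after_* events the exceptions collected above are dropped.
        if errors and event.startswith(('before_', 'precommit_')):
            raise CallbackFailure(errors)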

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1546422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546423] [NEW] delete volume action link is shown even if it shouldn't

2016-02-17 Thread Matthias Runge
Public bug reported:

copying an end user bug report here:

Description of problem:
If a volume has snapshots, it can't be deleted (they need to be removed first).
If you navigate to the volume's detail page, there is still a link to Delete
Volume, and it shouldn't be there.


Version-Release number of selected component (if applicable):
liberty

How reproducible:
100%

Steps to Reproduce:
1. Log in as the demo user.
2. Go to Project -> Compute -> Volumes.
3. Create a volume with default values and name it "test".
4. Verify that the drop-down menu of the row for the "test" volume contains Delete Volume.
5. Click Create Snapshot and name the snapshot "test_snap".
6. Navigate back to the list of volumes (not their snapshots).
7. Verify that the drop-down menu of the row for the "test" volume does NOT contain Delete Volume.
8. Click the volume's name to navigate to the detail page of the "test" volume.
9. At the top right of the page there is an action button with a drop-down menu. It contains a Delete Volume item that is not accessible from the list of volumes.


Actual results:
The Delete Volume item is always visible on the detail page (or seems to be),
even when a snapshot of the volume exists.

Expected results:
The Delete Volume item should not be shown on the detail page when a snapshot
of the volume exists.
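
A hypothetical sketch of the kind of check the detail-page action is missing (not the actual Horizon patch): the API helper cinder.volume_snapshot_list() exists in openstack_dashboard.api.cinder, but the function name below and where it gets wired in are assumptions for illustration only.

# Hypothetical sketch, not the actual Horizon fix.
from openstack_dashboard.api import cinder


def volume_delete_allowed(request, volume):
    """Return False while the volume still has snapshots.

    Cinder refuses to delete such a volume, so the detail page's Delete
    Volume action should hide itself, just as the row action already does.
    """
    if volume is None:
        return True
    snapshots = cinder.volume_snapshot_list(
        request, search_opts={'volume_id': volume.id})
    return not snapshots

The detail-page action's allowed() method would call something like this before deciding to render the Delete Volume entry.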

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546423

Title:
  delete volume action link is shown even if it shouldn't

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  copying an end user bug report here:

  Description of problem:
  If a volume has snapshots, it can't be deleted (they need to be removed
  first). If you navigate to the volume's detail page, there is still a link
  to Delete Volume, and it shouldn't be there.

  
  Version-Release number of selected component (if applicable):
  liberty

  How reproducible:
  100%

  Steps to Reproduce:
  1. Log in as the demo user.
  2. Go to Project -> Compute -> Volumes.
  3. Create a volume with default values and name it "test".
  4. Verify that the drop-down menu of the row for the "test" volume contains Delete Volume.
  5. Click Create Snapshot and name the snapshot "test_snap".
  6. Navigate back to the list of volumes (not their snapshots).
  7. Verify that the drop-down menu of the row for the "test" volume does NOT contain Delete Volume.
  8. Click the volume's name to navigate to the detail page of the "test" volume.
  9. At the top right of the page there is an action button with a drop-down menu. It contains a Delete Volume item that is not accessible from the list of volumes.

  
  Actual results:
  The Delete Volume item is always visible on the detail page (or seems to
  be), even when a snapshot of the volume exists.

  Expected results:
  The Delete Volume item should not be shown on the detail page when a
  snapshot of the volume exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp