[Yahoo-eng-team] [Bug 1561200] Re: created_at and updated_at times don't include timezone

2016-06-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561200

Title:
  created_at and updated_at times don't include timezone

Status in neutron:
  Expired

Bug description:
  created_at and updated_at were recently added to the API calls and
  notifications for many neutron resources (networks, subnets, ports,
  possibly more), which is awesome! I've noticed that the times don't
  include a timezone (compare to nova servers and glance images, for
  instance).

  Even if there is an assumption a user can make, the missing timezone can
  create problems with some display tools. (I noticed this because a JavaScript
  date formatting filter performs local timezone conversions only when a
  timezone is present, which meant times for resources created seconds apart
  looked as though they were several hours adrift.)

  Tested on neutron mitaka RC1.
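
  For illustration, a minimal Python 3 sketch (not neutron code) of the
  difference between the naive timestamps currently returned and the
  timezone-aware form nova and glance use:

    from datetime import datetime, timezone

    # Naive timestamp: UTC by convention, but carries no offset information.
    naive = datetime.utcnow().isoformat()
    # Timezone-aware timestamp: the trailing +00:00 removes any ambiguity.
    aware = datetime.now(timezone.utc).isoformat()

    print(naive)   # e.g. 2016-06-10T19:04:12.123456
    print(aware)   # e.g. 2016-06-10T19:04:12.123456+00:00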

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567271] Re: The error log has appeared, but the status of the agent is displayed to be good

2016-06-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567271

Title:
   The error log has appeared, but the status of the agent is displayed
  to be good

Status in neutron:
  Expired

Bug description:
  * High level description:

  I suggest updating the status of the agent when error logs are detected.

  I understand that the "neutron agent-list" output only indicates whether the
  agent is running. However, changing the display when the agent is not working
  properly would help with troubleshooting.

  
  * Pre-conditions:

  It occurs in an environment where br-ex does not exist. (This is one
  example.)

  
  * Step-by-step reproduction steps:

  1. Build the environment with DevStack.
  2. Remove br-ex.
  3. Boot an instance.
  4. The instance boot will fail.

  
  * Expected output:

  When the agent is not working properly, change the display as follows:

  $ neutron agent-list -c agent_type -c alive
  +--------------------+-------+
  | agent_type         | alive |
  +--------------------+-------+
  | L3 agent           | :-)   |
  | Open vSwitch agent | :-(   |
  | DHCP agent         | :-)   |
  +--------------------+-------+
  ...

  
  * Actual output:

  q-agt.log
  http://paste.openstack.org/show/493429/

  
  $ neutron agent-list -c agent_type -c alive
  +--------------------+-------+
  | agent_type         | alive |
  +--------------------+-------+
  | L3 agent           | :-)   |
  | Open vSwitch agent | :-)   |
  | DHCP agent         | :-)   |
  +--------------------+-------+
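
  Purely as an illustration of one way an operator could approximate the
  requested behaviour today (this is not existing neutron code, and the log
  path is an assumption):

    import re
    import sys

    def agent_looks_healthy(log_path, max_errors=0):
        """Return False when the agent log has more ERROR lines than allowed."""
        errors = 0
        with open(log_path) as log:
            for line in log:
                if re.search(r'\bERROR\b', line):
                    errors += 1
        return errors <= max_errors

    if __name__ == '__main__':
        # Hypothetical default log location; pass the real path as an argument.
        path = sys.argv[1] if len(sys.argv) > 1 else '/opt/stack/logs/q-agt.log'
        print(':-)' if agent_looks_healthy(path) else ':-(')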

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487506] Re: compression error

2016-06-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487506

Title:
  compression error

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  while using 
  django-pyscss==2.0.2
  pyScss==1.3.4

  I'm getting

  CommandError: An error occured during rendering /builddir/build/BUILD/horizon-2015.1.1/openstack_dashboard/templates/_stylesheets.html: Don't know how to merge conflicting combinators:  and  .btn'>
  Found 'compress' tags in:
  /builddir/build/BUILD/horizon-2015.1.1/openstack_dashboard/templates/_stylesheets.html
  /builddir/build/BUILD/horizon-2015.1.1/horizon/templates/horizon/_scripts.html
  /builddir/build/BUILD/horizon-2015.1.1/openstack_dashboard/dashboards/theme/templates/_stylesheets.html
  /builddir/build/BUILD/horizon-2015.1.1/horizon/templates/horizon/_conf.html
  Compressing... error: Bad exit status from /var/tmp/rpm-tmp.oZj0k6 (%build)
  Bad exit status from /var/tmp/rpm-tmp.oZj0k6 (%build)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591434] [NEW] get 'itemNotFound' when flavor-show flavor_name

2016-06-10 Thread PanFengyun
Public bug reported:

When I use 'flavor-show' to get the details of a flavor, novaclient gets
"Flavor m1.small could not be found".

1. Create the m1.small flavor:
  $ nova flavor-list | grep m1.small
  | 2 | m1.small | 2048 | 20 | 0 |  | 1 | 1.0 | True |
2. Get the detail info of the m1.small flavor:
  nova --debug flavor-show m1.small
  ...
  RESP BODY: {"itemNotFound": {"message": "Flavor m1.small could not be found.", "code": 404}}
  ...
  +----------------------------+----------+
  | Property                   | Value    |
  +----------------------------+----------+
  | OS-FLV-DISABLED:disabled   | False    |
  | OS-FLV-EXT-DATA:ephemeral  | 0        |
  | disk                       | 20       |
  | extra_specs                | {}       |
  | id                         | 2        |
  | name                       | m1.small |
  | os-flavor-access:is_public | True     |
  | ram                        | 2048     |
  | rxtx_factor                | 1.0      |
  | swap                       |          |
  | vcpus                      | 1        |
  +----------------------------+----------+

Reason:
  Nova does not allow the user to get a flavor by name. Nova only has
get_flavor_by_flavor_id() and has no get_flavor_by_flavor_name().
-
    def show(self, req, id):
        """Return data about the given flavor id."""
        context = req.environ['nova.context']
        try:
            # only get_flavor_by_flavor_id() is available
            flavor = flavors.get_flavor_by_flavor_id(id, ctxt=context)
            req.cache_db_flavor(flavor)
        except exception.FlavorNotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())

        return self._view_builder.show(req, flavor)
-


Suggestion: add get_flavor_by_flavor_name() into show().
Reason: novaclient only allows creating a flavor with a unique ID and a unique
name, so we should be able to get a flavor by either the ID or the name once
get_flavor_by_flavor_name() is added.
-
Positional arguments:
  <name>  Unique name of the new flavor.
  <id>    Unique ID of the new flavor. Specifying 'auto' will
          generate a UUID for the ID.
-
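
A minimal sketch of the suggested fallback (illustrative only, written against
the same module context as the snippet above; get_flavor_by_name() stands in
for the proposed get_flavor_by_flavor_name() and its exact signature is an
assumption):

    def show(self, req, id):
        """Return data about the given flavor, trying the ID and then the name."""
        context = req.environ['nova.context']
        try:
            flavor = flavors.get_flavor_by_flavor_id(id, ctxt=context)
        except exception.FlavorNotFound:
            try:
                # Hypothetical fallback: treat the identifier as a flavor name.
                flavor = flavors.get_flavor_by_name(id, ctxt=context)
            except exception.FlavorNotFound as e:
                raise webob.exc.HTTPNotFound(explanation=e.format_message())
        req.cache_db_flavor(flavor)
        return self._view_builder.show(req, flavor)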

** Affects: nova
 Importance: Undecided
 Assignee: PanFengyun (pan-feng-yun)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => PanFengyun (pan-feng-yun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591434

Title:
  get 'itemNotFound' when flavor-show  flavor_name

Status in OpenStack Compute (nova):
  New

Bug description:
  When I use 'flavor-show' to get the details of a flavor, novaclient gets
  "Flavor m1.small could not be found".

  1. Create the m1.small flavor:
    $ nova flavor-list | grep m1.small
    | 2 | m1.small | 2048 | 20 | 0 |  | 1 | 1.0 | True |
  2. Get the detail info of the m1.small flavor:
    nova --debug flavor-show m1.small
    ...
    RESP BODY: {"itemNotFound": {"message": "Flavor m1.small could not be found.", "code": 404}}
    ...
    +----------------------------+----------+
    | Property                   | Value    |
    +----------------------------+----------+
    | OS-FLV-DISABLED:disabled   | False    |
    | OS-FLV-EXT-DATA:ephemeral  | 0        |
    | disk                       | 20       |
    | extra_specs                | {}       |
    | id                         | 2        |
    | name                       | m1.small |
    | os-flavor-access:is_public | True     |
    | ram                        | 2048     |
    | rxtx_factor                | 1.0      |
    | swap                       |          |
    | vcpus                      | 1        |
    +----------------------------+----------+

  Reason:
    Nova does not allow the user to get a flavor by name. Nova only has
  get_flavor_by_flavor_id() and has no get_flavor_by_flavor_name().
  -
      def show(self, req, id):
          """Return data about the given flavor id."""
          context = req.environ['nova.context']
          try:
              # only get_flavor_by_flavor_id() is available
              flavor = flavors.get_flavor_by_flavor_id(id, ctxt=context)
              req.cache_db_flavor(flavor)
          except exception.FlavorNotFound as e:
              raise webob.exc.HTTPNotFound(explanation=e.format_message())

          return self._view_builder.show(req, flavor)
  -

  Suggestion: add get_flavor_by_flavor_name() into show().
  Reason: novaclient only allows creating a flavor with a unique ID and a unique
  name, so we should be able to get a flavor by either the ID or the name once
  get_flavor_by_flavor_name() is added.
  -
  Positional arguments:
    <name>  Unique name of the new flavor.
    <id>    Unique ID of the new flavor. Specifying 'auto' will
            generate a UUID for the ID.
  -

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591434/+subscriptions

[Yahoo-eng-team] [Bug 1591431] [NEW] openstack/common/cache should be removed

2016-06-10 Thread yong sheng gong
Public bug reported:

Since oslo.cache is now used, we are able to remove openstack/common/cache.

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591431

Title:
  openstack/common/cache should be removed

Status in neutron:
  Invalid

Bug description:
  Since oslo.cache is now used, we are able to remove openstack/common/cache.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591429] [NEW] User quota not working when project quotas set to unlimited.

2016-06-10 Thread Alberto Laporte
Public bug reported:

Description
===
A user is unable to boot an instance when user quotas exist and the project
quotas for instances, RAM, or cores are set to unlimited (-1).
 

Steps to reproduce
==

1. Tenant quotas are set to unlimited for instances, cores, and RAM.

root@osad-aio:~# export tenant=$(openstack project list | awk '/support-test/ { print $2 }')
root@osad-aio:~# export tuser=$(openstack user list | awk '/test-user/ { print $2 }')
root@osad-aio:~# nova quota-update --instances -1 --cores -1 --ram -1 $tenant

root@osad-aio:~# nova quota-show --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | -1    |
| cores                       | -1    |
| ram                         | -1    |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+


2. User quotas are set for the user under the tenant.
root@osad-aio:~# nova quota-update --instances 4 --cores 20 --ram 4096 --user $tuser $tenant

root@osad-aio:~# nova quota-show --user $tuser --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 4     |
| cores                       | 20    |
| ram                         | 4096  |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+


3. Booting the instance fails because the quota is exceeded. Additional
debugging output below [0]; a small sketch of the suspected failure mode
follows at the end of this report.

root@osad-aio:~# nova boot --security-groups default --image 9fde3d51-05f2-4de8-83e2-2c93f1085194 --nic net-id=0d415531-a990-4bb2-9b0e-09cf43543559 --flavor 1 test-instance
ERROR (Forbidden): Quota exceeded for cores, instances, ram: Requested 1, 1, 512, but already used 0, 0, 0 of 20, 4, 4096 cores, instances, ram (HTTP 403) (Request-ID: req-27fa978c-6fc5-4f7a-a4e4-558797bd6a72)


4. Setting explicit limits instead of unlimited on the project allows the user
to boot the instance.

root@osad-aio:~# nova quota-update --instances 65535 --cores 65535 --ram 65535 $tenant

root@osad-aio:~# nova quota-show --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 65535 |
| cores                       | 65535 |
| ram                         | 65535 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+


root@osad-aio:~# nova boot --security-groups default --image 9fde3d51-05f2-4de8-83e2-2c93f1085194 --nic net-id=0d415531-a990-4bb2-9b0e-09cf43543559 --flavor 1 test-instance
+------------------------------+------------+
| Property                     | Value      |
+------------------------------+------------+
| OS-DCF:diskConfig            | MANUAL     |
| OS-EXT-AZ:availability_zone  |            |
| OS-EXT-STS:power_state       | 0          |
| OS-EXT-STS:task_state        | scheduling |
| OS-EXT-STS:vm_state          | building   |
| OS-SRV-USG:launched_at       | -          |
| OS-SRV-USG:terminated_at     | -          |
| accessIPv4                   |            |
| accessIPv6                   |            |
| adminPass                    |
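
Purely as an illustration of the kind of sentinel-handling mistake that could
produce this behaviour (this is not nova's quota code), a minimal sketch of a
limit check that treats -1 as a real number versus one that treats it as
"unlimited":

    UNLIMITED = -1

    def allowed_naive(requested, limit, in_use):
        """Buggy check: the -1 sentinel is used as an ordinary limit."""
        return requested <= limit - in_use   # -1 - 0 = -1, so nothing is ever allowed

    def allowed_fixed(requested, limit, in_use):
        """Correct check: -1 means 'no limit' and must short-circuit."""
        return limit == UNLIMITED or requested <= limit - in_use

    print(allowed_naive(1, UNLIMITED, 0))   # False -> resembles the Forbidden error above
    print(allowed_fixed(1, UNLIMITED, 0))   # True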

[Yahoo-eng-team] [Bug 1588560] Re: neutron-lbaas devstack plugin for ubuntu hard-codes trusty

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324943
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=0f8faacd5e961d4fe508e73ac91bb72cb5a38d57
Submitter: Jenkins
Branch:master

commit 0f8faacd5e961d4fe508e73ac91bb72cb5a38d57
Author: Stephen Balukoff 
Date:   Thu Jun 2 15:59:27 2016 -0700

Fix hard-coding of trusty in devstack plugin.sh

This patch updates the devstack plugin.sh to get current devstack ubuntu
codename from the lsb_release command instead of hard-coding 'trusty' in
there (as we probably should have been doing the whole time). This
should allow neutron-lbaas to be tested on other releases of Ubuntu
without breaking support for trusty.

Change-Id: I040832b3ffa5d596669796269879023c761d3d05
Closes-Bug: 1588560


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588560

Title:
  neutron-lbaas devstack plugin for ubuntu hard-codes trusty

Status in neutron:
  Fix Released

Bug description:
  Ubuntu 16.04 just came out, and it's likely people will want to start
  testing neutron-lbaas on this (and potentially other) releases of
  Ubuntu. However, presently the neutron-lbaas devstack plugin.sh hard-
  codes trusty in a couple of places. This script should be updated to
  dynamically determine the Ubuntu codename in use on the current
  devstack (i.e. so we don't break compatibility with trusty, but also
  allow for testing on other Ubuntu releases).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588372] Re: L7 policy is deleted along with listener deletion

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326403
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=58fe3494ef2c25688123454640388fa0c041c196
Submitter: Jenkins
Branch:master

commit 58fe3494ef2c25688123454640388fa0c041c196
Author: Evgeny Fedoruk 
Date:   Tue Jun 7 04:49:37 2016 -0700

Preventing listener deletion if it has l7 policy

If L7 policy is associated to listener, the policy should be deleted
prior to listener deletion.
Trying to delete the listener with L7 policy will fail
with EntityInUse exception.
This is to preserve common neutron's API concept
to delete cascade resources' subresources only.

Change-Id: I575f898f74576613cb55d4dbba5b1b0f524dd35f
Closes-Bug: 1588372


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588372

Title:
  L7 policy is deleted along with listener deletion

Status in neutron:
  Fix Released

Bug description:
  Similar to https://bugs.launchpad.net/neutron/+bug/1571097, there is an
  issue with the deletion of related entities. In this case the issue is the
  unnecessary deletion of an L7 policy and its rules when the listener related
  to it is deleted.

  The solution should be to prevent deletion of a listener if it has an L7
  policy associated with it.
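
  A minimal sketch of the guard the fix introduces (illustrative only; the
  helper names and exception arguments are assumptions, not the actual
  neutron-lbaas code):

    def delete_listener(self, context, listener_id):
        """Refuse to delete a listener while an L7 policy is still attached."""
        policies = self._get_l7policies_for_listener(context, listener_id)  # hypothetical helper
        if policies:
            # Matches the EntityInUse behaviour described in the commit message.
            raise loadbalancerv2.EntityInUse(
                entity_using='l7policy %s' % policies[0]['id'],
                entity_in_use='listener %s' % listener_id)
        self._delete_listener_db_entry(context, listener_id)  # hypothetical DB-layer delete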

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557992] Re: NoSuchOptError: no such option: service_auth when using local cert manager

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293352
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=f271c8c13954654a1bb41badc51fee9cdacf72f1
Submitter: Jenkins
Branch:master

commit f271c8c13954654a1bb41badc51fee9cdacf72f1
Author: yuyangbj 
Date:   Fri Mar 18 10:46:53 2016 +0800

Fix no such option defined for service_auth

The service_auth is only defined in keystone.py, but the keystone.py
is only imported by barbican cert manager. So importing the registration
into cert_manager.py

Closes-Bug: #1557992
Change-Id: Idf1468ade6c9435cd5c781504c4e74fe55eb4e8c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557992

Title:
  NoSuchOptError: no such option: service_auth when using local cert
  manager

Status in neutron:
  Fix Released

Bug description:
  When changing the default cert manager from barbican to the local cert
  manager while using the VMware edge driver, it threw the exception below.
  That is because the method get_service_url is defined in the parent module
  cert_manager.py, but the service_auth options are only registered in
  keystone.py, which is imported only by the barbican cert manager.

  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", line 463, in _call_driver_operation
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin     driver_method(context, db_entity)
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 46, in wrapper
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin     return method(*args, **kwargs)
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 91, in create
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin     context, listener, certificate=self._get_default_cert(listener))
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 84, in _get_default_cert
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin     return cert_backend.CertManager.get_cert(
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 73, in _get_service_url
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin     cfg.CONF.service_auth.service_name,
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1870, in __getattr__
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin     raise NoSuchOptError(name)
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin NoSuchOptError: no such option: service_auth
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin
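
  A minimal, self-contained sketch of the fix approach from the commit message
  (illustrative; the real option list in keystone.py is longer and the module
  layout is abbreviated):

    from oslo_config import cfg

    # --- keystone.py (simplified): registers the [service_auth] options on import.
    SERVICE_AUTH_OPTS = [
        cfg.StrOpt('service_name', default='lbaas'),   # assumption: the real list is longer
    ]
    cfg.CONF.register_opts(SERVICE_AUTH_OPTS, group='service_auth')

    # --- cert_manager.py (simplified): reads the options.  Without importing the
    # registering module above (the case when only the local cert manager is
    # loaded), accessing cfg.CONF.service_auth raises NoSuchOptError; importing
    # it from cert_manager.py is the fix described in the commit message.
    def get_service_url():
        return cfg.CONF.service_auth.service_name

    print(get_service_url())    # 'lbaas'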

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579314] Re: there is no haproxy for lbaas v2 doc in devstack README.md

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/313842
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=5cca9722cb2ce835289f4115fbdf7c2b26ea6985
Submitter: Jenkins
Branch:master

commit 5cca9722cb2ce835289f4115fbdf7c2b26ea6985
Author: yong sheng gong 
Date:   Sat May 7 14:56:20 2016 +0800

Add agent haproxy driver setup in README.md

Change-Id: I354a9f351cbb00f01ea4ca98476a827be253ff07
Closes-bug: 1579314


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579314

Title:
  there is no haproxy for lbaas v2 doc in devstack README.md

Status in neutron:
  Fix Released

Bug description:
  Octavia is the default plugin driver, but the haproxy driver is also
  available.

  We should add a hint at
  https://github.com/openstack/neutron-lbaas/tree/master/devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591380] Re: neutron-lbaas: broken tests after vip allocation behavior changed

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/328515
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=41ec2ff5e3ea0578dac628176a0c464d1763b3c7
Submitter: Jenkins
Branch:master

commit 41ec2ff5e3ea0578dac628176a0c464d1763b3c7
Author: Stephen Balukoff 
Date:   Fri Jun 10 15:24:58 2016 -0700

Fix test assumptions about IP allocation

Several tests in the neutron-lbaas code tree make assumptions about how
IP addresses in Neutron subnets will be allocated without actually
specifying the IP addresses explictly. Neutron's behavior recently
changed on this front, and these tests started failing in the gate. This
commit updates these tests to explicitly specify the IP address they
should be testing against for allocation purposes, or to dynamically
detect the IP address Neutron has assigned during a previous call.

Change-Id: I0afc3acfff96f85622495364240e1e60f9fd5d7c
Closes-Bug: #1591380


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591380

Title:
  neutron-lbaas: broken tests after vip allocation behavior changed

Status in neutron:
  Fix Released

Bug description:
  In the last couple of days, gate checks for neutron-lbaas have started
  to consistently fail on tests that have not been altered recently.
  Digging futher into this, it appears that neutron's subnet IP address
  allocation behavior has changed, and there are several neutron-lbaas
  tests which hard code assumptions about which VIP address for a given
  load balancer will be assigned when it isn't actually specified in the
  test. These tests need to be updated to either specify the IP address
  to be used directly (ie. over-riding Neutron's IP address allocation
  behavior which can indeed change at any time), or updated to more
  dynamically detect the VIP address that gets assigned when the value
  isn't specified directly.
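
  A minimal sketch of the two remediation styles described above (illustrative
  pseudocode in the shape of a neutron-lbaas unit test; the helper names and
  keyword arguments are assumptions):

    def test_vip_with_explicit_address(self):
        # Option 1: pin the VIP so Neutron's allocation order no longer matters.
        lb = self._create_load_balancer(vip_subnet_id=self.subnet['id'],
                                        vip_address='10.0.0.10')
        self.assertEqual('10.0.0.10', lb['vip_address'])

    def test_vip_with_allocated_address(self):
        # Option 2: let Neutron allocate, then assert against whatever it chose.
        lb = self._create_load_balancer(vip_subnet_id=self.subnet['id'])
        port = self.plugin.get_port(self.context, lb['vip_port_id'])
        self.assertEqual(port['fixed_ips'][0]['ip_address'], lb['vip_address'])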

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583835] Re: dhcp api method get_active_networks is unused

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/318449
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2ddbbce7ba8723ba2790c5f52a67564d30914d13
Submitter: Jenkins
Branch:master

commit 2ddbbce7ba8723ba2790c5f52a67564d30914d13
Author: Aaron Rosen 
Date:   Wed May 18 22:29:03 2016 -0700

remove unused rpc method get_active_networks

This method was originally left for backwards compatibility during
an upgrade of neutron. It stopped being used in havana so I think it
should be safe to remove now.

Closes-bug: #1583835
Change-Id: I503d10ee59ba2fa046b5030e8281fd41a49ee74e


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583835

Title:
  dhcp api method get_active_networks is unused

Status in neutron:
  Fix Released

Bug description:
  This method was originally left for backwards compatibility during
  an upgrade of neutron. It stopped being used in havana so I think it
  should be safe to remove now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569918] Re: Allowed_address_pair fixed_ip configured with FloatingIP after getting associated with a VM port does not work with DVR routers

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304905
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3a5315ef8dbc6265ad2c47eebc1e2c42722a7cb4
Submitter: Jenkins
Branch:master

commit 3a5315ef8dbc6265ad2c47eebc1e2c42722a7cb4
Author: Swaminathan Vasudevan 
Date:   Tue Apr 12 16:06:41 2016 -0700

DVR: Fix allowed_address_pair port binding with delayed fip

Today when allowed_address_pairs are configured on a dvr
service port and if a floatingip is associated with the
allowed_address_pair port, we inherit the dvr service port's
host and the device_owner to which the port is associated.

But when the floatingip is created on the allowed_address_pair
port after the port is associated with a dvr service port, then
we apply the right port host binding and the arp_update.

This patch address the issue, by checking for the host binding
when there is a new floatingip configured. If host binding is
missing and if the port happens to be an allowed_address_pair
port, then it checks for the associated service port and if there
is a single valid and active service port, then it inherits the
host binding and device owner from the dvr service port and also
applies the right arp entry. If there is more than one
active port, then it returns.

Closes-Bug: #1569918
Change-Id: I80a299d3f99113f77d2e728c3d9e000d01dacebd


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569918

Title:
  Allowed_address_pair fixed_ip configured with FloatingIP after getting
  associated with a VM port does not work with DVR routers

Status in neutron:
  Fix Released

Bug description:
  An allowed_address_pair fixed_ip, when configured with a FloatingIP after
  the port is associated with the VM port, is not reachable through a DVR
  router.

  The current code only adds the proper ARP update and port host binding
  inheritance for the allowed_address_pair port if the port has a FloatingIP
  configured before it is associated with a VM port.

  When the FloatingIP is added later, it fails.

  How to reproduce.

  1. Create networks
  2. Create vrrp-net.
  3. Create vrrp-subnet.
  4. Create a DVR router.
  5. Attach the vrrp-subnet to the router.
  6. Create a VM on the vrrp-subnet
  7. Create a VRRP port.
  8. Attach the VRRP port with the VM.
  9. Now assign a FloatingIP to the VRRP port.
  10. Now check the ARP table entry in the router namespace and also the VRRP
port details. The VRRP port is still unbound, and DVR cannot handle unbound
ports.
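
  A minimal sketch of the logic the fix adds, as described in the commit
  message above (illustrative only; every helper name here is an assumption,
  not the actual neutron code):

    def ensure_host_binding_for_fip_port(self, context, port):
        """Inherit host binding/device_owner for an unbound allowed_address_pair
        port when a floating IP is associated with it after the fact."""
        if port.get('binding:host_id'):
            return  # already bound, nothing to do

        service_ports = self._get_service_ports_for_address_pair(   # hypothetical
            context, port['fixed_ips'][0]['ip_address'])
        active = [p for p in service_ports if p['status'] == 'ACTIVE']
        if len(active) != 1:
            return  # no single valid service port: bail out, as the fix does

        self._bind_port_to_host(context, port,                      # hypothetical
                                host=active[0]['binding:host_id'],
                                device_owner=active[0]['device_owner'])
        self._send_arp_update(context, port)                        # hypothetical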

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591386] [NEW] Possible race condition L3HA when VRRP state changes while building

2016-06-10 Thread Bjoern Teipel
Public bug reported:

Currently I suspect a race condition when creating a neutron HA-enabled
router and attaching router interfaces.

All of my router ports are stuck in the BUILD state but are passing traffic.
If I pick one port from this router, it shows it is still in the BUILD state:

+-----------------------+--------------------------------------------------+
| Field                 | Value                                            |
+-----------------------+--------------------------------------------------+
| admin_state_up        | True                                             |
| allowed_address_pairs |                                                  |
| binding:host_id       | controller2_neutron_agents_container-cb4bb90e    |
| binding:profile       | {}                                               |
| binding:vif_details   | {"port_filter": true}                            |
| binding:vif_type      | bridge                                           |
| binding:vnic_type     | normal                                           |
| device_id             | 5b861c43-9a0d-494c-bfe4-27aeb50e94fe             |
| device_owner          | network:router_interface                         |
| dns_assignment        | {"hostname": "host-10-11-12-1", "ip_address": "10.11.12.1", "fqdn": "host-10-11-12-1.openstacklocal."} |
| dns_name              |                                                  |
| extra_dhcp_opts       |                                                  |
| fixed_ips             | {"subnet_id": "77be837a-ddd4-40df-876f-e31f0d241d85", "ip_address": "10.11.12.1"} |
| id                    | 68ab5b64-d22c-4c8a-951e-8a57c1397a31             |
| mac_address           | fa:16:3e:26:c6:86                                |
| name                  |                                                  |
| network_id            | 9d69083d-e229-47ea-9dd1-deef2b8e21df             |
| security_groups       |                                                  |
| status                | BUILD                                            |
| tenant_id             | 96e14d3700b549fda9367a2672107a55                 |
+-----------------------+--------------------------------------------------+

Unfortunately I did not catch many details from the neutron logs, just that
the VRRP election happened.

VRRP state changes:
===

controller1_neutron_agents_container-b3c216d9 | success | rc=0 >>
2016-06-10 08:00:26.728 13586 INFO neutron.agent.l3.ha [-] Router 5b861c43-9a0d-494c-bfe4-27aeb50e94fe transitioned to backup

controller2_neutron_agents_container-cb4bb90e | success | rc=0 >>
2016-06-10 08:00:26.493 13733 INFO neutron.agent.l3.ha [-] Router 5b861c43-9a0d-494c-bfe4-27aeb50e94fe transitioned to backup
2016-06-10 08:00:38.483 13733 INFO neutron.agent.l3.ha [-] Router 5b861c43-9a0d-494c-bfe4-27aeb50e94fe transitioned to master

controller3_neutron_agents_container-2442033f | success | rc=0 >>
2016-06-10 08:00:26.889 16262 INFO neutron.agent.l3.ha [-] Router 5b861c43-9a0d-494c-bfe4-27aeb50e94fe transitioned to backup


and roughly when the neutron port was created, based on the statistics
update. The port is correctly bound to the master VRRP agent.

interface stats update:


controller1:
2016-06-10 08:01:09.713 14268 INFO neutron.agent.securitygroups_rpc [req-52afd361-8d21-45a3-8974-c93f7f76f0d3 - - - - -] Preparing filters for devices set(['tap68ab5b64-d2'])
2016-06-10 08:01:09.713 14268 INFO neutron.agent.securitygroups_rpc [req-52afd361-8d21-45a3-8974-c93f7f76f0d3 - - - - -] Preparing filters for devices set(['tap68ab5b64-d2'])
2016-06-10 08:01:10.106 14268 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [req-52afd361-8d21-45a3-8974-c93f7f76f0d3 - -

[Yahoo-eng-team] [Bug 1591381] [NEW] Instance tags can be set before an instance is active

2016-06-10 Thread Andrew Laski
Public bug reported:

As opposed to metadata or other attributes of an instance, tags can be set
very early in the instance lifecycle. This will eventually lead to issues if
the boot process makes use of these tags, because setting them before boot
will be a race condition. And there is a proposed spec which intends to do
exactly that: use tags in the scheduling process.

To be consistent and to avoid future racy behavior, instance tags should only
be settable after a boot request, once the instance has gone to ACTIVE
status. Passing instance tags in as part of the boot request would be
desirable behavior, but requires a spec and is outside the scope of this bug.
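
A minimal stand-alone sketch of the proposed restriction (illustrative; not
the nova implementation):

    ACTIVE = 'active'

    def set_instance_tags(instance, tags):
        """Reject tag updates until the instance has reached ACTIVE."""
        if instance['vm_state'] != ACTIVE:
            raise ValueError('instance %s is not ACTIVE yet; tags cannot be set'
                             % instance['uuid'])
        instance['tags'] = list(tags)
        return instance

    server = {'uuid': 'abc123', 'vm_state': 'building', 'tags': []}
    try:
        set_instance_tags(server, ['web', 'prod'])
    except ValueError as err:
        print(err)   # instance abc123 is not ACTIVE yet; tags cannot be set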

** Affects: nova
 Importance: Undecided
 Assignee: melanie witt (melwitt)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591381

Title:
  Instance tags can be set before an instance is active

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  As opposed to metadata or other attributes of an instance, tags can be set
  very early in the instance lifecycle. This will eventually lead to issues
  if the boot process makes use of these tags, because setting them before
  boot will be a race condition. And there is a proposed spec which intends
  to do exactly that: use tags in the scheduling process.

  To be consistent and to avoid future racy behavior, instance tags should
  only be settable after a boot request, once the instance has gone to
  ACTIVE status. Passing instance tags in as part of the boot request would
  be desirable behavior, but requires a spec and is outside the scope of
  this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591380] [NEW] neutron-lbaas: broken tests after vip allocation behavior changed

2016-06-10 Thread Stephen Balukoff
Public bug reported:

In the last couple of days, gate checks for neutron-lbaas have started to
consistently fail on tests that have not been altered recently. Digging
further into this, it appears that neutron's subnet IP address allocation
behavior has changed, and there are several neutron-lbaas tests which
hard-code assumptions about which VIP address will be assigned to a given
load balancer when it isn't actually specified in the test. These tests need
to be updated to either specify the IP address to be used directly (i.e.
overriding Neutron's IP address allocation behavior, which can indeed change
at any time), or updated to more dynamically detect the VIP address that
gets assigned when the value isn't specified directly.

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Balukoff (sbalukoff)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Stephen Balukoff (sbalukoff)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591380

Title:
  neutron-lbaas: broken tests after vip allocation behavior changed

Status in neutron:
  New

Bug description:
  In the last couple of days, gate checks for neutron-lbaas have started to
  consistently fail on tests that have not been altered recently. Digging
  further into this, it appears that neutron's subnet IP address allocation
  behavior has changed, and there are several neutron-lbaas tests which
  hard-code assumptions about which VIP address will be assigned to a given
  load balancer when it isn't actually specified in the test. These tests
  need to be updated to either specify the IP address to be used directly
  (i.e. overriding Neutron's IP address allocation behavior, which can indeed
  change at any time), or updated to more dynamically detect the VIP address
  that gets assigned when the value isn't specified directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591338] [NEW] Pecan does not support pagination

2016-06-10 Thread Brandon Logan
Public bug reported:

The legacy WSGI layer supports pagination, and there are new tests that have
been added which show that the pecan implementation does not support it yet.

http://logs.openstack.org/26/319626/6/experimental/gate-neutron-dsvm-api-pecan/5223df2/testr_results.html.gz
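
For context, a minimal sketch of limit/marker pagination of the kind the
legacy WSGI layer honours and the pecan controllers would need to support
(illustrative, not the neutron implementation):

    def paginate(items, limit=None, marker=None, key='id'):
        """Return at most `limit` items that come after the item whose `key` is `marker`."""
        if marker is not None:
            ids = [item[key] for item in items]
            items = items[ids.index(marker) + 1:]
        if limit is not None:
            items = items[:limit]
        return items

    networks = [{'id': 'a'}, {'id': 'b'}, {'id': 'c'}, {'id': 'd'}]
    print(paginate(networks, limit=2, marker='b'))   # [{'id': 'c'}, {'id': 'd'}]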

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591338

Title:
  Pecan does not support pagination

Status in neutron:
  New

Bug description:
  The legacy WSGI layer supports pagination, and there are new tests that
  have been added which show that the pecan implementation does not support
  it yet.

  http://logs.openstack.org/26/319626/6/experimental/gate-neutron-dsvm-api-pecan/5223df2/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591338/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591339] [NEW] radio buttons on ng modals should be consistent

2016-06-10 Thread Cindy Lu
Public bug reported:

The toggle button on the ng create images modal is barely visible on the
Default theme.

Please look at the attachment.

It should be made to look the same as on the ng launch instance wizard -
take a look at the Source step, "Delete Volume on Instance Delete".

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Attachment added: "Screen Shot 2016-06-10 at 12.06.38 PM.png"
   
https://bugs.launchpad.net/bugs/1591339/+attachment/4681247/+files/Screen%20Shot%202016-06-10%20at%2012.06.38%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1591339

Title:
  radio buttons on ng modals should be consistent

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The toggle button on the ng create images modal is barely visible on the
  Default theme.

  Please look at the attachment.

  It should be made to look the same as on the ng launch instance wizard -
  take a look at the Source step, "Delete Volume on Instance Delete".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1591339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585165] Re: floating ip not reachable after vm migration

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/327551
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a1f06fd707ffe663e09f2675316257c8dc528d47
Submitter: Jenkins
Branch:master

commit a1f06fd707ffe663e09f2675316257c8dc528d47
Author: rossella 
Date:   Wed Jun 8 17:18:51 2016 +0200

After a migration clean up the floating ip on the source host

When a VM is migrated that has a floating IP associated, the L3
agent on the source host should be notified when the migration
is over. If the router on the source host is not going to be
removed (there are other ports using it) then we should notify
that the floating IP needs to be cleaned up.

Change-Id: Iad6fbad06cdd33380ef536e6360fd90375ed380d
Closes-bug: #1585165


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585165

Title:
  floating ip not reachable after vm migration

Status in neutron:
  Fix Released

Bug description:
  On a cloud running Liberty, a VM is assigned a floating IP. The VM is
  live migrated and the floating IP is no longer reachable from outside
  the cloud. Steps to reproduce:

  1) spawn a VM
  2) assign a floating IP
  3) live migrate the VM
  4) ping the floating IP from outside the cloud

  the problem seems to be that both the node that was hosting the VM
  before the migration and the node that hosts it now answer the ARP
  request:

  admin:~ # arping -I eth0 10.127.128.12 
  ARPING 10.127.128.12 from 10.127.0.1 eth0
  Unicast reply from 10.127.128.12 [FA:16:3E:C8:E6:13]  305.145ms
  Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  694.062ms
  Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  0.964ms

  on the compute that was hosting the VM:

  root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
  default via 10.127.0.1 dev fg-c100b010-af 
  10.127.0.0/16 dev fg-c100b010-af  proto kernel  scope link  src 10.127.128.3 
  10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

  On the node that it's hosting the VM:

  root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
  default via 10.127.0.1 dev fg-e532a13f-35 
  10.127.0.0/16 dev fg-e532a13f-35  proto kernel  scope link  src 10.127.128.89 
  10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

  the entry "10.127.128.12" is present on both nodes. That happens because
  no cleanup is triggered on the source host when the VM is migrated.
  Restarting the l3 agent fixes the problem because the stale entry is
  removed.
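
  A minimal sketch of the clean-up path the fix adds, as described in the
  commit message above (illustrative only; the notifier method names are
  assumptions, not the actual neutron API):

    def notify_source_host_after_migration(self, context, router_id,
                                           floating_ip, source_host):
        """Tell the L3 agent on the source host to drop the stale floating IP."""
        if self._router_still_needed_on_host(context, router_id, source_host):   # hypothetical
            # The router stays on the source host (other ports still use it), so
            # only the floating IP configuration has to be cleaned up there.
            self._notify_floating_ip_cleanup(context, floating_ip, source_host)  # hypothetical
        else:
            self._notify_router_removed(context, router_id, source_host)         # hypothetical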

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557909] Re: SNAT namespace is not getting cleared after the manual move of SNAT with dead agent

2016-06-10 Thread Carl Baldwin
** Changed in: neutron
   Status: Fix Released => In Progress

** Description changed:

+ Llatest patch (2016-06-10):  https://review.openstack.org/#/c/326729/
+ 
  Stale snat namespace on the controller after recovery of dead l3 agent.
  
  Note: Only on Stable/LIBERTY Branch:
- 
  
  Setup:
  Multiple controller (DVR_SNAT) setup.
  
  Steps:
  1) Create tenant network, subnet and router.
-  2) Create a external network
-  3) Attached internal & external network to a router
-  4) Create VM on above tenant network.
-  5) Make sure VM can reach outside using CSNAT.
-  6) Find router hosting l3 agent and stop the l3 agent.
-  7) Manually move router to other controller (dvr_snat mode). SNAT namespace 
should be create on new controller node.
-  8) Start the l3 agent on the controller (the one that  stopped in step6)
-  9) Notice that snat namespace is now available on 2 controller and it is not 
getting deleted from the agent which is not hosting it.
- 
+  2) Create a external network
+  3) Attached internal & external network to a router
+  4) Create VM on above tenant network.
+  5) Make sure VM can reach outside using CSNAT.
+  6) Find router hosting l3 agent and stop the l3 agent.
+  7) Manually move router to other controller (dvr_snat mode). SNAT namespace 
should be create on new controller node.
+  8) Start the l3 agent on the controller (the one that  stopped in step6)
+  9) Notice that snat namespace is now available on 2 controller and it is not 
getting deleted from the agent which is not hosting it.
  
  Example:
  | cfa97c12-b975-4515-86c3-9710c9b88d76 | L3 agent   | vm2-ctl2-936 | 
:-)   | True   | neutron-l3-agent  |
  | df4ca7c5-9bae-4cfb-bc83-216612b2b378 | L3 agent   | vm1-ctl1-936 | 
:-)   | True   | neutron-l3-agent  |
- 
  
  mysql> select * from csnat_l3_agent_bindings;
  
+--+--+-+--+
  | router_id| l3_agent_id  
| host_id | csnat_gw_port_id |
  
+--+--+-+--+
  | 0fb68420-9e69-41bb-8a88-8ab53b0faabb | cfa97c12-b975-4515-86c3-9710c9b88d76 
| NULL| NULL |
  
+--+--+-+--+
- 
  
  On vm1-ctl1-936
  
  Stale SNAT namespace on Initially hosting controller.
  
  ubuntu@vm1-ctl1-936:~/devstack$ sudo ip netns
  snat-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  qrouter-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  
- 
  On vm2-ctl2-936 (2nd Controller)
  
  ubuntu@vm2-ctl2-936:~$ ip netns
  snat-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  qrouter-0fb68420-9e69-41bb-8a88-8ab53b0faabb

** Description changed:

- Llatest patch (2016-06-10):  https://review.openstack.org/#/c/326729/
+ Latest patch (2016-06-10):  https://review.openstack.org/#/c/326729/
  
  Stale snat namespace on the controller after recovery of dead l3 agent.
  
  Note: Only on Stable/LIBERTY Branch:
  
  Setup:
  Multiple controller (DVR_SNAT) setup.
  
  Steps:
  1) Create tenant network, subnet and router.
   2) Create a external network
   3) Attached internal & external network to a router
   4) Create VM on above tenant network.
   5) Make sure VM can reach outside using CSNAT.
   6) Find router hosting l3 agent and stop the l3 agent.
   7) Manually move router to other controller (dvr_snat mode). SNAT namespace 
should be create on new controller node.
   8) Start the l3 agent on the controller (the one that  stopped in step6)
   9) Notice that snat namespace is now available on 2 controller and it is not 
getting deleted from the agent which is not hosting it.
  
  Example:
  | cfa97c12-b975-4515-86c3-9710c9b88d76 | L3 agent   | vm2-ctl2-936 | 
:-)   | True   | neutron-l3-agent  |
  | df4ca7c5-9bae-4cfb-bc83-216612b2b378 | L3 agent   | vm1-ctl1-936 | 
:-)   | True   | neutron-l3-agent  |
  
  mysql> select * from csnat_l3_agent_bindings;
  
+--+--+-+--+
  | router_id| l3_agent_id  
| host_id | csnat_gw_port_id |
  
+--+--+-+--+
  | 0fb68420-9e69-41bb-8a88-8ab53b0faabb | cfa97c12-b975-4515-86c3-9710c9b88d76 
| NULL| NULL |
  
+--+--+-+--+
  
  On vm1-ctl1-936
  
  Stale SNAT namespace on Initially hosting controller.
  
  ubuntu@vm1-ctl1-936:~/devstack$ sudo ip netns
  snat-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  qrouter-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  
  On vm2-ctl2-936 (2nd Controller)
  
  ubuntu@vm2-ctl2-936:~$ ip netns
  

[Yahoo-eng-team] [Bug 1591314] [NEW] subnet quota usage is not correct for admin users

2016-06-10 Thread Eric Peterson
Public bug reported:

When a user has the admin role, the count of used subnets is that of all
subnets in existence (in that region).

This makes it so admin users cannot create subnets.

The other parts of the quota code do some filtering to ensure only
resources for the current project are counted - we need this for subnets
too.
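
A minimal sketch of the tenant filtering the fix implies (illustrative;
list_subnets() with a tenant_id filter is the usual python-neutronclient call,
but the surrounding Horizon quota code is assumed):

    def tenant_subnet_usage(neutron_client, project_id):
        """Count only the current project's subnets, not every subnet an
        admin token can see."""
        subnets = neutron_client.list_subnets(tenant_id=project_id)['subnets']
        return len(subnets)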

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1591314

Title:
  subnet quota usage is not correct for admin users

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user has the admin role, the count of used subnets is that of
  all subnets in existence (in that region).

  This makes it so admin users cannot create subnets.

  The other parts of the quota code do some filtering to ensure only
  resources for the current project are counted - we need this for subnets
  too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1591314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588927] Re: /v3/groups?name= bypasses group_filter for LDAP

2016-06-10 Thread Dolph Mathews
** Also affects: keystone/mitaka
   Importance: Undecided
   Status: New

** Changed in: keystone/mitaka
   Status: New => In Progress

** Changed in: keystone/mitaka
   Importance: Undecided => Medium

** Changed in: keystone/mitaka
 Assignee: (unassigned) => Matthew Edmonds (edmondsw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1588927

Title:
  /v3/groups?name= bypasses group_filter for LDAP

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  In Progress

Bug description:
  The same problem reported and fixed for users as
  https://bugs.launchpad.net/keystone/+bug/1577804 also exists for
  groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1588927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576418] Re: Swift UI deletions result in confused action buttons

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/313243
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=241cf1bd7e809c8e104d14a91246374dc86b7836
Submitter: Jenkins
Branch:master

commit 241cf1bd7e809c8e104d14a91246374dc86b7836
Author: Richard Jones 
Date:   Mon Jun 6 16:22:56 2016 +1000

Migrate swift ui to use hz-dynamic-table

This removes the custom table code and uses the new
hz-dynamic-table directive to manage the table.

The documentation for actionResultService was also
edited to improve clarity.

Note: I was not intending to migrate all actions over
to use actionResultService in this patch (to keep the
patch size under control) so only the delete actions
have been done, and even then not optimally. These will
be revisited in a subsequent patch.

Change-Id: If8c4009c29fbbdbbeac12ce2bee4dcbef287ea98
Closes-Bug: 1576418
Closes-Bug: 1586175
Partially-Implements: blueprint swift-ui-functionality


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1576418

Title:
  Swift UI deletions result in confused action buttons

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Deleting a row in the swift UI will sometimes result in the row above
  it getting its actions button, even if it is the wrong type (file vs.
  folder).

  To reproduce, create a container with:

  - a file named "a"
  - a file named "b"
  - a folder named "c"

  Delete the file named "b" and it will be removed but the folder named
  "c" will have its action list instead of its own.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1576418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586175] Re: ng containers - cancel button on object delete

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/313243
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=241cf1bd7e809c8e104d14a91246374dc86b7836
Submitter: Jenkins
Branch:master

commit 241cf1bd7e809c8e104d14a91246374dc86b7836
Author: Richard Jones 
Date:   Mon Jun 6 16:22:56 2016 +1000

Migrate swift ui to use hz-dynamic-table

This removes the custom table code and uses the new
hz-dynamic-table directive to manage the table.

The documentation for actionResultService was also
edited to improve clarity.

Note: I was not intending to migrate all actions over
to use actionResultService in this patch (to keep the
patch size under control) so only the delete actions
have been done, and even then not optimally. These will
be revisited in a subsequent patch.

Change-Id: If8c4009c29fbbdbbeac12ce2bee4dcbef287ea98
Closes-Bug: 1576418
Closes-Bug: 1586175
Partially-Implements: blueprint swift-ui-functionality


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1586175

Title:
  ng containers - cancel button on object delete

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Go to the (ng) containers, create a container, create an object, click
  select all, and delete action

  On the Delete action modal, click cancel.

  Once the modal closes, the object *magically* disappears too (which it
  shouldn't)... but if you refresh the page, it is back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1586175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577982] Re: ConfigDrive: cloud-init fails to configure network from network_data.json

2016-06-10 Thread Scott Moser
This was fixed in trunk at revno 1225

** Changed in: cloud-init
   Status: In Progress => Fix Committed

** Changed in: cloud-init
 Assignee: (unassigned) => Scott Moser (smoser)

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1577982

Title:
  ConfigDrive: cloud-init fails to configure network from
  network_data.json

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  When running Ubuntu 16.04 on OpenStack, cloud-init fails to properly
  configure the network from network_data.json found in ConfigDrive.

  When instance boots, network is configured fine until next reboot
  where it falls back to dhcp.

  The /etc/network/interfaces.d/50-cloud-init.cfg file has the following
  content when instance is initially booted, this could explain why dhcp
  is used on second boot:

  auto lo
  iface lo inet loopback
  
  auto eth0
  iface eth0 inet dhcp

  When debugging, if this line in stages.py [1] is commented out, we can see
  that cloud-init initially copies the /etc/network/interfaces file found
  in the configdrive (the network template injected by Nova) and isn't
  using the network config found in network_data.json. But later it
  falls back to "dhcp" and rewrites the network config yet again.

  I also found that within self._find_networking_config(), it looks like
  no datasource is found at this point. This could be because cloud-init
  is still in "local" dsmode and then refuses to use the network config
  found in the ConfigDrive. (triggering the "dhcp" fallback logic)

  Manually forcing "net" dsmode makes cloud-init configure
  /etc/network/interfaces.d/50-cloud-init.cfg properly with network
  config found in the ConfigDrive. However no gateway is configured and
  so, instance doesn't respond to ping or SSH.

  At that point, I'm not sure what's going on and how I can debug
  further.

  Notes:
  * The image used for testing uses "net.ifnames=0". Removing this config makes
things much worse (no ping at all on first boot).
  * Logs, configs and configdrive can be found attached to this bug report.

  [1] http://bazaar.launchpad.net/~cloud-init-dev/cloud-
  init/trunk/view/head:/cloudinit/stages.py#L604
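
  For comparison, once network_data.json is honoured, the rendered
  /etc/network/interfaces.d/50-cloud-init.cfg would be expected to contain a
  static stanza roughly like the following (illustrative addresses only, not
  taken from the attached configdrive):

  auto eth0
  iface eth0 inet static
      address 10.0.0.5
      netmask 255.255.255.0
      gateway 10.0.0.1
      dns-nameservers 10.0.0.2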

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1577982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577982] Re: ConfigDrive: cloud-init fails to configure network from network_data.json

2016-06-10 Thread Scott Moser
This was fixed in yakkety at 0.7.7~bzr1227-0ubuntu1. It will be SRU'd
sometime soon.

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1577982

Title:
  ConfigDrive: cloud-init fails to configure network from
  network_data.json

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  When running Ubuntu 16.04 on OpenStack, cloud-init fails to properly
  configure the network from network_data.json found in ConfigDrive.

  When instance boots, network is configured fine until next reboot
  where it falls back to dhcp.

  The /etc/network/interfaces.d/50-cloud-init.cfg file has the following
  content when instance is initially booted, this could explain why dhcp
  is used on second boot:

  auto lo
  iface lo inet loopback
  
  auto eth0
  iface eth0 inet dhcp

  When debugging, if this line in stages.py [1] is commented out, we can see
  that cloud-init initially copies the /etc/network/interfaces file found
  in the configdrive (the network template injected by Nova) and isn't
  using the network config found in network_data.json. But later it
  falls back to "dhcp" and rewrites the network config yet again.

  I also found that within self._find_networking_config(), it looks like
  no datasource is found at this point. This could be because cloud-init
  is still in "local" dsmode and then refuses to use the network config
  found in the ConfigDrive. (triggering the "dhcp" fallback logic)

  Manually forcing "net" dsmode makes cloud-init configure
  /etc/network/interfaces.d/50-cloud-init.cfg properly with network
  config found in the ConfigDrive. However no gateway is configured and
  so, instance doesn't respond to ping or SSH.

  At that point, I'm not sure what's going on and how I can debug
  further.

  Notes:
  * The image used for testing uses "net.ifnames=0". Removing this config makes
    things much worse (no ping at all on first boot).
  * Logs, configs and configdrive can be found attached to this bug report.

  [1] http://bazaar.launchpad.net/~cloud-init-dev/cloud-
  init/trunk/view/head:/cloudinit/stages.py#L604

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1577982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315501] Re: cloud-init does not use interfaces.d in trusty

2016-06-10 Thread Scott Moser
Marking Fix Committed in cloud-init as of revno 1225.
Fix Released in yakkety images; it will make it back to xenial via SRU.

** Changed in: cloud-init
   Status: Confirmed => Fix Committed

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1315501

Title:
  cloud-init does not use interfaces.d in trusty

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Hi,

  Reference/context: https://ask.openstack.org/en/question/28297/cloud-
  init-nonet-waiting-and-fails/

  The trusty image provided by http://cloud-images.ubuntu.com/trusty/ contains 
an eth0 interface configured as dhcp in /etc/network/interfaces.d/eth0.cfg.
  When I boot this image in an Openstack non-dhcp networking environment, 
cloud-init configures the static IP provided by Neutron directly in 
/etc/network/interfaces (not interfaces.d).

  This means I now have two eth0 devices configured, in two different files.
  Booting 20 VMs with the same image yields around 50-60% of VMs that are not 
reachable by network.

  Soft rebooting a VM in this state or doing an "ifdown eth0 && ifup
  eth0" will make it ping.

  I removed the eth0 interface file in
  /etc/network/interfaces.d/eth0.cfg from the image, booted another
  round of VMs and all of them worked fine.

  Now, I see three possible outcomes:
  - If eth0 is present in /etc/network/interfaces.d, cloud-init 
configures/re-configures that interface
  - If eth0 is present in /etc/network/interfaces.d, cloud-init deletes it and 
configures /etc/network/interfaces
  - Ubuntu cloud images ships without eth0 being configured by default

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1315501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591281] [NEW] Missing test-requirement: testresources

2016-06-10 Thread Thomas Goirand
Public bug reported:

Keystone Mitaka b1 fails to build because of a missing testresources in
test-requirements.txt.

** Affects: keystone
 Importance: Undecided
 Assignee: Thomas Goirand (thomas-goirand)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1591281

Title:
  Missing test-requirement: testresources

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Keystone Mitaka b1 fails to build because of a missing testresources
  in test-requirements.txt.
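
  A minimal sketch of the fix would be a one-line addition to
  test-requirements.txt (the version pin shown here is illustrative; the
  actual pin should follow global-requirements):

  testresources>=0.2.4  # Apache-2.0/BSD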

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1591281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576093] Re: block migration fail with libvirt since version 1.2.17

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/310707
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1885a39083776605348523002f4a6aedace12cce
Submitter: Jenkins
Branch:master

commit 1885a39083776605348523002f4a6aedace12cce
Author: Eli Qiao 
Date:   Fri Jun 10 14:44:54 2016 +0200

libvirt: Prevent block live migration with tunnelled flag

libvirt will report "Selecting disks to migrate is not
implemented for tunneled migration" while doing block migration
with VIR_MIGRATE_TUNNELLED flag.

This patch does 2 changes:

1. Raise exception.MigrationPreCheckError if block live migration with
   with mapped volumes and tunnelled flag on.
2. Remove migrate_disks from params of migrateToURI3 in case of
   tunnelled block live migration w/o mapped volumes since we want
   to copy all disks to destination

Co-Authored-By: Pawel Koniszewski 
Closes-bug: #1576093
Change-Id: Id6e49f298133c53d21386ea619c83e413ef3117a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576093

Title:
  block migration fail with libvirt since version  1.2.17

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Trying to do a block migration fails, and libvirt reports:

  "Selecting disks to migrate is not implemented for tunneled migration"

  Nova:  a4e15e329f9adbcfe72fbcd6acb94f0743ad02f8

  libvirt: 1.3.1

  reproduce:

  default devstack setting and do block migration (no shared
  instance_dir and shared instance storage used)
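
  A rough sketch of the guard described in the commit message above
  (simplified, hypothetical variable names; not the exact nova code):

      # in the libvirt driver's source-side live migration pre-checks
      if (block_migration and has_mapped_volumes and
              CONF.libvirt.live_migration_tunnelled):
          # selecting disks to migrate is not supported over the tunnelled path
          raise exception.MigrationPreCheckError(
              reason='Cannot block migrate instances with mapped volumes '
                     'while live_migration_tunnelled is enabled')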

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1576093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590816] Re: metadata agent make invalid token requests

2016-06-10 Thread Ian Cordasco
Bjoern, I've worked with Paul and determined that the configuration was
incorrect.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590816

Title:
  metadata agent make invalid token requests

Status in neutron:
  Invalid

Bug description:
  Sporadically the neutron metadata agent seems to return a 401 wrapped up in a
  404.
  For still unknown reasons, the metadata agent sporadically creates invalid
  v3 token requests.

  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent
  Unauthorized: {"error": {"message": "The resource could not be
  found.", "code": 404, "title": "Not Found"}}

  POST /tokens HTTP/1.1
  Host: 1.2.3.4:35357
  Content-Length: 91
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-neutronclient

  and the response is

  HTTP/1.1 404 Not Found
  Date: Tue, 01 Mar 2016 22:14:58 GMT
  Server: Apache
  Vary: X-Auth-Token
  Content-Length: 93
  Content-Type: application/json

  and the agent will stop responding, with a full stack trace. At first we thought
  this issue was related to an improper auth_url configuration (see
  https://bugs.launchpad.net/openstack-ansible/liberty/+bug/1552394) but the
  issue came back.
  Interestingly, the agent starts working once we restart it, but the problem
  slowly reappears once you start putting more workload on it (spinning up
  instances).


  2016-02-26 13:34:46.478 33371 INFO eventlet.wsgi.server [-] (33371) accepted 
''
  2016-02-26 13:34:46.486 33371 ERROR neutron.agent.metadata.agent [-] 
Unexpected error.
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent Traceback 
(most recent call last):
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
109, in __call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent instance_id, 
tenant_id = self._get_instance_and_tenant_id(req)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
204, in _get_instance_and_tenant_id
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
self._get_ports(remote_address, network_id, router_id)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
197, in _get_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_for_remote_address(remote_address, networks)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 101, in 
__call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_from_cache(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 79, in 
_get_from_cache
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent item = 
self.func(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
166, in _get_ports_for_remote_address
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
ip_address=remote_address)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
135, in _get_ports_from_server
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_using_client(filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
177, in _get_ports_using_client
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
client.list_ports(**filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ret = 
self.function(instance, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
534, in list_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent **_params)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
307, in list
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent for r in 
self._pagination(collection, path, **params):
  2016-02-26 

[Yahoo-eng-team] [Bug 1590921] Re: Project Detail: Breadcrumbs are being injected as the page title

2016-06-10 Thread Diana Whitten
** Project changed: openstack-ux => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590921

Title:
  Project Detail: Breadcrumbs are being injected as the page title

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The page title is not present; breadcrumbs are in its place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590921] [NEW] Project Detail: Breadcrumbs are being injected as the page title

2016-06-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The page title is not present; breadcrumbs are in its place.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Project Detail: Breadcrumbs are being injected as the page title
https://bugs.launchpad.net/bugs/1590921
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590917] Re: Global tables don't have styled sort by indicators

2016-06-10 Thread Diana Whitten
** Description changed:

- Globally there is no indication of which column is being sorted. the
- sorted column should have a stroke underneath the column label.
+ Globally there is no indication of which column is being sorted.

** Project changed: openstack-ux => horizon

** Attachment removed: "projects list02.png"
   
https://bugs.launchpad.net/horizon/+bug/1590917/+attachment/4680561/+files/projects%20list02.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590917

Title:
  Global tables don't have styled sort by indicators

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Globally there is no indication of which column is being sorted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590915] Re: Images: Filter box is treated differently than other screens

2016-06-10 Thread Diana Whitten
** Attachment removed: "images.png"
   
https://bugs.launchpad.net/openstack-ux/+bug/1590915/+attachment/4680560/+files/images.png

** Project changed: openstack-ux => horizon

** Description changed:

  The filter box has a different treatment than other screens.
- (filter icon and x.clear icon are not present and the box is the wrong
- height)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590915

Title:
  Images: Filter box is treated differently than other screens

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The filter box has a different treatment than other screens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590915/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590917] [NEW] Global tables don't have styled sort by indicators

2016-06-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Globally there is no indication of which column is being sorted.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Global tables don't have styled sort by indicators
https://bugs.launchpad.net/bugs/1590917
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526964] Re: Can delete lbaas-pool with lbaas-hm attached

2016-06-10 Thread Darek Smigiel
*** This bug is a duplicate of bug 1571097 ***
https://bugs.launchpad.net/bugs/1571097

** This bug is no longer a duplicate of bug 1450375
   cannot delete v2 healthmonitor if the hm-associated-pool was deleted first
** This bug has been marked a duplicate of bug 1571097
   unable to delete lbaasv2 health monitor if its listener deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526964

Title:
  Can delete lbaas-pool with lbaas-hm attached

Status in neutron:
  Confirmed

Bug description:
  Using the lbaas v2 resources, I created a loadbalancer, listener,
  pool, and healthmonitor, all attached to each other as expected. Next
  I deleted the pool, which I wouldn't expect to work since there is a
  healthmonitor attached to it, yet it worked successfully. Now I am
  unable to do anything with the healthmonitor that is remaining.

  I tried deleting the healthmonitor, only to get this error:
  $ neutron lbaas-healthmonitor-delete d7257837-7b76-4f70-836d-97dbfa7d5a4a
  Request Failed: internal server error while processing your request.
  $

  Looking through the logs I grabbed this stack trace 
http://paste.openstack.org/show/482123/
  The healthmonitor is unable to locate the pool it was attached to, since it 
has been deleted.

  Looking through the v1 code, it seems there was a check to make sure the pool 
was able to be deleted: 
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/db/loadbalancer/loadbalancer_db.py#L619
  This check does not exist in the v2 code however:
  
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py#L468
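
  A sketch of the kind of guard the v2 path is missing (names here are
  hypothetical, mirroring the v1 behaviour linked above; not the actual
  neutron-lbaas code):

      # in delete_pool(): refuse to delete while a health monitor is attached
      if pool.healthmonitor is not None:
          raise PoolInUseByHealthMonitor(pool_id=pool.id,
                                         hm_id=pool.healthmonitor.id)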

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526964/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450375] Re: cannot delete v2 healthmonitor if the hm-associated-pool was deleted first

2016-06-10 Thread Darek Smigiel
*** This bug is a duplicate of bug 1571097 ***
https://bugs.launchpad.net/bugs/1571097

Marked this bug as a duplicate, even though it was created earlier, because
the fix is assigned to the other one.

** This bug has been marked a duplicate of bug 1571097
   unable to delete lbaasv2 health monitor if its listener deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450375

Title:
  cannot delete v2 healthmonitor if the hm-associated-pool was deleted
  first

Status in neutron:
  Fix Released

Bug description:
  Steps:
  1. create lb
  2. create listener
  3. create pool  (pool-1)
  4. add a healthmonitor (healthmonitor-1) to pool-1
  5. delete the pool-1

  Then, you cannot delete healthmonitor-1 if the pool-1 was deleted
  first.

  
  Log:

  2015-04-30 16:51:23.422 6369 INFO neutron.wsgi 
[req-2876374c-03c3-49b6-825c-83116108cbed cf6a52a3be734e4cad457d5283148882 
356b4d225c7e44de961d888086948f7c - - -] 172.16.2.10 - - [30/Apr/2015 16:51:23] 
"GET /v2.0/lbaas/healthmonitors.json?tenant_id=356b4d225c7e44de961d888086948f7c 
HTTP/1.1" 200 754 0.280336
  2015-04-30 16:51:23.430 6369 INFO neutron.wsgi [-] (6369) accepted 
('172.16.2.10', 17115)
  2015-04-30 16:51:23.532 6369 ERROR neutron.api.v2.resource 
[req-88a2316d-805a-4bad-a52b-b270325008e7 cf6a52a3be734e4cad457d5283148882 
356b4d225c7e44de961d888086948f7c - - -] delete failed
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 490, in delete
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 887, in delete_healthmonitor
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource 
constants.PENDING_DELETE)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py",
 line 159, in test_and_set_status
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource 
db_lb_child.root_loadbalancer.id)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron_lbaas/db/loadbalancer/models.py", 
line 115, in root_loadbalancer
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource return 
self.pool.listener.loadbalancer
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'listener'

  
  Code:

  class HealthMonitorV2(model_base.BASEV2, models_v2.HasId,
  models_v2.HasTenant):

  ...

  @property
  def root_loadbalancer(self):
  return self.pool.listener.loadbalancer

  
  Potential Solution:
  1. Check pool whether binding a healthmonitor before delete
  2. Add loadbalancer attr to HealthMonitorV2
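
  A sketch of option 2 above, making the property tolerant of a deleted pool
  (illustrative only, not the merged fix; callers would need to handle the
  None case):

      @property
      def root_loadbalancer(self):
          # guard against the pool (or its listener) having been deleted first
          if self.pool and self.pool.listener:
              return self.pool.listener.loadbalancer
          return None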

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566178] Re: Lbaasv2 healthmonitor is not deleted

2016-06-10 Thread Darek Smigiel
*** This bug is a duplicate of bug 1571097 ***
https://bugs.launchpad.net/bugs/1571097

** This bug is no longer a duplicate of bug 1450375
   cannot delete v2 healthmonitor if the hm-associated-pool was deleted first
** This bug has been marked a duplicate of bug 1571097
   unable to delete lbaasv2 health monitor if its listener deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566178

Title:
  Lbaasv2 healthmonitor is not deleted

Status in neutron:
  Incomplete

Bug description:
  I was running the neutron_lbaas tempest automation tests, in this case the
  test which exercises the basic HealthMonitor scenario.
  The test configures the LB, listener, pool, and members, then configures the HM.
  At the end of the test all automatically created objects are removed (in the
  opposite order of their creation).
  I stopped the test once all objects were created in order to manually check the LB.
  I created and attached a new healthmonitor to the pool (a PING hm).
  After validation I continued the test run, which should delete all objects
  related to the test. I saw that the newly created HM was not deleted.

  I have 3 controllers, which are also neutron servers, and 2 compute nodes
  (VIRTUAL ENV).
  The loadbalancer objects are created on only one of the controllers.
  The objects are created by an admin user on a non-admin tenant.

  Reproduction steps:
  1.Create Lbaas,listener,pool,members
  2. Create HM and attach it to the pool.
  3. delete pool
  4. Try to delete HM .

  
  logs:
  http://pastebin.com/KdZ6mY4X
  http://pastebin.com/hgqv8T3Q
  The steps:

  1.Create Lbaas,listener,pool,members
  2. Create HM and attach it to the pool.
  3. Delete pool, delete listener, delete LB.
  4. Try to delete HM .

  The wanted result is that the HM should be deleted.

  The HM was not removed - "internal server error" (see logs in pastebin
  links)

  Liberty
  RHEL 7.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571097] Re: unable to delete lbaasv2 health monitor if its listener deleted

2016-06-10 Thread Darek Smigiel
** This bug is no longer a duplicate of bug 1450375
   cannot delete v2 healthmonitor if the hm-associated-pool was deleted first

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571097

Title:
  unable to delete lbaasv2 health monitor if its listener deleted

Status in neutron:
  Fix Released

Bug description:
  The problem is in the Kilo neutron-lbaas branch.

  A monitor is attached to a pool.
  When the pool and listener were deleted, no error was reported that there is a
  health-monitor associated with the pool.

  If all lbaas resources except the health-monitor were deleted, the health
  monitor cannot be deleted.

  See the following procedure to reproduce this issue:

  $ neutron lbaas-loadbalancer-create --name=v-lb2 lb2-v-1574810802
  $ neutron lbaas-listener-create --protocol=HTTP --protocol-port=80 
--name=v-lb2-1 --loadbalancer=v-lb2
  $ neutron lbaas-pool-create --lb-algorithm=ROUND_ROBIN --protocol=HTTP 
--name=v-lb2-pool  --listener=v-lb2-1
  $ neutron lbaas-member-create --subnet lb2-v-1574810802 --address 10.199.88.3 
--protocol-port=80 v-lb2-pool
  $ neutron lbaas-member-create --subnet lb2-v-1574810802 --address 10.199.88.4 
--protocol-port=80 v-lb2-pool
  $ neutron lbaas-healthmonitor-create --max-retries=3 --delay=3 --timeout=10 
--type=HTTP --pool=v-lb2-pool
  $ neutron lbaas-member-delete 4d2977fc-5600-4dbf-8af2-35c017c8f4a0 v-lb2-pool 
  $ neutron lbaas-member-delete 2f60a49b-add1-43d6-97d8-4e53a925b25f  
v-lb2-pool 
  $ neutron lbaas-pool-delete v-lb2-pool
  $ neutron lbaas-listener-delete v-lb2-1
  $ neutron lbaashealthmonitor-delete 044f98a5-755d-498d-a38e-18eb8ca13884

  The neutron log seems to point to the lbaas resources being gone.
  In this specific case, we should just remove the health monitor from the system.

  2016-04-10 16:57:38.220 INFO neutron.wsgi 
[req-7e697943-e70d-4ac8-a840-b1edf441806a Venus Venus] 10.34.57.68 - - 
[10/Apr/2016 16:57:38] "GET 
/v2.0/lbaas/healthmonitors.json?fields=id=044f98a5-755d-498d-a38e-18eb8ca13884
 HTTP/1.1" 200 444 0.112257
  2016-04-10 16:57:38.252 ERROR neutron.api.v2.resource 
[req-aaeae392-33b2-427c-96a0-918782882c9a Venus Venus] delete failed
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 490, in delete
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", line 
906, in delete_healthmonitor
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
constants.PENDING_DELETE)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", 
line 163, in test_and_set_status
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
db_lb_child.root_loadbalancer.id)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/models.py", line 115, 
in root_loadbalancer
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource return 
self.pool.listener.loadbalancer
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'listener'
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
  2016-04-10 16:57:38.253 INFO neutron.wsgi 
[req-aaeae392-33b2-427c-96a0-918782882c9a Venus Venus] 10.34.57.68 - - 
[10/Apr/2016 16:57:38] "DELETE 
/v2.0/lbaas/healthmonitors/044f98a5-755d-498d-a38e-18eb8ca13884.json HTTP/1.1" 
500 383 0.030720

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590915] [NEW] Images: Filter box is treated differently than other screens

2016-06-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The filter box has a different treatment than other screens.
(filter icon and x.clear icon are not present and the box is the wrong
height)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Images: Filter box is treated differently than other screens
https://bugs.launchpad.net/bugs/1590915
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590914] Re: Create container screen theming issues

2016-06-10 Thread Diana Whitten
** Project changed: openstack-ux => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590914

Title:
  Create container screen theming issues

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The labels are not currently contained in a field.
  There is a "required" in the field; can that be empty?
  A star should be used for required fields.
  The checkboxes should be themeable, not use the default Bootstrap styling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590914] [NEW] Create container screen theming issues

2016-06-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The labels are not currently contained in a field.
There is a "required" in the field; can that be empty?
A star should be used for required fields.
The checkboxes should be themeable, not use the default Bootstrap styling.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Create container screen theming issues
https://bugs.launchpad.net/bugs/1590914
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450375] Re: cannot delete v2 healthmonitor if the hm-associated-pool was deleted first

2016-06-10 Thread LIU Yulong
Please see the alternative fix:
https://review.openstack.org/#/c/324380/

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450375

Title:
  cannot delete v2 healthmonitor if the hm-associated-pool was deleted
  first

Status in neutron:
  Fix Released

Bug description:
  Steps:
  1. create lb
  2. create listener
  3. create pool  (pool-1)
  4. add a healthmonitor (healthmonitor-1) to pool-1
  5. delete the pool-1

  Then, you cannot delete healthmonitor-1 if the pool-1 was deleted
  first.

  
  Log:

  2015-04-30 16:51:23.422 6369 INFO neutron.wsgi 
[req-2876374c-03c3-49b6-825c-83116108cbed cf6a52a3be734e4cad457d5283148882 
356b4d225c7e44de961d888086948f7c - - -] 172.16.2.10 - - [30/Apr/2015 16:51:23] 
"GET /v2.0/lbaas/healthmonitors.json?tenant_id=356b4d225c7e44de961d888086948f7c 
HTTP/1.1" 200 754 0.280336
  2015-04-30 16:51:23.430 6369 INFO neutron.wsgi [-] (6369) accepted 
('172.16.2.10', 17115)
  2015-04-30 16:51:23.532 6369 ERROR neutron.api.v2.resource 
[req-88a2316d-805a-4bad-a52b-b270325008e7 cf6a52a3be734e4cad457d5283148882 
356b4d225c7e44de961d888086948f7c - - -] delete failed
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 490, in delete
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 887, in delete_healthmonitor
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource 
constants.PENDING_DELETE)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py",
 line 159, in test_and_set_status
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource 
db_lb_child.root_loadbalancer.id)
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron_lbaas/db/loadbalancer/models.py", 
line 115, in root_loadbalancer
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource return 
self.pool.listener.loadbalancer
  2015-04-30 16:51:23.532 6369 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'listener'

  
  Code:

  class HealthMonitorV2(model_base.BASEV2, models_v2.HasId,
  models_v2.HasTenant):

  ...

  @property
  def root_loadbalancer(self):
  return self.pool.listener.loadbalancer

  
  Potential Solution:
  1. Check pool whether binding a healthmonitor before delete
  2. Add loadbalancer attr to HealthMonitorV2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526964] Re: Can delete lbaas-pool with lbaas-hm attached

2016-06-10 Thread LIU Yulong
*** This bug is a duplicate of bug 1450375 ***
https://bugs.launchpad.net/bugs/1450375

** This bug is no longer a duplicate of bug 1571097
   unable to delete lbaasv2 health monitor if its listener deleted
** This bug has been marked a duplicate of bug 1450375
   cannot delete v2 healthmonitor if the hm-associated-pool was deleted first

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526964

Title:
  Can delete lbaas-pool with lbaas-hm attached

Status in neutron:
  Confirmed

Bug description:
  Using the lbaas v2 resources, I created a loadbalancer, listener,
  pool, and healthmonitor, all attached to each other as expected. Next
  I deleted the pool, which I wouldn't expect to work since there is a
  healthmonitor attached to it, yet it worked successfully. Now I am
  unable to do anything with the healthmonitor that is remaining.

  I tried deleting the healthmonitor, only to get this error:
  $ neutron lbaas-healthmonitor-delete d7257837-7b76-4f70-836d-97dbfa7d5a4a
  Request Failed: internal server error while processing your request.
  $

  Looking through the logs I grabbed this stack trace 
http://paste.openstack.org/show/482123/
  The healthmonitor is unable to locate the pool it was attached to, since it 
has been deleted.

  Looking through the v1 code, it seems there was a check to make sure the pool 
was able to be deleted: 
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/db/loadbalancer/loadbalancer_db.py#L619
  This check does not exist in the v2 code however:
  
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py#L468

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526964/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591240] [NEW] progress_watermark is not updated

2016-06-10 Thread Luis Tomas
Public bug reported:

During the live migration process the progress_watermark/progress_time
are not being (re)updated with the new progress made by the live
migration at the "_live_migration_monitor" function
(virt/libvirt/driver.py).

More specifically, in these lines of code:
    if ((progress_watermark is None) or
            (progress_watermark > info.data_remaining)):
        progress_watermark = info.data_remaining
        progress_time = now


It may happen that the first time the branch is entered (progress_watermark is None),
info.data_remaining is still 0, so progress_watermark is set to 0.
This prevents entering the "if" block in future iterations (as
progress_watermark=0 is never bigger than info.data_remaining), and so neither
progress_watermark nor progress_time is updated from that point on.

This may lead to (unneeded) abort migrations due to progress_time not
being updated, making (now - progress_time) > progress_timeout.

It can be fixed just by modifying the if clause to be like:
    if ((progress_watermark is None) or
            (progress_watermark == 0) or
            (progress_watermark > info.data_remaining)):
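
To make the failure mode concrete, here is a small self-contained sketch of the
monitor's stall check with the proposed guard added (values and names are
simplified and illustrative, not the actual driver code):

    import time

    progress_timeout = 150
    progress_watermark = None
    progress_time = time.time()

    def on_iteration(data_remaining):
        global progress_watermark, progress_time
        now = time.time()
        # with the "== 0" guard, a watermark stuck at 0 no longer blocks updates
        if (progress_watermark is None or progress_watermark == 0 or
                progress_watermark > data_remaining):
            progress_watermark = data_remaining
            progress_time = now
        if (now - progress_time) > progress_timeout:
            print('would abort: no progress recorded for %ds' % progress_timeout)

    on_iteration(0)      # first pass: data_remaining not yet reported
    on_iteration(4096)   # later passes now refresh the watermark and time again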

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591240

Title:
  progress_watermark is not updated

Status in OpenStack Compute (nova):
  New

Bug description:
  During the live migration process the progress_watermark/progress_time
  are not being (re)updated with the new progress made by the live
  migration at the "_live_migration_monitor" function
  (virt/libvirt/driver.py).

  More specifically, in these lines of code:
      if ((progress_watermark is None) or
              (progress_watermark > info.data_remaining)):
          progress_watermark = info.data_remaining
          progress_time = now

  
  It may happen that the first time the branch is entered (progress_watermark is None),
info.data_remaining is still 0, so progress_watermark is set to 0.
This prevents entering the "if" block in future iterations (as
progress_watermark=0 is never bigger than info.data_remaining), and so neither
progress_watermark nor progress_time is updated from that point on.

  This may lead to (unneeded) abort migrations due to progress_time not
  being updated, making (now - progress_time) > progress_timeout.

  It can be fixed just by modifying the if clause to be like:
      if ((progress_watermark is None) or
              (progress_watermark == 0) or
              (progress_watermark > info.data_remaining)):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591234] [NEW] should yield the thread when verifying image's signature

2016-06-10 Thread ChangBo Guo(gcb)
Public bug reported:

 We should give other threads a chance to run when verifying an image's
signature.

 For more details, please refer to:
http://docs.openstack.org/developer/nova/threading.html#yielding-the-
thread-in-long-running-tasks
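
Two generic ways of following that advice, sketched with a plain hash loop
standing in for the signature check (illustrative only; this is not the actual
verification code path in nova):

    import hashlib

    import eventlet
    from eventlet import tpool

    def hash_in_chunks(data, chunk_size=65536):
        # option 1: cooperatively yield between chunks of a CPU-bound loop
        hasher = hashlib.sha256()
        for start in range(0, len(data), chunk_size):
            hasher.update(data[start:start + chunk_size])
            eventlet.sleep(0)  # give other greenthreads a turn
        return hasher.hexdigest()

    # option 2: push the whole CPU-bound call into a native OS thread instead
    digest = tpool.execute(lambda d: hashlib.sha256(d).hexdigest(),
                           b'image bytes' * 1000)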

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591234

Title:
  should yield the thread when verifying image's signature

Status in OpenStack Compute (nova):
  New

Bug description:
   We should give other threads a chance to run when verifying an image's
  signature.

   For more details, please refer to:
  http://docs.openstack.org/developer/nova/threading.html#yielding-the-
  thread-in-long-running-tasks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591022] Re: Transient test failure in test_v3_auth.TestAuthTOTP

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/327922
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=380514bc66984057b7ee9d817e1b4cac5fb75f11
Submitter: Jenkins
Branch:master

commit 380514bc66984057b7ee9d817e1b4cac5fb75f11
Author: nonameentername 
Date:   Thu Jun 9 15:24:41 2016 -0500

Fix TOTP transient test failure

Closes-Bug: 1591022

Change-Id: Ie8a986941fc5d6064f6e84fc01c045ff63c3fe75


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1591022

Title:
  Transient test failure in test_v3_auth.TestAuthTOTP

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In 0.06% of my test runs, test_v3_auth.TestAuthTOTP fails with either:

  Traceback (most recent call last):
File "/root/keystone/keystone/tests/unit/test_v3_auth.py", line 4904, in 
test_with_multiple_credential$
  self.v3_create_token(auth_data, expected_status=http_client.CREATED)
File "/root/keystone/keystone/tests/unit/test_v3.py", line 504, in 
v3_create_token
  expected_status=expected_status)
File "/root/keystone/keystone/tests/unit/rest.py", line 212, in 
admin_request
  return self._request(app=self.admin_app, **kwargs)
File "/root/keystone/keystone/tests/unit/rest.py", line 201, in _request
  response = self.restful_request(**kwargs)
File "/root/keystone/keystone/tests/unit/rest.py", line 186, in 
restful_request
  **kwargs)
File "/root/keystone/keystone/tests/unit/rest.py", line 90, in request
  **kwargs)
File 
"/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", 
line 571, in request
  expect_errors=expect_errors,
File 
"/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", 
line 636, in do_request
  self._check_status(status, res)
File 
"/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", 
line 671, in _check_status
  "Bad response: %s (not %s)", res_status, status)
  webtest.app.AppError: Bad response: 401 Unauthorized (not 201)

  OR

  Traceback (most recent call last):
File "/root/keystone/keystone/tests/unit/test_v3_auth.py", line 4961, in 
test_with_username_and_domain_id
  self.v3_create_token(auth_data, expected_status=http_client.CREATED)
File "/root/keystone/keystone/tests/unit/test_v3.py", line 504, in 
v3_create_token
  expected_status=expected_status)
File "/root/keystone/keystone/tests/unit/rest.py", line 212, in 
admin_request
  return self._request(app=self.admin_app, **kwargs)
File "/root/keystone/keystone/tests/unit/rest.py", line 201, in _request
  response = self.restful_request(**kwargs)
File "/root/keystone/keystone/tests/unit/rest.py", line 186, in 
restful_request
  **kwargs)
File "/root/keystone/keystone/tests/unit/rest.py", line 90, in request
  **kwargs)
File 
"/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", 
line 571, in request
  expect_errors=expect_errors,
File 
"/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", 
line 636, in do_request
  self._check_status(status, res)
File 
"/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", 
line 671, in _check_status
  "Bad response: %s (not %s)", res_status, status)
  webtest.app.AppError: Bad response: 401 Unauthorized (not 201)
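
  One plausible source of the transience (an assumption; the tracebacks alone
  do not show it) is a passcode generated in the last moments of a 30 second
  TOTP window and validated by the server in the next one. A quick way to see
  how close a test run is to that boundary:

      import time

      TOTP_STEP = 30  # seconds, the usual RFC 6238 time step
      remaining = TOTP_STEP - (time.time() % TOTP_STEP)
      print('seconds left in the current TOTP window: %.2f' % remaining)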

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1591022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591222] [NEW] Shared filter is broken for QoS policies since Mitaka

2016-06-10 Thread Ihar Hrachyshka
Public bug reported:

The filter was working in Liberty, but was broken in Mitaka when 
the field became synthetic as part of RBAC enablement for the resource.

** Affects: neutron
 Importance: Medium
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: mitaka-backport-potential qos

** Tags added: qos

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591222

Title:
  Shared filter is broken for QoS policies since Mitaka

Status in neutron:
  In Progress

Bug description:
  The filter was working in Liberty, but was broken in Mitaka when 
  the field became synthetic as part of RBAC enablement for the resource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591206] [NEW] OpenStack Dashboard tab can't get view data

2016-06-10 Thread Paul Karikh
Public bug reported:

Currently an OpenStack Dashboard tab fetches data and passes it into the context,
and it is not able to consume already loaded data from the view.
For example, let's suppose that we have a MultiTableView with tabs. To render
the view itself we need to call api.neutron.get_network(), and one of our tabs
also needs the api.neutron.get_network() data in its _get_data() method.
It would be nice not to call it again but to consume it from somewhere (kwargs,
context, etc.).
But I believe this could require changing Horizon code, and lots of tests would
fail.
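
A rough sketch of one workaround, memoizing the result on the request object so
the view and its tabs share a single API call (the helper below is
hypothetical, not an existing Horizon API; 'fetch' stands in for the
api.neutron call mentioned above):

    def get_network_once(request, network_id, fetch):
        # cache per-request so repeated lookups hit the API only once
        cache = getattr(request, '_network_cache', None)
        if cache is None:
            cache = {}
            request._network_cache = cache
        if network_id not in cache:
            cache[network_id] = fetch(request, network_id)
        return cache[network_id]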

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1591206

Title:
  OpenStack Dashboard tab can't get view data

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently an OpenStack Dashboard tab fetches data and passes it into the context,
and it is not able to consume already loaded data from the view.
  For example, let's suppose that we have a MultiTableView with tabs. To render
the view itself we need to call api.neutron.get_network(), and one of our tabs
also needs the api.neutron.get_network() data in its _get_data() method.
  It would be nice not to call it again but to consume it from somewhere (kwargs,
context, etc.).
  But I believe this could require changing Horizon code, and lots of tests would
fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1591206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586527] Re: horizon should support live migration using nova scheduler

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/322325
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=cc42e94c0c36eb33d1dadd0b45955ea3e39238d6
Submitter: Jenkins
Branch:master

commit cc42e94c0c36eb33d1dadd0b45955ea3e39238d6
Author: eric 
Date:   Fri May 27 14:33:29 2016 -0600

Live migration auto schedule new host

This change adds the option of using the nova scheduler
to place a vm during live migration

Change-Id: I136de833ab1b9e38c8077ad1b3ff156b761055d5
Closes-bug: #1586527


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1586527

Title:
  horizon should support live migration using nova scheduler

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Requiring users to select the new host for a VM is not ideal.  Users
  would then need to have a complete picture of all nova hosts to
  intelligently decide where to place the vm.  In addition, there is no
  guarantee that the nova scheduler would also view the new location as
  ideal.

  Horizon should add the option of having the nova scheduler place the
  VM.
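
  For reference, the underlying API/CLI already lets the scheduler pick the
  destination when no host is given (illustrative invocation; exact flags
  depend on the client version):

  $ nova live-migration <server>   # no destination host: nova-scheduler chooses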

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1586527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591048] Re: VM in self Service Networks aren't getting IP

2016-06-10 Thread Manish
To be more specific: we have created 2 networks (public, private); public
is the external network. If we launch any VM on the public network, the VM
launches, gets an IP, and is pingable. However, if we launch a VM on the
private network, which is connected to the public network via a router, it
gets a private IP assigned and we associate a floating IP to it.
When we try to ping the floating IP it says "destination host not reachable".
Upon further checking, when we log in to the VM using the console and run "ip addr
list", the instance does not get a private IP. All the settings are similar
to what we did in Liberty, where it works fine, but here it's giving us
problems.
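
For what it's worth, a few generic checks usually narrow this kind of problem
down (illustrative commands, run on the node hosting the DHCP agent;
<network-id> is a placeholder):

$ neutron agent-list                          # are the DHCP/L2 agents alive?
$ ip netns list                               # is there a qdhcp-<network-id> namespace?
$ ip netns exec qdhcp-<network-id> ip addr    # does the DHCP port hold an IP on the private subnet?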

** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591048

Title:
  VM in self Service Networks aren't getting IP

Status in neutron:
  In Progress

Bug description:
  Hello Team,

  I have set up the OpenStack Mitaka distribution on a RHEL 7 box. I set up one
  controller node and one compute node with networking option 2 (self-service
  networks). I can spin up VMs in both subnets, but VMs in the private
  self-service network are not getting an IP assigned, whereas VMs in
  provider networks are getting IPs. Is this a kind of bug in the Mitaka
  version?

  I had also set up openstack-liberty, where VMs in self-service networks
  are getting IPs.

  I found I am not the only one coming across this issue.
  http://stackoverflow.com/questions/37426821/why-the-vm-in-selfservice-
  network-can-not-get-ip

  Thanks,
  Rajiv Sharma

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590693] Re: libvirt's use of driver.get_instance_disk_info() is generally problematic

2016-06-10 Thread Matthew Booth
This was intended to be a low-hanging-fruit bug, but it doesn't meet the
criteria. Closing, as it has no other purpose.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590693

Title:
  libvirt's use of driver.get_instance_disk_info() is generally
  problematic

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The nova.virt.driver 'interface' defines a get_instance_disk_info
  method, which is called by compute manager to get disk info during
  live migration to get the source hypervisor's internal representation
  of disk info and pass it directly to the target hypervisor over rpc.
  To compute manager this is an opaque blob of stuff which only the
  driver understands, which is presumably why json was chosen. There are
  a couple of problems with it.

  This is a useful method within the libvirt driver, which uses it
  fairly liberally. However, the method returns a json blob. Every use
  of it internal to the libvirt driver first json encodes it in
  get_instance_disk_info, then immediately decodes it again, which is
  inefficient... except 2 uses of it in migrate_disk_and_power_off and
  check_can_live_migrate_source, which don't decode it and assume it's a
  dict. These are both broken, which presumably means something relating
  to migration of volume-backed instances is broken. The libvirt driver
  should not use this internally. We can have a wrapper method to do the
  json encoding for compute manager, and internally use the unencoded
  data directly.

  Secondly, we're passing an unversioned blob of data over rpc. We
  should probably turn this data into a versioned object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591048] Re: VM in self Service Networks aren't getting IP

2016-06-10 Thread Miguel Angel Ajo
What do you mean specifically by self-service? A normal tenant network?

If it's that, I believe the scenario you're describing is already tested
in the multinode jobs.

I'd recommend you seek help in the Red Hat Bugzilla (picking RDO and
explaining the installer and settings you used), or go to #rdo on freenode
to look for help.

I'll close this for now (this is the upstream tracker, and I believe this
is a configuration or deployment error, not a bug).

Please feel free to reopen if you believe I'm wrong, and please provide
more details in such case.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591048

Title:
  VM in self Service Networks aren't getting IP

Status in neutron:
  Invalid

Bug description:
  Hello Team,

  I have set up the OpenStack Mitaka distribution on an RHEL 7 box: one
  controller node and one compute node with networking option 2 (self-
  service networks). I can spin up VMs in both subnets, but VMs in the
  private self-service network are not getting an IP assigned, whereas
  VMs in provider networks are. Is this a bug in the Mitaka version?

  I had also set up openstack-liberty, where VMs in self-service networks
  do get IPs.

  I found I am not the only one who has come across this issue.
  http://stackoverflow.com/questions/37426821/why-the-vm-in-selfservice-
  network-can-not-get-ip

  Thanks,
  Rajiv Sharma

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591165] [NEW] npm-run-test is failing in the gate

2016-06-10 Thread Rob Cresswell
Public bug reported:

A brand new npm failure! The timeout issue has been resolved, but now we
have some issue with connections/installation.

See:

npm-run-test:
http://logs.openstack.org/34/327934/3/check/gate-horizon-npm-run-test/8a13b57/console.html
http://logs.openstack.org/73/279573/20/gate/gate-horizon-npm-run-test/e380210/console.html

npm-run-lint
http://logs.openstack.org/73/279573/20/gate/gate-horizon-npm-run-lint/1fced75/console.html

** Affects: horizon
 Importance: Critical
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Critical

** Changed in: horizon
Milestone: None => newton-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1591165

Title:
  npm-run-test is failing in the gate

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A brand new npm failure! The timeout issue has been resolved, but now
  we have some issue with connections/installation.

  See:

  npm-run-test:
  
http://logs.openstack.org/34/327934/3/check/gate-horizon-npm-run-test/8a13b57/console.html
  
http://logs.openstack.org/73/279573/20/gate/gate-horizon-npm-run-test/e380210/console.html

  npm-run-lint
  
http://logs.openstack.org/73/279573/20/gate/gate-horizon-npm-run-lint/1fced75/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1591165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588307] Re: It is possible that the volume size in integration test is fetched before volume finishes to extend

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324370
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=873a761511f84b71d5496b1182f10f1a249fe8f8
Submitter: Jenkins
Branch:master

commit 873a761511f84b71d5496b1182f10f1a249fe8f8
Author: Timur Sufiev 
Date:   Thu Jun 2 14:10:52 2016 +0300

In integration tests prevent getting volume size too early

In the test_volume_extend it was possible for a new volume size to be
fetched while the volume itself was still being extended, giving an
old size and failing a test as a result. Fix this intermittent failure
by waiting till the volume becomes 'Available' (and no longer
'Extending') before fetching its new size.

Closes-Bug: #1588307
Change-Id: I7197904dccd842eb4ef8931208b50ba2144c1a8c
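
A rough sketch of the waiting pattern described in the commit message (the
page-object methods used here are hypothetical, not Horizon's actual
integration-test helpers):

    import time

    def wait_for_volume_status(volumes_page, name, status='Available',
                               timeout=60, interval=2):
        # Poll the volumes table until the named row reaches the expected
        # status, so the size is only read after extending has finished.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if volumes_page.get_status(name) == status:
                return
            time.sleep(interval)
        raise AssertionError('Volume %s did not reach %s within %s seconds'
                             % (name, status, timeout))

    # Usage in the test (sketch):
    #   volumes_page.extend_volume(name, new_size)
    #   wait_for_volume_status(volumes_page, name, 'Available')
    #   assert volumes_page.get_size(name) == new_size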


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588307

Title:
  It is possible that the volume size in integration test is fetched
  before volume finishes to extend

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  As a result, if the volume takes a bit longer to extend and Selenium is
  fast enough, Selenium fetches the old volume size while the volume is
  still extending. This results in a test failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585738] Re: ML2 doesn't return fixed_ips on a port update with binding

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321152
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fcd33b31282848c5853cebc37e1e8e0a06e878f2
Submitter: Jenkins
Branch:master

commit fcd33b31282848c5853cebc37e1e8e0a06e878f2
Author: Hong Hui Xiao 
Date:   Tue Jun 7 15:35:32 2016 -0600

Return fixed_ips from port update

fixed_ips has a join relationship with port. During a DB transaction,
it will be acquired once. If the IPAllocation is updated after
fixed_ips has been acquired, the port DB model object will still have
the stale value of fixed_ips, which is empty in the case of the
reported bug.

Expiring the port's fixed_ips makes the next query of the port return
the latest fixed_ips, and thus the result of the port update also
contains the latest fixed_ips.

Change-Id: I8ca52caf661daa2975cf53212d008eb953d83cc0
Closes-Bug: #1585738
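
A simplified illustration of the expire technique from the commit message
(the function and variable names are made up for this sketch; the real
change is in neutron's port update path):

    def _update_port_allocations(session, port_db, new_allocations):
        # After adding/updating IPAllocation rows inside the same
        # transaction, the already-loaded Port object still holds the stale
        # fixed_ips collection. Expiring that attribute forces SQLAlchemy
        # to reload it on the next access, so the port update response
        # includes the fresh allocations.
        for alloc in new_allocations:
            session.add(alloc)
        session.flush()
        session.expire(port_db, ['fixed_ips'])
        return port_db.fixed_ips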


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585738

Title:
  ML2 doesn't return fixed_ips on a port update with binding

Status in neutron:
  Fix Released

Bug description:
  I found this yesterday while working on deferred IP allocation for
  routed networks.  However, it isn't unique to deferred port binding.
  With my deferred IP allocation patch [2], I need to be able to make a
  port create call [1] without binding information, which doesn't
  allocate an IP address.  Then, I need to follow it up with a port
  update which sends host binding information and allocates an IP
  address.  But when I do that, the response doesn't contain the IP
  addresses that were allocated [3], whereas immediately following it
  with a GET on the same port shows the allocation [4].

  This doesn't happen in other plugins besides ML2.  Only with ML2.
  I've put up a patch to run unit tests with ML2 that expose this
  problem [5].  The problem can be reproduced on master [6].  I can get
  it to happen by creating a network without a subnet, creating a port
  on the network (with no IP address), and then calling port update to
  allocate an IP address.

  If this goes unaddressed, Nova will have to make a GET call after
  doing a port update with binding information when working with a port
  with deferred IP allocation.

  [1] http://paste.openstack.org/show/505419/
  [2] https://review.openstack.org/#/c/320631/
  [3] http://paste.openstack.org/show/505420/
  [4] http://paste.openstack.org/show/505421/
  [5] 
http://logs.openstack.org/57/320657/2/check/gate-neutron-python27/153a619/testr_results.html.gz
  [6] https://review.openstack.org/321152

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591111] [NEW] schedulers deprecated warning in unit tests

2016-06-10 Thread Gary Kotton
Public bug reported:

Captured stderr:


/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
  self.add_agent_status_check(self.remove_networks_from_down_agents)

{0} 
vmware_nsx.tests.unit.nsx_v3.test_plugin.TestL3NatTestCase.test_router_add_interface_port_bad_tenant_returns_404
 [3.499239s] ... ok

Captured stderr:


/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
  self.add_agent_status_check(self.remove_networks_from_down_agents)

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/159

Title:
  schedulers deprecated warning in unit tests

Status in neutron:
  In Progress

Bug description:
  Captured stderr:
  
  
/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
self.add_agent_status_check(self.remove_networks_from_down_agents)
  
  {0} 
vmware_nsx.tests.unit.nsx_v3.test_plugin.TestL3NatTestCase.test_router_add_interface_port_bad_tenant_returns_404
 [3.499239s] ... ok

  Captured stderr:
  
  
/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
self.add_agent_status_check(self.remove_networks_from_down_agents)
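
  Judging from the message, the fix is presumably a one-line switch to the
  worker-based API; a sketch of what that might look like in the plugin
  (the surrounding class and method are invented here for illustration):

      class NsxV3Plugin(object):
          # Minimal stand-in for the real plugin, just to show the change
          # suggested by the deprecation warning.

          def _start_agent_status_check(self):
              # Deprecated call that triggers the warning:
              #   self.add_agent_status_check(
              #       self.remove_networks_from_down_agents)
              # Replacement named by the deprecation message:
              self.add_agent_status_check_worker(
                  self.remove_networks_from_down_agents)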

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-06-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291627
Committed: 
https://git.openstack.org/cgit/openstack/tripleo-image-elements/commit/?id=787bf7041f477f9ff2f81065a4a5c1a522524e29
Submitter: Jenkins
Branch:master

commit 787bf7041f477f9ff2f81065a4a5c1a522524e29
Author: Swapnil Kulkarni (coolsvap) 
Date:   Fri Mar 11 14:57:27 2016 +0530

Replace deprecated LOG.warn with LOG.warning

LOG.warn is deprecated. It is still used in a few places.
Updated to the non-deprecated LOG.warning.

Change-Id: Ieb3b6eaf6ffc88998364560b0c0ddda9de5adc67
Closes-Bug:#1508442


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using the logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
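
  A minimal before/after example of the change being applied across the
  projects listed above:

      import logging

      LOG = logging.getLogger(__name__)

      # Deprecated spelling (an alias slated for removal in Python 3):
      LOG.warn('disk usage is at %d%%', 90)

      # Preferred spelling:
      LOG.warning('disk usage is at %d%%', 90)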

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591071] [NEW] neutronclient.common.exceptions.Unauthorized

2016-06-10 Thread Francisco Javier ramirez rodriguez
Public bug reported:

Hello, I am trying to install OpenStack Mitaka on Ubuntu 14.04.

When I try to start an instance I receive the following message.

Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-21641c65-ca87-4e84-b604-2d9d4231c3cc)

The log shows the following:

root@controller:/home/usuario# tailf /var/log/nova/nova-api.log 
2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions 
_ADMIN_AUTH = _load_auth_plugin(CONF)
2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 120, in 
_load_auth_plugin
2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions raise 
neutron_client_exc.Unauthorized(message=err_msg)
2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions Unauthorized: 
Unknown auth type: None
2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions 
2016-06-10 10:04:55.523 2459 INFO nova.api.openstack.wsgi 
[req-21641c65-ca87-4e84-b604-2d9d4231c3cc d9284b960fe140c2a0b29c17cc2a3cb7 
6fd7e27464584f89834a3f6ef5592c06 - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.

2016-06-10 10:04:55.524 2459 DEBUG nova.api.openstack.wsgi 
[req-21641c65-ca87-4e84-b604-2d9d4231c3cc d9284b960fe140c2a0b29c17cc2a3cb7 
6fd7e27464584f89834a3f6ef5592c06 - - -] Returning 500 to user: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
 __call__ 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:1070
2016-06-10 10:04:55.529 2459 INFO nova.osapi_compute.wsgi.server 
[req-21641c65-ca87-4e84-b604-2d9d4231c3cc d9284b960fe140c2a0b29c17cc2a3cb7 
6fd7e27464584f89834a3f6ef5592c06 - - -] 10.0.0.11 "POST 
/v2.1/6fd7e27464584f89834a3f6ef5592c06/servers HTTP/1.1" status: 500 len: 520 
time: 0.1389570


thanks.
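
The "Unknown auth type: None" failure in _load_auth_plugin usually means
nova.conf has no keystone auth options in its [neutron] section. As a
hedged example only (hostnames and the password are placeholders and must
match the actual deployment), the Mitaka install guide configures that
section roughly like this:

    [neutron]
    url = http://controller:9696
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS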

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591071

Title:
  neutronclient.common.exceptions.Unauthorized

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello, I am trying to install OpenStack Mitaka on Ubuntu 14.04.

  When I try to start an instance I receive the following message.

  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) 
(Request-ID: req-21641c65-ca87-4e84-b604-2d9d4231c3cc)

  The log shows the following:

  root@controller:/home/usuario# tailf /var/log/nova/nova-api.log 
  2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions 
_ADMIN_AUTH = _load_auth_plugin(CONF)
  2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 120, in 
_load_auth_plugin
  2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions raise 
neutron_client_exc.Unauthorized(message=err_msg)
  2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions 
Unauthorized: Unknown auth type: None
  2016-06-10 10:04:55.521 2459 ERROR nova.api.openstack.extensions 
  2016-06-10 10:04:55.523 2459 INFO nova.api.openstack.wsgi 
[req-21641c65-ca87-4e84-b604-2d9d4231c3cc d9284b960fe140c2a0b29c17cc2a3cb7 
6fd7e27464584f89834a3f6ef5592c06 - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
  
  2016-06-10 10:04:55.524 2459 DEBUG nova.api.openstack.wsgi 
[req-21641c65-ca87-4e84-b604-2d9d4231c3cc d9284b960fe140c2a0b29c17cc2a3cb7 
6fd7e27464584f89834a3f6ef5592c06 - - -] Returning 500 to user: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
   __call__ 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:1070
  2016-06-10 10:04:55.529 2459 INFO nova.osapi_compute.wsgi.server 
[req-21641c65-ca87-4e84-b604-2d9d4231c3cc d9284b960fe140c2a0b29c17cc2a3cb7 
6fd7e27464584f89834a3f6ef5592c06 - - -] 10.0.0.11 "POST 
/v2.1/6fd7e27464584f89834a3f6ef5592c06/servers HTTP/1.1" status: 500 len: 520 
time: 0.1389570

  
  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591071/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp