[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326798
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b0d0b1d0ba7b9d1fadca0e7932c5886bc6cc7825
Submitter: Jenkins
Branch: master

commit b0d0b1d0ba7b9d1fadca0e7932c5886bc6cc7825
Author: Jamie Lennox 
Date:   Wed Jun 8 11:59:09 2016 +1000

Use http-proxy-to-wsgi middleware from oslo.middleware

The HTTP_X_FORWARDED_PROTO handling fails to handle the case of
redirecting the /v1 request to /v1/ because that redirect is handled purely
by routes and does not enter the glance wsgi code. This means an https
request is redirected to http and fails.

oslo.middleware has middleware for handling the X-Forwarded-Proto header
in a standard way so that services don't have to and so we should use
that instead of our own mechanism.

Leaving the existing header handling around until removal should not be
a problem as the worst that will happen is it overwrites an existing
'https' header value set by the middleware.

Closes-Bug: #1558683
Closes-Bug: #1590608
Change-Id: I481d88020b6e8420ce4b9072dd30ec82fe3fb4f7


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Barbican:
  New
Status in Cinder:
  New
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the protocol and host of the original request so that
  the receiving service can construct URLs pointing to the load balancer
  and not the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling; however, exactly how this is done
  depends on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.
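
  As a rough illustration only (not any project's shipped configuration, and
  assuming a recent oslo.middleware; some releases additionally gate this
  behind the [oslo_middleware] enable_proxy_headers_parsing option), wrapping
  a WSGI application with the middleware looks roughly like this:

  from oslo_middleware import http_proxy_to_wsgi


  def app(environ, start_response):
      # After the middleware has run, wsgi.url_scheme reflects the scheme the
      # client actually used (e.g. https behind an SSL-terminating proxy).
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return [environ['wsgi.url_scheme'].encode()]


  wrapped = http_proxy_to_wsgi.HTTPProxyToWSGI(app)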

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558683] Re: Versions endpoint does not support X-Forwarded-Proto

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326798
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b0d0b1d0ba7b9d1fadca0e7932c5886bc6cc7825
Submitter: Jenkins
Branch: master

commit b0d0b1d0ba7b9d1fadca0e7932c5886bc6cc7825
Author: Jamie Lennox 
Date:   Wed Jun 8 11:59:09 2016 +1000

Use http-proxy-to-wsgi middleware from oslo.middleware

The HTTP_X_FORWARDED_PROTO handling fails to handle the case of
redirecting the /v1 request to /v1/ because that redirect is handled purely
by routes and does not enter the glance wsgi code. This means an https
request is redirected to http and fails.

oslo.middleware has middleware for handling the X-Forwarded-Proto header
in a standard way so that services don't have to and so we should use
that instead of our own mechanism.

Leaving the existing header handling around until removal should not be
a problem as the worst that will happen is it overwrites an existing
'https' header value set by the middleware.

Closes-Bug: #1558683
Closes-Bug: #1590608
Change-Id: I481d88020b6e8420ce4b9072dd30ec82fe3fb4f7


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1558683

Title:
  Versions endpoint does not support X-Forwarded-Proto

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released

Bug description:
  When a project is deployed behind an SSL-terminating proxy, the version
  endpoint returns the wrong URLs.  The returned protocol in the response
  URLs is http:// instead of the expected https://.

  This is because the response built by versions.py gets the host
  information only from the incoming req.  If SSL has been terminated by
  a proxy, then the information in the req indicates http://.  Other
  projects have addressed this by adding the config parameter
  secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO.  This will tell the
  project to use the value in X-Forwarded-Proto (https or http) when
  building the URLs in the response.  Nova and Keystone support this
  configuration option.

  One workaround is to set the public_endpoint parameter. However, the
  value set for public_endpoint is also returned when the internal and
  admin version endpoints are queried, which breaks other things.
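
  As an illustration of the mechanism only (the helper name is made up; this
  is not the projects' actual versions.py code), honoring the header when
  building a version URL looks roughly like:

  def version_href(environ, secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO'):
      # Prefer the scheme the proxy says the client used; fall back to the
      # scheme of the (possibly SSL-terminated) backend request.
      scheme = environ.get(secure_proxy_ssl_header) or environ['wsgi.url_scheme']
      return '%s://%s/v2/' % (scheme, environ['HTTP_HOST'])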

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1558683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592253] [NEW] Bug: migrate instance after delete flavor

2016-06-13 Thread Hyun Ha
Public bug reported:

An error occurs when migrating an instance after its flavor has been deleted.

Steps to reproduce:

1. create flavor A
2. boot instance using flavor A
3. delete flavor A
4. migrate instance (ex : nova migrate [instance_uuid])
5. Error occurs

Error Log :
==
nova-compute.log
   File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
 return getattr(target, method)(*args, **kwargs)

   File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
 result = fn(cls, context, *args, **kwargs)

   File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in get_by_id
 db_flavor = db.flavor_get(context, id)

   File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
 return IMPL.flavor_get(context, id)

   File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in 
wrapper
 return f(*args, **kwargs)

   File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in 
flavor_get
 raise exception.FlavorNotFound(flavor_id=id)

 FlavorNotFound: Flavor 8 could not be found.
==

This error occurs when the resize_instance method is called, as in the code below:
/opt/openstack/src/nova/nova/compute/manager.py

def resize_instance(self, context, instance, image,
                    reservations, migration, instance_type,
                    clean_shutdown=True):

    if (not instance_type or
            not isinstance(instance_type, objects.Flavor)):
        instance_type = objects.Flavor.get_by_id(
            context, migration['new_instance_type_id'])


context parameter has this data:
{'domain': None, 'project_name': u'admin', 'project_domain': None, 'timestamp': 
'2016-06-14T04:34:50.759410', 'auth_token': 
u'457802dc378442a6ac4a5b952587927e', 'remote_address': u'10.10.10.5, 
'quota_class': None, 'resource_uuid': None, 'is_admin': True, 'user': 
u'694df2010229405e966aafc16a30784f', 'service_catalog': [{u'endpoints': 
[{u'adminURL': 
u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1', u'region': 
u'RegionOne', u'internalURL': 
u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1', 
u'publicURL': 
u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1'}], u'type': 
u'volumev2', u'name': u'cinderv2'}, {u'endpoints': [{u'adminURL': 
u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1', u'region': 
u'RegionOne', u'internalURL': 
u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1', 
u'publicURL': 
u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1'}], u'type': 
u'volume', u'name': u'cinder'}], 'tenant': u'9b7ce4df5e1549058687d82e31d127b1', 
'read_only': False, 'project_id': u'9b7ce4df5e1549058687d82e31d127b1', 
'user_id': u'694df2010229405e966aafc16a30784f', 'show_deleted': False, 'roles': 
[u'admin'], 'user_identity': '694df2010229405e966aafc16a30784f 
9b7ce4df5e1549058687d82e31d127b1 - - -', 'read_deleted': 'no', 'request_id': 
u'req-59dca904-6384-4ca0-b696-5731c80198d7', 'instance_lock_checked': False, 
'user_domain': None, 'user_name': u'admin'}

When objects.Flavor.get_by_id is called, an error occurs because the default
value of read_deleted is "no".
So I think the context.read_deleted attribute should be set to "yes" before
objects.Flavor.get_by_id is called, as in the sketch below.
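
A rough sketch of that proposal (it mirrors the snippet quoted above and is
not necessarily how the fix landed upstream):

if (not instance_type or
        not isinstance(instance_type, objects.Flavor)):
    # Temporarily allow reading soft-deleted rows so the old flavor can be
    # looked up even after "nova flavor-delete".
    previous = context.read_deleted
    context.read_deleted = 'yes'
    try:
        instance_type = objects.Flavor.get_by_id(
            context, migration['new_instance_type_id'])
    finally:
        context.read_deleted = previous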

I've tested this using stable/kilo, and I think liberty and mitaka have the
same problem.

Thanks.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: migrate nova

** Tags added: migrate

** Description changed:

  Error occured when migrate instance after delete flavor.
  
  Reproduce step :
  
  1. create flavor A
  2. boot instance using flavor A
  3. delete flavor A
  4. migrate instance (ex : nova migrate [instance_uuid])
  5. Error occured
  
  Error Log :
- 
=
+ ==
  nova-compute.log
-File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
-  return getattr(target, method)(*args, **kwargs)
+    File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
+  return getattr(target, method)(*args, **kwargs)
  
-File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
-  result = fn(cls, context, *args, **kwargs)
+    File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
+  result = fn(cls, context, *args, **kwargs)
  
-File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in 
get_by_id
-  db_flavor = db.flavor_get(context, id)
+    File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in 
get_by_id
+  db_flavor = db.flavor_get(context, id)
  
- 

[Yahoo-eng-team] [Bug 1592249] [NEW] The black background frame when hover is too small

2016-06-13 Thread qiaomin032
Public bug reported:

Reproduce:
1 Log in as admin
2 Open the Admin/System/Resource Usage/Stats tab
3 Hover over the chart to see the tooltip's black background frame for the
detail info.
The tooltip's black background frame is too small and cannot show the
complete detail info.
See the screenshot for more detail.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "resource-usage.jpeg"
   
https://bugs.launchpad.net/bugs/1592249/+attachment/4683292/+files/resource-usage.jpeg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1592249

Title:
  The black background frame when hover is too small

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Reproduce:
  1 Log in as admin
  2 Open the Admin/System/Resource Usage/Stats tab
  3 Hover over the chart to see the tooltip's black background frame for
  the detail info.
  The tooltip's black background frame is too small and cannot show the
  complete detail info.
  See the screenshot for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1592249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408193] Re: Router interface fails to delete the interface with the updated port device id

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408193

Title:
  Router interface fails to delete the interface with the updated port
  device id

Status in neutron:
  Expired

Bug description:
  Test to update the port device-id with a new router

  Steps:
  1) Create a network
  2) Create a subnet
  3) Create two routers with name router1 and router2
  4) Add router1 interface with subnet
  5) Update the port with the new device-id, i.e. with router2
  6) Delete the router2 interface with port

  
  Actual Error:
  neutron router-interface-delete router2 $subnet_id

  ERROR: neutronclient.shell Unable to find subnet with name 
'316ac3a6-cd83-424f-855c-368c10cf83bc'
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 691, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 90, in 
run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/router.py", line 
143, in run
  neutron_client, resource, value)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 
112, in find_resourceid_by_name_or_id
  project_id, cmd_resource, parent_id)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 
99, in _find_resourceid_by_name
  message=not_found_message, status_code=404)
  NeutronClientException: Unable to find subnet with name 
'316ac3a6-cd83-424f-855c-368c10cf83bc'

  
  neutron router-interface-delete router2 
port=316ac3a6-cd83-424f-855c-368c10cf83bc
  Router $Router2_id does not have an interface with id $Port_id (HTTP 404) 
(Request-ID: req-$request_id)
  ERROR: neutronclient.shell Router 68afd04f-6e35-4bd2-a9a7-838d5f54e84e does 
not have an interface with id 316ac3a6-cd83-424f-855c-368c10cf83bc (HTTP 404) 
(Request-ID: req-194a53c7-a3ac-49f1-b092-a8e6d9c0d999)
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 691, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 90, in 
run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/router.py", line 
146, in run
  portinfo = self.call_api(neutron_client, _router_id, body)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/router.py", line 
166, in call_api
  return neutron_client.remove_interface_router(router_id, body)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
99, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
425, in remove_interface_router
  "/remove_router_interface", body=body)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1330, in put
  headers=headers, params=params)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1298, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1241, in do_request
  content_type=self.content_type())
File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 319, 
in do_request
  return self.request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 63, 
in request
  return self._request(url, method, body=body, headers=headers, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 314, 
in _request
  **kwargs)
File "/usr/lib/python2.7/dist-packages/keystoneclient/utils.py", line 318, 
in inner
  return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 
339, in request
  raise exceptions.from_response(resp, method, url)
  NotFound: Router 68afd04f-6e35-4bd2-a9a7-838d5f54e84e does not have an 
interface with id 316ac3a6-cd83-424f-855c-368c10cf83bc (HTTP 404) (Request-ID: 
req-194a53c7-a3ac-49f1-b092-a8e6d9c0d999)
  Router 68afd04f-6e35-4bd2-a9a7-838d5f54e84e does not have an interface with 
id 316ac3a6-cd83-424f-855c-368c10cf83bc (HTTP 404) (Request-ID: 
req-194a53c7-a3ac-49f1-b092-a8e6d9c0d999)

  
  In Juno the error says router2 does not have an interface with the port
  neutron-server  1:2014.2.1-0ubuntu1~cloud0

  In the Icehouse release, the router2 interface whose port has the updated
  device id (set to the router id) is deleted successfully
  neutron-server   1:2014.1.2-0ubuntu1.1~cloud0


[Yahoo-eng-team] [Bug 1511134] Re: Batch DVR ARP updates

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511134

Title:
  Batch DVR ARP updates

Status in neutron:
  Expired

Bug description:
  The L3 agent currently issues ARP updates one at a time while
  processing a DVR router. Each ARP update creates an external process
  which has to call the neutron-rootwrap helper while also "ip netns
  exec " -ing each time.

  The ip command contains a "-batch " option which would be
  able to batch all of the "ip neigh replace" commands into one external
  process per qrouter namespace. This would greatly reduce the amount of
  time it takes the L3 agent to update large numbers of ARP entries,
  particularly as the number of VMs in a deployment rises.

  The benefit of batching ip commands can be seen in this simple bash
  example:

  $ time for i in {0..50}; do sudo ip netns exec qrouter-bc38451e-0c2f-
  4ad2-b76b-daa84066fefb ip a > /dev/null; done

  real  0m2.437s
  user0m0.183s
  sys   0m0.359s
  $ for i in {0..50}; do echo a >> /tmp/ip_batch_test; done
  $ time sudo ip netns exec qrouter-bc38451e-0c2f-4ad2-b76b-daa84066fefb ip -b 
/tmp/ip_batch_test > /dev/null

  real  0m0.046s
  user0m0.003s
  sys   0m0.007s

  If just 50 arp updates are batched together, there is about a 50x
  speedup. Repeating this test with 500 commands showed a speedup of
  250x (disclaimer: this was a rudimentary test just to get a rough
  estimate of the performance benefit).

  Note: see comments #1-3 for less-artificial performance data.
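
  A standalone illustration of the idea (not the L3 agent's actual code, and
  it needs root privileges to run): feed all of a namespace's "neigh replace"
  lines to a single "ip -batch" invocation reading from stdin:

  import subprocess


  def bulk_arp_update(namespace, entries):
      # entries: iterable of (ip, mac, device) tuples for one qrouter namespace
      batch = '\n'.join(
          'neigh replace %s lladdr %s dev %s nud permanent' % (ip, mac, dev)
          for ip, mac, dev in entries)
      subprocess.run(
          ['ip', 'netns', 'exec', namespace, 'ip', '-batch', '-'],
          input=batch.encode(), check=True)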

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507050] Re: LBaaS 2.0: Operating Status Tempest Test Changes

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507050

Title:
  LBaaS 2.0: Operating Status Tempest Test Changes

Status in neutron:
  Expired

Bug description:
  SUMMARY:
  A gate job for Neutron-LBaaS failed today (20141016).  It was identified
  that the failure occurred due to the introduction of new operating
  statuses; namely, "DEGRADED".

  Per the following document, we will see the following valid types for 
operating_status: (‘ONLINE’, ‘OFFLINE’, ‘DEGRADED’, ‘ERROR’)
  
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html

  LOGS/STACKTRACE:
  refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

  Captured traceback:
  2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most 
recent call last):
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' 
== load_balancer['operating_status']
  2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError

  RECOMMENDED ACTION:
  1.  Modify the method _check_status_tree in
  neutron_lbaas/tests/tempest/v2/api/base.py to accept "DEGRADED" as a valid
  type.
  2.  Add a wait-for-status poller to check that a "DEGRADED"
  operating_status transitions over to "ONLINE" (see the sketch below). A
  timeout exception should be thrown if we do not reach that state after some
  number of seconds.
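
  A hedged sketch of such a poller (names and timeouts are illustrative, not
  the tempest code):

  import time


  def wait_for_online(get_operating_status, timeout=60, interval=2):
      # Poll until the load balancer reports ONLINE; tolerate DEGRADED as a
      # transient state and fail fast on anything else.
      deadline = time.time() + timeout
      status = get_operating_status()
      while status != 'ONLINE':
          if status != 'DEGRADED':
              raise AssertionError('unexpected operating_status: %s' % status)
          if time.time() > deadline:
              raise AssertionError('operating_status still %s after %ss'
                                   % (status, timeout))
          time.sleep(interval)
          status = get_operating_status()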

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514068] Re: internal subnet case no need to repeatedly create IPDevice in _update_arp_entry

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514068

Title:
  internal subnet case no need to repeatedly create IPDevice in
  _update_arp_entry

Status in neutron:
  Expired

Bug description:
  _update_arp_entry will create IPDevice to do arp task:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L200-L227
  def _update_arp_entry(self, ip, mac, subnet_id, operation):
  """Add or delete arp entry into router namespace for the subnet."""
  port = self._get_internal_port(subnet_id)
  # update arp entry only if the subnet is attached to the router
  if not port:
  return False

  try:
  # TODO(mrsmith): optimize the calls below for bulk calls
  interface_name = self.get_internal_device_name(port['id'])
  device = ip_lib.IPDevice(interface_name, namespace=self.ns_name)

  The methods _process_arp_cache_for_internal_port and _set_subnet_arp_info
  call _update_arp_entry in their for loops, once per arp_entry/port. For each
  arp_entry/port processed, an IPDevice object (the same device in the
  namespace) is created, which is not necessary.
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L174-L182
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L229-L241

  We can create that IPDevice object before the code enters the for loop and
  pass it to _update_arp_entry (a rough sketch follows).
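
  A rough sketch of that change (the extra "device" argument is the proposed
  addition, not the existing _update_arp_entry signature; this mirrors the
  snippet quoted above rather than being a complete patch):

  interface_name = self.get_internal_device_name(port['id'])
  device = ip_lib.IPDevice(interface_name, namespace=self.ns_name)
  for entry in arp_entries:
      # Reuse the single IPDevice instead of rebuilding it per entry.
      self._update_arp_entry(entry['ip'], entry['mac'], subnet_id,
                             'add', device=device)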

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493688] Re: port-update shouldn't update lbaas vip port IP

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493688

Title:
  port-update shouldn't update lbaas vip port IP

Status in neutron:
  Expired

Bug description:
  lbaas vip port IP cannot be updated by lbaas api like "neutron lb-vip-
  update VIP --address NEW_IP", but it can be updated by "neutron port-
  update".

  This is a conflict: if LBaaS doesn't support updating the VIP port IP yet,
  another API shouldn't make it possible to update it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311470] Re: Disabling an ML2 type driver can leave orphaned DB records

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311470

Title:
  Disabling an ML2 type driver can leave orphaned DB records

Status in neutron:
  Expired

Bug description:
  If an ML2 type driver is disabled after segments have been allocated
  using that type driver, subsequent network deletions will not remove
  the DB records allocated by that type driver since the driver isn't
  there to release the segment[1]. These orphaned segments will then be
  unavailable for use if the type driver is re-enabled later.

  1.
  
https://github.com/openstack/neutron/blob/af89d74d2961db6a04572375150ad908c9e72e78/neutron/plugins/ml2/managers.py#L103

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494570] Re: Warning Endpoint with ip already exists should be avoidable

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494570

Title:
  Warning Endpoint with ip already exists should be avoidable

Status in neutron:
  Expired

Bug description:
  In tunnel_type.py, in the tunnel_sync method, the endpoint is queried by
  the passed host and tunnel IP, and the existing checks in tunnel_sync
  compare the queried endpoint against the passed host and tunnel IP in
  multiple cases, such as whether the local_ip or host has changed, or an
  upgrade.

  But the case where the local_ip and host have not changed is not checked;
  this happens when the ovs-agent and OVS are restarted. In that case, the
  existing logic tries to add_endpoint again for the passed tunnel IP and
  host, and raises an "endpoint already exists" warning.

  That doesn't make sense: since the local IP and host have not changed and
  we have already queried the DB twice, we don't need another DB operation or
  a warning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539333] Re: exception covered in test case: _test_process_routers_update_router_deleted

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539333

Title:
  exception covered in test case:
  _test_process_routers_update_router_deleted

Status in neutron:
  Expired

Bug description:
  The test case _test_process_routers_update_router_deleted directly tests
  the update process.

  It lacks the router-creation step, so remove_router is called even though
  add_router was never called before. That triggers a TypeError in the
  remove_router callback function, as it gets a router of None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1539333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567886] Re: ERROR msg appeared when project deleted with LDAP

2016-06-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567886

Title:
  ERROR msg appeared when project deleted with LDAP

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  With LDAP, I create a project and delete it, and an error message appears,
  but the project is deleted correctly. I set the "allow_subtree_delete"
  option to True. I use Kilo-1.2.

  
  # openstack project create testpj
  +---+--+
  | Field | Value|
  +---+--+
  | domain_id | default  |
  | enabled   | True |
  | id| f0893610713541e9b4ad62256088c669 |
  | name  | testpj   |
  +---+--+

  # openstack project list
  +--+--+
  | ID   | Name |
  +--+--+
  | 396c685b2b3f4a19b88a80bc5040c388 | admin|
  | 1d5c3233c09740d8a99b115e7edb4c94 | services |
  | f0893610713541e9b4ad62256088c669 | testpj   |
  +--+--+

  # openstack project delete testpj
  ERROR: openstack Could not find role: f0893610713541e9b4ad62256088c669 (HTTP 
404) (Request-ID: req-37e8c56d-225a-4a70-803a-fe668177908a)

  # openstack project list
  +--+--+
  | ID   | Name |
  +--+--+
  | 396c685b2b3f4a19b88a80bc5040c388 | admin|
  | 1d5c3233c09740d8a99b115e7edb4c94 | services |
  +--+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1567886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526607] Re: refactor test_attributes.py

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526607

Title:
  refactor test_attributes.py

Status in neutron:
  Expired

Bug description:
  There are many test methods containing repeated test cases with the same
  pattern, such as:

  def test_validate_string(self):
  msg = attributes._validate_string(None, None)
  self.assertEqual("'None' is not a valid string", msg)

  # 0 == len(data) == max_len
  msg = attributes._validate_string("", 0)
  self.assertIsNone(msg)

  # 0 == len(data) < max_len
  msg = attributes._validate_string("", 9)
  self.assertIsNone(msg)

  # 0 < len(data) < max_len
  msg = attributes._validate_string("123456789", 10)
  self.assertIsNone(msg)

  # 0 < len(data) == max_len
  msg = attributes._validate_string("123456789", 9)
  self.assertIsNone(msg)

  # 0 < max_len < len(data)
  msg = attributes._validate_string("1234567890", 9)
  self.assertEqual("'1234567890' exceeds maximum length of 9", msg)

  msg = attributes._validate_string("123456789", None)
  self.assertIsNone(msg)

  We can refactor this with a for loop to make them look better (see the
  sketch after the list). The following methods will be refactored:

  test_is_attr_set()
  test_validate_values()
  test_validate_not_empty_string_or_none()
  test_validate_string_or_none()
  test_validate_string()
  test_validate_list_of_unique_strings()
  test_validate_range()
  _test_validate_mac_address()
  test_validate_ip_address()
  test_validate_ip_address_bsd()
  test_validate_ip_address_or_none()
  test_uuid_pattern()
  test_mac_pattern()
  _test_validate_subnet()
  _test_validate_regex()
  test_validate_dict_without_constraints()
  test_validate_dict_or_empty()
  test_convert_to_int_int()
  test_convert_to_int_if_not_none()
  test_convert_to_int_str()
  test_convert_to_float_positve_value()
  test_convert_to_float_string()
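
  For example, test_validate_string could become table-driven along these
  lines (a sketch inside the same test class, using the values from the
  snippet above; not the final patch):

  def test_validate_string(self):
      cases = [
          (None, None, "'None' is not a valid string"),
          ("", 0, None),                 # 0 == len(data) == max_len
          ("", 9, None),                 # 0 == len(data) < max_len
          ("123456789", 10, None),       # 0 < len(data) < max_len
          ("123456789", 9, None),        # 0 < len(data) == max_len
          ("1234567890", 9, "'1234567890' exceeds maximum length of 9"),
          ("123456789", None, None),
      ]
      for data, max_len, expected in cases:
          self.assertEqual(expected,
                           attributes._validate_string(data, max_len))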

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520775] Re: Update the gateway of external net won't affect router

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520775

Title:
  Update the gateway of external net won't affect router

Status in neutron:
  Expired

Bug description:
  I found this one when setting up a test environment.

  Steps to reproduce:
  1) I create a set of external network, internal network and router. The 
external network has gateway ip in its subnet.
  2) connect the external, internal network and router, by using 
router-gateway-set, router-interface-add.
  3) Then I realize my physical network doesn't have a gateway, so I update
  the subnet of the external network with --no-gateway.
  4) The default route is not deleted in the router namespace, even if I
  restart the l3-agent.

  
  I tried it with a legacy router and with DVR; both have this problem, and I
  believe an HA router will have it as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1520775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524004] Re: linuxbridge agent does not wire ports for non-traditional device owners

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524004

Title:
  linuxbridge agent does not wire ports for non-traditional device
  owners

Status in neutron:
  Expired

Bug description:
  A recent change [1] restricted the wiring to the network: and neutron:
  device-owner prefixes; this prevents external systems that use other device
  owners from getting wired.

  [1] https://review.openstack.org/#/c/193485/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569714] Re: Don't select the user's primary project on editing User

2016-06-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569714

Title:
  Don't select the user's primary project on editing User

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When editing a user in the 'Identity/Users' page, the primary project
  select control doesn't select the user's primary project.

  I debugged and found that 'user.project_id' at line 136 of
  horizon/openstack_dashboard/dashboards/identity/users/views.py always
  returns None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531065] Re: duplicately fetch subnet_id in get_subnet_for_dvr

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531065

Title:
  duplicately  fetch subnet_id in get_subnet_for_dvr

Status in neutron:
  Expired

Bug description:
  In 
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/db/dvr_mac_db.py#L159-L163,
 get_subnet_for_dvr will try to get subnet_id when fixed_ips is not None:
  ...
  def get_subnet_for_dvr(self, context, subnet, fixed_ips=None):
  if fixed_ips:
  subnet_data = fixed_ips[0]['subnet_id']
  else:
  subnet_data = subnet
  ...

  But checking its callers:
  
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L509-L531
 , 
  and
  
https://github.com/openstack/neutron/blob/77a6d114eae9de8078b358a9bd8502fb7c393641/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L366-L380
 , they all have similar logic:
  ...
  fixed_ip = fixed_ips[0]
  ...
  subnet_uuid = fixed_ip['subnet_id']
  ...
  subnet_info = self.plugin_rpc.get_subnet_for_dvr(
  self.context, subnet_uuid, fixed_ips=fixed_ips)

  subnet_id has already been fetched and passed into get_subnet_for_dvr,
  so there is no need to fetch it again inside get_subnet_for_dvr.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544861] Re: LBaaS: connection limit does not work with HA Proxy

2016-06-13 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544861

Title:
  LBaaS: connection limit does not work with HA Proxy

Status in neutron:
  Expired

Bug description:
  The connection limit does not work with HAProxy.

  It is currently set in the frontend section, like:

  frontend 75a12b66-9d2a-4a68-962e-ec9db8c3e2fb
  option httplog
  capture cookie JSESSIONID len 56
  bind 192.168.10.20:80
  mode http
  default_backend fb8ba6e3-71a4-47dd-8a83-2978bafea6e7
  maxconn 5
  option forwardfor

  But the above configuration does not work.
  It should be set in the global section, like:

  global
  daemon
  user nobody
  group haproxy
  log /dev/log local0
  log /dev/log local1 notice
  stats socket 
/var/lib/neutron/lbaas/fb8ba6e3-71a4-47dd-8a83-2978bafea6e7/sock mode 0666 
level user
  maxconn 5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564129] Re: Implied Roles responses lack 'links' pointing back to the API call that generated them

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300195
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=248f0278f9420fe19238fc5a45eedb2f81d6bfae
Submitter: Jenkins
Branch: master

commit 248f0278f9420fe19238fc5a45eedb2f81d6bfae
Author: Colleen Murphy 
Date:   Thu Mar 31 13:22:17 2016 -0700

Add 'links' to implied roles response

The API spec claims that a GET request for implied roles will provide a
link back to itself in the response[1]. This patch makes it actually do
that.

[1] 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-implied-roles-for-role

Closes-bug: #1564129

Change-Id: I43571cc8d759922a4d9107cadba590cf14d25b20


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1564129

Title:
  Implied Roles responses lack 'links' pointing back to the API call
  that generated them

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  For instance:

   http https://poc.example.com:35357/v3/role_inferences/ 
"X-Auth-Token:d6b832b25b5f4eafa53ebd8399d41b82"
  HTTP/1.1 200 OK
  Connection: Keep-Alive
  Content-Length: 371
  Content-Type: application/json
  Date: Wed, 30 Mar 2016 22:22:47 GMT
  Keep-Alive: timeout=5, max=100
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  x-openstack-request-id: req-7632b6fb-1ca2-4cf3-a9c4-b2d74627e282

  {
      "role_inferences": [
          {
              "implies": [
                  {
                      "id": "edd42085d3ab472e9cf13b3cf3c362b6",
                      "links": {
                          "self": "https://poc.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6"
                      },
                      "name": "Member"
                  }
              ],
              "prior_role": {
                  "id": "5a912666c3704c22a20d4c35f3068a88",
                  "links": {
                      "self": "https://poc.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88"
                  },
                  "name": "testing"
              }
          }
      ]
  }

  While there are 'links' on the individual roles there is not one on
  the response as a whole. This is the case with all of the Implied
  Roles responses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1564129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592241] [NEW] memory_mb_used of compute node do not consider reserved_huge_pages

2016-06-13 Thread liuxiuli
Public bug reported:

version: master
question:
memory_mb_used of a compute node only considers CONF.reserved_host_memory_mb.
Currently memory_mb_used equals the sum of the memory_mb used by all instances
plus CONF.reserved_host_memory_mb, but it does not take
CONF.reserved_huge_pages into account.

** Affects: nova
 Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592241

Title:
  memory_mb_used of compute node do not consider reserved_huge_pages

Status in OpenStack Compute (nova):
  New

Bug description:
  version: master
  question:
  memory_mb_used of a compute node only considers
  CONF.reserved_host_memory_mb. Currently memory_mb_used equals the sum of
  the memory_mb used by all instances plus CONF.reserved_host_memory_mb, but
  it does not take CONF.reserved_huge_pages into account.
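
  Rough, illustrative arithmetic only (the real option layout differs; this
  just shows the intent of counting reserved huge pages as used memory):

  def used_memory_mb(instances_memory_mb, reserved_host_memory_mb,
                     reserved_hugepages):
      # reserved_hugepages: iterable of (page_size_kb, count) pairs
      hugepages_mb = sum(size_kb * count // 1024
                         for size_kb, count in reserved_hugepages)
      return instances_memory_mb + reserved_host_memory_mb + hugepages_mb


  # e.g. 2048 MB of instances, 512 MB host reserve, 64 reserved 2 MiB pages
  print(used_memory_mb(2048, 512, [(2048, 64)]))  # -> 2688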

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579262] Re: Angular registry - Cannot read property 'itemActions' of undefined

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317619
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=1d60ffd5d017c82ea3eeb9b429e0ee5030f5891c
Submitter: Jenkins
Branch: master

commit 1d60ffd5d017c82ea3eeb9b429e0ee5030f5891c
Author: Matt Borland 
Date:   Tue May 17 10:38:17 2016 -0600

Added safety check to initActions so unregistered types pass

This patch adds a basic safety check to ensure that initScope doesn't throw
errors when an unregistered type is passed in.  Added a test that 
demonstrates
this as well.

Change-Id: Ifdefaa3cc0b792221323136de8b8bd0eb76d7486
Closes-Bug: 1579262


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1579262

Title:
  Angular registry - Cannot read property 'itemActions' of undefined

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  If you attempt to use initActions for a resource type that hasn't been
  registered, an error is thrown:

  angular.js:11707 TypeError: Cannot read property 'itemActions' of undefined
  at Object.initActions 
(http://127.0.0.1:8005/static/framework/conf/resource-type-registry.service.js:265:42)
  at 
http://127.0.0.1:8005/static/dashboard/project/search/table/search-table.controller.js:114:18
  at Array.forEach (native)
  at pluginsUpdated 
(http://127.0.0.1:8005/static/dashboard/project/search/table/search-table.controller.js:113:33)
  at Scope.$emit 
(http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:14798:33)
  at pluginsReceived 
(http://127.0.0.1:8005/static/dashboard/project/search/settings/search-settings.service.js:108:15)
  at http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9442:11
  at processQueue 
(http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:13300:27)
  at http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:13316:27
  at Scope.$eval 
(http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:14552:28)

  Found this when testing the hypervisor plugin for searchlight:
  https://review.openstack.org/#/c/297586/7

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1579262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592238] [NEW] ml2: needs a way for MD to pass data from precommit to postcommit

2016-06-13 Thread YAMAMOTO Takashi
Public bug reported:

it would be nice to have a proper mechanism to pass some MD specific data
from precommit to postcommit.

use case:
https://review.openstack.org/#/c/293376/41/dragonflow/neutron/ml2/mech_driver.py@168

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592238

Title:
  ml2: needs a way for MD to pass data from precommit to postcommit

Status in neutron:
  New

Bug description:
  it would be nice to have a proper mechanism to pass some MD specific data
  from precommit to postcommit.

  use case:
  
https://review.openstack.org/#/c/293376/41/dragonflow/neutron/ml2/mech_driver.py@168
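
  For illustration, the ad-hoc workaround this would replace looks roughly
  like the following (attribute and helper names are made up): precommit and
  postcommit receive the same context object, so a driver can stash private
  data on it.

  from neutron.plugins.ml2 import driver_api as api


  class MyMechanismDriver(api.MechanismDriver):

      def initialize(self):
          pass

      def create_port_precommit(self, context):
          # Computed inside the DB transaction, needed again afterwards.
          context._md_private = self._prepare_backend_data(context.current)

      def create_port_postcommit(self, context):
          data = getattr(context, '_md_private', None)
          if data is not None:
              self._push_to_backend(data)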

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592212] [NEW] conversion_format should assign as the input parameter

2016-06-13 Thread haobing1
Public bug reported:

When using the 'glance task-create --type --input' command to create an
image, if we want to convert the image's format we have to set
conversion_format in the glance-api.conf file, such as 'conversion_format
= raw'. I think this is inconvenient; we should be able to pass
conversion_format as an input parameter, such as: glance task-create
--type import --input
'{"import_from":"http://10.43.176.8/images/cirros-0.3.1-x86_64-disk.img","import_from_format":
"", "conversion_format": "raw",
"image_properties":{"disk_format":"raw","container_format":"bare","name":"test-hb_1"}}'

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  when use the command of 'glance task-create --type --input' to create a
  image, if we want to convert the image's format, we have to set the
  conversion_format in the glance-api.conf file,such as 'conversion_format
  = raw'. I think this is inconvenient,we should set the conversion_format
  as the input parameter. such as 'glance  task-create --type import
  --input
  
'{"import_from":"http://10.43.176.8/images/cirros-0.3.1-x86_64-disk.img","import_from_format":
  "", "conversion_format": "raw",
- "image_properties":{"disk_format":"qcow2","container_format":"bare","name
+ "image_properties":{"disk_format":"raw","container_format":"bare","name
  ":"test-hb_1"}}'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1592212

Title:
  conversion_format should assign as the  input parameter

Status in Glance:
  New

Bug description:
  When using the 'glance task-create --type --input' command to create an
  image, if we want to convert the image's format we have to set
  conversion_format in the glance-api.conf file, such as
  'conversion_format = raw'. I think this is inconvenient; we should be able
  to pass conversion_format as an input parameter, such as: glance
  task-create --type import --input
  '{"import_from":"http://10.43.176.8/images/cirros-0.3.1-x86_64-disk.img","import_from_format":
  "", "conversion_format": "raw",
  "image_properties":{"disk_format":"raw","container_format":"bare","name":"test-hb_1"}}'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1592212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591429] Re: User quota not working when project quotas set to unlimited.

2016-06-13 Thread Jordan Callicoat
*** This bug is a duplicate of bug 1491222 ***
https://bugs.launchpad.net/bugs/1491222

Tagged LP:1491222 with liberty-backport-potential, marking this a dup.

** This bug has been marked a duplicate of bug 1491222
   Booting instance cause infinite recursion in the case of enough user quota 
and unlimited project quota

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591429

Title:
  User quota not working when project quotas set to unlimited.

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Description
  ===
  A user is unable to boot an instance when user quotas exist and the project
  quotas for instances, RAM, or cores are set to unlimited (-1).
   

  Steps to reproduce
  ==

  1. Tenant quotas are set to unlimited for instances,core,and ram.

  root@osad-aio:~# export tenant=$(openstack project list | awk '/support-test/ 
{ print $2 }')
  root@osad-aio:~# export tuser=$(openstack user list | awk '/test-user/ { 
print $2 }')
  root@osad-aio:~# nova quota-update --instances -1 --cores -1 --ram -1 $tenant

  root@osad-aio:~# nova quota-show --tenant $tenant
  +-+---+
  | Quota   | Limit |
  +-+---+
  | instances   | -1|
  | cores   | -1|
  | ram | -1|
  | floating_ips| 10|
  | fixed_ips   | -1|
  | metadata_items  | 128   |
  | injected_files  | 5 |
  | injected_file_content_bytes | 10240 |
  | injected_file_path_bytes| 255   |
  | key_pairs   | 100   |
  | security_groups | 10|
  | security_group_rules| 20|
  | server_groups   | 10|
  | server_group_members| 10|
  +-+---+

  
  2. User quotas are set for user under tenant.
  root@osad-aio:~# nova quota-update --instances 4 --cores 20 --ram 4096 --user 
$tuser $tenant

  root@osad-aio:~# nova quota-show --user $tuser --tenant $tenant
  +-+---+
  | Quota   | Limit |
  +-+---+
  | instances   | 4 |
  | cores   | 20|
  | ram | 4096  |
  | floating_ips| 10|
  | fixed_ips   | -1|
  | metadata_items  | 128   |
  | injected_files  | 5 |
  | injected_file_content_bytes | 10240 |
  | injected_file_path_bytes| 255   |
  | key_pairs   | 100   |
  | security_groups | 10|
  | security_group_rules| 20|
  | server_groups   | 10|
  | server_group_members| 10|
  +-+---+

  
  3. Booting the instance fails because the quota is exceeded.  Additional
  debugging output below [0]

  root@osad-aio:~# nova boot --security-groups default --image 
9fde3d51-05f2-4de8-83e2-2c93f1085194 --nic 
net-id=0d415531-a990-4bb2-9b0e-09cf43543559 --flavor 1 test-instance
  ERROR (Forbidden): Quota exceeded for cores, instances, ram: Requested 1, 1, 
512, but already used 0, 0, 0 of 20, 4, 4096 cores, instances, ram (HTTP 403) 
(Request-ID: req-27fa978c-6fc5-4f7a-a4e4-558797bd6a72

  
  4. Setting limits instead of unlimited on the project fixes the inability for 
the user to boot the instance.

  root@osad-aio:~# nova quota-update --instances 65535 --cores 65535
  --ram 65535 $tenant

  root@osad-aio:~# nova quota-show --tenant $tenant
  +-+---+
  | Quota   | Limit |
  +-+---+
  | instances   | 65535 |
  | cores   | 65535 |
  | ram | 65535 |
  | floating_ips| 10|
  | fixed_ips   | -1|
  | metadata_items  | 128   |
  | injected_files  | 5 |
  | injected_file_content_bytes | 10240 |
  | injected_file_path_bytes| 255   |
  | key_pairs   | 100   |
  | security_groups | 10|
  | security_group_rules| 20|
  | server_groups   | 10|
  | server_group_members| 10|
  +-+---

  
  root@osad-aio:~# nova boot --security-groups default --image 
9fde3d51-05f2-4de8-83e2-2c93f1085194 --nic 
net-id=0d415531-a990-4bb2-9b0e-09cf43543559 --flavor 1 test-instance
  
+--+---+
  | Property | Value
 |
  
+--+---+
  | OS-DCF:diskConfig| MANUAL   

[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/312611
Committed: 
https://git.openstack.org/cgit/openstack/trove/commit/?id=3420886b8d24b570823cb500cf1feb9031d4a887
Submitter: Jenkins
Branch: master

commit 3420886b8d24b570823cb500cf1feb9031d4a887
Author: stewie925 
Date:   Wed May 4 08:37:05 2016 -0700

Rename called_once_with methods correctly

* Remove 'self.assertTrue' enclosing the '.called_once_with'
* methods and rename '.called_once_with' method name to
* '.assert_called_once_with'.

Change-Id: I04d1c99df9983733b6d2579ba71097b2cebc6a54
Closes-Bug: #1544522


** Changed in: trove
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  Fix Released
Status in octavia:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  The mock.Mock class does not have a "called_once_with" method; it only has
  "assert_called_once_with". Currently there are still some places where we
  use the called_once_with method, and we should correct them.

  NOTE: called_once_with() does nothing because it's a mock object.
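
  A small illustration of the pitfall (using the standard mock library):

  import mock

  m = mock.Mock()
  m.do_work(1)
  m.do_work.called_once_with(2)          # silently creates a child mock; checks nothing
  m.do_work.assert_called_once_with(1)   # actually verifies the recorded call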

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592169] [NEW] cached tokens break Liberty to Mitaka upgrade

2016-06-13 Thread Matt Fischer
Public bug reported:

Sequence of events.

- Fernet tokens (didn't test with UUID)
- Running cluster with Liberty from about 6 weeks ago, so close to stable.
- Upgrade Keystone to Mitaka (automated)
- Tokens fail to issue for about 5 minutes; after this time, all the cached 
tokens are gone
- Everything works after that. See also Work-around at bottom.

Annotated logs:

Token call works to this point.

db_sync is running here, but code is still Liberty, DB now Mitaka:
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-04dcb954-ae4e-41fa-b235-aa0b05ac8b44)
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-d27eee3a-723a-412e-a7b0-37ffd511c221)
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-265b6261-bcac-44f1-a806-8696b455ff5a)

Puppet bounces Keystone, the restarted code is Mitaka:
Discovering versions from the identity service failed when creating the 
password plugin. Attempting to determine version from URL.

Tokens fail to generate here due to the caching format changing. This will
continue for about 5 minutes or so; I suspect it depends on what's in the
cache and the timeouts.
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-8b835f67-4a21-42d3-9030-b4dbfd820238)
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-b92bcd56-87da-4977-b82e-c717c7120f4f)
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-a787163f-20c1-493f-9b34-82708dea4191)
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-e2ab7bf1-3483-438e-8425-06e5cfbf2e37)

Keystone log is full of this:

2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py", line 249, in 
__call__
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi result = 
method(context, **params)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/oslo_log/versionutils.py", line 165, 
in wrapped
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi return 
func_or_cls(*args, **kwargs)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/keystone/token/controllers.py", line 
100, in authenticate
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi context, auth)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/keystone/token/controllers.py", line 
310, in _authenticate_local
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi user_id, tenant_id)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/keystone/token/controllers.py", line 
391, in _get_project_roles_and_ref
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi user_id, tenant_id)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/keystone/common/manager.py", line 124, 
in wrapped
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 1053, 
in decorate
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi should_cache_fn)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 657, in 
get_or_create
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi async_creator) as 
value:
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi return self._enter()
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 98, in 
_enter
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi generated = 
self._enter_create(createdtime)
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 149, in 
_enter_create
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi created = 
self.creator()
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 625, in 
gen_value
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi created_value = 
creator()
2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi   File 
"/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 1049, 
in creator
2016-06-13 
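
The decorator frames in the (truncated) traceback above come from
dogpile.cache's cache_on_arguments machinery. A stand-alone sketch of the
general failure mode (entries written by the old code version sitting in a
shared cache until they expire or are invalidated), not the work-around
referenced above:

    from dogpile.cache import make_region

    # Toy in-memory region; keystone's real regions sit in front of memcached.
    region = make_region().configure('dogpile.cache.memory',
                                     expiration_time=300)

    @region.cache_on_arguments()
    def get_project_roles(user_id, project_id):
        return ['member']              # stand-in for the real DB lookup

    get_project_roles('u1', 'p1')      # cached in the "old" format
    region.invalidate(hard=True)       # one blunt way to drop stale entries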

[Yahoo-eng-team] [Bug 1592167] [NEW] Deleted keypair causes metadata failure

2016-06-13 Thread James Dempsey
Public bug reported:

Description
===

If a user deletes a keypair that was used to create an instance, that
instance receives HTTP 400 errors when attempting to get metadata via
http://169.254.169.254/openstack/latest/meta_data.json.

This causes problems in the instance when cloud-init fails to retrieve
the OpenStack datasource.

Steps to reproduce
==

1. Create instance with SSH keypair defined.
2. Delete SSH keypair
3. Attempt 'curl http://169.254.169.254/openstack/latest/meta_data.json' from 
the instance

Expected result
===

Instance receives metadata from
http://169.254.169.254/openstack/latest/meta_data.json

Actual result
=

Instance receives an HTTP 400 error. Additionally, Ubuntu Cloud Image
instances will fall back to the ec2 datasource and re-generate host SSH
keys.

Environment
===

Nova:   2015.1.4.2
Hypervisor: Libvirt + KVM
Storage:Ceph
Network:Liberty Neutron ML2+OVS


Logs


[req-a8385839-6993-4289-96dc-1714afe82597 - - - - -] FaultWrapper error
Traceback (most recent call last):
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/ec2/__init__.py",
 line 93, in __call__
return req.get_response(self.application)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", 
line 1299, in send
application, catch_exc_info=False)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", 
line 1263, in call_application
app_iter = application(self.environ, start_response)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 
130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 
195, in call_func
return self.func(req, *args, **kwargs)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/ec2/__init__.py",
 line 105, in __call__
rv = req.get_response(self.application)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", 
line 1299, in send
application, catch_exc_info=False)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", 
line 1263, in call_application
app_iter = application(self.environ, start_response)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 
130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 
195, in call_func
return self.func(req, *args, **kwargs)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py",
 line 137, in __call__
data = meta_data.lookup(req.path_info)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py",
 line 418, in lookup
data = self.get_openstack_item(path_tokens[1:])
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py",
 line 297, in get_openstack_item
return self._route_configuration().handle_path(path_tokens)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py",
 line 491, in handle_path
return path_handler(version, path)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py",
 line 316, in _metadata_as_json
self.instance.key_name)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/objects/base.py",
 line 163, in wrapper
result = fn(cls, context, *args, **kwargs)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/objects/keypair.py",
 line 60, in get_by_name
db_keypair = db.key_pair_get(context, user_id, name)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/db/api.py", 
line 937, in key_pair_get
return IMPL.key_pair_get(context, user_id, name)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py",
 line 233, in wrapper
return f(*args, **kwargs)
  File 
"/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py",
 line 2719, in key_pair_get
raise exception.KeypairNotFound(user_id=user_id, name=name)
KeypairNotFound: Keypair keypair_name not found for user 
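
A hypothetical sketch (not the actual nova fix) of the kind of defensive
handling the metadata path could use, based on the calls visible in the
traceback above; the helper name is made up:

    from nova import exception
    from nova import objects

    def get_keypair_or_none(context, user_id, key_name):
        """Return the instance's keypair, or None if it was deleted."""
        if not key_name:
            return None
        try:
            return objects.KeyPair.get_by_name(context, user_id, key_name)
        except exception.KeypairNotFound:
            # Omit the key material from meta_data.json instead of letting
            # the error bubble up to the instance as an HTTP 400.
            return None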


** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592167

Title:
  Deleted keypair causes metadata failure

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  If a user deletes a keypair that was used to create an instance, that
  instance receives HTTP 400 errors when attempting to get metadata via
  

[Yahoo-eng-team] [Bug 1491622] Re: Images "shared with me" misleading

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/310063
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=71a7ac29f805de56052b9b0517cba8ccbc240e7f
Submitter: Jenkins
Branch:master

commit 71a7ac29f805de56052b9b0517cba8ccbc240e7f
Author: Andy Hsiang 
Date:   Mon Apr 25 14:54:06 2016 -0400

modified filter tab name for images shared by projects

previously named "Shared with Me" for one of the 3 filter tabs on
project/image page.  Selecting this tab would show all images shared
with current selected project. Updated the tab name accordingly to
clarify the confusion.

Change-Id: Ie623ebea9b69746761d598106ec43d44eb5ef943
Closes-Bug: #1491622


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491622

Title:
  Images "shared with me" misleading

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The horizon project images tab shows three sub tabs: Project (count),
  Shared with me (count), and Public (count)

  This is somewhat misleading as glance images are shared with a project
  (formerly known as tenants) not with users. Users see this and are
  afraid other users on the same project cannot see the image since it
  is shared with "me". They view "me" as user_id level
  scoping/association vs. project/tenant level scoping/association. So
  users file tickets to go "huh". Can we fix this please?

  A couple of other possible labels for this:
  * Just "shared"
  * "Project-wide shared"
  * STDs (implying others have already been intimately involved with this 
non-public image)
  * "Cross-project Shared"

  etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591339] Re: radio buttons on ng modals should be consistent

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/328475
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=144df75bbe60806d46c733d64f1e784055b6ab71
Submitter: Jenkins
Branch:master

commit 144df75bbe60806d46c733d64f1e784055b6ab71
Author: Cindy Lu 
Date:   Fri Jun 10 12:56:02 2016 -0700

make toggle buttons look consistent on ng modals

The issue is most apparent in this scenario:
- Change to default theme
- go to ng images panel
- click on Create Image
- the selected buttons don't have a box around it

This patch makes the toggle buttons look consistent
across ng Create Instance and Create Image modals.

Change-Id: I95cfe6219a8b95f5cb314e25d0a28b6b00f22d28
Closes-Bug: #1591339


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1591339

Title:
  radio buttons on ng modals should be consistent

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The toggle button on ng create images modal is barely visible on the
  Default theme.

  Please look at attachment.

  It should be made to look like the same as on ng launch instance
  wizard - take a look at Source step - Delete Volume on Instance
  Delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1591339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291582
Committed: 
https://git.openstack.org/cgit/openstack/heat-cfntools/commit/?id=5d62b178c361953d983236ce86e7b6a61e961549
Submitter: Jenkins
Branch:master

commit 5d62b178c361953d983236ce86e7b6a61e961549
Author: Swapnil Kulkarni (coolsvap) 
Date:   Fri Mar 11 13:09:24 2016 +0530

Replace deprecated LOG.warn with LOG.warning

LOG.warn is deprecated. It still used in a few places.
Updated to non-deprecated LOG.warning.

Change-Id: I6e8df0e072448fbd4077c4e5d98b2986e9855489
Closes-Bug:#1508442


** Changed in: heat-cfntools
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1]. It is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
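
  The change itself is mechanical; a minimal stand-alone example of the
  spelling being swapped:

      import logging

      LOG = logging.getLogger(__name__)

      LOG.warn("disk space is running low")     # deprecated alias
      LOG.warning("disk space is running low")  # preferred spelling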

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592017] Re: ML2 uses global MTU for encapsulation calculation

2016-06-13 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592017

Title:
  ML2 uses global MTU for encapsulation calculation

Status in neutron:
  Opinion

Bug description:
  My goal is to achieve an MTU of 1500 for both Vlan and Vxlan based
  tenant networks.

  So I set the path_mtu to 1550, while leaving global_physnet_mtu at the
  default value of 1500.

  However, the MTU for my Vxlan networks is still calculated as 1450,
  because the underlying MTU is calculated as min(1500,1550)=1500 and
  then the encapsulation overhead is subtracted from that.

  IMHO the correct calculation would be to subtract the encapsulation
  overhead only from the path_mtu if it is specified and possibly take
  the minimum of that and global_physnet_mtu.

  If I set global_physnet_mtu to 1550, my Vxlan networks will be fine,
  but then the Vlan networks will get MTU 1550, which I also want to
  avoid.
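
  A worked example of the two calculations (the current behaviour versus what
  is proposed above), assuming the usual 50-byte VXLAN-over-IPv4 overhead:

      global_physnet_mtu = 1500
      path_mtu = 1550
      vxlan_overhead = 50

      current = min(global_physnet_mtu, path_mtu) - vxlan_overhead    # 1450
      proposed = min(path_mtu - vxlan_overhead, global_physnet_mtu)   # 1500
      print(current, proposed)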

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592150] [NEW] Having multiple values in a yumrepos key results in incorrectly formatted yum repo config

2016-06-13 Thread Jeff Wang
Public bug reported:

According to the code to handle yumrepos, cloud-init "Can handle 'lists'
in certain cases".

When I specify a list in my cloud-config, it is incorrectly formatted.
It is surrounded by '''triple single-quotes''' when rendered onto the
filesystem, causing the resulting configuration to be invalid.

I am using cloud-init from Centos7, from the amazon ami marketplace:

$ rpm -qi cloud-init
Name: cloud-init
Version : 0.7.5
Release : 10.el7.centos.1
Architecture: x86_64
Install Date: Tue 29 Mar 2016 08:43:31 PM UTC
Group   : System Environment/Base
Size: 1432841
License : GPLv3
Signature   : RSA/SHA256, Wed 10 Sep 2014 12:33:32 PM UTC, Key ID 
24c6a8a7f4a80eb5
Source RPM  : cloud-init-0.7.5-10.el7.centos.1.src.rpm
Build Date  : Wed 10 Sep 2014 11:05:45 AM UTC
Build Host  : worker1.bsys.centos.org
Relocations : (not relocatable)
Packager: CentOS BuildSystem 
Vendor  : CentOS
URL : http://launchpad.net/cloud-init
Summary : Cloud instance init scripts
Description :
Cloud-init is a set of init scripts for cloud instances.  Cloud instances
need special scripts to run during initialization to retrieve and install
ssh keys and to let the user run various scripts.

Example configuration:

yum_repos:
    saltstack:
        name: SaltStack repo for Red Hat Enterprise Linux $releasever
        baseurl: https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
        enabled: true
        gpgcheck: true
        gpgkey:
            - https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub
            - https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/base/RPM-GPG-KEY-CentOS-7

Result of Example configuration:

# Created by cloud-init on Mon, 13 Jun 2016 20:05:01 +
[saltstack]
gpgcheck = 1
gpgkey = 
'''https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub

https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/base/RPM-GPG-KEY-CentOS-7'''
enabled = 1
baseurl = https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
name = SaltStack repo for Red Hat Enterprise Linux $releasever

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: 0.7.5 centos7 list rhel rpm yum yumrepos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1592150

Title:
  Having multiple values in a yumrepos key results in incorrectly
  formatted yum repo config

Status in cloud-init:
  New

Bug description:
  According to the code to handle yumrepos, cloud-init "Can handle
  'lists' in certain cases".

  When I specify a list in my cloud-config, it is incorrectly formatted.
  It is surrounded by '''triple single-quotes''' when rendered onto the
  filesystem, causing the resulting configuration to be invalid.

  I am using cloud-init from Centos7, from the amazon ami marketplace:

  $ rpm -qi cloud-init
  Name: cloud-init
  Version : 0.7.5
  Release : 10.el7.centos.1
  Architecture: x86_64
  Install Date: Tue 29 Mar 2016 08:43:31 PM UTC
  Group   : System Environment/Base
  Size: 1432841
  License : GPLv3
  Signature   : RSA/SHA256, Wed 10 Sep 2014 12:33:32 PM UTC, Key ID 
24c6a8a7f4a80eb5
  Source RPM  : cloud-init-0.7.5-10.el7.centos.1.src.rpm
  Build Date  : Wed 10 Sep 2014 11:05:45 AM UTC
  Build Host  : worker1.bsys.centos.org
  Relocations : (not relocatable)
  Packager: CentOS BuildSystem 
  Vendor  : CentOS
  URL : http://launchpad.net/cloud-init
  Summary : Cloud instance init scripts
  Description :
  Cloud-init is a set of init scripts for cloud instances.  Cloud instances
  need special scripts to run during initialization to retrieve and install
  ssh keys and to let the user run various scripts.

  Example configuration:

  yum_repos:
      saltstack:
          name: SaltStack repo for Red Hat Enterprise Linux $releasever
          baseurl: https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
          enabled: true
          gpgcheck: true
          gpgkey:
              - https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub
              - https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/base/RPM-GPG-KEY-CentOS-7

  Result of Example configuration:

  # Created by cloud-init on Mon, 13 Jun 2016 20:05:01 +
  [saltstack]
  gpgcheck = 1
  gpgkey = 
'''https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub
  
https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/base/RPM-GPG-KEY-CentOS-7'''
  enabled = 1
  baseurl = https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
  name = SaltStack repo for Red Hat Enterprise Linux $releasever
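
  A minimal sketch of the rendering idea (a hypothetical helper, not
  cloud-init's actual code): list values need to be joined onto indented
  continuation lines, which yum accepts for multi-value options, instead of
  being passed through Python's repr:

      def render_option(key, value):
          # Hypothetical helper: render one option for a yum .repo file.
          if isinstance(value, (list, tuple)):
              value = "\n    ".join(str(v) for v in value)
          elif isinstance(value, bool):
              value = 1 if value else 0
          return "%s=%s" % (key, value)

      print(render_option("gpgkey", [
          "https://example.com/SALTSTACK-GPG-KEY.pub",
          "https://example.com/RPM-GPG-KEY-CentOS-7",
      ]))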

To manage notifications about this bug go 

[Yahoo-eng-team] [Bug 1449062] Re: qemu-img calls need to be restricted by ulimit (CVE-2015-5162)

2016-06-13 Thread Corey Bryant
** Also affects: python-oslo.concurrency (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: python-oslo.concurrency (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: python-oslo.concurrency (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: python-oslo.concurrency (Ubuntu Yakkety)
   Status: New => Fix Released

** Changed in: python-oslo.concurrency (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: python-oslo.concurrency (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: python-oslo.concurrency (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: python-oslo.concurrency (Ubuntu Xenial)
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449062

Title:
  qemu-img calls need to be restricted by ulimit (CVE-2015-5162)

Status in Cinder:
  New
Status in Glance:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Confirmed
Status in python-oslo.concurrency package in Ubuntu:
  Fix Released
Status in python-oslo.concurrency source package in Xenial:
  Triaged
Status in python-oslo.concurrency source package in Yakkety:
  Fix Released

Bug description:
  Reported via private E-mail from Richard W.M. Jones.

  Turns out the qemu image parser is not hardened against malicious input
  and can be abused to allocate an arbitrary amount of memory and/or
  dump a lot of information when used with "--output=json".

  The solution seems to be: limit qemu-img resources using ulimit.

  Example of abuse:

  -- afl1.img --

  $ /usr/bin/time qemu-img info afl1.img
  image: afl1.img
  [...]
  0.13user 0.19system 0:00.36elapsed 92%CPU (0avgtext+0avgdata 
642416maxresident)k
  0inputs+0outputs (0major+156927minor)pagefaults 0swaps

  The original image is 516 bytes, but it causes qemu-img to allocate
  640 MB.

  -- afl2.img --

  $ qemu-img info --output=json afl2.img | wc -l
  589843

  This is a 200K image which causes qemu-img info to output half a
  million lines of JSON (14 MB of JSON).

  Glance runs the --output=json variant of the command.

  -- afl3.img --

  $ /usr/bin/time qemu-img info afl3.img
  image: afl3.img
  [...]
  0.09user 0.35system 0:00.47elapsed 94%CPU (0avgtext+0avgdata 
1262388maxresident)k
  0inputs+0outputs (0major+311994minor)pagefaults 0swaps

  qemu-img allocates 1.3 GB (actually, a bit more if you play with
  ulimit -v).  It appears that you could change it to allocate
  arbitrarily large amounts of RAM.
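
  A stand-alone sketch of the mitigation idea using only the standard library
  (the bug is also tracked against oslo.concurrency, where the shared
  process-spawning helpers live); the limit values here are illustrative:

      import resource
      import subprocess

      def limited_qemu_img_info(path, mem_bytes=1 << 30, cpu_seconds=8):
          def set_limits():
              # Runs in the child process just before qemu-img is exec'd.
              resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
              resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

          return subprocess.check_output(
              ["qemu-img", "info", "--output=json", path],
              preexec_fn=set_limits)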

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1449062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591916] Re: Named arguments should be used for assertValidUserResponse() in unittest case

2016-06-13 Thread David Stanek
I'm marking this as opinion because I think it's better the way that it
currently is, but I'd be open to changing my mind with a good argument.
To me there is minimal benefit to making a change like this and it's
definitely not a bug.

** Changed in: keystone
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1591916

Title:
  Named arguments should be used for assertValidUserResponse() in
  unittest case

Status in OpenStack Identity (keystone):
  Opinion

Bug description:
  In the keystone unit test file test_v3_identity.py, two helper functions
  are used in the IdentityTestCase: assertValidUserListResponse() and
  assertValidUserResponse().

  Both of these functions eventually call RestfulTestCase.assertValidUser(),
  but the two helpers are invoked in different styles.

  self.assertValidUserListResponse(r, ref=self.user, resource_url=resource_url)
  self.assertValidUserResponse(r, user)

  assertValidUserListResponse() is called with the named argument
  "ref=self.user", but assertValidUserResponse() is not.

  Although this won't cause any problem, it would be better to unify the
  call style.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1591916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591878] Re: Unit test not using neutron-lib constants

2016-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/328864
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=da9fdf3b20171fd1e06397ad928e2ee5349b9909
Submitter: Jenkins
Branch:master

commit da9fdf3b20171fd1e06397ad928e2ee5349b9909
Author: Gary Kotton 
Date:   Sun Jun 12 23:38:55 2016 -0700

Use neutron-lib constants

One of the unit tests did not make use of the neutron-lib contants.

TrivialFix
Closes-bug: #1591878

Change-Id: Id25b3b2091b57361692a65f53fea9167752e7572


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591878

Title:
  Unit test not using neutron-lib constants

Status in neutron:
  Fix Released

Bug description:
  {0}
  
neutron.tests.unit.tests.common.test_net_helpers.PortAllocationTestCase.test_get_free_namespace_port
  [0.816689s] ... ok

  Captured stderr:
  
  neutron/tests/unit/tests/common/test_net_helpers.py:64: 
DeprecationWarning: PROTO_NAME_TCP in version 'mitaka' and will be removed in 
version 'newton': moved to neutron_lib.constants
n_const.PROTO_NAME_TCP)
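
  Schematically, the kind of one-line move involved (module paths as shown in
  the warning above):

      # Deprecated location, emits the DeprecationWarning quoted above:
      from neutron.common import constants as n_const
      proto = n_const.PROTO_NAME_TCP

      # neutron-lib location the warning points to:
      from neutron_lib import constants as lib_constants
      proto = lib_constants.PROTO_NAME_TCP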

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1575225] Re: Neutron only permits IPv6 MLDv1 not v2

2016-06-13 Thread Tristan Cacqueray
Ok my bad, then the OSSA task needs to be removed. Thanks!

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1575225

Title:
  Neutron only permits IPv6 MLDv1 not v2

Status in neutron:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  IPv6 Multicast Listener Discovery (MLD) v2 [1] is used on recent
  versions of Linux, but currently Neutron only permits MLDv1 in
  ICMPV6_ALLOWED_TYPES, so duplicate address detection (DAD) does not
  actually detect duplicate addresses should Neutron actually enforce
  ICMPv6 source addresses (bug/1502933). While Neutron should not assign
  duplicate addresses, there are cases where duplicate addresses are
  possible: on provider networks between instances and external devices,
  and for user-assigned addresses when using allowed address pairs.

  Here is a dump showing duplicate address detection on a recent Linux
  kernel:

  $ uname -r
  4.4.0-0.bpo.1-amd64
  $ sudo ip link add veth0 type veth peer name veth1
  $ sudo ip link set veth1 up
  $ sudo tcpdump -npel -i veth1 &
  [1] 15528
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on veth1, link-type EN10MB (Ethernet), capture size 262144 bytes
  $ sudo ip link set veth0 up
  $

  09:47:38.853762 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 
group record(s), length 28
  09:47:38.853774 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 
group record(s), length 28
  09:47:39.113772 b2:29:3a:34:bc:eb > 33:33:ff:34:bc:eb, ethertype IPv6 
(0x86dd), length 78: :: > ff02::1:ff34:bceb: ICMP6, neighbor solicitation, who 
has fe80::b029:3aff:fe34:bceb, length 24
  09:47:39.141766 5e:9b:3c:4f:a3:e0 > 33:33:ff:4f:a3:e0, ethertype IPv6 
(0x86dd), length 78: :: > ff02::1:ff4f:a3e0: ICMP6, neighbor solicitation, who 
has fe80::5c9b:3cff:fe4f:a3e0, length 24
  09:47:39.505764 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 
group record(s), length 28
  09:47:39.717759 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 
group record(s), length 28
  09:47:40.113807 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: fe80::b029:3aff:fe34:bceb > ff02::16: HBH ICMP6, multicast 
listener report v2, 1 group record(s), length 28
  09:47:40.113827 b2:29:3a:34:bc:eb > 33:33:00:00:00:02, ethertype IPv6 
(0x86dd), length 70: fe80::b029:3aff:fe34:bceb > ff02::2: ICMP6, router 
solicitation, length 16
  09:47:40.121756 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: fe80::b029:3aff:fe34:bceb > ff02::16: HBH ICMP6, multicast 
listener report v2, 1 group record(s), length 28
  09:47:40.141811 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: fe80::5c9b:3cff:fe4f:a3e0 > ff02::16: HBH ICMP6, multicast 
listener report v2, 1 group record(s), length 28
  09:47:40.141836 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:02, ethertype IPv6 
(0x86dd), length 70: fe80::5c9b:3cff:fe4f:a3e0 > ff02::2: ICMP6, router 
solicitation, length 16
  09:47:40.149763 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 
(0x86dd), length 90: fe80::5c9b:3cff:fe4f:a3e0 > ff02::16: HBH ICMP6, multicast 
listener report v2, 1 group record(s), length 28


  1. https://www.ietf.org/rfc/rfc3810.txt
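
  For reference, a sketch of the gap using the standard MLD message type
  numbers from RFC 2710 / RFC 3810:

      ICMPV6_MLD_QUERY = 130       # Multicast Listener Query
      ICMPV6_MLDV1_REPORT = 131    # MLDv1 report, currently allowed
      ICMPV6_MLDV1_DONE = 132      # MLDv1 done
      ICMPV6_MLDV2_REPORT = 143    # MLDv2 report, missing from the allowed types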

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1575225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552394] Re: auth_url contains wrong configuration for metadata_agent.ini and other neutron config

2016-06-13 Thread Bjoern Teipel
Back ports to liberty not necessary anymore due to the v2 fallback
issues we found at https://review.openstack.org/#/c/327960/

Setting liberty to invalid state.

** Changed in: openstack-ansible/liberty
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552394

Title:
  auth_url contains wrong configuration for  metadata_agent.ini and
  other neutron config

Status in neutron:
  Invalid
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible liberty series:
  Invalid
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  The current configuration

  auth_url = {{ keystone_service_adminuri }}

  will lead to a incomplete URL like  http://1.2.3.4:35357 and will
  cause the neutron-metadata-agent to make bad token requests like :

  POST /tokens HTTP/1.1
  Host: 1.2.3.4:35357
  Content-Length: 91
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-neutronclient

  and the response is

  HTTP/1.1 404 Not Found
  Date: Tue, 01 Mar 2016 22:14:58 GMT
  Server: Apache
  Vary: X-Auth-Token
  Content-Length: 93
  Content-Type: application/json

  and the agent will stop responding with

  2016-02-26 13:34:46.478 33371 INFO eventlet.wsgi.server [-] (33371) accepted 
''
  2016-02-26 13:34:46.486 33371 ERROR neutron.agent.metadata.agent [-] 
Unexpected error.
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent Traceback 
(most recent call last):
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
109, in __call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
instance_id, tenant_id = self._get_instance_and_tenant_id(req)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
204, in _get_instance_and_tenant_id
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
self._get_ports(remote_address, network_id, router_id)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
197, in _get_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_for_remote_address(remote_address, networks)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 101, in 
__call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_from_cache(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 79, in 
_get_from_cache
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent item = 
self.func(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
166, in _get_ports_for_remote_address
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
ip_address=remote_address)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
135, in _get_ports_from_server
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_using_client(filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
177, in _get_ports_using_client
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
client.list_ports(**filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ret = 
self.function(instance, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
534, in list_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
**_params)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
307, in list
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent for r in 
self._pagination(collection, path, **params):
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
320, 

[Yahoo-eng-team] [Bug 1577558] Re: [OSSA 2016-008] v2.0 fernet tokens audit ids are inconsistent (CVE-2016-4911)

2016-06-13 Thread Tristan Cacqueray
** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1577558

Title:
  [OSSA 2016-008] v2.0 fernet tokens audit ids are inconsistent
  (CVE-2016-4911)

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  If you set the token provider to token.provider = fernet, get an
  unscoped token from v2.0, then rescope that token to a project, you'll
  notice the audit ids don't match. I've recreated this issue in a test
  [0].

  What should happen is that the unscoped token response will have a
  list of audit_ids containing a single audit_id. The project-scoped
  token response obtained from the unscoped token will also have a list
  of audit_ids, and the original audit_id from the unscoped token will
  appear in that list.

  Right now this behavior doesn't exist with the fernet provider on
  v2.0.

  
  [0] https://review.openstack.org/#/c/311816/1
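
  An illustration of the expected relationship (the audit id values are made
  up):

      unscoped = {"audit_ids": ["VcxU_N2kQY-zZFCoDyc3Jw"]}
      rescoped = {"audit_ids": ["2LhiXKPJTj2_EOZ-qWyWGA",    # its own audit id
                                "VcxU_N2kQY-zZFCoDyc3Jw"]}   # carried over

      # The unscoped token's audit id should reappear in the rescoped token:
      assert unscoped["audit_ids"][0] in rescoped["audit_ids"]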

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1577558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592028] [NEW] [RFE] Support security-group-rule creation with address-groups

2016-06-13 Thread Roey Chen
Public bug reported:

Currently, security-group rules can be created with the remote-ip-prefix
attribute to specify an origin (if ingress) or destination (if egress)
address filter. This RFE suggests using address-groups (groups of IP CIDR
blocks, as defined for FWaaS v2) to support multiple remote addresses in
one security-group rule.

[Problem description]
An OpenStack cloud may require connectivity between instances and external
services which are not provisioned by OpenStack, and each service may also
have multiple endpoints. In order for tenant instances to be able to access
these external hosts (and only them), it is required to define a
security-group with rules that allow traffic to these specific services, one
rule per service endpoint (assuming endpoint addresses aren't contiguous).
This process can easily become cumbersome - for each new service endpoint it
is required to create a specific rule for each tenant.

To overcome this usability issue, it is suggested that Neutron will support an 
API to group IP CIDR blocks in an object which could be later referenced when 
creating a security-group-rule - the user will pass the AddressGroup object id 
as the ‘remote-ip-prefix’ attribute or as other new attribute.
Whenever it's required to add a service endpoint, the new IP address will be 
added to the relevant AddressGroup - as a side effect, changes will be 
reflected in the underlying security-group rules.

NOTE: For the purpose of the use-case above, the default allow-egress
rules are removed ("zero trust" model) once the default sg is created.


A possible example of use in the CLI:

$ neutron address-group-create --cidrs 1.1.1.1,2.2.2.2 "External Services"
$ neutron security-group-rule-create --direction egress --remote-address-group 


** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592028

Title:
  [RFE] Support security-group-rule creation with address-groups

Status in neutron:
  New

Bug description:
  Currently, security-group rules can be created with the remote-ip-prefix
  attribute to specify an origin (if ingress) or destination (if egress)
  address filter. This RFE suggests using address-groups (groups of IP CIDR
  blocks, as defined for FWaaS v2) to support multiple remote addresses in
  one security-group rule.

  [Problem description]
  An OpenStack cloud may require connectivity between instances and external
  services which are not provisioned by OpenStack, and each service may also
  have multiple endpoints. In order for tenant instances to be able to access
  these external hosts (and only them), it is required to define a
  security-group with rules that allow traffic to these specific services,
  one rule per service endpoint (assuming endpoint addresses aren't
  contiguous).
  This process can easily become cumbersome - for each new service endpoint
  it is required to create a specific rule for each tenant.

  To overcome this usability issue, it is suggested that Neutron will support 
an API to group IP CIDR blocks in an object which could be later referenced 
when creating a security-group-rule - the user will pass the AddressGroup 
object id as the ‘remote-ip-prefix’ attribute or as other new attribute.
  Whenever it's required to add a service endpoint, the new IP address will be 
added to the relevant AddressGroup - as a side effect, changes will be 
reflected in the underlying security-group rules.

  NOTE: For the purpose of the use-case above, the default allow-egress
  rules are removed ("zero trust" model) once the default sg is created.

  
  A possible example of use in the CLI:

  $ neutron address-group-create --cidrs 1.1.1.1,2.2.2.2 "External Services"
  $ neutron security-group-rule-create --direction egress 
--remote-address-group 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592017] [NEW] ML2 uses global MTU for encapsulation calculation

2016-06-13 Thread Dr. Jens Rosenboom
Public bug reported:

My goal is to achieve an MTU of 1500 for both Vlan and Vxlan based
tenant networks.

So I set the path_mtu to 1550, while leaving global_physnet_mtu at the
default value of 1500.

However, the MTU for my Vxlan networks is still calculated as 1450,
because the underlying MTU is calculated as min(1500,1550)=1500 and then
the encapsulation overhead is subtracted from that.

IMHO the correct calculation would be to subtract the encapsulation
overhead only from the path_mtu if it is specified and possibly take the
minimum of that and global_physnet_mtu.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592017

Title:
  ML2 uses global MTU for encapsulation calculation

Status in neutron:
  New

Bug description:
  My goal is to achieve an MTU of 1500 for both Vlan and Vxlan based
  tenant networks.

  So I set the path_mtu to 1550, while leaving global_physnet_mtu at the
  default value of 1500.

  However, the MTU for my Vxlan networks is still calculated as 1450,
  because the underlying MTU is calculated as min(1500,1550)=1500 and
  then the encapsulation overhead is subtracted from that.

  IMHO the correct calculation would be to subtract the encapsulation
  overhead only from the path_mtu if it is specified and possibly take
  the minimum of that and global_physnet_mtu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592015] [NEW] libvirt: cleanup of a volume backed instance resize leaves behind the instance directory

2016-06-13 Thread Lee Yarwood
Public bug reported:

Description
===
Attempting to clean up a volume-backed instance resize leaves behind the
instance directory and additional disk files. This seems to relate to the
following change and the additional imagebackend calls made in
_cleanup_resize:

libvirt: Fix/implement revert-resize for RBD-backed images
https://review.openstack.org/#/c/228505/


Steps to reproduce
==
* Create a volume backed instance.
* Shutdown the instance.
* resize/migrate the instance to another host.
* Review the instance directory on the source host.

Expected result
===
The instance directory on the source host is removed.

Actual result
=
The instance directory on the source host remains and is also populated with a 
previously unused `disk` file and disk.info file.

Environment
===
1. Exact version of OpenStack you are running. See the following

# git rev-parse HEAD
6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b

2. Which hypervisor did you use?
   libvirt + kvm

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   LVM/iSCSI

3. Which networking type did you use?
   N/A

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592015

Title:
  libvirt: cleanup of a volume backed instance resize leaves behind the
  instance directory

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Attempting to clean up a volume-backed instance resize leaves behind the
  instance directory and additional disk files. This seems to relate to the
  following change and the additional imagebackend calls made in
  _cleanup_resize:

  libvirt: Fix/implement revert-resize for RBD-backed images
  https://review.openstack.org/#/c/228505/

  
  Steps to reproduce
  ==
  * Create a volume backed instance.
  * Shutdown the instance.
  * resize/migrate the instance to another host.
  * Review the instance directory on the source host.

  Expected result
  ===
  The instance directory on the source host is removed.

  Actual result
  =
  The instance directory on the source host remains and is also populated with 
a previously unused `disk` file and disk.info file.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following

  # git rev-parse HEAD
  6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b

  2. Which hypervisor did you use?
 libvirt + kvm

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 LVM/iSCSI

  3. Which networking type did you use?
 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592005] [NEW] [RFE] Security-groups that blocks matched traffic

2016-06-13 Thread Roey Chen
Public bug reported:

The Neutron security-group feature allows the user to define security groups
so that only traffic matched by security-group rules is allowed.
Sometimes it's simpler to define these rules as blocking rules which match
traffic that should not be allowed (e.g. allow all traffic except ssh).

Supporting both 'deny' and 'allow' rules combined in one security-group may
impair the simplicity of the security-group API; therefore, we'd like to
consider the option of allowing a new type of security-group, one in which
the implicit action of all rules is 'deny'.
This group should be constructed like any other security-group (by creating
rules and assigning it to ports).
A Neutron port could then be associated with one or more of both
security-group types.

For each port, 'deny' rules (when the port is associated with one or more
"deny" security groups) will always be matched before 'allow' rules.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592005

Title:
  [RFE] Security-groups that blocks matched traffic

Status in neutron:
  New

Bug description:
  The Neutron security-group feature allows the user to define security
  groups so that only traffic matched by security-group rules is allowed.
  Sometimes it's simpler to define these rules as blocking rules which match
  traffic that should not be allowed (e.g. allow all traffic except ssh).

  Supporting both 'deny' and 'allow' rules combined in one security-group may
  impair the simplicity of the security-group API; therefore, we'd like to
  consider the option of allowing a new type of security-group, one in which
  the implicit action of all rules is 'deny'.
  This group should be constructed like any other security-group (by creating
  rules and assigning it to ports).
  A Neutron port could then be associated with one or more of both
  security-group types.

  For each port, 'deny' rules (when the port is associated with one or more
  "deny" security groups) will always be matched before 'allow' rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592000] [NEW] [RFE] Admin customized default security-group

2016-06-13 Thread Roey Chen
Public bug reported:

Allow the admin to decide which rules should be added (by default) to
the tenant default security-group once created.

At the moment, each tenant default security-group is created with a specific
set of rules: allow all egress and allow ingress from the default sg.
However, this is not the desired behavior for all deployments, as some would
want to practice a “zero trust” model where all traffic is blocked unless
explicitly decided otherwise, or on the other hand, allow all
inbound+outbound traffic.
It's worth noting that in some use cases the default behavior can be
expressed with very specific sets of rules, which only the admin has the
knowledge to define (e.g. allow connection to Active Directory endpoints);
in such cases the impact on usability is even worse, as it requires the
admin to create rules on every tenant default security-group.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592000

Title:
  [RFE] Admin customized default security-group

Status in neutron:
  New

Bug description:
  Allow the admin to decide which rules should be added (by default) to
  the tenant default security-group once created.

  At the moment, each tenant default security-group is created with a
  specific set of rules: allow all egress and allow ingress from the default
  sg.
  However, this is not the desired behavior for all deployments, as some
  would want to practice a “zero trust” model where all traffic is blocked
  unless explicitly decided otherwise, or on the other hand, allow all
  inbound+outbound traffic.
  It's worth noting that in some use cases the default behavior can be
  expressed with very specific sets of rules, which only the admin has the
  knowledge to define (e.g. allow connection to Active Directory endpoints);
  in such cases the impact on usability is even worse, as it requires the
  admin to create rules on every tenant default security-group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591996] [NEW] Serial console output is not properly handled

2016-06-13 Thread Lucian Petrut
Public bug reported:

The compute API expects the serial console output to be a string, attempting to 
use a regex to remove some characters.
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/api/openstack/compute/console_output.py#L70

This will fail if the compute node is using Python 3, as we are passing a byte 
array.
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/compute/manager.py#L4283-L4297

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591996

Title:
  Serial console output is not properly handled

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The compute API expects the serial console output to be a string, attempting 
to use a regex to remove some characters.
  
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/api/openstack/compute/console_output.py#L70

  This will fail if the compute node is using Python 3, as we are passing a 
byte array.
  
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/compute/manager.py#L4283-L4297
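
  A stand-alone illustration of the type mismatch (the control-character
  pattern is illustrative, not necessarily the exact one the API uses):

      import re

      output = b"serial console line\x07\r\n"   # bytes, as handed back on Python 3

      try:
          re.sub('[\x00-\x08\x0b\x0c\x0e-\x1f]', '', output)
      except TypeError as exc:
          print(exc)   # cannot use a string pattern on a bytes-like object

      # Decoding first sidesteps the failure:
      print(re.sub('[\x00-\x08\x0b\x0c\x0e-\x1f]', '',
                   output.decode('utf-8', 'replace')))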

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591993] [NEW] Add IPsec traffic metering for VPNaaS

2016-06-13 Thread Jin Jing Lin
Public bug reported:


Problem Description:
IPsec traffic metering is missing in the current VPNaaS. This information is 
useful for billing and monitoring purposes, and it should be a common requirement 
regardless of whether the vpn+l3-agent or the vpn+ovn solution is used.

Proposed change:
Add an IPsec traffic metering function to the VPN agent, which will manage the 
metering chains/rules for each VPN connection and periodically gather the data 
and send notifications to ceilometer. This new metering function should support 
any VPNaaS solution: the current neutron VPNaaS and also VPN+OVN.
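
A rough, illustrative sketch of the periodic gathering described above; the 
chain naming, the counter parsing and the notify() callable are assumptions for 
illustration, not the proposed implementation:

import subprocess
import time

def poll_ipsec_counters(connection_id, notify, interval=60):
    # Hypothetical per-connection iptables chain holding the metering rules.
    chain = 'vpn-metering-%s' % connection_id[:8]
    while True:
        out = subprocess.check_output(
            ['iptables', '-L', chain, '-v', '-x', '-n']).decode()
        total_bytes = 0
        for line in out.splitlines()[2:]:      # skip the chain and header lines
            fields = line.split()
            if len(fields) > 1:
                total_bytes += int(fields[1])  # second column is the byte count
        # notify() stands in for whatever sends the sample to ceilometer.
        notify({'connection_id': connection_id, 'bytes': total_bytes})
        time.sleep(interval)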

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591993

Title:
  Add IPsec traffic metering for VPNaaS

Status in neutron:
  New

Bug description:
  
  Problem Description:
  IPsec traffic metering is missing in the current VPNaaS. This information is 
useful for billing and monitoring purposes, and it should be a common 
requirement regardless of whether the vpn+l3-agent or the vpn+ovn solution is 
used.

  Proposed change:
  Add an IPsec traffic metering function to the VPN agent, which will manage 
the metering chains/rules for each VPN connection and periodically gather the 
data and send notifications to ceilometer. This new metering function should 
support any VPNaaS solution: the current neutron VPNaaS and also VPN+OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591981] [NEW] Emulated pagination helper returns marker item when page_reverse is used

2016-06-13 Thread Ihar Hrachyshka
Public bug reported:

When paginating for a plugin that does not support native pagination,
and using page_reverse with a marker, the marker is returned as part of
the result. This is not in line with native behaviour, as well as with
behaviour of forward directed pagination.
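
A pure-python illustration of the expected semantics (this is not neutron's 
helper): the marker item itself must be excluded from the page in both 
directions.

def paginate(items, limit, marker=None, page_reverse=False):
    # 'items' is assumed to be sorted ascending by the 'id' key.
    if marker is None:
        window = items
    elif page_reverse:
        window = [i for i in items if i['id'] < marker]   # strictly before marker
    else:
        window = [i for i in items if i['id'] > marker]   # strictly after marker
    return window[-limit:] if page_reverse else window[:limit]

For example, paginate(rows, limit=2, marker='c', page_reverse=True) must not 
contain the row whose id is 'c', which is exactly what the emulated helper 
currently gets wrong.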

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591981

Title:
  Emulated pagination helper returns marker item when page_reverse is
  used

Status in neutron:
  In Progress

Bug description:
  When paginating for a plugin that does not support native pagination,
  and using page_reverse with a marker, the marker is returned as part
  of the result. This is not in line with native behaviour, as well as
  with behaviour of forward directed pagination.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590805] Re: Revoking "admin" role from a group invalidates user token

2016-06-13 Thread Raildo Mascena de Sousa Filho
According to your steps, you granted the role to a group and, as you said, the 
domain admin is not part of that group, so the behavior is correct. If you want 
the domain admin to keep this role, you should grant the role to the user 
directly and not just to the group.
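
A hedged example of that suggestion using python-keystoneclient; the session 
setup and the user/domain names are illustrative:

from keystoneclient.v3 import client

ks = client.Client(session=sess)       # 'sess' is an authenticated admin session
role = ks.roles.find(name='admin')
domain = ks.domains.find(name='Default')
user = ks.users.find(name='domain_admin')

# Grant the role to the user directly, in addition to (or instead of) the
# group-level grant, so revoking the group grant does not invalidate the
# user's tokens.
ks.roles.grant(role, user=user, domain=domain)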


** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590805

Title:
  Revoking "admin" role from a group invalidates user token

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Steps to reproduce

  1. Login as domain admin
  2. Create a new group and grant "admin" role to it.
  3. Group will be empty with no users added to it.(Domain admin won't be part 
of this group)
  4. Now revoke "admin" role from this group.
  5. Token for domain admin will be invalidated and he/she has to login again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591971] [NEW] Glance task creation fails when setting a local work_dir

2016-06-13 Thread yuyafei
Public bug reported:

The version is mitaka.

The glance-api.conf setting is:
[task]
work_dir = /home/work/
[taskflow_executor]
conversion_format = raw

Then run the cli:
glance  task-create --type import --input 
'{"import_from":"http://10.43.177.17/cirros-0.3.2-x86_64-disk.img","import_from_format":
 
"","image_properties":{"disk_format":"qcow2","container_format":"bare","name":"test1"}}'

The log is :
2016-06-14 04:08:29.032 DEBUG oslo_concurrency.processutils [-] CMD "qemu-img 
info --output=json file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d" 
returned: 1 in 0.025s from (pid=5460) execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:374
2016-06-14 04:08:29.033 DEBUG oslo_concurrency.processutils [-] None
command: u'qemu-img info --output=json 
file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d'
exit code: 1
stdout: u''
stderr: u"qemu-img: Could not open 
'file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d': Could not open 
'file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d': No such file or 
directory\n" from (pid=5460) execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:413
2016-06-14 04:08:29.034 DEBUG oslo_concurrency.processutils [-] u'qemu-img info 
--output=json file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d' failed. 
Not Retrying. from (pid=5460) execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:422
Command: qemu-img info --output=json 
file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d
Exit code: 1
Stdout: u''
Stderr: u"qemu-img: Could not open 
'file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d': Could not open 
'file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d': No such file or 
directory\n"
2016-06-14 04:08:29.072 WARNING glance.async.taskflow_executor [-] Task 
'import-ImportToFS-42684807-86db-4ff5-a4a9-abf3b1998b63' 
(5ff9cf63-f257-48d2-9cc9-cfeffd905854) transitioned into state 'FAILURE' from 
state 'RUNNING'
4 predecessors (most recent first):
  Flow 'import'
  |__Atom 'import-CreateImage-42684807-86db-4ff5-a4a9-abf3b1998b63' 
{'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': 
'90ff2129-0079-487e-a7ec-79ef23bd1c0d'}
 |__Atom 'import_retry' {'intention': 'EXECUTE', 'state': 'SUCCESS', 
'requires': {}, 'provides': [(None, {})]}
|__Flow 'import'
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor Traceback (most 
recent call last):
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor   File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 82, in _execute_task
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor result = 
task.execute(**arguments)
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor   File 
"/opt/stack/glance/glance/async/flows/base_import.py", line 175, in execute
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor metadata = 
json.loads(stdout)
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor   File 
"/usr/lib64/python2.7/json/__init__.py", line 338, in loads
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor return 
_default_decoder.decode(s)
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor   File 
"/usr/lib64/python2.7/json/decoder.py", line 365, in decode
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor obj, end = 
self.raw_decode(s, idx=_w(s, 0).end())
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor   File 
"/usr/lib64/python2.7/json/decoder.py", line 383, in raw_decode
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor raise 
ValueError("No JSON object could be decoded")
2016-06-14 04:08:29.072 TRACE glance.async.taskflow_executor ValueError: No 
JSON object could be decoded
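
A self-contained sketch (not glance's code) of why this ends in "No JSON object 
could be decoded": qemu-img cannot open the file under work_dir, stdout comes 
back empty, and feeding an empty string to json.loads() raises ValueError. 
Checking the exit status and output first yields a much clearer failure:

import json
import subprocess

def qemu_img_info(path):
    proc = subprocess.run(['qemu-img', 'info', '--output=json', path],
                          capture_output=True, text=True)
    if proc.returncode != 0 or not proc.stdout.strip():
        # Surface the real problem instead of a JSON decode error.
        raise RuntimeError('qemu-img info failed for %s: %s'
                           % (path, proc.stderr.strip()))
    return json.loads(proc.stdout)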

** Affects: glance
 Importance: Undecided
 Assignee: yuyafei (yu-yafei)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => yuyafei (yu-yafei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1591971

Title:
  Glance task creation fails when setting a local work_dir

Status in Glance:
  New

Bug description:
  The version is mitaka.

  The glance-api.conf setting is:
  [task]
  work_dir = /home/work/
  [taskflow_executor]
  conversion_format = raw

  Then run the cli:
  glance  task-create --type import --input 
'{"import_from":"http://10.43.177.17/cirros-0.3.2-x86_64-disk.img","import_from_format":
 
"","image_properties":{"disk_format":"qcow2","container_format":"bare","name":"test1"}}'

  The log is :
  2016-06-14 04:08:29.032 DEBUG oslo_concurrency.processutils [-] CMD "qemu-img 
info --output=json file:///home/work/90ff2129-0079-487e-a7ec-79ef23bd1c0d" 
returned: 1 in 0.025s from (pid=5460) execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:374
  2016-06-14 04:08:29.033 DEBUG oslo_concurrency.processutils [-] None
  

[Yahoo-eng-team] [Bug 1591966] [NEW] Running Unit Tests got 5 errors in Aarch64

2016-06-13 Thread Kevin Zhao
Public bug reported:

Description
===
When using nova to create an instance on AArch64, the disk.config device is a 
'cdrom' with the 'scsi' bus type. After instance creation, logging into the 
instance shows no cdrom device.

Steps to reproduce
==
1.Using devstack to deploy openstack. Using default local.conf.

2.Enter into the nova directory
$ cd /opt/stack/nova
$ tox -e py27

Expected result
===
After running the tests, return:

==
Totals
==
Ran: 13479 tests in 662. sec.
 - Passed: 13423
 - Skipped: 56
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 4490.9027 sec.

==
Worker Balance
==
 - Worker 0 (1685 tests) => 0:09:26.657728
 - Worker 1 (1687 tests) => 0:09:37.397435
 - Worker 2 (1684 tests) => 0:09:32.077593
 - Worker 3 (1684 tests) => 0:09:26.830573
 - Worker 4 (1684 tests) => 0:09:23.826421
 - Worker 5 (1685 tests) => 0:09:32.041672
 - Worker 6 (1685 tests) => 0:09:15.550618
 - Worker 7 (1685 tests) => 0:09:23.697743
__ summary 
__

  py27: commands succeeded
  congratulations :)

Actual result
=
Got 5 errors while running the tests.
The test cases are:
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_cdrom_configdrive
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_simple_configdrive
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_xml_disk_bus_ide
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_xml_disk_bus_ide_and_virtio
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_with_configdrive

The detailed information is in the attached log file.

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   Nova development, commit code: 6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b

2. Which hypervisor did you use?
Libvirt+KVM
$ kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.3.1

3. Which storage type did you use?
   In the host file system, all in one physical machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 61M 1.6G 4% /run
/dev/sda2 917G 41G 830G 5% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda1 511M 888K 511M 1% /boot/efi
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 1.6G 0 1.6G 0% /run/user/1002
tmpfs 1.6G 0 1.6G 0% /run/user/1000
tmpfs 1.6G 0 1.6G 0% /run/user/0

4. Which networking type did you use?
   nova-network

5. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

Detailed log info is in the attachment.
The guest XML is also included in the log.
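
A hedged sketch of how the hard-coded expectations in the failing tests could 
be made architecture-aware; the bus mapping below is an assumption for 
illustration, not nova's actual logic:

import platform

def expected_cdrom_bus():
    # AArch64 guests have no IDE controller, so a scsi cdrom is used there,
    # while x86 guests default to ide.
    return 'scsi' if platform.machine() == 'aarch64' else 'ide'

The tests could then assert against expected_cdrom_bus() instead of a literal 
'ide'.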

** Affects: nova
 Importance: Undecided
 Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591966

Title:
  Running Unit Tests got 5 errors in Aarch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When using nova to create an instance on AArch64, the disk.config device is a 
'cdrom' with the 'scsi' bus type. After instance creation, logging into the 
instance shows no cdrom device.

  Steps to reproduce
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Enter into the nova directory
  $ cd /opt/stack/nova
  $ tox -e py27

  Expected result
  ===
  After running the tests, return:

  ==
  Totals
  ==
  Ran: 13479 tests in 662. sec.
   - Passed: 13423
   - Skipped: 56
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 0
  Sum of execute time for each test: 4490.9027 sec.

  ==
  Worker Balance
  ==
   - Worker 0 (1685 tests) => 0:09:26.657728
   - Worker 1 (1687 tests) => 0:09:37.397435
   - Worker 2 (1684 tests) => 0:09:32.077593
   - Worker 3 (1684 tests) => 0:09:26.830573
   - Worker 4 (1684 tests) => 0:09:23.826421
   - Worker 5 (1685 tests) => 0:09:32.041672
   - Worker 6 (1685 tests) => 0:09:15.550618
   - Worker 7 (1685 tests) => 0:09:23.697743
  __ summary 
__

py27: commands succeeded
congratulations :)

  Actual result
  =
  Got 5 errors while running the tests.
  The test cases are:
  

[Yahoo-eng-team] [Bug 1591954] [NEW] Quota update error occurred when user create/update project in no nova case.

2016-06-13 Thread Kenji Ishii
Public bug reported:

At the moment, when an operator creates a new project or updates a project, the 
operator can set quotas for services such as nova, neutron and cinder.
Whether the operator can set quotas depends on the "compute" service (if the 
compute service is disabled, the quota tab is hidden).

In this case, when the operator creates or updates a project, a quota-update 
failure message still appears even though the operator could not see the quota 
tab. When the quota tab is hidden, quota processing should not be performed at 
all.
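
A minimal sketch of the guard this implies; the surrounding workflow step is 
illustrative and not horizon's actual code:

from openstack_dashboard.api import base, nova

def update_project_quotas(request, project_id, data):
    # Mirror the check that hides the quota tab: only touch nova quotas
    # when the compute service is actually enabled.
    if base.is_service_enabled(request, 'compute'):
        nova.tenant_quota_update(request, project_id,
                                 instances=data['instances'],
                                 cores=data['cores'],
                                 ram=data['ram'])
    # The neutron and cinder quota updates would be guarded the same way.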

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1591954

Title:
  Quota update error occurred when user create/update project in no nova
  case.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  At the moment, when an operator creates a new project or updates a project, 
the operator can set quotas for services such as nova, neutron and cinder.
  Whether the operator can set quotas depends on the "compute" service (if the 
compute service is disabled, the quota tab is hidden).

  In this case, when the operator creates or updates a project, a quota-update 
failure message still appears even though the operator could not see the quota 
tab.
  When the quota tab is hidden, quota processing should not be performed at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1591954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591878] [NEW] Unit test not using neutron-lib constants

2016-06-13 Thread Gary Kotton
Public bug reported:

{0}
neutron.tests.unit.tests.common.test_net_helpers.PortAllocationTestCase.test_get_free_namespace_port
[0.816689s] ... ok

Captured stderr:

neutron/tests/unit/tests/common/test_net_helpers.py:64: DeprecationWarning: 
PROTO_NAME_TCP in version 'mitaka' and will be removed in version 'newton': 
moved to neutron_lib.constants
  n_const.PROTO_NAME_TCP)
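
The warning itself points at the fix: take the constant from neutron-lib 
instead of the deprecated alias in neutron.common.constants, e.g.:

from neutron_lib import constants as n_const

assert n_const.PROTO_NAME_TCP == 'tcp'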

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591878

Title:
  Unit test not using neutron-lib constants

Status in neutron:
  In Progress

Bug description:
  {0}
  
neutron.tests.unit.tests.common.test_net_helpers.PortAllocationTestCase.test_get_free_namespace_port
  [0.816689s] ... ok

  Captured stderr:
  
  neutron/tests/unit/tests/common/test_net_helpers.py:64: 
DeprecationWarning: PROTO_NAME_TCP in version 'mitaka' and will be removed in 
version 'newton': moved to neutron_lib.constants
n_const.PROTO_NAME_TCP)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp