[Yahoo-eng-team] [Bug 1415864] Re: heatclient traces in tests

2015-04-17 Thread Rob Cresswell
Addressed by https://review.openstack.org/#/c/154929/

** Changed in: horizon
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415864

Title:
  heatclient traces in tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  
  ...DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: {SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
  ..DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: {SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
  .DEBUG:heatclient.common.http:curl -i -X GET -H 'X-Auth-Token: {SHA1}8f1ba6b3ebedb0be5cc7985232f405ef1826c2b2' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' http://public.heat.example.com:8004/v1/stacks?sort_dir=desc&sort_key=created_at&limit=21
  


  In github checkout from 2015-01-29

  This must have been introduced recently.
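
  A minimal sketch of the usual remedy, assuming the noise comes from the
  heatclient.common.http logger (the name shown in the traces above)
  propagating to the test runner's root handler; where exactly to hook
  this into Horizon's test setup is left open:

      import logging

      # Silence heatclient's per-request curl debug traces during tests.
      logging.getLogger('heatclient.common.http').setLevel(logging.WARNING)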

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445335] [NEW] create/delete flavor permissions should be controlled by policy.json

2015-04-17 Thread Divya K Konoor
Public bug reported:

The create/delete flavor REST API always requires the user to have admin
privileges and ignores the rule defined in nova's policy.json. This
behavior is observed after this change:
https://review.openstack.org/#/c/150352/.

The expected behavior is that the permissions are controlled by the rule
defined in the policy file; the API should not mandate that only an
admin can create/delete a flavor.
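
For reference, a hedged sketch of the kind of policy.json entry
involved; "compute_extension:flavormanage" is the policy target guarding
flavor create/delete in this era of Nova, and the relaxed rule shown is
illustrative only:

    {
        "compute_extension:flavormanage": "rule:admin_or_owner"
    }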

** Affects: nova
 Importance: High
 Assignee: Divya K Konoor (dikonoor)
 Status: Confirmed


** Tags: kilo-rc-potential nova security

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445335

Title:
  create/delete flavor permissions should be controlled by policy.json

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The create/delete flavor REST API always requires the user to have
  admin privileges and ignores the rule defined in nova's policy.json.
  This behavior is observed after this change:
  https://review.openstack.org/#/c/150352/.

  The expected behavior is that the permissions are controlled by the
  rule defined in the policy file; the API should not mandate that only
  an admin can create/delete a flavor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382169] Re: about window is missing

2015-04-17 Thread Matthias Runge
I think this was resolved some time ago, and this tries to fix it
another time.

If you feel this is a missing feature, please add a blueprint about it.
Thank you.

** Changed in: horizon
   Status: Opinion => Incomplete

** Changed in: horizon
 Assignee: Kanagaraj Manickam (kanagaraj-manickam) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382169

Title:
  about window is missing

Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  In Horizon, the version details of the openstack is not provided.
  Usually all the software provides the About dialog to provide the
  details of the product version. Its missing in the Horizon

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397087] Re: Bulk port create fails with conflict with some addresses fixed

2015-04-17 Thread Eugene Nikanorov
IMO, this just needs to be documented.
Obviously, ports that go first get their fixed IPs allocated, and later
port creation may fail if some port specifies an already-allocated IP.

I don't think it makes sense to rearrange the input port list in any
way.

** Changed in: neutron
   Status: Confirmed => Incomplete

** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397087

Title:
  Bulk port create fails with conflict with some addresses fixed

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In the bulk version of the port create request, multiple port
  creations may be requested.

  If a port is specified without a fixed_ip address, one is assigned to
  it. If a later port requests the same address, a conflict is detected
  and raised. The overall call succeeds or fails depending on which
  addresses from the pool are due to be assigned next, and on the order
  of the requested ports.

  Steps to reproduce:

  # neutron net-create test_fixed_ports
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e |
  | name                      | test_fixed_ports                     |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 4                                    |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tenant_id                 | d42d65485d674e0a9d007a06182e46f7     |
  +---------------------------+--------------------------------------+

  # neutron subnet-create test_fixed_ports 10.0.0.0/24
  Created a new subnet:
  +------------------+--------------------------------------------+
  | Field            | Value                                      |
  +------------------+--------------------------------------------+
  | allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
  | cidr             | 10.0.0.0/24                                |
  | dns_nameservers  |                                            |
  | enable_dhcp      | True                                       |
  | gateway_ip       | 10.0.0.1                                   |
  | host_routes      |                                            |
  | id               | 5369fb82-8ff6-4ec5-acf7-1d86d0ec9d2a       |
  | ip_version       | 4                                          |
  | name             |                                            |
  | network_id       | af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e       |
  | tenant_id        | d42d65485d674e0a9d007a06182e46f7           |
  +------------------+--------------------------------------------+

  # cat ports.data
  {"ports": [
      {
          "name": "A",
          "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
      }, {
          "fixed_ips": [{"ip_address": "10.0.0.2"}],
          "name": "B",
          "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
      }
  ]}

  # TOKEN='a valid keystone token'

  # curl -H 'Content-Type: application/json' -H 'X-Auth-Token: '$TOKEN -X POST "http://127.0.1.1:9696/v2.0/ports" -d @ports.data
  {"NeutronError": {"message": "Unable to complete operation for network af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e. The IP address 10.0.0.2 is in use.", "type": "IpAddressInUse", "detail": ""}}

  Positive case:

  # cat ports.data.rev
  {"ports": [
      {
          "name": "A",
          "fixed_ips": [{"ip_address": "10.0.0.2"}],
          "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
      }, {
          "name": "B",
          "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
      }
  ]}

  # curl -H 'Content-Type: application/json' -H 'X-Auth-Token: '$TOKEN -X POST "http://127.0.1.1:9696/v2.0/ports" -d @ports.data.rev
  {"ports": [{"status": "DOWN", "binding:host_id": "", "name": "A", "admin_state_up": true, "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e", "tenant_id": "7b3e2f49d1fc4154ac5af10a4b9862c5", "binding:vif_details": {}, "binding:vnic_type": "normal", "binding:vif_type": "unbound", "device_owner": "", "mac_address": "fa:16:3e:16:1e:50", "binding:profile": {}, "fixed_ips": [{"subnet_id": "5369fb82-8ff6-4ec5-acf7-1d86d0ec9d2a", "ip_address": "10.0.0.2"}], "id": "75f5cdb7-5884-4583-9db1-73b946f94a04", "device_id": ""}, {"status": "DOWN", "binding:host_id": "", "name": "B", "admin_state_up": true, "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e", "tenant_id": "7b3e2f49d1fc4154ac5af10a4b9862c5", "binding:vif_details": {}, "binding:vnic_type": "normal",

[Yahoo-eng-team] [Bug 1393320] Re: Juno: Nova compute service can't connect to AMQP server through qpidd whereas there is no issue working with rabbitmq

2015-04-17 Thread Mehdi Abaakouk
** Changed in: oslo.messaging
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393320

Title:
  Juno: Nova compute service can't connect to AMQP server through qpidd
  whereas there is no issue working with rabbitmq

Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Invalid

Bug description:
  I am using qpidd instead of rabbitmq on the nova-compute and controller
  nodes. While starting the nova-compute service I see these errors in
  the nova-compute log.

  {
  2014-11-11 12:17:32.844 5214 DEBUG nova.openstack.common.lockutils [-] 
Semaphore / lock released _update_available_resource inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:275
  Traceback (most recent call last):
    File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py, line 
115, in wait
  listener.cb(fileno)
    File /usr/local/lib/python2.7/dist-packages/eventlet/green/select.py, 
line 52, in on_read
  current.switch(([original], [], []))
    File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 
212, in main
  result = function(*args, **kwargs)
    File /opt/stack/nova/nova/openstack/common/service.py, line 490, in 
run_service
  service.start()
    File /opt/stack/nova/nova/service.py, line 181, in start
  self.manager.pre_start_hook()
    File /opt/stack/nova/nova/compute/manager.py, line 1152, in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
    File /opt/stack/nova/nova/compute/manager.py, line 5964, in 
update_available_resource
  use_slave=True)
    File /opt/stack/nova/nova/compute/manager.py, line 5975, in 
_get_compute_nodes_in_db
  use_slave=use_slave)
    File /opt/stack/nova/nova/objects/base.py, line 153, in wrapper
  args, kwargs)
    File /opt/stack/nova/nova/conductor/rpcapi.py, line 346, in 
object_class_action
  objver=objver, args=args, kwargs=kwargs)
    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, line 
150, in call
  wait_for_reply=True, timeout=timeout)
    File /usr/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 
90, in _send
  timeout=timeout)
    File 
/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, line 
412, in send
  return self._send(target, ctxt, message, wait_for_reply, timeout)
    File 
/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, line 
405, in _send
  raise result
  IncompatibleObjectVersion_Remote: Version 1.4 of Service is not supported
  Traceback (most recent call last):

    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 133, in _dispatch_and_reply
  incoming.message))

    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

    File /opt/stack/nova/nova/conductor/manager.py, line 1043, in
  object_class_action

    File /opt/stack/nova/nova/conductor/manager.py, line 605, in 
object_class_action
  for t in requested_networks])

    File /opt/stack/nova/nova/objects/base.py, line 224, in
  obj_class_from_name

  IncompatibleObjectVersion: Version 1.4 of Service is not supported

  Removing descriptor: 7
  2014-11-11 12:17:32.897 5214 ERROR nova.openstack.common.threadgroup [-] 
Version 1.4 of Service is not supported
  Traceback (most recent call last):

    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 133, in _dispatch_and_reply
  incoming.message))

    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

    File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

    File /opt/stack/nova/nova/conductor/manager.py, line 1043, in
  object_class_action

    File /opt/stack/nova/nova/conductor/manager.py, line 605, in 
object_class_action
  for t in requested_networks])

    File /opt/stack/nova/nova/objects/base.py, line 224, in
  obj_class_from_name

  IncompatibleObjectVersion: Version 1.4 of Service is not supported
  2014-11-11 12:17:32.897 5214 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-11-11 12:17:32.897 5214 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 125, in wait
  2014-11-11 12:17:32.897 5214 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-11-11 12:17:32.897 5214 TRACE nova.openstack.common.threadgroup   File 

[Yahoo-eng-team] [Bug 1445412] [NEW] performance of plugin_rpc.get_routers is bad

2015-04-17 Thread Kevin Benton
Public bug reported:

The get_routers plugin call that the l3 agent makes is serviced by a
massive number of SQL queries that cause the whole process to take on
the order of hundreds of milliseconds for a request for 10 routers.

This will be a blanket bug for a series of performance improvements that
will reduce that time by at least an order of magnitude.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445412

Title:
  performance of plugin_rpc.get_routers is bad

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  the get_routers plugin call that the l3 agent makes is serviced by a
  massive amount of SQL queries that lead the whole process to take on
  the order of hundreds of milliseconds to process a request for 10
  routers.

  This will be a blanket bug for a series of performance improvements
  that will reduce that time by at least an order of magnitude.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445467] [NEW] [data processing] Add data source substitution toggles to configure tab on job launch

2015-04-17 Thread Trevor McKay
Public bug reported:

There are two boolean configuration values that enable substitution of
data source objects for custom urls or uuids in configs, args, and
params when jobs are submitted:

* edp.substitute_data_source_for_name -- substitute data source objects
for urls of the form "datasource://name"

* edp.substitute_data_source_for_uuid -- substitute data source objects
for strings identified as uuids by oslo_utils.uuidutils.is_uuid_like()

It would be nice if users could simply check a box to set these configs.
Currently, a user must add these configs on the configure tab by hand.

Both values could be set by a single toggle, or an overall toggle with
subtoggles could be used (one toggle to set both, and individual toggles
for each). Probably a single toggle to set both is adequate -- the edge
case will be when a user wants to pass a literal uuid as an argument to
a job, or, even less likely, a "datasource://name" string. If a user
really wants to set only one or the other, the toggle can be turned off
and the config value can be added by hand (manual config values should
supersede the toggle).

The label on a single box can be something like "Reference datasources
by name or uuid".

Default should probably be True.
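
For reference, a hedged sketch of what a user currently has to add by
hand on the configure tab, expressed as the equivalent job_configs
fragment of an EDP job submission (the surrounding JSON shape follows
the standard Sahara job-execution request; the values are illustrative):

    {
        "job_configs": {
            "configs": {
                "edp.substitute_data_source_for_name": true,
                "edp.substitute_data_source_for_uuid": true
            }
        }
    }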

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1445467

Title:
  [data processing] Add data source substitution toggles to configure
  tab on job launch

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are two boolean configuration values that enable substitution of
  data source objects for custom urls or uuids in configs, args, and
  params when jobs are submitted:

  * edp.substitute_data_source_for_name -- substitute data source
  objects for urls of the form "datasource://name"

  * edp.substitute_data_source_for_uuid -- substitute data source
  objects for strings identified as uuids by
  oslo_utils.uuidutils.is_uuid_like()

  It would be nice if users could simply check a box to set these
  configs. Currently, a user must add these configs on the configure tab
  by hand.

  Both values could be set by a single toggle, or an overall toggle with
  subtoggles could be used (one toggle to set both, and individual
  toggles for each). Probably a single toggle to set both is adequate --
  the edge case will be when a user wants to pass a literal uuid as an
  argument to a job, or, even less likely, a "datasource://name" string.
  If a user really wants to set only one or the other, the toggle can be
  turned off and the config value can be added by hand (manual config
  values should supersede the toggle).

  The label on a single box can be something like "Reference datasources
  by name or uuid".

  Default should probably be True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1445467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370250] Re: Can not set volume attributes at instance launch by EC2 API

2015-04-17 Thread Andrey Pavlov
** Also affects: ec2-api
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370250

Title:
  Can not set volume attributes at instance launch by EC2 API

Status in EC2 API:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  AWS allows changing block device attributes (such as volume size,
  delete on termination behavior, existence) at instance launch.

  For example, image xxx has devices:
  vda, size 10, delete on termination
  vdb, size 100, delete on termination
  vdc, size 100, delete on termination
  We can run an instance by
  euca-run-instances ... xxx -b /dev/vda=:20 -b /dev/vdb=::false -b 
/dev/vdc=none
  to get the instance with devices:
  vda, size 20, delete on termination
  vdb, size 100, not delete on termination

  For Nova we get now:
  $ euca-run-instances --instance-type m1.nano -b /dev/vda=::true ami-000a
  euca-run-instances: error (InvalidBDMFormat): Block Device Mapping is 
Invalid: Unrecognized legacy format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370330] Re: Cannot attach a volume without '/dev/' prefix by EC2 API

2015-04-17 Thread Andrey Pavlov
** Also affects: ec2-api
   Importance: Undecided
   Status: New

** Changed in: ec2-api
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370330

Title:
  Cannot attach a volume without '/dev/' prefix by EC2 API

Status in EC2 API:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  AWS allows attaching a volume with a short device name (without the
  '/dev/' prefix), but Nova doesn't.

  $ euca-attach-volume -i i-0008 -d vdd vol-0003
  euca-attach-volume: error (InvalidDevicePath): The supplied device path (vdd) 
is invalid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445487] [NEW] Attempting to deactivate a queued image returns a 403

2015-04-17 Thread Luke Wollney
Public bug reported:

Overview:
Attempting to deactivate a queued image (one without an image file)
returns a 403 Forbidden response with the message "Not allowed to
deactivate image in status 'queued'".

Steps to reproduce:
1) Register a new image as user
2) Without uploading an image file, deactivate the image as admin via:
POST /images/image_id/actions/deactivate
3) Notice that a 403 Forbidden response with the message "Not allowed to
deactivate image in status 'queued'" is returned

Expected:
A 400 response should be returned with the same message

Actual:
A 403 response is returned
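
For reference, a hedged repro sketch of step 2 against the Images v2
API; the host, port, and shell variables are illustrative:

    TOKEN='a valid keystone token'
    IMAGE_ID='id of the queued image'
    curl -i -X POST -H "X-Auth-Token: $TOKEN" \
        "http://glance.example.com:9292/v2/images/$IMAGE_ID/actions/deactivate"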

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1445487

Title:
  Attempting to deactivate a queued image returns a 403

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Overview:
  Attempting to deactivate a queued image (one without an image file)
  returns a 403 Forbidden response with the message "Not allowed to
  deactivate image in status 'queued'".

  Steps to reproduce:
  1) Register a new image as user
  2) Without uploading an image file, deactivate the image as admin via:
  POST /images/image_id/actions/deactivate
  3) Notice that a 403 Forbidden response with the message "Not allowed
  to deactivate image in status 'queued'" is returned

  Expected:
  A 400 response should be returned with the same message

  Actual:
  A 403 response is returned

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1445487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445199] Re: Nova user should not have admin role

2015-04-17 Thread Brant Knudson
I think the reason the 'nova' user needs the 'admin' role is that
neutron uses it to send a network allocation event back to nova. Nova
should be configured by default to allow users with the 'service' role
to do this operation and not require the 'admin' role.
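
A hedged sketch of the kind of default change this suggests, assuming
the v2.1 policy target "os_compute_api:os-server-external-events:create"
(which guards the external-events API neutron calls); the relaxed rule
is illustrative only:

    {
        "os_compute_api:os-server-external-events:create": "rule:admin_api or role:service"
    }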

** Information type changed from Public to Public Security

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445199

Title:
  Nova user should not have admin role

Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  
  Most of the service users are granted the 'service' role on the 'service' 
project, except the 'nova' user which is given 'admin'. The 'nova' user should 
also be given only the 'service' role on the 'service' project.

  This is for security hardening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1445199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370177] Re: Lack of EC2 image attributes for volume backed snapshot.

2015-04-17 Thread Andrey Pavlov
** Also affects: ec2-api
   Importance: Undecided
   Status: New

** Changed in: ec2-api
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370177

Title:
  Lack of EC2 image attributes for volume backed snapshot.

Status in EC2 API:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  For EBS images AWS returns device names, volume sizes, and delete on
  termination flags in the block device mapping structure.

  $ euca-describe-images ami-d13845e1
  IMAGE  ami-d13845e1  amazon/amzn-ami-hvm-2014.03.2.x86_64-ebs  amazon  available  public  x86_64  machine  ebs  hvm  xen
  BLOCKDEVICEMAPPING  /dev/xvda  snap-d15cde24  8  true

  The same in xml form:
  <blockDeviceMapping>
      <item>
          <deviceName>/dev/xvda</deviceName>
          <ebs>
              <snapshotId>snap-d15cde24</snapshotId>
              <volumeSize>8</volumeSize>
              <deleteOnTermination>true</deleteOnTermination>
              <volumeType>standard</volumeType>
          </ebs>
      </item>
  </blockDeviceMapping>

  But Nova doesn't do this now:

  $ euca-describe-images ami-000a
  IMAGE  ami-000a  None (sn-in)  ef3ddd7aa4b24cda974200baef02730b  available  private  machine  aki-0002  ari-0003  instance-store
  BLOCKDEVICEMAPPING  snap-0005

  The same in xml form:
  <blockDeviceMapping>
      <item>
          <ebs>
              <snapshotId>snap-0005</snapshotId>
          </ebs>
      </item>
  </blockDeviceMapping>

  NB. In Grizzly, device names and delete on termination flags were
  returned. This was changed by
  https://github.com/openstack/nova/commit/33e3d4c6b9e0b11500fe47d861110be1c1981572
  Now these attributes are not stored in instance snapshots, so there is
  no way to output them.

  Device name is the most critical attribute, because there is another
  compatibility issue (see https://bugs.launchpad.net/nova/+bug/1370250):
  Nova isn't able to adjust attributes of a volume being created at
  instance launch. For example, in AWS we can change the volume size and
  delete on termination flag of a device by setting new values in the
  parameters of the run instance command. To identify the device in the
  image block device mapping we use the device name. For example:
  euca-run-instances ... -b /dev/vda=:100
  runs an instance with the vda device increased to 100 GB.
  Thus, if we don't have device names in images, we have no chance of
  fixing this compatibility problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445443] [NEW] volume access I/O error with libvirt-xen and lvmdriver-1

2015-04-17 Thread Anthony PERARD
Public bug reported:

On a single-node devstack installation, on Ubuntu 14.04 LTS, with the
Xen hypervisor.

To reproduce:
nova volume-create --image-id cirros-0.3.2-x86_64-uec --display-name vol-cirros 
1
nova boot --key-name `hostname` --block-device 
source=volume,id=$volume_id,dest=volume,bootindex=0,shutdown=preserve --image 
'' --flavor 42 cirros

The instance "cirros" does not finish booting, and `nova console-log
cirros` shows I/O errors while accessing the block device.

console-log:
info: copying initramfs to /dev/xvda
[   79.327661] end_request: I/O error, dev xvda, sector 2
[   79.327678] Buffer I/O error on device xvda, logical block 1
[   79.327686] lost page write due to I/O error on xvda
[   79.327728] EXT3-fs (xvda): I/O error while writing superblock
[  160.463559] end_request: I/O error, dev xvda, sector 25154
[...]

Another way to reproduce would be to use `nova volume-attach` instead of
the --block-device option of nova boot.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt volume xen

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445443

Title:
  volume access I/O error with libvirt-xen and lvmdriver-1

Status in OpenStack Compute (Nova):
  New

Bug description:
  On a single-node devstack installation, on Ubuntu 14.04 LTS, with the
  Xen hypervisor.

  To reproduce:
  nova volume-create --image-id cirros-0.3.2-x86_64-uec --display-name 
vol-cirros 1
  nova boot --key-name `hostname` --block-device 
source=volume,id=$volume_id,dest=volume,bootindex=0,shutdown=preserve --image 
'' --flavor 42 cirros

  The instance "cirros" does not finish booting, and `nova console-log
  cirros` shows I/O errors while accessing the block device.

  console-log:
  info: copying initramfs to /dev/xvda
  [   79.327661] end_request: I/O error, dev xvda, sector 2
  [   79.327678] Buffer I/O error on device xvda, logical block 1
  [   79.327686] lost page write due to I/O error on xvda
  [   79.327728] EXT3-fs (xvda): I/O error while writing superblock
  [  160.463559] end_request: I/O error, dev xvda, sector 25154
  [...]

  Another way to reproduce would be to use `nova volume-attach` instead
  of the --block-device option of nova boot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371445] Re: Nova EC2 doesn't assign a floating IP to an instance being launched

2015-04-17 Thread Andrey Pavlov
** Also affects: ec2-api
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371445

Title:
  Nova EC2 doesn't assign a floating IP to an instance being launched

Status in EC2 API:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  In EC2 classic mode AWS automatically associates a public IP address
  to an instance being launched. See
  http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-
  addressing.html#differences

  Since Nova EC2 emulates EC2 classic mode of AWS (there is no VPC
  support in Nova EC2), it should associate a floating IP as well. But
  it doesn't do this.

  Nova does have an auto_assign_floating_ip parameter which works
  similarly to AWS, but it isn't implemented for Neutron networks. And
  it affects both methods of running: EC2 and native Nova. Thus, if we
  want a cloud to be AWS compatible and we use this parameter, we change
  the behavior of Nova in its native part. This may not be desirable.
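
  For reference, the nova-network option mentioned above lives in
  nova.conf; a minimal sketch:

      [DEFAULT]
      # Auto-assign a floating IP to each instance at boot
      # (nova-network only; not implemented for Neutron).
      auto_assign_floating_ip = True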

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1371445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444056] Re: v1 client may truncate listings

2015-04-17 Thread Erno Kuvaja
Fair enough, thanks. I'll move this bug to the client then.

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1444056

Title:
  v1 client may truncate listings

Status in Python client library for Glance:
  New

Bug description:

  If I set api_limit_max = 10 in glance-api.conf,

  and I have 71 images and list them with v2, I see all of them:

  
  $ glance --os-image-api-version 2 image-list --page-size 80 | wc
   71 2835325

  Each response from the server correctly contains only 10 images.

  
  If I set:

  glance-registry.conf
   api_limit_max = 10

  and list:

  $ glance --os-image-api-version 1 image-list --page-size 80 | wc
   14 1381358

  the number of images returned is truncated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1444056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403617] Re: gce datasource does not handle instance ssh keys

2015-04-17 Thread Dan Watkins
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Changed in: cloud-init (Ubuntu)
 Assignee: Dan Watkins (daniel-thewatkins) => (unassigned)

** Changed in: cloud-init
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1403617

Title:
  gce datasource does not handle instance ssh keys

Status in Init scripts for use on cloud images:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  The GCE Datasource pulls the per-project SSH keys but does not handle
  the per-instance SSH keys.

  The meta-data that it handles is:
  url_map = [
      ('instance-id', 'instance/id', True),
      ('availability-zone', 'instance/zone', True),
      ('local-hostname', 'instance/hostname', True),
      ('public-keys', 'project/attributes/sshKeys', False),
      ('user-data', 'instance/attributes/user-data', False),
  ]

  It should also handle:
  ('public-keys', 'instance/attributes/sshKeys', False),
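
  A minimal sketch of the amended map, assuming the instance-level entry
  can simply be appended; since both entries map to the 'public-keys'
  key, the datasource would also need to merge the two results rather
  than letting one overwrite the other (that merge handling is
  hypothetical here):

      url_map = [
          ('instance-id', 'instance/id', True),
          ('availability-zone', 'instance/zone', True),
          ('local-hostname', 'instance/hostname', True),
          ('public-keys', 'project/attributes/sshKeys', False),
          ('public-keys', 'instance/attributes/sshKeys', False),
          ('user-data', 'instance/attributes/user-data', False),
      ]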

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1403617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444765] Re: admin's tenant_id is not the same with load_balancer's tenant_id in tempest tests

2015-04-17 Thread Madhusudhan Kandadai
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444765

Title:
  admin's tenant_id is not the same with load_balancer's tenant_id in
  tempest tests

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Here is the following scenario:

  This is happening only WITH tempest tests by inheriting the necessary
  class from 'tempest/api/neutron':

  (a) When creating a loadbalancer for a non_admin user, I could see
  that the 'tenant_id' is equal to loadbalancer.get('tenant_id'). This
  sounds good to me and requires no attention.

  i.e.,

  credentials = cls.isolated_creds.get_primary_creds()
  mgr = tempest_clients.Manager(credentials=credentials)
  auth_provider = mgr.get_auth_provider(credentials)
  client_args = [auth_provider, 'network', 'regionOne']

  (b) Whereas, when I create a loadbalancer using admin credentials, the
  tenant_id does NOT equal loadbalancer.get('tenant_id'). In general it
  SHOULD be equal.

  i.e,.

  credentials_admin = cls.isolated_creds.get_admin_creds()
  mgr_admin = tempest_clients.Manager(credentials=credentials_admin)
  auth_provider_admin = mgr_admin.get_auth_provider(credentials_admin)
  client_args = [auth_provider_admin, 'network', 'regionOne']

  Not sure why this is happening. The expected behavior is that a user
  (either admin or non-admin) should be able to create a loadbalancer
  for the default tenant, and that 'tenant_id' should be equal to the
  admin's 'tenant_id'. Other test cases, especially for the 'admin'
  role, succeeded and are behaving properly.
  Details about the code can be found at
  
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/base.py

  For the exact testcase:

  (a) For admin_testcase:  see line 55 - 61: 
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/test_load_balancers_admin.py
  (b) For non_admin testcase:  see line 259 -266: 
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/test_load_balancers_non_admin.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240537] Re: VPNaaS client support for service type framework

2015-04-17 Thread Paul Michali
The Service Type Framework effort was abandoned and will be replaced in
the future by the flavor framework. This will not be done.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Paul Michali (pcm) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240537

Title:
  VPNaaS client support for service type framework

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  There is a blueprint for adding service type framework support to
  VPNaaS:

  https://blueprints.launchpad.net/neutron/+spec/vpn-service-types-
  integration

  There is a (currently abandoned, but needs to be rebased for Icehouse)
  review under:

  https://review.openstack.org/#/c/41827/

  To go along with the service changes, we need client side
  modifications to allow specification of provider.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445591] [NEW] Edit Instance security group dialog isn't wide enough for translation

2015-04-17 Thread Doug Fish
Public bug reported:

On Project->Compute->Instances->Edit Instance->Security Group there is
not enough width for the Spanish translation of "Instance Security
Groups" to be rendered inline with the filter.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "edit_instance_security_group_es.png"
   
https://bugs.launchpad.net/bugs/1445591/+attachment/4378547/+files/edit_instance_security_group_es.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1445591

Title:
   Edit Instance security group dialog isn't wide enough for translation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Project->Compute->Instances->Edit Instance->Security Group there is
  not enough width for the Spanish translation of "Instance Security
  Groups" to be rendered inline with the filter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1445591/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445608] [NEW] Deactivated images are not returned for list image requests

2015-04-17 Thread Luke Wollney
Public bug reported:

Overview:
After deactivating an image, the image owner can no longer see the image in a 
list images response.

Steps to reproduce:
1) Create an image with user A
2) Deactivate the image with an admin
3) List all images, accounting for pagination as needed, with user A via:
GET /images
4) Notice that the created (deactivated) image is not returned

Expected:
Deactivated images are returned in a list images response

Actual:
Deactivated images are not returned in a list images response
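
For reference, a hedged repro sketch of step 3 against the Images v2
API; the host, port, and $TOKEN variable are illustrative:

    TOKEN='a valid keystone token for user A'
    curl -s -H "X-Auth-Token: $TOKEN" \
        "http://glance.example.com:9292/v2/images?limit=20"
    # Follow the "next" link in each response until it is absent, then
    # check whether the deactivated image appears in any page.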

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1445608

Title:
  Deactivated images are not returned for list image requests

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Overview:
  After deactivating an image, the image owner can no longer see the image in a 
list images response.

  Steps to reproduce:
  1) Create an image with user A
  2) Deactivate the image with an admin
  3) List all images, accounting for pagination as needed, with user A via:
  GET /images
  4) Notice that the created (deactivated) image is not returned

  Expected:
  Deactivated images are returned in a list images response

  Actual:
  Deactivated images are not returned in a list images response

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1445608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444532] Re: nova-scheduler doesn't reconnect to databases when started and database is down

2015-04-17 Thread Brian Murray
** Package changed: ubuntu = nova (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444532

Title:
  nova-scheduler doesn't reconnect to databases when started and database
  is down

Status in OpenStack Compute (Nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  In the Juno release (Ubuntu packages), when you start nova-scheduler
  while the database is down, the service never reconnects. The stack
  trace is as follows:

  
  AUDIT nova.service [-] Starting scheduler node (version 2014.2.2)
  ERROR nova.openstack.common.threadgroup [-] (OperationalError) (2003, Can't 
connect to MySQL server on '10.128.30.11' (111)) None None
  TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
125, in wait
  TRACE nova.openstack.common.threadgroup x.wait()
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
47, in wait
  TRACE nova.openstack.common.threadgroup return self.thread.wait()
  TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 173, in 
wait
  TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
  TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 121, in wait
  TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
  TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 293, in 
switch
  TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
  TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 212, in 
main
  TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 490, 
in run_service
  TRACE nova.openstack.common.threadgroup service.start()
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 169, in start
  TRACE nova.openstack.common.threadgroup self.host, self.binary)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/conductor/api.py, line 161, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup binary=binary, topic=None)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/utils.py, line 949, in wrapper
  TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py, line 
139, in inner
  TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 279, in 
service_get_all_by
  TRACE nova.openstack.common.threadgroup result = 
self.db.service_get_by_args(context, host, binary)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/db/api.py, line 136, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup return 
IMPL.service_get_by_args(context, host, binary)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 125, in 
wrapper
  TRACE nova.openstack.common.threadgroup return f(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 490, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup result = model_query(context, 
models.Service).\
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 213, in 
model_query
  TRACE nova.openstack.common.threadgroup session = kwargs.get('session') 
or get_session(use_slave=use_slave)
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 101, in 
get_session
  TRACE nova.openstack.common.threadgroup facade = _create_facade_lazily()
  TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 91, in 
_create_facade_lazily
  TRACE nova.openstack.common.threadgroup _ENGINE_FACADE = 
db_session.EngineFacade.from_config(CONF)
  TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py, line 
795, in from_config
  TRACE nova.openstack.common.threadgroup 
retry_interval=conf.database.retry_interval)
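
  For reference, a hedged sketch of the oslo.db options that govern
  reconnect behavior at startup (the truncated trace above ends inside
  EngineFacade.from_config, which reads these values; -1 meaning "retry
  forever" is oslo.db's documented semantics, but treat the values as
  illustrative):

      [database]
      # Connection attempts at startup; -1 retries forever.
      max_retries = -1
      # Seconds between connection attempts.
      retry_interval = 10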
  

[Yahoo-eng-team] [Bug 1282855] Re: Add httmock to test-requirements.txt and update requests -> 2.1.0

2015-04-17 Thread Paul Michali
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282855

Title:
  Add httmock to test-requirements.txt and update requests -> 2.1.0

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  To enhance unit testing of REST API code I'd like to add this PyPI
  package to test-requirements.txt.

  I had this as part of my Icehouse-3 review for the VPNaaS device
  driver, in requirements.txt, but it was failing Jenkins as this is not
  in the master requirements list.

  Since it is only used for unit testing, it was suggested to place this
  in test-requirements.txt. So, I'm going to push a separate review just
  for this file, so I can use it for other commits.

  Ref: https://pypi.python.org/pypi/httmock/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392718] Re: Sticky region selection in Login page

2015-04-17 Thread Lin Hua Cheng
** Changed in: django-openstack-auth
 Milestone: None => 1.1.9

** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1392718

Title:
  Sticky region selection in Login page

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Remember the last Region (keystone endpoint) selected in the Login
  page.

  If the deployment has multiple Regions and user is using the same
  Region most of the time, it would be better for UX to just default to
  the last Region selected for the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1392718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420139] Re: VPNPluginDbTestCase unit test failed with upstream submit I16b5e5b2

2015-04-17 Thread Paul Michali
The problem with driver loading in the new repo was resolved, so this is
no longer a failure. Marked as invalid.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420139

Title:
  VPNPluginDbTestCase unit test failed with upstream submit I16b5e5b2

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Today I found that the unit test case VPNPluginDbTestCase doesn't
  work, as the error log below shows.

  I debugged the code and found the reason: it's the upstream submit
  I16b5e5b2 (
  https://review.openstack.org/#/c/151375/7/neutron/services/provider_configuration.py
  ), which tries to read service_provider configuration items in the
  neutron-{service}.conf file.

  On the other hand, VPNPluginDbTestCase still tries to override
  service_provider, so the error 'Invalid: Driver
  neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver is
  not unique across providers' is thrown.

  if not vpnaas_provider:
  vpnaas_provider = (
  constants.VPN +
  ':vpnaas:neutron_vpnaas.services.vpn.'
  'service_drivers.ipsec.IPsecVPNDriver:default')

  cfg.CONF.set_override('service_provider',
[vpnaas_provider],
'service_providers')


  Traceback (most recent call last):
File 
/bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/unit/services/vpn/test_vpnaas_driver_plugin.py,
 line 47, in setUp
  vpnaas_plugin=VPN_DRIVER_CLASS)
File 
/bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/unit/db/vpn/test_db_vpnaas.py,
 line 437, in setUp
  service_plugins=service_plugins
File /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/base.py, line 53, 
in setUp
  plugin, service_plugins, ext_mgr)
File /bak/openstack/neutron/neutron/tests/unit/test_db_plugin.py, line 
120, in setUp
  self.api = router.APIRouter()
File /bak/openstack/neutron/neutron/api/v2/router.py, line 74, in __init__
  plugin = manager.NeutronManager.get_plugin()
File /bak/openstack/neutron/neutron/manager.py, line 222, in get_plugin
  return weakref.proxy(cls.get_instance().plugin)
File /bak/openstack/neutron/neutron/manager.py, line 216, in get_instance
  cls._create_instance()
File 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py, line 
431, in inner
  return f(*args, **kwargs)
File /bak/openstack/neutron/neutron/manager.py, line 202, in 
_create_instance
  cls._instance = cls()
File /bak/openstack/neutron/neutron/manager.py, line 128, in __init__
  self._load_service_plugins()
File /bak/openstack/neutron/neutron/manager.py, line 175, in 
_load_service_plugins
  provider)
File /bak/openstack/neutron/neutron/manager.py, line 143, in 
_get_plugin_instance
  return plugin_class()
File /bak/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/plugin.py, 
line 44, in __init__
  constants.VPN, self)
File /bak/openstack/neutron/neutron/services/service_base.py, line 64, in 
load_drivers
  service_type_manager = sdb.ServiceTypeManager.get_instance()
File /bak/openstack/neutron/neutron/db/servicetype_db.py, line 41, in 
get_instance
  cls._instance = cls()
File /bak/openstack/neutron/neutron/db/servicetype_db.py, line 45, in 
__init__
  self._load_conf()
File /bak/openstack/neutron/neutron/db/servicetype_db.py, line 49, in 
_load_conf
  pconf.parse_service_provider_opt())
File /bak/openstack/neutron/neutron/services/provider_configuration.py, 
line 139, in __init__
  self.add_provider(prov)
File /bak/openstack/neutron/neutron/services/provider_configuration.py, 
line 160, in add_provider
  self._ensure_driver_unique(provider['driver'])
File /bak/openstack/neutron/neutron/services/provider_configuration.py, 
line 147, in _ensure_driver_unique
  raise n_exc.Invalid(msg)
  Invalid: Driver 
neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver is not unique 
across providers

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445569] [NEW] No dhcp lease after shelve unshelve

2015-04-17 Thread Clark Boylan
Public bug reported:

This may be related to 1290635 but I am not familiar enough with Nova's
dhcp and shelve implementations to know for sure. Also the behavior I am
seeing seems to be slightly different.

In the multinode nova-net job
(http://logs.openstack.org/88/174288/1/check/check-tempest-dsvm-
multinode-full/3e3be58/) during tempest test_shelve_instance test we see
dhcp fail when the shelved instance is unshelved:

http://logs.openstack.org/88/174288/1/check/check-tempest-dsvm-multinode-full/3e3be58/console.html#_2015-04-16_11_26_00_029
2015-04-16 11:26:00.029 | Starting network...
2015-04-16 11:26:00.029 | udhcpc (v1.20.1) started
2015-04-16 11:26:00.029 | Sending discover...
2015-04-16 11:26:00.029 | Sending discover...
2015-04-16 11:26:00.029 | Sending discover...
2015-04-16 11:26:00.029 | No lease, failing
2015-04-16 11:26:00.029 | WARN: /etc/rc3.d/S40-network failed
2015-04-16 11:26:00.029 | cirros-ds 'net' up at 187.20

Looking at tempest logs we find that node's MAC address
(fa:16:3e:fb:3e:3e):

http://logs.openstack.org/88/174288/1/check/check-tempest-dsvm-multinode-full/3e3be58/console.html#_2015-04-16_11_25_59_976
2015-04-16 11:25:59.976 | Body: {"server": {"status": "ACTIVE", "updated": "2015-04-16T11:16:44Z", "hostId": "d0dc2083935df1bf05cadea5c75358ee9d9e0406887667ea4bb582de", "addresses": {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fb:3e:3e", "version": 4, "addr": "10.1.0.6", "OS-EXT-IPS:type": "fixed"}, {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fb:3e:3e", "version": 4, "addr": "172.24.5.6", "OS-EXT-IPS:type": "floating"}]}, "links": [{"href": "http://10.208.224.113:8774/v2/b7b633c0117148628342ab9162d7885e/servers/0e9a79cd-96d5-4fcd-a0db-994638967291", "rel": "self"}, {"href": "http://10.208.224.113:8774/b7b633c0117148628342ab9162d7885e/servers/0e9a79cd-96d5-4fcd-a0db-994638967291", "rel": "bookmark"}], "key_name": "TestShelveInstance-494463835", "image": {"id": "18e4f345-a147-4d0a-922c-46b72b9497e9", "links": [{"href": "http://10.208.224.113:8774/b7b633c0117148628342ab9162d7885e/images/18e4f345-a147-4d0a-922c-46b72b9497e9", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2015-04-16T11:16:44.00", "flavor": {"id": "42", "links": [{"href": "http://10.208.224.113:8774/b7b633c0117148628342ab9162d7885e/flavors/42", "rel": "bookmark"}]}, "id": "0e9a79cd-96d5-4fcd-a0db-994638967291", "security_groups": [{"name": "TestShelveInstance-909686184"}], "OS-SRV-USG:terminated_at": null, "OS-EXT-AZ:availability_zone": "nova", "user_id": "a02a8bd6d7734cd1a7aebbfdb4a3eb16", "name": "TestShelveInstance-512771961", "created": "2015-04-16T11:15:34Z", "tenant_id": "b7b633c0117148628342ab9162d7885e", "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "metadata": {}}}

According to the logs above MAC addr fa:16:3e:fb:3e:3e should get IP
10.1.0.6 but syslog shows:

http://logs.openstack.org/88/174288/1/check/check-tempest-dsvm-multinode-full/3e3be58/logs/10.176.200.184-subnode/syslog.txt.gz#_Apr_16_11_16_00
devstack-trusty-2-node-rax-iad-2194251-1687 dnsmasq-dhcp[22310]: 
DHCPDISCOVER(br100) fa:16:3e:fb:3e:3e no address available

I think this points at one of two problems: either there is a race
between booting a node and setting its dnsmasq config with nova-net, or
nova-net is never setting the dnsmasq config in the first place.

If it is a race, then this likely affects other operations. If the
config is never set at all, that may be a shelve/unshelve-specific
issue with restoring instance state.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445569

Title:
  No dhcp lease after shelve unshelve

Status in OpenStack Compute (Nova):
  New

Bug description:
  This may be related to 1290635 but I am not familiar enough with
  Nova's dhcp and shelve implementations to know for sure. Also the
  behavior I am seeing seems to be slightly different.

  In the multinode nova-net job
  (http://logs.openstack.org/88/174288/1/check/check-tempest-dsvm-
  multinode-full/3e3be58/) during tempest test_shelve_instance test we
  see dhcp fail when the shelved instance is unshelved:

  
http://logs.openstack.org/88/174288/1/check/check-tempest-dsvm-multinode-full/3e3be58/console.html#_2015-04-16_11_26_00_029
  2015-04-16 11:26:00.029 | Starting network...
  2015-04-16 11:26:00.029 | udhcpc (v1.20.1) started
  2015-04-16 11:26:00.029 | Sending discover...
  2015-04-16 11:26:00.029 | Sending discover...
  2015-04-16 11:26:00.029 | Sending discover...
  2015-04-16 11:26:00.029 | No lease, failing
  2015-04-16 11:26:00.029 | WARN: /etc/rc3.d/S40-network failed
  2015-04-16 11:26:00.029 | cirros-ds 'net' up at 187.20

  Looking at tempest logs we find that node's MAC address
  (fa:16:3e:fb:3e:3e):

  

[Yahoo-eng-team] [Bug 1445199] Re: Nova user should not have admin role

2015-04-17 Thread Jeremy Stanley
In your bug description you indicate this is only a security hardening
measure, but now you've switched the bug type to indicate it's an
exploitable security vulnerability. Also this looks like a duplicate of
bug 1445475 reported against nova.

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445199

Title:
  Nova user should not have admin role

Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  
  Most of the service users are granted the 'service' role on the 'service' 
project, except the 'nova' user which is given 'admin'. The 'nova' user should 
also be given only the 'service' role on the 'service' project.

  This is for security hardening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1445199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445628] [NEW] Cells: Tempest test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances

2015-04-17 Thread Andrew Laski
Public bug reported:

The tempest test is failing, likely due to a race condition with setting
system_metadata on instances too quickly.

2015-04-17 15:52:47.001 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances[id-c881fbb7-d56e-4054-9d76-1c3a60a207b0]
2015-04-17 15:52:47.001 | 

2015-04-17 15:52:47.001 | 
2015-04-17 15:52:47.001 | Captured traceback:
2015-04-17 15:52:47.001 | ~~~
2015-04-17 15:52:47.001 | Traceback (most recent call last):
2015-04-17 15:52:47.001 |   File 
tempest/thirdparty/boto/test_ec2_instance_run.py, line 120, in 
test_run_idempotent_instances
2015-04-17 15:52:47.001 | self.assertEqual(reservation_1.id, 
reservation_1a.id)
2015-04-17 15:52:47.001 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
2015-04-17 15:52:47.001 | self.assertThat(observed, matcher, message)
2015-04-17 15:52:47.002 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
2015-04-17 15:52:47.002 | raise mismatch_error
2015-04-17 15:52:47.002 | testtools.matchers._impl.MismatchError: 
u'r-b5r49kpt' != u'r-pkxdvrw5'
2015-04-17 15:52:47.002 | 
2015-04-17 15:52:47.002 | 
2015-04-17 15:52:47.002 | Captured pythonlogging:
2015-04-17 15:52:47.002 | ~~~
2015-04-17 15:52:47.002 | 2015-04-17 15:49:33,598 22297 INFO 
[tempest_lib.common.rest_client] Request 
(InstanceRunTest:test_run_idempotent_instances): 200 GET 
http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2
 0.378s
2015-04-17 15:52:47.002 | 2015-04-17 15:49:33,598 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'omitted', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2015-04-17 15:52:47.002 | Body: None
2015-04-17 15:52:47.002 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:49:33 GMT', 'x-openstack-request-id': 
'req-d4e9b050-8987-4880-a8ad-bf7d0ddeff46', 'vary': 'X-Auth-Token', 'server': 
'Apache/2.4.7 (Ubuntu)', 'content-length': '225', 'connection': 'close', 
'content-type': 'application/json', 'status': '200', 'content-location': 
'http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2'}
2015-04-17 15:52:47.003 | Body: {credentials: [{access: 
a1dcddda366840f0b39548da591c3eac, tenant_id: 
d3f919661707425eb0cf76113ffd03c4, secret: 
c33e9727f7f743eca190009b96edea13, user_id: 
c70380a98d8a4842a3a40f8470aef63d, trust_id: null}]}
2015-04-17 15:52:47.003 | 2015-04-17 15:49:39,084 22297 INFO 
[tempest_lib.common.rest_client] Request 
(InstanceRunTest:test_run_idempotent_instances): 200 GET 
http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2
 0.479s
2015-04-17 15:52:47.003 | 2015-04-17 15:49:39,085 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'omitted', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2015-04-17 15:52:47.003 | Body: None
2015-04-17 15:52:47.003 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:49:38 GMT', 'x-openstack-request-id': 
'req-7dec7fd6-19e0-4bd1-9819-374b286e2ba1', 'vary': 'X-Auth-Token', 'server': 
'Apache/2.4.7 (Ubuntu)', 'content-length': '225', 'connection': 'close', 
'content-type': 'application/json', 'status': '200', 'content-location': 
'http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2'}
2015-04-17 15:52:47.003 | Body: {credentials: [{access: 
a1dcddda366840f0b39548da591c3eac, tenant_id: 
d3f919661707425eb0cf76113ffd03c4, secret: 
c33e9727f7f743eca190009b96edea13, user_id: 
c70380a98d8a4842a3a40f8470aef63d, trust_id: null}]}
2015-04-17 15:52:47.003 | 2015-04-17 15:49:45,016 22297 INFO 
[tempest_lib.common.rest_client] Request 
(InstanceRunTest:test_run_idempotent_instances): 200 GET 
http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2
 0.430s
2015-04-17 15:52:47.003 | 2015-04-17 15:49:45,016 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'omitted', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2015-04-17 15:52:47.003 | Body: None
2015-04-17 15:52:47.020 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:49:44 GMT', 'x-openstack-request-id': 
'req-4b53172a-d346-4292-bdf5-fcc256ac2310', 'vary': 'X-Auth-Token', 'server': 
'Apache/2.4.7 (Ubuntu)', 'content-length': '225', 'connection': 'close', 
'content-type': 'application/json', 'status': '200', 'content-location': 
'http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2'}
2015-04-17 

[Yahoo-eng-team] [Bug 1445629] [NEW] Cells: tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_manual_disk_config

2015-04-17 Thread Andrew Laski
Public bug reported:

This tempest test has been seen failing in several ways: on a
'MANUAL' != 'AUTO' assertion, and occasionally with the 500 error
shown below.

2015-04-17 15:52:46.992 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_manual_disk_config[gate,id-bef56b09-2e8c-4883-a370-4950812f430e]
2015-04-17 15:52:46.992 | 
---
2015-04-17 15:52:46.992 | 
2015-04-17 15:52:46.992 | Captured traceback:
2015-04-17 15:52:46.993 | ~~~
2015-04-17 15:52:46.993 | Traceback (most recent call last):
2015-04-17 15:52:46.993 |   File 
tempest/api/compute/servers/test_disk_config.py, line 62, in 
test_rebuild_server_with_manual_disk_config
2015-04-17 15:52:46.993 | disk_config='MANUAL')
2015-04-17 15:52:46.993 |   File 
tempest/services/compute/json/servers_client.py, line 286, in rebuild
2015-04-17 15:52:46.993 | rebuild_schema, **kwargs)
2015-04-17 15:52:46.993 |   File 
tempest/services/compute/json/servers_client.py, line 223, in action
2015-04-17 15:52:46.993 | post_body)
2015-04-17 15:52:46.993 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 252, in post
2015-04-17 15:52:46.993 | return self.request('POST', url, 
extra_headers, headers, body)
2015-04-17 15:52:46.993 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 629, in request
2015-04-17 15:52:46.994 | resp, resp_body)
2015-04-17 15:52:46.994 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 734, in _error_checker
2015-04-17 15:52:46.994 | raise exceptions.ServerFault(message)
2015-04-17 15:52:46.994 | tempest_lib.exceptions.ServerFault: Got server 
fault
2015-04-17 15:52:46.994 | Details: The server has either erred or is 
incapable of performing the requested operation.
2015-04-17 15:52:46.994 | 
2015-04-17 15:52:46.994 | 
2015-04-17 15:52:46.994 | Captured pythonlogging:
2015-04-17 15:52:46.994 | ~~~
2015-04-17 15:52:46.994 | 2015-04-17 15:41:20,649 22297 INFO 
[tempest_lib.common.rest_client] Request 
(ServerDiskConfigTestJSON:test_rebuild_server_with_manual_disk_config): 200 GET 
http://127.0.0.1:8774/v2/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e
 0.093s
2015-04-17 15:52:46.994 | 2015-04-17 15:41:20,649 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'omitted', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2015-04-17 15:52:46.994 | Body: None
2015-04-17 15:52:46.995 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:41:20 GMT', 'content-location': 
'http://127.0.0.1:8774/v2/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e',
 'content-length': '1561', 'connection': 'close', 'content-type': 
'application/json', 'x-compute-request-id': 
'req-d4ffd48d-d352-43a0-93f2-69aaac4b15cc', 'status': '200'}
2015-04-17 15:52:46.995 | Body: {server: {status: ACTIVE, 
updated: 2015-04-17T15:41:19Z, hostId: 
76f5dd41c013d68299b15e024f385c17e44224a89fec40092bc453a1, addresses: 
{private: [{OS-EXT-IPS-MAC:mac_addr: fa:16:3e:fc:d9:ec, version: 4, 
addr: 10.1.0.3, OS-EXT-IPS:type: fixed}]}, links: [{href: 
http://127.0.0.1:8774/v2/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e;,
 rel: self}, {href: 
http://127.0.0.1:8774/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e;,
 rel: bookmark}], key_name: null, image: {id: 
c119e569-3af7-41c9-a5da-6bab97b7c508, links: [{href: 
http://127.0.0.1:8774/122b3703a169473b95c2df822ab3b445/images/c119e569-3af7-41c9-a5da-6bab97b7c508;,
 rel: bookmark}]}, OS-EXT-STS:task_state: null, OS-EXT-STS:vm_state: 
active, OS-SRV-USG:launched_at: 2015-04-17T15:41:17.00, flavor: 
{id: 42, links: [{href: http://127.0.0.1:8774/1
 22b3703a169473b95c2df822ab3b445/flavors/42, rel: bookmark}]}, id: 
64384837-aeed-4716-8b70-e352969dc70e, security_groups: [{name: 
default}], OS-SRV-USG:terminated_at: null, OS-EXT-AZ:availability_zone: 
nova, user_id: 0cbc6e298f524a5d85f68992d71ffa33, name: 
ServerDiskConfigTestJSON-instance-905081337, created: 
2015-04-17T15:41:10Z, tenant_id: 122b3703a169473b95c2df822ab3b445, 
OS-DCF:diskConfig: AUTO, os-extended-volumes:volumes_attached: [], 
accessIPv4: , accessIPv6: , progress: 0, OS-EXT-STS:power_state: 1, 
config_drive: True, metadata: {}}}
2015-04-17 15:52:46.995 | 2015-04-17 15:41:21,776 22297 INFO 
[tempest_lib.common.rest_client] Request 
(ServerDiskConfigTestJSON:test_rebuild_server_with_manual_disk_config): 500 
POST 

[Yahoo-eng-team] [Bug 1445631] [NEW] Cells: tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state

2015-04-17 Thread Andrew Laski
Public bug reported:

Tempest failed due to a race condition that has not yet been tracked down.

2015-04-17 15:52:46.984 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state[gate,id-30449a88-5aff-4f9b-9866-6ee9b17f906d]
2015-04-17 15:52:46.984 | 
-
2015-04-17 15:52:46.984 | 
2015-04-17 15:52:46.984 | Captured traceback:
2015-04-17 15:52:46.984 | ~~~
2015-04-17 15:52:46.984 | Traceback (most recent call last):
2015-04-17 15:52:46.984 |   File 
tempest/api/compute/servers/test_server_actions.py, line 162, in 
test_rebuild_server_in_stop_state
2015-04-17 15:52:46.984 | self.client.stop(self.server_id)
2015-04-17 15:52:46.984 |   File 
tempest/services/compute/json/servers_client.py, line 356, in stop
2015-04-17 15:52:46.985 | return self.action(server_id, 'os-stop', 
None, **kwargs)
2015-04-17 15:52:46.985 |   File 
tempest/services/compute/json/servers_client.py, line 223, in action
2015-04-17 15:52:46.985 | post_body)
2015-04-17 15:52:46.985 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 252, in post
2015-04-17 15:52:46.985 | return self.request('POST', url, 
extra_headers, headers, body)
2015-04-17 15:52:46.985 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 629, in request
2015-04-17 15:52:46.985 | resp, resp_body)
2015-04-17 15:52:46.985 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 685, in _error_checker
2015-04-17 15:52:46.985 | raise exceptions.Conflict(resp_body)
2015-04-17 15:52:46.985 | tempest_lib.exceptions.Conflict: An object with 
that identifier already exists
2015-04-17 15:52:46.985 | Details: {u'message': uCannot 'stop' instance 
79651f8a-15db-4067-b1e7-184c72341618 while it is in task_state rebuilding, 
u'code': 409}
2015-04-17 15:52:46.986 | 
2015-04-17 15:52:46.986 |

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445631

Title:
  Cells:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest failed due to a race condition that has not yet been tracked down.

  2015-04-17 15:52:46.984 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state[gate,id-30449a88-5aff-4f9b-9866-6ee9b17f906d]
  2015-04-17 15:52:46.984 | 
-
  2015-04-17 15:52:46.984 | 
  2015-04-17 15:52:46.984 | Captured traceback:
  2015-04-17 15:52:46.984 | ~~~
  2015-04-17 15:52:46.984 | Traceback (most recent call last):
  2015-04-17 15:52:46.984 |   File 
tempest/api/compute/servers/test_server_actions.py, line 162, in 
test_rebuild_server_in_stop_state
  2015-04-17 15:52:46.984 | self.client.stop(self.server_id)
  2015-04-17 15:52:46.984 |   File 
tempest/services/compute/json/servers_client.py, line 356, in stop
  2015-04-17 15:52:46.985 | return self.action(server_id, 'os-stop', 
None, **kwargs)
  2015-04-17 15:52:46.985 |   File 
tempest/services/compute/json/servers_client.py, line 223, in action
  2015-04-17 15:52:46.985 | post_body)
  2015-04-17 15:52:46.985 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 252, in post
  2015-04-17 15:52:46.985 | return self.request('POST', url, 
extra_headers, headers, body)
  2015-04-17 15:52:46.985 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 629, in request
  2015-04-17 15:52:46.985 | resp, resp_body)
  2015-04-17 15:52:46.985 |   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 685, in _error_checker
  2015-04-17 15:52:46.985 | raise exceptions.Conflict(resp_body)
  2015-04-17 15:52:46.985 | tempest_lib.exceptions.Conflict: An object with 
that identifier already exists
  2015-04-17 15:52:46.985 | Details: {u'message': uCannot 'stop' instance 
79651f8a-15db-4067-b1e7-184c72341618 while it is in task_state rebuilding, 
u'code': 409}
  2015-04-17 15:52:46.986 | 
  2015-04-17 15:52:46.986 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445631/+subscriptions


[Yahoo-eng-team] [Bug 1445143] Re: fail to process user-data with cloud-config-archive

2015-04-17 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1091-0ubuntu1

---
cloud-init (0.7.7~bzr1091-0ubuntu1) vivid; urgency=medium

  * New upstream snapshot.
* fix processing of user-data in cloud-config-archive format (LP: #1445143)
 -- Scott Moser smo...@ubuntu.com   Fri, 17 Apr 2015 12:04:16 -0400

** Changed in: cloud-init (Ubuntu Vivid)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1445143

Title:
  fail to process user-data with cloud-config-archive

Status in Init scripts for use on cloud images:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Vivid:
  Fix Released

Bug description:
  launching an instance with this user-data causes the stack trace further below
  #cloud-config-archive:
   - content: |
       #cloud-config
       chpasswd: {expire: false}
       manage_etc_hosts: true
       password: ubuntu

  
  Apr 16 17:20:29 ubuntu [CLOUDINIT] util.py[DEBUG]: Consuming user data 
failed!#012Traceback (most recent call last):
  File /usr/bin/cloud-init, line 280, in main_init
  init.consume_data(PER_ALWAYS)
File /usr/lib/python3/dist-packages/cloudinit/stages.py, line 496, in 
consume_data
  self._consume_userdata(frequency)
File /usr/lib/python3/dist-packages/cloudinit/stages.py, line 566, in 
_consume_userdata
  self._do_handlers(user_data_msg, c_handlers_list, frequency)
File /usr/lib/python3/dist-packages/cloudinit/stages.py, line 489, in 
_do_handlers
  walk_handlers(excluded)
File /usr/lib/python3/dist-packages/cloudinit/stages.py, line 472, in 
walk_handlers
  handlers.walk(data_msg, handlers.walker_callback, data=part_data)
File /usr/lib/python3/dist-packages/cloudinit/handlers/__init__.py, line 
248, in walk
  payload = util.fully_decoded_payload(part)
File /usr/lib/python3/dist-packages/cloudinit/util.py, line 126, in 
fully_decoded_payload
  return cte_payload.decode(charset, errors='surrogateescape')
  TypeError: decode() argument 1 must be str, not Charset
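
  A minimal sketch of one way to avoid this TypeError (an illustration,
  not necessarily the exact upstream patch): email.charset.Charset
  objects stringify to their charset name, and bytes.decode() only
  accepts that str form.

    def fully_decoded_payload(part):
        cte_payload = part.get_payload(decode=True)
        charset = part.get_charset()  # None or an email.charset.Charset
        name = str(charset) if charset else 'utf-8'
        return cte_payload.decode(name, errors='surrogateescape')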

  ProblemType: Bug
  DistroRelease: Ubuntu 15.04
  Package: cloud-init 0.7.7~bzr1088-0ubuntu3 [modified: 
usr/lib/python3/dist-packages/cloudinit/util.py]
  ProcVersionSignature: User Name 3.19.0-14.14-generic 3.19.3
  Uname: Linux 3.19.0-14-generic x86_64
  ApportVersion: 2.17.1-0ubuntu1
  Architecture: amd64
  Date: Thu Apr 16 17:48:07 2015
  Ec2AMI: ami-02ed
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=set
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1445143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445637] [NEW] Instance resource quota not observed for non-ephemeral storage

2015-04-17 Thread Tony Walker
Public bug reported:

I'm using a nova built from stable/kilo and trying to implement instance
IO resource quotas for disk as per
https://wiki.openstack.org/wiki/InstanceResourceQuota#IO.

While this works when building an instance from ephemeral storage, it
does not when booting from a bootable cinder volume. I realize I can
implement this using cinder quota but I want to apply the same settings
in nova regardless of the underlying disk.

Steps to reproduce:

nova flavor-create iolimited 1 8192 64 4
nova flavor-key 1 set quota:disk_read_iops_sec=1
Boot an instance using the above flavor
Guest XML is missing iotune entries

Expected result:
<snip>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <read_iops_sec>1</read_iops_sec>
  </iotune>
</snip>

This relates somewhat to https://bugs.launchpad.net/nova/+bug/1405367
but that case is purely hit when booting from RBD-backed ephemeral
storage.

Essentially, for non-ephemeral disks, a call is made to
_get_volume_config() which creates a generic LibvirtConfigGuestDisk
object but no further processing is done to add extra-specs (if any).

I've essentially copied the disk_qos() method from the associated code
review (https://review.openstack.org/#/c/143939/) to implement my own
patch (attached).
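
As a rough illustration of the shape of such a change (helper and
attribute names here are assumptions, not nova's exact API), the
extra-spec processing amounts to:

def apply_disk_io_quota(conf, extra_specs):
    # Sketch: copy quota:disk_* flavor extra specs onto the guest disk
    # config object returned by _get_volume_config(), so volume-backed
    # disks get the same <iotune> settings as ephemeral ones.
    for key, value in extra_specs.items():
        scope = key.split(':', 1)
        if len(scope) == 2 and scope[0] == 'quota' and \
                scope[1].startswith('disk_'):
            setattr(conf, scope[1], int(value))

With something like that applied to the volume path, a flavor extra
spec such as quota:disk_read_iops_sec=1 would land on the disk config
and be serialized into the iotune element shown above.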

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: driver.patch
   
https://bugs.launchpad.net/bugs/1445637/+attachment/4378673/+files/driver.patch

** Patch removed: driver.patch
   
https://bugs.launchpad.net/nova/+bug/1445637/+attachment/4378673/+files/driver.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445637

Title:
  Instance resource quota not observed for non-ephemeral storage

Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm using a nova built from stable/kilo and trying to implement
  instance IO resource quotas for disk as per
  https://wiki.openstack.org/wiki/InstanceResourceQuota#IO.

  While this works when building an instance from ephemeral storage, it
  does not when booting from a bootable cinder volume. I realize I can
  implement this using cinder quota but I want to apply the same
  settings in nova regardless of the underlying disk.

  Steps to reproduce:

  nova flavor-create iolimited 1 8192 64 4
  nova flavor-key 1 set quota:disk_read_iops_sec=1
  Boot an instance using the above flavor
  Guest XML is missing iotune entries

  Expected result:
  <snip>
    <target dev='vda' bus='virtio'/>
    <iotune>
      <read_iops_sec>1</read_iops_sec>
    </iotune>
  </snip>

  This relates somewhat to https://bugs.launchpad.net/nova/+bug/1405367
  but that case is purely hit when booting from RBD-backed ephemeral
  storage.

  Essentially, for non-ephemeral disks, a call is made to
  _get_volume_config() which creates a generic LibvirtConfigGuestDisk
  object but no further processing is done to add extra-specs (if any).

  I've essentially copied the disk_qos() method from the associated code
  review (https://review.openstack.org/#/c/143939/) to implement my own
  patch (attached).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346778] Re: Neutron does not work by default without a keystone admin user

2015-04-17 Thread Kevin Benton
I had an approach to have a special username matching keyword for
policy.json to address this. It was wildly unpopular.

The general consensus was to add a role in the deployment and match
based on that.

** Changed in: neutron
 Assignee: Kevin Benton (kevinbenton) => (unassigned)

** Changed in: neutron
   Status: In Progress => Opinion

** Changed in: neutron
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346778

Title:
  Neutron does not work by default without a keystone admin user

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The default neutron policy.json 'context_is_admin' only matches on
  'role:admin' and the account that neutron is configured with must
  match 'context_is_admin' for neutron to function correctly. This means
  that without modifying policy.json, a deployer cannot use a non-admin
  account for neutron.

  The policy.json keywords have no way to match the username of the
  neutron keystone credentials. This means that policy.json has to be
  modified for every deployment that doesn't use an admin user to match
  the keystone user neutron is configured with.

  This seems like an unnecessary burden to leave to deployers to achieve
  a secure deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445675] [NEW] missing index on virtual_interfaces can cause long queries that can cause timeouts in launching instances

2015-04-17 Thread Mike Bayer
Public bug reported:

In a load test where a nova environment w/ networking enabled was set up
to have ~250K instances, attempting to launch 50 instances would cause
many to time out with the error "Timeout while waiting on RPC response
- topic: network, RPC method: allocate_for_instance". The tester
isolated the latency here to queries against the virtual_interfaces
table, which in this test is executed some 500 times, spending ~0.5
seconds per query for a total of ~200 seconds. An example query looks
like:

SELECT virtual_interfaces.created_at , virtual_interfaces.updated_at , 
virtual_interfaces.deleted_at , virtual_interfaces.deleted , 
virtual_interfaces.id , virtual_interfaces.address , 
virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'9774e729-7695-4e2b-a9b2-a104a4b020d0'
LIMIT 1;

Query profiling directly against this table and query proceeded as follows:

I scripted up direct DB access to get 250K rows in a blank database:

MariaDB [nova]> select count(*) from virtual_interfaces;
+----------+
| count(*) |
+----------+
|   250000 |
+----------+
1 row in set (0.09 sec)

Emitting the query for a row that is found returns in 0.03 sec on this
particular system:

MariaDB [nova]> SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c4'  LIMIT 1;
+-+++-+---+---++--+--+
| created_at  | updated_at | deleted_at | deleted | id| address 
  | network_id | instance_uuid| uuid
 |
+-+++-+---+---++--+--+
| 2014-08-12 22:22:14 | NULL   | NULL   |   0 | 58393 | 
address_58393 | 22 | 41f1b859-8c5d-4c27-a52e-3e97652dfe7a | 
0a269012-cbc7-4093-9602-35f003a766c4 |
+-+++-+---+---++--+--+
1 row in set (0.03 sec)


We can see that for a row not found, where it has to scan the whole
table, it takes several times longer (0.14 sec vs 0.03 sec):

MariaDB [nova]> SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c5'  LIMIT 1;
Empty set (0.14 sec)


There's nothing mysterious going on here as an EXPLAIN shows plainly that we 
are doing a full table scan:

MariaDB [nova]> EXPLAIN SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c4'  LIMIT 1;
+--+-++--+---+--+-+--++-+
| id   | select_type | table  | type | possible_keys | key  | 
key_len | ref  | rows   | Extra   |
+--+-++--+---+--+-+--++-+
|1 | SIMPLE  | virtual_interfaces | ALL  | NULL  | NULL | NULL  
  | NULL | 250170 | Using where |
+--+-++--+---+--+-+--++-+
1 row in set (0.00 sec)


After adding an index on the uuid field via "create index vuidx on
virtual_interfaces(uuid)", the EXPLAIN now shows the index being used:

MariaDB [nova]> EXPLAIN SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c4'  LIMIT 1;
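
For reference, a minimal sketch of an Alembic migration adding the same
index (revision identifiers below are placeholders, not nova's actual
migration):

from alembic import op

revision = 'add_vif_uuid_idx'   # placeholder
down_revision = None            # placeholder

def upgrade():
    op.create_index('virtual_interfaces_uuid_idx',
                    'virtual_interfaces', ['uuid'], unique=False)

def downgrade():
    op.drop_index('virtual_interfaces_uuid_idx',
                  table_name='virtual_interfaces')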

[Yahoo-eng-team] [Bug 1445674] [NEW] Fix kwargs['migration'] KeyError in @errors_out_migration

2015-04-17 Thread Christine Wang
Public bug reported:

This is similar to bug #1423952.
We need to handle the fact that 'migration' can arrive in args or kwargs.
It should follow the same fix as bug 1423952.

This fixes the KeyError in the decorator by normalizing the args/kwargs
list into a single dict that we can pull the migration from.
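
A minimal sketch of that normalization (plain Python, assuming the
decorated methods take a 'migration' parameter; not the actual nova
patch):

import functools
import inspect

def errors_out_migration(function):
    @functools.wraps(function)
    def decorated(self, context, *args, **kwargs):
        # Fold positional and keyword arguments into a single dict
        # keyed by parameter name, so 'migration' is found either way
        # and kwargs['migration'] can no longer raise KeyError.
        call_args = inspect.getcallargs(function, self, context,
                                        *args, **kwargs)
        migration = call_args['migration']
        try:
            return function(self, context, *args, **kwargs)
        except Exception:
            migration.status = 'error'
            migration.save()  # assumed Migration object API, per the trace
            raise
    return decorated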

2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 88, in wrapped
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher payload)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 71, in wrapped
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 328, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 299, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 378, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 276, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher migration 
= kwargs['migration']
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'migration'
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher

** Affects: nova
 Importance: Undecided
 Assignee: Christine Wang (ijuwang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Christine Wang (ijuwang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445674

Title:
  Fix kwargs['migration'] KeyError in @errors_out_migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is similar to bug #1423952.
  We need to handle the fact that 'migration' can arrive in args or kwargs.
  It should follow the same fix as bug 1423952.

  This fixes the KeyError in the decorator by normalizing the args/kwargs
  list into a single dict that we can pull the migration from.

  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 88, in wrapped
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher payload)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 71, in wrapped
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 328, in 
decorated_function
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 299, in 
decorated_function
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-17 

[Yahoo-eng-team] [Bug 1442524] Re: Operation not allowed errors when execute db migration when using DB2 for lbaas

2015-04-17 Thread Matt Riedemann
** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442524

Title:
  Operation not allowed errors when execute db migration  when using DB2
  for lbaas

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When using DB2 as the database for neutron, we execute:
   neutron-db-manage --service lbaas --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini upgrade head

  and hit the following problem:
  INFO  [alembic.migration] Context impl IBMDBImpl.
  INFO  [alembic.migration] Will assume transactional DDL.
  INFO  [alembic.migration] Context impl IBMDBImpl.
  INFO  [alembic.migration] Will assume transactional DDL.
  INFO  [alembic.migration] Running upgrade  - start_neutron_lbaas, start 
neutron-lbaas chain
  INFO  [alembic.migration] Running upgrade start_neutron_lbaas - lbaasv2, 
lbaas version 2 api
  INFO  [alembic.migration] Running upgrade lbaasv2 - 4deef6d81931, add 
provisioning and operating statuses
  INFO  [alembic.migration] Running upgrade 4deef6d81931 - 4b6d8d5310b8, 
add_index_tenant_id
  Traceback (most recent call last):
File /usr/bin/neutron-db-manage, line 10, in module
  sys.exit(main())
File /usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 
238, in main
  CONF.command.func(config, CONF.command.name)
File /usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 
106, in do_upgrade
  do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
File /usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 
72, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File /usr/lib/python2.7/site-packages/alembic/command.py, line 165, in 
upgrade
  script.run_env()
File /usr/lib/python2.7/site-packages/alembic/script.py, line 382, in 
run_env
  util.load_python_file(self.dir, 'env.py')
File /usr/lib/python2.7/site-packages/alembic/util.py, line 241, in 
load_python_file
  module = load_module_py(module_id, path)
File /usr/lib/python2.7/site-packages/alembic/compat.py, line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
/usr/lib/python2.7/site-packages/neutron_lbaas/db/migration/alembic_migrations/env.py,
 line 85, in module
  run_migrations_online()
File 
/usr/lib/python2.7/site-packages/neutron_lbaas/db/migration/alembic_migrations/env.py,
 line 76, in run_migrations_online
  context.run_migrations()
File string, line 7, in run_migrations
File /usr/lib/python2.7/site-packages/alembic/environment.py, line 742, 
in run_migrations
  self.get_context().run_migrations(**kw)
File /usr/lib/python2.7/site-packages/alembic/migration.py, line 305, in 
run_migrations
  step.migration_fn(**kw)
File 
/usr/lib/python2.7/site-packages/neutron_lbaas/db/migration/alembic_migrations/versions/4b6d8d5310b8_add_index_tenant_id.py,
 line 38, in upgrade
  table, ['tenant_id'], unique=False)
File string, line 7, in create_index
File /usr/lib/python2.7/site-packages/alembic/operations.py, line 1019, 
in create_index
  unique=unique, quote=quote, **kw)
File /usr/lib/python2.7/site-packages/alembic/ddl/impl.py, line 194, in 
create_index
  self._exec(schema.CreateIndex(index))
File /usr/lib/python2.7/site-packages/alembic/ddl/impl.py, line 106, in 
_exec
  return conn.execute(construct, *multiparams, **params)
File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 
729, in execute
  return meth(self, multiparams, params)
File /usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py, line 69, 
in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 
783, in _execute_ddl
  compiled
File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 
958, in _execute_context
  context)
File 
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py, 
line 261, in _handle_dbapi_exception
  e, statement, parameters, cursor, context)
File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1155, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File /usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py, line 
199, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 
951, in _execute_context
  context)
File /usr/lib/python2.7/site-packages/ibm_db_sa/ibm_db.py, line 106, in 
do_execute
  cursor.execute(statement, parameters)
File /usr/lib64/python2.7/site-packages/ibm_db_dbi.py, line 1335, in 
execute
  self._execute_helper(parameters)
File 

[Yahoo-eng-team] [Bug 1445475] Re: neutron service user should not require admin

2015-04-17 Thread Jeremy Stanley
You've switched this bug report to indicate an exploitable security
vulnerability. Can you describe in greater detail the exploitation
scenario you have in mind? What sort of patch to neutron do you expect
to correct this defect? Does this vulnerability appear in previous
releases of neutron as well, or does it only affect the current master
and stable/kilo branches of neutron?

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445475

Title:
  neutron service user should not require admin

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  
  The typical config has nova using the 'neutron' user in the 'service' project 
to do operations against Neutron. The 'neutron' user should not require the 
'admin' role on the 'service' project to do all the operations it needs to do 
against Neutron. Neutron's default policy.json should allow the 'neutron' user 
(i.e., users with the 'service' role) to do all the operations it needs to do 
against Neutron, rather than requiring 'admin'.

  Nova is allocating networks and creating ports, so these operations
  need to allow the 'service' role to perform these operations, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445690] [NEW] legacy admin rule does not work and is not needed anymore

2015-04-17 Thread Salvatore Orlando
Public bug reported:

in neutron/policy.py:

def check_is_admin(context):
    """Verify context has admin rights according to policy settings."""
    init()
    # the target is user-self
    credentials = context.to_dict()
    target = credentials
    # Backward compatibility: if ADMIN_CTX_POLICY is not
    # found, default to validating role:admin
    admin_policy = (ADMIN_CTX_POLICY if ADMIN_CTX_POLICY in _ENFORCER.rules
                    else 'role:admin')
    return _ENFORCER.enforce(admin_policy, target, credentials)

If ADMIN_CTX_POLICY is not specified, the enforcer checks role:admin,
which, since it does not exist among the rules loaded from file,
defaults to TrueCheck. This is wrong, and to an extent even dangerous,
because if ADMIN_CTX_POLICY is missing then every context would be
regarded as an admin context. Thankfully this was only for backward
compatibility and is not necessary anymore.

A similar mistake is done for ADVSVC_CTX_POLICY. This is even more
puzzling because there was no backward compatibility requirement there.

Obviously the unit tests that were supposed to ensure the correct
behaviour of the backward compatibility tweak are validating something
completely different.
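
A minimal sketch of a safer check (plain Python, not neutron's actual
fix): fall back to an explicit role test instead of handing the
enforcer an unregistered rule name, which it silently turns into
TrueCheck.

def check_is_admin(context, enforcer):
    credentials = context.to_dict()
    if ADMIN_CTX_POLICY in enforcer.rules:
        return enforcer.enforce(ADMIN_CTX_POLICY, credentials, credentials)
    # Explicit fallback: never pass a bare rule name that is not
    # registered, since an unknown rule defaults to allow-all.
    return 'admin' in credentials.get('roles', [])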

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445690

Title:
  legacy admin rule does not work and is not needed anymore

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  in neutron/policy.py:

  def check_is_admin(context):
      """Verify context has admin rights according to policy settings."""
      init()
      # the target is user-self
      credentials = context.to_dict()
      target = credentials
      # Backward compatibility: if ADMIN_CTX_POLICY is not
      # found, default to validating role:admin
      admin_policy = (ADMIN_CTX_POLICY if ADMIN_CTX_POLICY in _ENFORCER.rules
                      else 'role:admin')
      return _ENFORCER.enforce(admin_policy, target, credentials)

  If ADMIN_CTX_POLICY is not specified, the enforcer checks role:admin,
  which, since it does not exist among the rules loaded from file,
  defaults to TrueCheck. This is wrong, and to an extent even dangerous,
  because if ADMIN_CTX_POLICY is missing then every context would be
  regarded as an admin context. Thankfully this was only for backward
  compatibility and is not necessary anymore.

  A similar mistake is done for ADVSVC_CTX_POLICY. This is even more
  puzzling because there was no backward compatibility requirement
  there.

  Obviously the unit tests that were supposed to ensure the correct
  behaviour of the backward compatibility tweak are validating
  something completely different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445698] [NEW] _update_usage_from_migrations does not handling InstanceNotFound and cause compute service restart to fail

2015-04-17 Thread Christine Wang
Public bug reported:

Due to bug #1445674, the Migration object was not set to 'error' if
there was a resize failure.

Later on, if we delete the instance, the Migration object will continue
to exist.

If we were to restart the compute service, it will fail to start, since
_update_usage_from_migrations does not handle the resulting
InstanceNotFound.

Version:
commit 095e9398ecf69ffdaeb929287d5f5f9a38257361
Merge: 6029860 0e28a5f
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Apr 17 19:47:18 2015 +

Merge fixed tests in test_iptables_network to work with random
PYTHONHASHSEED

commit 6029860ffa0f2500505d1894f5bbb9ca717a8232
Merge: 760fba5 5bfe303
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Apr 17 19:46:50 2015 +

Merge refactored tests in test_objects to pass with random
PYTHONHASHSEED

commit 760fba535b2eb17243a39af9fea70e8dbcdbe713
Merge: 1248353 78883fa
[root@ip9-114-195-109 nova]# git log -1
commit 20cb0745550fc6bbd9e789caa7fdbf9669b2d24d
Merge: 095e939 56f355e
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Apr 17 1


2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 445, 
in _update_available_resource
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
self._update_usage_from_migrations(context, resources, migrations)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 708, 
in _update_usage_from_migrations
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
instance = migration.instance
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/objects/migration.py, line 80, in 
instance
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
return objects.Instance.get_by_uuid(self._context, self.instance_uuid)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/objects/base.py, line 163, in wrapper
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
result = fn(cls, context, *args, **kwargs)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/objects/instance.py, line 564, in 
get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
use_slave=use_slave)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/api.py, line 651, in 
instance_get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
columns_to_join, use_slave=use_slave)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 233, in 
wrapper
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
return f(*args, **kwargs)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 1744, in 
instance_get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
columns_to_join=columns_to_join, use_slave=use_slave)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 1756, in 
_instance_get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup raise 
exception.InstanceNotFound(instance_id=uuid)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
InstanceNotFound: Instance b8ddd534-f114-4ea6-9833-eeb64a8bfc49 could not be 
found.


def _update_usage_from_migrations(self, context, resources, migrations):

    self.tracked_migrations.clear()

    filtered = {}

    # do some defensive filtering against bad migrations records in the
    # database:
    for migration in migrations:
        instance = migration.instance   # <--- raises InstanceNotFound here

        if not instance:
            # migration referencing deleted instance
            continue

        uuid = instance.uuid
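
A minimal sketch (assumed helper name, not nova's actual fix) of the
defensive handling the loop appears to need: treat a vanished instance
like a deleted one and skip the migration record, rather than letting
InstanceNotFound abort the service start.

from nova import exception

def _iter_migrations_with_instances(migrations):
    # Yield (migration, instance) pairs, skipping stale migration rows
    # whose instance has been deleted and can no longer be looked up.
    for migration in migrations:
        try:
            instance = migration.instance
        except exception.InstanceNotFound:
            continue
        if instance is not None:
            yield migration, instance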

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445698

Title:
  _update_usage_from_migrations does not handling InstanceNotFound and
  cause compute service restart to fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  Due to bug #1445674, the Migration object was not set to 'error' if
  there was a resize failure.

  Later on, if we delete the instance, the Migration object will
  continue to exist.

  If we were to restart compute service, it will fail to start since