[Yahoo-eng-team] [Bug 1483091] [NEW] Same name SecurityGroup could not work

2015-08-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In Icehouse, if two tenants each create a security group with the same name,
they cannot create a VM in the dashboard using that security group; the
request fails with the error: Multiple security_group matches found for name
'test', use an ID to be more specific. (HTTP 409)
(Request-ID: req-ece4dd00-d1a0-4c38-9587-394fa29610da).
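
As the error itself suggests, the workaround is to reference the group by ID
rather than by name. A minimal sketch using python-neutronclient (credential
values are placeholders) that looks up the caller's own group and prints the
ID to pass to nova:

    from neutronclient.v2_0 import client

    # Placeholder credentials -- adjust for the deployment.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # For a non-admin user this listing is scoped to the caller's tenant,
    # so only that tenant's group named 'test' is returned.
    groups = neutron.list_security_groups(name='test')['security_groups']
    sg_id = groups[0]['id']

    # Passing the ID instead of the name avoids the 409 "Multiple
    # security_group matches found" error:
    #   nova boot ... --security-groups <sg_id>
    print(sg_id)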

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Same name SecurityGroup could not work
https://bugs.launchpad.net/bugs/1483091
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428305] Re: Floating IP namespace not created when DVR enabled and with IPv6 enabled in devstack

2015-08-11 Thread Carl Baldwin
** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1428305

Title:
  Floating IP namespace not created when DVR enabled and with IPv6
  enabled in devstack

Status in neutron:
  Invalid

Bug description:
  I just created a new devstack based on the latest Neutron code and the
  l3-agent is failing to create the Floating IP namespace, leading to
  floating IPs not working.  This only happens when DVR is enabled, for
  example, I have this in my local.conf:

  Q_DVR_MODE=dvr_snat

  When I allocate a floating IP and attempt to associate it with a
  running instance I see this in the l3-agent log:

  2015-03-04 20:03:46.082 28696 DEBUG neutron.agent.l3.agent [-] FloatingIP 
agent gateway port received from the plugin: {u'status': u'DOWN', 
u'binding:host_id': u'haleyb-devstack', u'name': u'', u'allowed_address_pairs': 
[], u'admin_state_up': True, u'network_id': 
u'bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5', u'tenant_id': u'', u'extra_dhcp_opts': 
[], u'binding:vif_details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
u'binding:vif_type': u'ovs', u'device_owner': 
u'network:floatingip_agent_gateway', u'mac_address': u'fa:16:3e:94:74:f0', 
u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': 
[{u'subnet_id': u'99260be2-91ef-423a-8dd8-4ecf15ffb14c', u'ip_address': 
u'172.24.4.4'}, {u'subnet_id': u'97a9534f-eec4-4c06-bdf5-61bab04455b7', 
u'ip_address': u'fe80:cafe:cafe::3'}], u'id': 
u'47f8a65f-6008-4a97-93a3-85f68ea4ff00', u'security_groups': [], u'device_id': 
u'2674f378-26c0-4b29-b920-5637640acffc'} create_dvr_fip_interfaces 
/opt/stack/neutron/neutron/agent/l3/agent.py:627
  2015-03-04 20:03:46.082 28696 ERROR neutron.agent.l3.agent [-] Missing 
subnet/agent_gateway_port

  That error means that no port for the external gateway will be created
  along with the namespace where it lives.

  Subsequent errors confirm that:

  2015-03-04 20:03:47.494 28696 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5', 'ip', '-o', 'link', 'show', 
'fpr-d73fd397-4'] create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:51
  2015-03-04 20:03:47.668 28696 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5', 'ip', '-o', 'link', 'show', 
'fpr-d73fd397-4']
  Exit code: 1
  Stdout:
  Stderr: Cannot open network namespace 
fip-bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5: No such file or directory

  $ ip netns
  qdhcp-91416e8f-856e-42ae-a9fd-9abe25d8b47a
  snat-d73fd397-47f8-4272-b55a-b33b2307eaad
  qrouter-d73fd397-47f8-4272-b55a-b33b2307eaad

  This is only happening with Kilo, but I don't have an exact date as to
  when it started, just that I noticed it starting yesterday (March
  3rd).
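
  Note that the agent gateway port in the log above carries two fixed IPs (an
  IPv4 and an IPv6 address), so any lookup that assumes a single external
  subnet per port can come up empty. A rough illustration (not the actual
  l3-agent code) of selecting the IPv4 entry:

    import netaddr

    def pick_v4_fixed_ip(agent_gateway_port):
        # Illustrative only: with IPv6 enabled the port has two fixed_ips,
        # and failing to find the IPv4 one leads to the
        # "Missing subnet/agent_gateway_port" error and no fip- namespace.
        for fixed_ip in agent_gateway_port.get('fixed_ips', []):
            if netaddr.IPAddress(fixed_ip['ip_address']).version == 4:
                return fixed_ip
        return None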

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483910] [NEW] Horizon Domain Picker Should be to the right

2015-08-11 Thread Diana Whitten
Public bug reported:

Horizon Domain Picker Should be to the right

An unintended result of a recent refactor left the domain picker css
without any love.

** Affects: horizon
 Importance: Undecided
 Assignee: Diana Whitten (hurgleburgler)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483910

Title:
  Horizon Domain Picker Should be to the right

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon Domain Picker Should be to the right

  An unintended result of a recent refactor left the domain picker css
  without any love.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483917] [NEW] 'make test' command fails because 'test' isn't defined in Makefile

2015-08-11 Thread Imran Hayder
Public bug reported:

The Makefile suggests using the 'make test' command to run the tests, but the
command fails; looking at the source code, there is no 'test' target defined:
https://github.com/openstack/horizon/blob/master/Makefile

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483917

Title:
  'make test' command fails because 'test' isn't defined in Makefile

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Makefile suggests using the 'make test' command to run the tests, but
  the command fails; looking at the source code, there is no 'test' target
  defined: https://github.com/openstack/horizon/blob/master/Makefile

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398357] Re: Add support for EC2 API : ec2-reset-snapshot-attribute

2015-08-11 Thread Andrey Pavlov
Cinder doesn't allow modifying permissions for snapshots/volumes. These
objects can be seen only by the owner or an admin.

** Also affects: ec2-api
   Importance: Undecided
   Status: New

** Changed in: ec2-api
   Status: New => Opinion

** Changed in: ec2-api
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398357

Title:
  Add support for EC2 API : ec2-reset-snapshot-attribute

Status in ec2-api:
  Opinion
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Provide the implementation similar to the Amazon EC2 API

  ec2-reset-snapshot-attribute

  for making volume-based snapshots accessible only to the
  respective account holder.

  http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference
  /ApiReference-cmd-ResetSnapshotAttribute.html
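
  For reference, the equivalent call through boto's EC2 client looks roughly
  like this (endpoint and credentials are placeholders):

    from boto.ec2.connection import EC2Connection

    # Placeholder credentials for an EC2-compatible endpoint.
    conn = EC2Connection(aws_access_key_id='ACCESS',
                         aws_secret_access_key='SECRET')

    # Reset createVolumePermission so the snapshot is visible only to the
    # owning account again.
    conn.reset_snapshot_attribute('snap-12345678',
                                  attribute='createVolumePermission')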

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1398357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1318588] Re: Volume create time format is not as per the AWS create time

2015-08-11 Thread Andrey Pavlov
** Also affects: ec2-api
   Importance: Undecided
   Status: New

** Changed in: ec2-api
   Status: New => Confirmed

** Changed in: ec2-api
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1318588

Title:
  Volume create time format is not as per the AWS create time

Status in ec2-api:
  Confirmed
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  With OpenStack, the volume creation time is reported as
  '<createTime>2014-05-12T10:08:22.00</createTime>',
  but with AWS the create volume time shows as
  '<createTime>2014-05-12T10:06:41.885Z</createTime>'.

  For microseconds, 3 digits are extra and the time zone is missing. This
  needs to be fixed to sync up with the AWS create volume time.
  It doesn't look like a big issue, but it nevertheless shouldn't happen.
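
  A minimal sketch of producing the AWS-style value (millisecond precision
  plus an explicit UTC 'Z') from a datetime:

    from datetime import datetime

    def aws_create_time(dt):
        # AWS reports e.g. 2014-05-12T10:06:41.885Z: three fractional
        # digits and a trailing 'Z' for UTC.
        return '%s.%03dZ' % (dt.strftime('%Y-%m-%dT%H:%M:%S'),
                             dt.microsecond // 1000)

    print(aws_create_time(datetime(2014, 5, 12, 10, 6, 41, 885000)))
    # -> 2014-05-12T10:06:41.885Z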

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1318588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483535] [NEW] Cannot create image: NotAuthenticated

2015-08-11 Thread Eduard Biceri-Matei
Public bug reported:

Devstack Juno (2014.2.4) on Ubuntu 14.04.
Local.conf:

[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
LOGDIR=/opt/stack/logs
HOST_IP=192.168.10.214
FLAT_INTERFACE=eth0
FIXED_RANGE=172.22.10.0/24
FIXED_NETWORK_SIZE=255
FLOATING_RANGE=192.168.10.0/24
MULTI_HOST=1
ADMIN_PASSWORD=PASSW
MYSQL_PASSWORD=PASSW
RABBIT_PASSWORD=PASSW
SERVICE_PASSWORD=PASSW
SERVICE_TOKEN=PASSW
KEYSTONE_BRANCH=stable/juno
NOVA_BRANCH=stable/juno
NEUTRON_BRANCH=stable/juno
SWIFT_BRANCH=stable/juno
GLANCE_BRANCH=stable/juno
CINDER_BRANCH=stable/juno
HEAT_BRANCH=stable/juno
TROVE_BRANCH=stable/juno
HORIZON_BRANCH=stable/juno

Exported vars:
export OS_USERNAME=admin
export OS_PASSWORD=PASSW # password set on first node:
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.10.214:35357/v2.0

Glance uses local storage (directory): /opt/stack/data/glance/images

Conf:
- glance-api:
[DEFAULT]
workers = 2
filesystem_store_datadir = /opt/stack/data/glance/images/
rabbit_hosts = 192.168.10.214
rpc_backend = glance.openstack.common.rpc.impl_kombu
notification_driver = messaging
use_syslog = False
sql_connection = mysql://root:rooter@127.0.0.1/glance?charset=utf8
debug = True
# Show more verbose log output (sets INFO log level output)
#verbose = False

# Show debugging output in logs (sets DEBUG log level output)
#debug = False

# Which backend scheme should Glance use by default is not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
default_store = file

# Maximum image size (in bytes) that may be uploaded through the
# Glance API server. Defaults to 1 TB.
# WARNING: this value should only be increased after careful consideration
# and must be set to a value under 8 EB (9223372036854775808).
#image_size_cap = 1099511627776

# Address to bind the API server
bind_host = 0.0.0.0

# Port the bind the API server to
bind_port = 9292

# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
#log_file = /var/log/glance/api.log

# Backlog requests when creating socket
backlog = 4096

# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600

# API to use for accessing data. Default value points to sqlalchemy
# package, it is also possible to use: glance.db.registry.api
# data_api = glance.db.sqlalchemy.api

# The number of child process workers that will be
# created to service API requests. The default will be
# equal to the number of CPUs available. (integer value)
#workers = 4

# Maximum line size of message headers to be accepted.
# max_header_line may need to be increased when using large tokens
# (typically those generated by the Keystone v3 API with big service
# catalogs)
# max_header_line = 16384

# Role used to identify an authenticated user as administrator
#admin_role = admin

# Allow unauthenticated users to access the API with read-only
# privileges. This only applies when using ContextMiddleware.
#allow_anonymous_access = False

# Allow access to version 1 of glance api
#enable_v1_api = True

# Allow access to version 2 of glance api
#enable_v2_api = True

# Return the URL that references where the data is stored on
# the backend storage system.  For example, if using the
# file system store a URL of 'file:///path/to/image' will
# be returned to the user in the 'direct_url' meta-data field.
# The default value is false.
#show_image_direct_url = False

# Send headers containing user and tenant information when making requests to
# the v1 glance registry. This allows the registry to function as if a user is
# authenticated without the need to authenticate a user itself using the
# auth_token middleware.
# The default value is false.
#send_identity_headers = False

# Supported values for the 'container_format' image attribute
#container_formats=ami,ari,aki,bare,ovf,ova

# Supported values for the 'disk_format' image attribute
#disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso

# Directory to use for lock files. Default to a temp directory
# (string value). This setting needs to be the same for both
# glance-scrubber and glance-api.
#lock_path=None

# Property Protections config file
# This file contains the rules for property protections and the roles/policies
# associated with it.
# If this config value is not specified, by default, property protections
# won't be enforced.
# If a value is specified and the file is not found, then the glance-api
# service will not start.
#property_protection_file =

# Specify whether 'roles' or 'policies' are used in the
# property_protection_file.
# The default value for property_protection_rule_format is 'roles'.
#property_protection_rule_format = roles

# This value sets what strategy will be used to determine the image location
# order. Currently two strategies are packaged with 

[Yahoo-eng-team] [Bug 1483553] [NEW] new launch instance doesn't handle quota

2015-08-11 Thread David Medberry
Public bug reported:

new launch instance doesn't handle unlimited quota and prevents launch

A -1 setting in quota is the shortcut for unlimited quota.
The old launch instance button properly handles unlimited quota (whether it be 
for instances, networks, memory, what have you.)

The new Horizon launch instance button brings up a ! error and won't
let you launch anything as it interprets this as quota exceeded.
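
A minimal sketch of the check the new form needs, assuming -1 is the
unlimited marker:

    def remaining_quota(limit, used):
        # -1 is the conventional "unlimited" value; treat it as infinite
        # instead of computing a negative remainder that looks exceeded.
        if limit == -1:
            return float('inf')
        return limit - used

    assert remaining_quota(-1, 10) == float('inf')   # unlimited: launch allowed
    assert remaining_quota(10, 10) == 0              # genuinely exhausted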

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483553

Title:
  new launch instance doesn't handle quota

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  new launch instance doesn't handle unlimited quota and prevents launch

  A -1 setting in quota is the shortcut for unlimited quota.
  The old launch instance button properly handles unlimited quota (whether it 
be for instances, networks, memory, what have you.)

  The new Horizon launch instance button brings up a ! error and won't
  let you launch anything as it interprets this as quota exceeded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483595] [NEW] Navigating Horizon UI, hieroglyphs appear near buttons

2015-08-11 Thread Sergii
Public bug reported:

Preconditions:
ISO:
{build_id: 2015-08-10_03-11-20, build_number: 154, release_versions: 
{2015.1.0-7.0: {VERSION: {build_id: 2015-08-10_03-11-20, 
build_number: 154, api: 1.0, fuel-library_sha: 
1851b4dff75170dbd63f6e15cde734e348e86d27, nailgun_sha: 
58c080206cc17137b124744a40218c89beb6bb28, feature_groups: [mirantis], 
fuel-nailgun-agent_sha: e01693992d7a0304d926b922b43f3b747c35964c, 
openstack_version: 2015.1.0-7.0, fuel-agent_sha: 
57145b1d8804389304cd04322ba0fb3dc9d30327, production: docker, 
python-fuelclient_sha: e069d9fde92451ec7f555342951807c5528e96e5, 
astute_sha: e1d3a435e5df5b40cbfb1a3acf80b4176d15a2dc, fuel-ostf_sha: 
c7f745431aa3c147f2491c865e029e0ffea91c47, release: 7.0, fuelmain_sha: 
bdca75d0256338519c7eddd8a840ee6ecba7f992}}}, auth_required: true, api: 
1.0, fuel-library_sha: 1851b4dff75170dbd63f6e15cde734e348e86d27, 
nailgun_sha: 58c080206cc17137b124744a40218c89beb6bb28, feature_groups: 
[mirantis], fuel-na
 ilgun-agent_sha: e01693992d7a0304d926b922b43f3b747c35964c, 
openstack_version: 2015.1.0-7.0, fuel-agent_sha: 
57145b1d8804389304cd04322ba0fb3dc9d30327, production: docker, 
python-fuelclient_sha: e069d9fde92451ec7f555342951807c5528e96e5, 
astute_sha: e1d3a435e5df5b40cbfb1a3acf80b4176d15a2dc, fuel-ostf_sha: 
c7f745431aa3c147f2491c865e029e0ffea91c47, release: 7.0, fuelmain_sha: 
bdca75d0256338519c7eddd8a840ee6ecba7f992}

Steps to reproduce:

1. Navigate to horizon

Actual result:
Symbols like hieroglyphs appear near buttons (see attachment).
It is reproduced in the Firefox, Chrome, and Vivaldi browsers.

Expected result:
No symbols like hieroglyphs near buttons

https://bugs.launchpad.net/mos/+bug/1483596

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Selection_022.jpg
   
https://bugs.launchpad.net/bugs/1483595/+attachment/4442790/+files/Selection_022.jpg

** Description changed:

  Preconditions:
  ISO:
  {build_id: 2015-08-10_03-11-20, build_number: 154, 
release_versions: {2015.1.0-7.0: {VERSION: {build_id: 
2015-08-10_03-11-20, build_number: 154, api: 1.0, fuel-library_sha: 
1851b4dff75170dbd63f6e15cde734e348e86d27, nailgun_sha: 
58c080206cc17137b124744a40218c89beb6bb28, feature_groups: [mirantis], 
fuel-nailgun-agent_sha: e01693992d7a0304d926b922b43f3b747c35964c, 
openstack_version: 2015.1.0-7.0, fuel-agent_sha: 
57145b1d8804389304cd04322ba0fb3dc9d30327, production: docker, 
python-fuelclient_sha: e069d9fde92451ec7f555342951807c5528e96e5, 
astute_sha: e1d3a435e5df5b40cbfb1a3acf80b4176d15a2dc, fuel-ostf_sha: 
c7f745431aa3c147f2491c865e029e0ffea91c47, release: 7.0, fuelmain_sha: 
bdca75d0256338519c7eddd8a840ee6ecba7f992}}}, auth_required: true, api: 
1.0, fuel-library_sha: 1851b4dff75170dbd63f6e15cde734e348e86d27, 
nailgun_sha: 58c080206cc17137b124744a40218c89beb6bb28, feature_groups: 
[mirantis], fuel-
 nailgun-agent_sha: e01693992d7a0304d926b922b43f3b747c35964c, 
openstack_version: 2015.1.0-7.0, fuel-agent_sha: 
57145b1d8804389304cd04322ba0fb3dc9d30327, production: docker, 
python-fuelclient_sha: e069d9fde92451ec7f555342951807c5528e96e5, 
astute_sha: e1d3a435e5df5b40cbfb1a3acf80b4176d15a2dc, fuel-ostf_sha: 
c7f745431aa3c147f2491c865e029e0ffea91c47, release: 7.0, fuelmain_sha: 
bdca75d0256338519c7eddd8a840ee6ecba7f992}
  
  Steps to reproduce:
  
  1. Navigate to horizon
  
  Actual result:
  Symbols like hieroglyphs appears near buttons (see attachment)
  It is reproduced in Firefox, Chrome, Vivaldi browsers
  
  Expected result:
  No symbols like hieroglyphs near buttons
+ 
+ https://bugs.launchpad.net/mos/+bug/1483596

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483595

Title:
  Navigating Horizon UI, hieroglyphs appear near buttons

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Preconditions:
  ISO:
  {build_id: 2015-08-10_03-11-20, build_number: 154, 
release_versions: {2015.1.0-7.0: {VERSION: {build_id: 
2015-08-10_03-11-20, build_number: 154, api: 1.0, fuel-library_sha: 
1851b4dff75170dbd63f6e15cde734e348e86d27, nailgun_sha: 
58c080206cc17137b124744a40218c89beb6bb28, feature_groups: [mirantis], 
fuel-nailgun-agent_sha: e01693992d7a0304d926b922b43f3b747c35964c, 
openstack_version: 2015.1.0-7.0, fuel-agent_sha: 
57145b1d8804389304cd04322ba0fb3dc9d30327, production: docker, 
python-fuelclient_sha: e069d9fde92451ec7f555342951807c5528e96e5, 
astute_sha: e1d3a435e5df5b40cbfb1a3acf80b4176d15a2dc, fuel-ostf_sha: 
c7f745431aa3c147f2491c865e029e0ffea91c47, release: 7.0, fuelmain_sha: 
bdca75d0256338519c7eddd8a840ee6ecba7f992}}}, auth_required: true, api: 
1.0, fuel-library_sha: 1851b4dff75170dbd63f6e15cde734e348e86d27, 
nailgun_sha: 58c080206cc17137b124744a40218c89beb6bb28, feature_groups: 
[mirantis], fuel-
 nailgun-agent_sha: e01693992d7a0304d926b922b43f3b747c35964c, 
openstack_version: 2015.1.0-7.0, fuel-agent_sha: 
57145b1d8804389304cd04322ba0fb3dc9d30327, production: 

[Yahoo-eng-team] [Bug 1483570] [NEW] using rbd images, the temp rescue disk is not cleaned after unrescue

2015-08-11 Thread QinWei
Public bug reported:


VMs/volumes/images all use the Ceph RBD backend

Nova Version:
root@controller:~# dpkg -l | grep nova
ii  nova-api1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - API frontend
ii  nova-cert   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - certificate management
ii  nova-common 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - common files
ii  nova-conductor  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - conductor service
ii  nova-consoleauth1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - virtual machine scheduler
ii  python-nova 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

Reproduce steps:
1. Run an instance
2. rescue the instance
rbd ls vms
f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk
f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk.rescue
(OK)
3. unrescue the instance
rbd ls vms
f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk
f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk.rescue
(Not OK) the disk.rescue should have been deleted
4. terminate the instance 
rbd ls vms
none

Code Trace:
in the /nova/virt/libvirt/driver.py
def unrescue(self, instance, network_info):

it does not distinguish between the rbd and lvm backends
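
For illustration only, the leftover image can be removed with the plain rbd
bindings; the real fix belongs in an rbd-aware cleanup in the libvirt
driver's unrescue() path (the pool and conffile below are assumptions):

    import rados
    import rbd

    def remove_rescue_image(instance_uuid, pool='vms',
                            conffile='/etc/ceph/ceph.conf'):
        """Delete the leftover <uuid>_disk.rescue image from the Ceph pool."""
        cluster = rados.Rados(conffile=conffile)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                rbd.RBD().remove(ioctx, '%s_disk.rescue' % instance_uuid)
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()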

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483570

Title:
  using rbd images, the temp rescue disk is not cleaned after unrescue

Status in OpenStack Compute (nova):
  New

Bug description:
  
  VMs/volumes/images all use the Ceph RBD backend

  Nova Version:
  root@controller:~# dpkg -l | grep nova
  ii  nova-api1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - API frontend
  ii  nova-cert   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - certificate management
  ii  nova-common 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - common files
  ii  nova-conductor  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - conductor service
  ii  nova-consoleauth1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - virtual machine scheduler
  ii  python-nova 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
  ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

  Reproduce steps:
  1. Run an instance
  2. rescue the instance
  rbd ls vms
  f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk
  f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk.rescue
  (OK)
  3. unrescue the instance
  rbd ls vms
  f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk
  f0bf69a9-929e-4e2a-83d3-aaa77bfe7ec9_disk.rescue
  (Not OK) the disk.rescue should have been deleted
  4. terminate the instance 
  rbd ls vms
  none

  Code Trace:
  in the /nova/virt/libvirt/driver.py
  def unrescue(self, instance, network_info):

  it does not distinguish between the rbd and lvm backends

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483601] [NEW] l2 population failed when bulk live migrate VMs

2015-08-11 Thread shihanzhang
Public bug reported:

When we bulk live migrate VMs, the l2 population may (not always) fail at the
destination compute nodes. When nova migrates a VM to the destination compute
node, it just updates the port's binding:host while the port's status is still
ACTIVE; from neutron's perspective, the port status progresses as
active -> build -> active.
In the case below, l2 population will fail:
1. nova successfully live migrates VM A and VM B from compute A to compute B.
2. port A and port B status are ACTIVE; binding:host is compute B.
3. the l2 agent scans these two ports, then handles them one by one.
4. neutron-server handles port A first, so its status becomes BUILD (remember
port B's status is still ACTIVE), and performs the l2 population check below,
which will fail:

def _update_port_up(self, context):
    ...
    if agent_active_ports == 1 or (self.get_agent_uptime(agent) <
                                   cfg.CONF.l2pop.agent_boot_time):
        # First port activated on current agent in this network,
        # we have to provide it with the whole list of fdb entries
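
In other words, because port B is still ACTIVE when port A is processed, the
destination agent already appears to have more than one active port, so the
"first port on this agent" branch is skipped and the full FDB list is never
sent. A simplified sketch of the race (not the actual plugin code):

    def should_send_full_fdb(agent_active_ports, agent_uptime, boot_time):
        # Simplified form of the l2pop check quoted above.
        return agent_active_ports == 1 or agent_uptime < boot_time

    # Two VMs live-migrated together: when port A is handled, port B is
    # already counted as active on the destination agent, so the check
    # fails and the agent never receives the existing FDB entries.
    assert should_send_full_fdb(agent_active_ports=2,
                                agent_uptime=3600, boot_time=180) is False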

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  when we bulk live migrate VMs, the l2 population may possiblly(not always) 
failed at destination compute nodes,
  because when nova migrate VM at destination compute node, it just update 
port's binding:host,  the port's status
- is still active, from neutron perspective, the progress of port status is : 
active - build - active,  
+ is still active, from neutron perspective, the progress of port status is : 
active - build - active,
  in bellow case, l2 population  will fail:
  1. nova successfully live migrate vm A and VM B from compute A to compute B.
  2. port A and port B status are active,  binding:host are compute B .
  3. l2 agent scans these two port, then handle them one by one.
- 4. neutron-server firstly handle port A, its status will be build(remember 
port B status is still active), and do bellow check 
+ 4. neutron-server firstly handle port A, its status will be build(remember 
port B status is still active), and do bellow check
  in l2 population check,  this check will be fail
  
- def _update_port_up(self, context):
- ..
- if agent_active_ports == 1 or (
- self.get_agent_uptime(agent)  
cfg.CONF.l2pop.agent_boot_time):
-# First port activated on current agent in this network,
-# we have to provide it with the whole list of fdb entries
+ def _update_port_up(self, context):
+ ..
+   if agent_active_ports == 1 or (self.get_agent_uptime(agent)  
cfg.CONF.l2pop.agent_boot_time):
+   # First port activated on current agent in this network,
+   # we have to provide it with the whole list of fdb entries

** Description changed:

- when we bulk live migrate VMs, the l2 population may possiblly(not always) 
failed at destination compute nodes,
- because when nova migrate VM at destination compute node, it just update 
port's binding:host,  the port's status
- is still active, from neutron perspective, the progress of port status is : 
active - build - active,
+ when we bulk live migrate VMs, the l2 population may possiblly(not always) 
failed at destination compute nodes, because when nova migrate VM at 
destination compute node, it just update port's binding:host,  the port's 
status is still active, from neutron perspective, the progress of port status 
is : active - build - active,
  in bellow case, l2 population  will fail:
  1. nova successfully live migrate vm A and VM B from compute A to compute B.
  2. port A and port B status are active,  binding:host are compute B .
  3. l2 agent scans these two port, then handle them one by one.
  4. neutron-server firstly handle port A, its status will be build(remember 
port B status is still active), and do bellow check
  in l2 population check,  this check will be fail
  
  def _update_port_up(self, context):
  ..
    if agent_active_ports == 1 or (self.get_agent_uptime(agent)  
cfg.CONF.l2pop.agent_boot_time):
    # First port activated on current agent in this network,
    # we have to provide it with the whole list of fdb entries

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483601

Title:
  l2 population failed when bulk live migrate VMs

Status in neutron:
  New

Bug description:
  When we bulk live migrate VMs, the l2 population may (not always) fail at
  the destination compute nodes. When nova migrates a VM to the destination
  compute node, it just updates the port's binding:host while the port's
  status is still ACTIVE; from neutron's perspective, the port status
  progresses as active -> build -> active.
  In the case below, l2 population will fail:
  1. nova successfully live migrates VM A and VM B from compute A to compute B.
  2. port A and port B status are ACTIVE; binding:host is compute B.
  3. l2 agent 

[Yahoo-eng-team] [Bug 1483613] [NEW] It may be possible to request (un)pinning of CPUs not in the NUMA cpuset

2015-08-11 Thread Alexis Lee
Public bug reported:

There's already a check to ensure pinned CPUs are unpinned and vice
versa, but none to ensure the CPUs are in the known set. This could lead
to an invalid system state and emergent bugs.

I noticed this via code inspection during Liberty. I don't know if it's
possible to hit externally but it seems like a potential bug. John
Garbutt encouraged me to open this for advertising.
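
A minimal sketch of the missing membership check (illustrative names, not the
actual nova objects):

    def pin_cpus(pinned, cpuset, requested):
        """Refuse to (un)pin CPUs that are not part of the known cpuset."""
        requested = set(requested)
        if not requested <= cpuset:
            raise ValueError('CPUs %s are not in the cpuset %s'
                             % (sorted(requested - cpuset), sorted(cpuset)))
        if requested & pinned:
            raise ValueError('CPUs %s are already pinned'
                             % sorted(requested & pinned))
        return pinned | requested

    try:
        pin_cpus(pinned=set(), cpuset={0, 1, 2, 3}, requested={9})
    except ValueError as exc:
        print(exc)   # CPU 9 is outside the cell's cpuset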

** Affects: nova
 Importance: Undecided
 Assignee: Alexis Lee (alexisl)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483613

Title:
  It may be possible to request (un)pinning of CPUs not in the NUMA
  cpuset

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  There's already a check to ensure pinned CPUs are unpinned and vice
  versa, but none to ensure the CPUs are in the known set. This could
  lead to an invalid system state and emergent bugs.

  I noticed this via code inspection during Liberty. I don't know if
  it's possible to hit externally but it seems like a potential bug.
  John Garbutt encouraged me to open this for advertising.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398361] Re: Add support for EC2 API : ec2-modify-snapshot-attribute

2015-08-11 Thread Andrey Pavlov
Cinder doesn't allow modifying permissions for snapshots/volumes. These
objects can be seen only by the owner or an admin.

** Also affects: ec2-api
   Importance: Undecided
   Status: New

** Changed in: ec2-api
   Status: New => Opinion

** Changed in: ec2-api
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398361

Title:
  Add support for EC2 API : ec2-modify-snapshot-attribute

Status in ec2-api:
  Opinion
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The various attributes of the snapshots created from volumes can be
  modified for making them accessible to public or to a particular user
  account.

  The response parameters and supported attribute sets must be according to
  the AWS specification:
  
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-ModifySnapshotAttribute.html
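
  For reference, the corresponding boto calls look roughly like this
  (credentials are placeholders):

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection(aws_access_key_id='ACCESS',
                         aws_secret_access_key='SECRET')

    # Make the snapshot public ...
    conn.modify_snapshot_attribute('snap-12345678',
                                   attribute='createVolumePermission',
                                   operation='add', groups=['all'])

    # ... or share it with one specific account.
    conn.modify_snapshot_attribute('snap-12345678',
                                   attribute='createVolumePermission',
                                   operation='add',
                                   user_ids=['123456789012'])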

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1398361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483618] [NEW] modify subnet twice, then a port is created

2015-08-11 Thread huangpengtaohw
Public bug reported:

1. I create a network net1 in the dashboard.
2. Then I create a subnet subnet1 without enabling the gateway and DHCP.
3. I modify the subnet to enable the gateway and DHCP; no port is created.
4. I do it again without modifying anything, and a port is created.

A port should be created the first time I modify the subnet; actually the
port is only created after performing the same operation twice.
I think it is a bug.
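
A minimal reproduction sketch with python-neutronclient (credentials and
UUIDs are placeholders); the DHCP port should appear after the first update,
not the second:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    NETWORK_ID = 'net1-uuid'      # placeholders
    SUBNET_ID = 'subnet1-uuid'

    def dhcp_ports():
        return neutron.list_ports(network_id=NETWORK_ID,
                                  device_owner='network:dhcp')['ports']

    # First update: enable the gateway and DHCP.
    neutron.update_subnet(SUBNET_ID, {'subnet': {'enable_dhcp': True,
                                                 'gateway_ip': '10.0.0.1'}})
    print(len(dhcp_ports()))   # expected 1, observed 0

    # Second, identical update: only now does the DHCP port show up.
    neutron.update_subnet(SUBNET_ID, {'subnet': {'enable_dhcp': True}})
    print(len(dhcp_ports()))   # 1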

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483618

Title:
  modify subnet twice, then a port is created

Status in neutron:
  New

Bug description:
  1. I create a network net1 in the dashboard.
  2. Then I create a subnet subnet1 without enabling the gateway and DHCP.
  3. I modify the subnet to enable the gateway and DHCP; no port is created.
  4. I do it again without modifying anything, and a port is created.

  A port should be created the first time I modify the subnet; actually the
  port is only created after performing the same operation twice.
  I think it is a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411582] Re: Azure data source should auto-detect ephemeral disk location

2015-08-11 Thread Dan Watkins
utopic is EOL.

** Changed in: cloud-init (Ubuntu Precise)
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Changed in: cloud-init (Ubuntu Precise)
   Status: New => In Progress

** Changed in: cloud-init (Ubuntu Utopic)
   Status: New => Invalid

** Changed in: cloud-init (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: cloud-init (Ubuntu Trusty)
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Changed in: cloud-init (Ubuntu Vivid)
   Status: New => In Progress

** Changed in: cloud-init (Ubuntu Vivid)
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1411582

Title:
  Azure data source should auto-detect ephemeral disk location

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in walinuxagent package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  In Progress
Status in walinuxagent source package in Precise:
  Fix Released
Status in cloud-init source package in Trusty:
  In Progress
Status in walinuxagent source package in Trusty:
  Fix Released
Status in cloud-init source package in Utopic:
  Invalid
Status in walinuxagent source package in Utopic:
  Fix Released
Status in cloud-init source package in Vivid:
  In Progress
Status in walinuxagent source package in Vivid:
  Fix Released

Bug description:
  Currently we assume it will be /dev/sdb, but this may change. There is
  an example of how to handle this in the Azure Linux agent.

  To quote stevez in a comment on bug 1410835:

  Device names are not persistent in Linux and could change, so it is
  not guaranteed that the ephemeral disk will be called /dev/sdb.
  Ideally this should be auto-detected in cloud-init at runtime (for
  example, see DeviceForIdePort() in the Azure Linux agent).
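
  A rough sketch of such runtime detection (the udev symlink name is an
  assumption for illustration; the real logic should mirror
  DeviceForIdePort() in the Azure Linux agent):

    import os

    def find_ephemeral_disk():
        # Prefer a stable udev-provided path over the hard-coded /dev/sdb.
        candidates = [
            '/dev/disk/cloud/azure_resource',   # hypothetical udev symlink
            '/dev/sdb',                         # current hard-coded fallback
        ]
        for path in candidates:
            if os.path.exists(path):
                return os.path.realpath(path)
        return None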

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1411582/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483649] [NEW] v1: sending bad format to created_at/deleted_at returns 500

2015-08-11 Thread Stuart McLaren
Public bug reported:

 POST /v1/images HTTP/1.1
 User-Agent: curl/7.35.0
 Host: localhost:9292
 Accept: */*
 x-auth-token: 730b8eabc2e34a1299fcafa888131dc5
 x-image-meta-disk-format: raw
 x-image-meta-container-format: bare
 Content-type: application/octet-stream
 x-image-meta-updated_at: foo
 Content-Length: 2
 
* upload completely sent off: 2 out of 2 bytes
 HTTP/1.1 500 Internal Server Error
 Content-Type: text/plain
 Content-Length: 0
 Date: Tue, 11 Aug 2015 10:50:59 GMT
 Connection: close


* Closing connection 0
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9292 (#0)
 POST /v1/images HTTP/1.1
 User-Agent: curl/7.35.0
 Host: localhost:9292
 Accept: */*
 x-auth-token: 730b8eabc2e34a1299fcafa888131dc5
 x-image-meta-disk-format: raw
 x-image-meta-container-format: bare
 Content-type: application/octet-stream
 x-image-meta-deleted_at: foo
 Content-Length: 2
 
* upload completely sent off: 2 out of 2 bytes
 HTTP/1.1 500 Internal Server Error
 Content-Type: text/plain
 Content-Length: 0
 Date: Tue, 11 Aug 2015 10:53:38 GMT
 Connection: close
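
Presumably the header value is passed straight to a datetime parser and the
exception surfaces as a 500. A minimal sketch of the expected behaviour
(illustrative, not the actual glance code) is to reject unparseable values
with a 400:

    from oslo_utils import timeutils
    from webob import exc

    def parse_timestamp_header(value, header):
        """Turn a malformed x-image-meta-*_at header into a 400 instead of
        letting the parse error bubble up as a 500."""
        try:
            return timeutils.parse_isotime(value)
        except ValueError:
            raise exc.HTTPBadRequest(
                explanation="Invalid value '%s' for header %s"
                            % (value, header))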

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1483649

Title:
  v1: sending bad format to created_at/deleted_at returns 500

Status in Glance:
  New

Bug description:
   POST /v1/images HTTP/1.1
   User-Agent: curl/7.35.0
   Host: localhost:9292
   Accept: */*
   x-auth-token: 730b8eabc2e34a1299fcafa888131dc5
   x-image-meta-disk-format: raw
   x-image-meta-container-format: bare
   Content-type: application/octet-stream
   x-image-meta-updated_at: foo
   Content-Length: 2
   
  * upload completely sent off: 2 out of 2 bytes
   HTTP/1.1 500 Internal Server Error
   Content-Type: text/plain
   Content-Length: 0
   Date: Tue, 11 Aug 2015 10:50:59 GMT
   Connection: close


  * Closing connection 0
  * Hostname was NOT found in DNS cache
  *   Trying 127.0.0.1...
  * Connected to localhost (127.0.0.1) port 9292 (#0)
   POST /v1/images HTTP/1.1
   User-Agent: curl/7.35.0
   Host: localhost:9292
   Accept: */*
   x-auth-token: 730b8eabc2e34a1299fcafa888131dc5
   x-image-meta-disk-format: raw
   x-image-meta-container-format: bare
   Content-type: application/octet-stream
   x-image-meta-deleted_at: foo
   Content-Length: 2
   
  * upload completely sent off: 2 out of 2 bytes
   HTTP/1.1 500 Internal Server Error
   Content-Type: text/plain
   Content-Length: 0
   Date: Tue, 11 Aug 2015 10:53:38 GMT
   Connection: close

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1483649/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477878] Re: Unable to launch instances from snapshot due to kernel and ramdisk fields in glance database

2015-08-11 Thread Vj
Fixed with this patch https://review.openstack.org/#/c/176379/3

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477878

Title:
  Unable to launch instances from snapshot due to kernel and ramdisk
  fields in glance database

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi All,

  I am using openstack kilo with ceph backend. Creating a snapshot of an
  instance works fine, but launching an instance from the snapshot
  fails. Corresponding nova logs:

  ---
  2015-07-24 12:46:44.918 7176 ERROR nova.compute.manager 
[req-f2bfa4ae-20d4-4f10-8772-ab8b1993260a - - - - -] [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] Instance failed to spawn
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] Traceback (most recent call last):
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2442, in 
_build_resources
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] yield resources
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2314, in 
_build_and_run_instance
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] block_device_info=block_device_info)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2347, in 
spawn
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] admin_pass=admin_password)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2745, in 
_create_image
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] instance, size, fallback_from_host)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 5875, in 
_try_fetch_image_cache
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] size=size)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 231, 
in cache
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] *args, **kwargs)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 727, 
in create_image
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] prepare_template(target=base, 
max_size=size, *args, **kwargs)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py, line 445, in 
inner
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] return f(*args, **kwargs)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 221, 
in fetch_func_sync
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] fetch_func(target=target, *args, 
**kwargs)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2737, in 
clone_fallback_to_fetch
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] backend.clone(context, 
disk_images['image_id'])
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 752, 
in clone
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 
b05daf8c-818f-4018-8790-8f03d44d2fcc] include_locations=True)
  2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1483645] [NEW] Ephemeral disk size in volume can be bypassed when booting instance

2015-08-11 Thread Adelina Tuvenie
Public bug reported:

When booting a server with ephemeral disks without specifying their size,
each of them defaults to the flavor's ephemeral disk size.

Steps:

1. Create custom flavor with ephemeral disk > 0

ubuntu@ubuntu:~$ nova flavor-list
++---+---+--+---+--+---+-+---+
| ID | Name  | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | 
Is_Public |
++---+---+--+---+--+---+-+---+
| 1  | m1.tiny   | 512   | 1| 0 |  | 1 | 1.0 | 
True  |
| 11 | m1.custom | 512   | 5| 10|  | 1 | 1.0 | 
True  |
| 2  | m1.small  | 2048  | 20   | 0 |  | 1 | 1.0 | 
True  |
| 3  | m1.medium | 4096  | 40   | 0 |  | 2 | 1.0 | 
True  |
| 4  | m1.large  | 8192  | 80   | 0 |  | 4 | 1.0 | 
True  |
| 5  | m1.xlarge | 16384 | 160  | 0 |  | 8 | 1.0 | 
True  |
++---+---+--+---+--+---+-+---+

2.  Boot instance with ephemerals without specifying their size

ubuntu@ubuntu:~$ nova boot --image  3ec2603b-9113-4ca1-92cf-690e573985bd 
--flavor 11 --block-device source=blank,dest=local --block-device 
source=blank,dest=local test
+--++
| Property | Value  
|
+--++
| OS-DCF:diskConfig| MANUAL 
|
| OS-EXT-AZ:availability_zone  | nova   
|
| OS-EXT-SRV-ATTR:host | -  
|
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -  
|
| OS-EXT-SRV-ATTR:instance_name| instance-005e  
|
| OS-EXT-STS:power_state   | 0  
|
| OS-EXT-STS:task_state| scheduling 
|
| OS-EXT-STS:vm_state  | building   
|
| OS-SRV-USG:launched_at   | -  
|
| OS-SRV-USG:terminated_at | -  
|
| accessIPv4   |
|
| accessIPv6   |
|
| adminPass| RgeEPJp5fLYL   
|
| config_drive |
|
| created  | 2015-08-11T09:35:31Z   
|
| flavor   | m1.custom (11) 
|
| hostId   |
|
| id   | 8b1659bf-5ffe-4855-aebe-a63991ce12cc   
|
| image| cirros-0.3.4-x86_64-uec 
(3ec2603b-9113-4ca1-92cf-690e573985bd) |
| key_name | -  
|
| metadata | {} 
|
| name | test   
|
| os-extended-volumes:volumes_attached | [] 
|
| progress | 0  
|
| security_groups  | default
|
| status   | BUILD  
|
| tenant_id| 0b2075b7a28440078e8f50a75eaa9066   
|
| updated  | 2015-08-11T09:35:31Z   
|
| user_id  | 30a0b1adc4b94fefbf938568fe349910   
|
+--++

 We see the ephemerals there:


[Yahoo-eng-team] [Bug 1483687] [NEW] _update_subnet_allocation_pools returns an empty list

2015-08-11 Thread John Davidge
Public bug reported:

_update_subnet_allocation_pools in ipam_backend_mixin.py uses a
generator called pools, but iterates over it twice. Generators can only
be iterated over once, and therefore this function is currently
returning an empty list in all cases. This is not being caught by any
existing tests.
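
The pitfall and the obvious fix in isolation:

    def build_pools(ranges):
        # A generator expression: it can be consumed exactly once.
        return ({'start': s, 'end': e} for s, e in ranges)

    pools = build_pools([('10.0.0.2', '10.0.0.254')])
    list(pools)         # first pass (e.g. validation)
    print(list(pools))  # second pass: [] -- the generator is exhausted

    # Fix: materialize once, then reuse.
    pools = list(build_pools([('10.0.0.2', '10.0.0.254')]))
    print(list(pools))  # [{'start': '10.0.0.2', 'end': '10.0.0.254'}]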

** Affects: neutron
 Importance: Undecided
 Assignee: John Davidge (john-davidge)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Davidge (john-davidge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483687

Title:
  _update_subnet_allocation_pools returns an empty list

Status in neutron:
  New

Bug description:
  _update_subnet_allocation_pools in ipam_backend_mixin.py uses a
  generator called pools, but iterates over it twice. Generators can
  only be iterated over once, and therefore this function is currently
  returning an empty list in all cases. This is not being caught by any
  existing tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483688] [NEW] v1: correct x-image-meta-id header provokes E500

2015-08-11 Thread Niall Bunting
Public bug reported:

This bug addresses the problem where the id is correct.

$ curl -v -X PUT 
http://127.0.0.1:9292/v1/images/8aa1f62d-73f2-4ed7-8d4c-b66407abd439 -H 
'X-Auth-Token: 7535a1be77e3459e8e4928aae02a8042' -H 'x-image-meta-id: 
fd6db2fa-9c9d-4105-aa3b-657914593de8'
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
 PUT /v1/images/8aa1f62d-73f2-4ed7-8d4c-b66407abd439 HTTP/1.1
 User-Agent: curl/7.35.0
 Host: 127.0.0.1:9292
 Accept: */*
 X-Auth-Token: 7535a1be77e3459e8e4928aae02a8042
 x-image-meta-id: fd6db2fa-9c9d-4105-aa3b-657914593de8

 HTTP/1.1 500 Internal Server Error
 Content-Type: text/plain
 Content-Length: 0
 Date: Mon, 10 Aug 2015 17:00:00 GMT
 Connection: close

* Closing connection 0

Should return 403
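
A minimal sketch of the intended guard (illustrative, not glance's actual
controller code):

    from webob import exc

    def check_image_id_header(existing_image_id, requested_image_id):
        # Changing the id of an existing image is not allowed, so a
        # mismatching x-image-meta-id header should yield 403, not 500.
        if requested_image_id and requested_image_id != existing_image_id:
            raise exc.HTTPForbidden(
                explanation='Unable to change the id of an existing image')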

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  This bug addresses the problem where the id is correct.
  
  $ curl -v -X PUT 
http://127.0.0.1:9292/v1/images/8aa1f62d-73f2-4ed7-8d4c-b66407abd439 -H 
'X-Auth-Token: 7535a1be77e3459e8e4928aae02a8042' -H 'x-image-meta-id: 
fd6db2fa-9c9d-4105-aa3b-657914593de8'
  * Hostname was NOT found in DNS cache
  * Trying 127.0.0.1...
  * Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
   PUT /v1/images/8aa1f62d-73f2-4ed7-8d4c-b66407abd439 HTTP/1.1
   User-Agent: curl/7.35.0
   Host: 127.0.0.1:9292
   Accept: */*
   X-Auth-Token: 7535a1be77e3459e8e4928aae02a8042
   x-image-meta-id: fd6db2fa-9c9d-4105-aa3b-657914593de8
  
   HTTP/1.1 500 Internal Server Error
   Content-Type: text/plain
   Content-Length: 0
   Date: Mon, 10 Aug 2015 17:00:00 GMT
   Connection: close
  
  * Closing connection 0
+ 
+ Should return 403

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1483688

Title:
  v1: correct x-image-meta-id header provokes E500

Status in Glance:
  New

Bug description:
  This bug addresses the problem where the id is correct.

  $ curl -v -X PUT 
http://127.0.0.1:9292/v1/images/8aa1f62d-73f2-4ed7-8d4c-b66407abd439 -H 
'X-Auth-Token: 7535a1be77e3459e8e4928aae02a8042' -H 'x-image-meta-id: 
fd6db2fa-9c9d-4105-aa3b-657914593de8'
  * Hostname was NOT found in DNS cache
  * Trying 127.0.0.1...
  * Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
   PUT /v1/images/8aa1f62d-73f2-4ed7-8d4c-b66407abd439 HTTP/1.1
   User-Agent: curl/7.35.0
   Host: 127.0.0.1:9292
   Accept: */*
   X-Auth-Token: 7535a1be77e3459e8e4928aae02a8042
   x-image-meta-id: fd6db2fa-9c9d-4105-aa3b-657914593de8
  
   HTTP/1.1 500 Internal Server Error
   Content-Type: text/plain
   Content-Length: 0
   Date: Mon, 10 Aug 2015 17:00:00 GMT
   Connection: close
  
  * Closing connection 0

  Should return 403

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1483688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483690] [NEW] if no subnet enables DHCP, the DHCP agent should be disabled

2015-08-11 Thread huangpengtaohw
Public bug reported:

I create a network, then create a subnet with the gateway and DHCP enabled.
A port and a DHCP agent entry are created.
Then I disable the gateway and DHCP; the port is deleted, but there is no
change in the DHCP agent's status.

It makes no sense for a DHCP agent to keep running for the network if no
subnet has DHCP enabled.
I think the DHCP agent entry should be deleted too if no subnet has DHCP
enabled.
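
The expected cleanup can be expressed with python-neutronclient as a sketch
(admin credentials and the network UUID are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    NETWORK_ID = 'network-uuid'   # placeholder

    subnets = neutron.list_subnets(network_id=NETWORK_ID)['subnets']
    if not any(s['enable_dhcp'] for s in subnets):
        # No subnet wants DHCP any more: unschedule the network from every
        # DHCP agent still hosting it.
        agents = neutron.list_dhcp_agent_hosting_networks(NETWORK_ID)['agents']
        for agent in agents:
            neutron.remove_network_from_dhcp_agent(agent['id'], NETWORK_ID)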

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483690

Title:
  if no subnet enables DHCP, the DHCP agent should be disabled

Status in neutron:
  New

Bug description:
  I create a network, then create a subnet with the gateway and DHCP enabled.
  A port and a DHCP agent entry are created.
  Then I disable the gateway and DHCP; the port is deleted, but there is no
  change in the DHCP agent's status.

  It makes no sense for a DHCP agent to keep running for the network if no
  subnet has DHCP enabled.
  I think the DHCP agent entry should be deleted too if no subnet has DHCP
  enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261510] Re: Instance fails to spawn in tempest tests

2015-08-11 Thread Kyle Mestery
Marking invalid per Mark's comments in #2.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261510

Title:
  Instance fails to spawn in tempest tests

Status in neutron:
  Invalid
Status in neutron havana series:
  Triaged
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This happened only 3 times in the past 12 hours, so nothing to worry
  about so far.

  Logstash query for the exact failure in [1] available at [2]
  I am also seeing more Timeout waiting for thing errors (not the same 
condition as bug 1254890, which affects the large_ops job and is due to 
nova/neutron chatty interface). Logstash query for this at [3] (13 hits in past 
12 hours). I think they might have the same root cause.

  
  [1] 
http://logs.openstack.org/22/62322/2/check/check-tempest-dsvm-neutron-isolated/cce7146
  [2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzXCIgQU5EICBcIkN1cnJlbnQgc3RhdHVzOiBCVUlMRFwiIEFORCBcIkN1cnJlbnQgdGFzayBzdGF0ZTogc3Bhd25pbmdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg3MjIzNzQ0Mjk2fQ==
  [3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4NzIyMzg2Mjg1MH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483639] [NEW] Nova and Horizon allow performing inappropriate actions on baremetal nodes

2015-08-11 Thread Kyrylo Romanenko
Public bug reported:

Ironic baremetal nodes do not support the full variety of operations that
nova virtual instances do.

But Nova and Horizon still offer actions on Ironic baremetal nodes that can
be applied to virtual instances only.

Examples of steps:
root@node-1:~# nova pause NEW1
root@node-1:~# nova suspend NEW1
As a result, Nova silently accepts the commands without any warning or error
messages. The same actions can be performed via Horizon, with a green Success
popup. Also see the list of actions available for a baremetal node in the
attached screenshot.

One more example:
Back up a baremetal instance to an image:
root@node-1:~# nova image-create --poll --show NEW1 IMAGENEW1
Server snapshotting... 0% complete   

and the process stalls at 0% in the console indefinitely.
Nova should not attempt this on a baremetal node at all.

Currently baremetal nodes do not support the following actions:

a) Create Snapshot
b) Pause 
c) Suspend
d) Migrate
e) Live Migrate
f) Only one kind of reboot should be supported (hard reboot?)
g) Resize 

These actions should be disabled for baremetal machines in Nova and Horizon.
No destructive consequences have been detected so far, so the impact of this 
bug is user confusion when working with Horizon and Nova.

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: ironic
 Importance: Undecided
 Status: New

** Affects: mos
 Importance: Medium
 Assignee: MOS Ironic (mos-ironic)
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: horizon ironic nova

** Attachment added: bm_instance.png
   
https://bugs.launchpad.net/bugs/1483639/+attachment/4442838/+files/bm_instance.png

** Also affects: ironic
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483639

Title:
  Nova and Horizon allow to perform inappropriate actions for baremetal
  node

Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic:
  New
Status in Mirantis OpenStack:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Ironic baremetal nodes do not support the full variety of operations that
  Nova virtual instances do.

  But Nova and Horizon still offer actions on Ironic baremetal nodes that can
  be applied to virtual instances only.

  Examples of steps:
  root@node-1:~# nova pause NEW1
  root@node-1:~# nova suspend NEW1
  As a result, Nova silently accepts the commands without any warning or error 
messages. The same actions can be performed via Horizon, with a green Success popup.
  Also see the list of actions for a baremetal node in the screenshot.

  One more example:
  Back up a baremetal instance to an image: 
  root@node-1:~# nova image-create --poll --show NEW1 IMAGENEW1
  Server snapshotting... 0% complete   

  and the process stalls at 0% in the console indefinitely. 
  Nova is expected not to attempt this on a baremetal node at all. 

  Currently baremetal nodes do not support the following actions:

  a) Create Snapshot
  b) Pause 
  c) Suspend
  d) Migrate
  e) Live Migrate
  f) Only one kind of reboot should be supported (hard reboot?)
  g) Resize 

  These actions should be disabled for baremetal machines in Nova and Horizon.
  No destructive consequences have been detected so far, so the impact of this 
bug is user confusion when working with Horizon and Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218994] Re: file based disk images do not get scrubbed on delete

2015-08-11 Thread Darla Ahlert
** Changed in: nova
   Status: Opinion = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218994

Title:
  file based disk images do not get scrubbed on delete

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Right now, LVM backed instances can be scrubbed (overwritten with
  zeros using dd) upon deletion.  However, there is no such option with
  file backed images.  While it is true that fallocate can handle some
  of this by returning 0s to the instance when reading any unwritten
  parts of the file, there are some cases where it is not desirable to
  enable fallocate.

  What would be preferred is something similar to the options Cinder has
  implemented, so the operator can choose to shred or zero out the file,
  based on their organization's own internal data policies.   A zero-out
  option satisfies those that must ensure they scrub tenant data upon
  deletion, and shred would satisfy those beholden to DoD 5220-22.

  This would of course make file backed disks vulnerable to
  https://bugs.launchpad.net/nova/+bug/889299 but that might not be a
  bad thing considering its quite old.

  Attached is an initial patch for nova/virt/libvirt/driver.py that applies
  the same LVM zero-scrub routine to file-backed disks; however, it lacks
  any flags to enable or disable it right now.
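
  For illustration only, here is a minimal zero-out sketch of the kind of
  behaviour requested (this is not the attached patch; the helper name,
  block size and the use of plain subprocess are assumptions):

      import os
      import subprocess

      def zero_scrub_file(path, block_size_mb=4):
          """Overwrite a file-backed disk image with zeros before deleting it.

          Mirrors the dd-based scrub used for LVM-backed instances; a shred
          variant could be substituted for DoD 5220-22 requirements.
          """
          size = os.path.getsize(path)
          blocks = size // (block_size_mb * 1024 * 1024) + 1
          subprocess.check_call(['dd', 'if=/dev/zero', 'of=%s' % path,
                                 'bs=%dM' % block_size_mb,
                                 'count=%d' % blocks, 'conv=fdatasync'])
          os.unlink(path)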

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317815] Re: Documentation Keystone SSL configuration lack

2015-08-11 Thread Adam Young
Since we are dropping support for Eventlet based deployments, continuing
to document them is counterproductive.  Please switch over to using
Apache HTTPD.

** Changed in: keystone
   Status: Confirmed = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1317815

Title:
  Documentation Keystone SSL configuration lack

Status in Keystone:
  Won't Fix

Bug description:
  Trying to configure SSL on OpenStack Havana, I read the official
  documentation here:
  
https://github.com/openstack/keystone/blob/stable/havana/doc/source/configuration.rst#ssl

  But I think that configuration is not enough to configure SSL on
  OpenStack.

  As far as I know, to configure SSL on OpenStack, besides the
  configuration above, it is necessary to change the endpoint protocol from
  http to https, and this is not documented in the official SSL
  configuration document.

  Please, confirm if I'm wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1317815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396677] Re: Heavy use of metering labels/rules cause memory leak in neutron server

2015-08-11 Thread gordon chung
** Changed in: ceilometer
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396677

Title:
  Heavy use of metering labels/rules cause memory leak in neutron server

Status in Ceilometer:
  Invalid
Status in neutron:
  New

Bug description:
  We found that a large number of metering labels and rules causes a memory
  leak in the neutron server. This problem is multiplied by the number of
  workers (10 workers - 10x memory leak).

  In our case we have 657 metering-labels and 122399 metering-label-
  rules.

  If anyone queries them, the neutron-server worker that picked up the
  request eats an extra 400 MB of memory and keeps it until restart. If more
  requests are sent, they reach different workers, causing each of them to
  bloat up.

  The same problem happens if neutron-plugin-metering-agent is running (it
  sends requests to neutron-server with the same effect).

  If neutron-server hits 100% CPU, it starts to consume even more memory
  (in our case up to 1.4 GB per neutron-server worker).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1396677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483845] [NEW] local_settings.py.example not consistent w/ pep8

2015-08-11 Thread Darren Shaw
Public bug reported:

local_settings.py.example is not consistent w/ python style guides

** Affects: horizon
 Importance: Undecided
 Assignee: Darren Shaw (ds354m)
 Status: New


** Tags: low-hanging-fruit

** Changed in: horizon
 Assignee: (unassigned) = Darren Shaw (ds354m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483845

Title:
  local_settings.py.example not consistent w/ pep8

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  local_settings.py.example is not consistent w/ python style guides

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481512] Re: v2 API responce doesn't follow documentation for not defined values

2015-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/211285
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=94cfc6e22143d9d63fb62955b23568f4c044
Submitter: Jenkins
Branch:master

commit 94cfc6e22143d9d63fb62955b23568f4c044
Author: Ilya Sviridov isviri...@mirantis.com
Date:   Mon Aug 10 10:53:28 2015 -0700

OpenStack Image V2 API documentation

Changed None values to null

Change-Id: I9e926c9f91004e7631d9ab6d58ad1f85758d22c0
Closes-Bug: #1481512


** Changed in: openstack-api-site
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1481512

Title:
  v2 API responce doesn't follow documentation for not defined values

Status in Glance:
  Invalid
Status in openstack-api-site:
  Fix Released

Bug description:
  When calling the image list functionality of the v2 API, the JSON null
  value is returned for undefined fields in the response, whereas the string
  'None' is expected according to the documentation at
  http://developer.openstack.org/api-ref-image-v2.html

  If this field has no value, its value is None. 

  Glance code revision:
  ubuntu@ubuntu:/opt/stack/glance$ git show
  commit fbb5e1c440933860da10cd526daca3ad7b63782c
  Merge: aeb3c6a 643ad31
  Author: Jenkins jenk...@review.openstack.org
  Date:   Wed Jul 15 00:10:28 2015 +

  Merge Purge dead file-backed scrubber queue code

  Request:
  http://127.0.0.1:9292/v2/images?limit=100

  Response:

  {
  images: [
  {
  status: queued,
  name: TestImage,
  tags: [],
  container_format: ami,
  created_at: 2015-08-01T17:29:52Z,
  size: null,
  disk_format: ami,
  updated_at: 2015-08-01T17:29:52Z,
  visibility: private,
  self: /v2/images/147824e3-0406-48d4-a064-58f9c5bd8534,
  min_disk: 0,
  protected: false,
  id: 147824e3-0406-48d4-a064-58f9c5bd8534,
  file: /v2/images/147824e3-0406-48d4-a064-58f9c5bd8534/file,
  checksum: null,
  owner: d8ec44f4c4bd4799af7b1c0c8158a8c8,
  virtual_size: null,
  min_ram: 0,
  schema: /v2/schemas/image
  },
  {
  status: queued,
  name: TestImage,
  tags: [],
  container_format: ami,
  created_at: 2015-08-01T17:29:41Z,
  size: null,
  disk_format: ami,
  updated_at: 2015-08-01T17:29:41Z,
  visibility: private,
  self: /v2/images/bc952044-248f-4e4c-b720-2ef337642658,
  min_disk: 0,
  protected: false,
  id: bc952044-248f-4e4c-b720-2ef337642658,
  file: /v2/images/bc952044-248f-4e4c-b720-2ef337642658/file,
  checksum: null,
  owner: d8ec44f4c4bd4799af7b1c0c8158a8c8,
  virtual_size: null,
  min_ram: 0,
  schema: /v2/schemas/image
  },
  {
  status: queued,
  name: TestImage,
  tags: [],
  container_format: ami,
  created_at: 2015-08-01T17:29:32Z,
  size: null,
  disk_format: ami,
  updated_at: 2015-08-01T17:29:32Z,
  visibility: private,
  self: /v2/images/0307f138-a722-4561-aa69-c48ca481d371,
  min_disk: 0,
  protected: false,
  id: 0307f138-a722-4561-aa69-c48ca481d371,
  file: /v2/images/0307f138-a722-4561-aa69-c48ca481d371/file,
  checksum: null,
  owner: d8ec44f4c4bd4799af7b1c0c8158a8c8,
  virtual_size: null,
  min_ram: 0,
  schema: /v2/schemas/image
  },
  {
  container_format: aki,
  min_ram: 0,
  ramdisk_id: 8c64f48a-45a3-4eaa-adff-a8106b6c005b,
  updated_at: 2015-08-01T03:52:09Z,
  file: /v2/images/c77a2e19-b560-4eec-8986-fcd470b5ee0e/file,
  owner: d8ec44f4c4bd4799af7b1c0c8158a8c8,
  id: c77a2e19-b560-4eec-8986-fcd470b5ee0e,
  size: 12501760,
  self: /v2/images/c77a2e19-b560-4eec-8986-fcd470b5ee0e,
  disk_format: aki,
  schema: /v2/schemas/image,
  status: active,
  description: Just a test image,
  tags: [],
  kernel_id: e1b6edd4-bd9b-40ac-b010-8a6c16de4ba4,
  visibility: private,
  min_disk: 0,
  virtual_size: null,
  name: test-image,
  checksum: 0b9c6d663d8ba4f63733e53c2389c6ef,
  created_at: 2015-08-01T03:52:05Z,
  protected: false,
  architecture: 32bit
  },
  {
  status: active,
  name: 

[Yahoo-eng-team] [Bug 1483852] [NEW] Cannot specify multiple Horizon plug-in overrides files

2015-08-11 Thread Richard Hagarty
Public bug reported:

Horizon plug-ins wishing to add actions to existing Horizon table or row
actions can configure this with an overrides file.

But in local_settings.py, only one 'customization_module' can be
processed by Horizon at a time.

Therefore, if two or more plug-ins wish to utilize this feature, only
one can be activated at a time.

HORIZON_CONFIG = {
...
   'customization_module' : 'myplugin.overrides'
}
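
One possible workaround, sketched below, assumes that each plug-in's
overrides module applies its customizations at import time: point
'customization_module' at a small aggregate module that imports the
individual plug-in overrides (the module names here are invented):

    # myproject/overrides.py -- hypothetical aggregate overrides module
    from importlib import import_module

    # Plug-in overrides modules to chain; the names are examples only.
    _PLUGIN_OVERRIDES = [
        'plugin_a.overrides',
        'plugin_b.overrides',
    ]

    for _name in _PLUGIN_OVERRIDES:
        # Importing each module lets it register its table/row customizations.
        import_module(_name)

HORIZON_CONFIG would then reference 'myproject.overrides' as its single
'customization_module'.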

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483852

Title:
  Cannot specify multiple Horizon plug-in overrides files

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon plug-ins wishing to add actions to existing Horizon table or
  row actions can configure this with an overrides file.

  But in local_settings.py, only one 'customization_module' can be
  processed by Horizon at a time.

  Therefore, if two or more plug-ins wish to utilize this feature, only
  one can be activated at a time.

  HORIZON_CONFIG = {
  ...
 'customization_module' : 'myplugin.overrides'
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483853] [NEW] [Juno]: Cannot set the VXLAN UDP destination port to 4789 using Linux Bridge

2015-08-11 Thread Danny Choi
Public bug reported:

I'm running stable/Juno with VXLAN and Linux Bridge.

Linux default VxLAN UDP port is 8472.

IANA assigned port is 4789.

I tried to add the following to the /etc/neutron/plugins/ml2/ml2_conf.ini
file, but it still uses port 8472 afterwards.

[agent]
vxlan_udp_port=4789

##

Comments from Kevin Benton:

Looking at the code, it doesn't look like vxlan_udp_port applies to
Linux Bridge. Please file a bug and we should be able to get a fix
pretty quickly.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: juno linuxbridge vxlan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483853

Title:
  [Juno]: Cannot set the VXLAN UDP destination port to 4789 using Linux
  Bridge

Status in neutron:
  New

Bug description:
  I'm running stable/Juno with VXLAN and Linux Bridge.

  Linux default VxLAN UDP port is 8472.

  IANA assigned port is 4789.

  I tried to add the following to the /etc/neutron/plugins/ml2/ml2_conf.ini
  file, but it still uses port 8472 afterwards.

  [agent]
  vxlan_udp_port=4789

  ##

  Comments from Kevin Benton:

  Looking at the code, it doesn't look like vxlan_udp_port applies to
  Linux Bridge. Please file a bug and we should be able to get a fix
  pretty quickly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483860] [NEW] Keystone version discovery is broken if you configure admin_endpoint and public_endpoint in conf file

2015-08-11 Thread Haneef Ali
Public bug reported:

Keystone version discovery is broken if you configure admin_endpoint
and public_endpoint in the conf file.  Version discovery is supposed to
return the configured endpoint, but it will always return the admin
endpoint.  This bug is present in Juno/Kilo/master and is only applicable
for v3.


In master
--
Please have a look at 
https://github.com/openstack/keystone/blob/master/keystone/service.py#L130

V3 doesn't have separate public and admin factories. There is only one
factory and we install only Version(public), so it is always going to
return the public_endpoint configured in the conf file.

Juno
--
In Juno it is a bit different:
https://github.com/openstack/keystone/blob/stable/juno/keystone/service.py#L114

We install both Version(Public) and Version(Admin) at /v3.  The first
takes precedence, so here we will always get the admin endpoint.
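
For reference, the configuration in question looks like this in keystone.conf
(the URLs are examples); with both options set, v3 discovery would be expected
to honour whichever endpoint the request arrived on:

    [DEFAULT]
    public_endpoint = http://keystone.example.com:5000/
    admin_endpoint = http://keystone.example.com:35357/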

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1483860

Title:
  Keystone version discovery is broken if you configure admin_endpoint
  and public_endpoint in conf file

Status in Keystone:
  New

Bug description:
  Keystone version discovery is broken if you configure admin_endpoint
  and public_endpoint in the conf file.  Version discovery is supposed to
  return the configured endpoint, but it will always return the admin
  endpoint.  This bug is present in Juno/Kilo/master and is only applicable
  for v3.

  
  In master
  --
  Please have a look at 
https://github.com/openstack/keystone/blob/master/keystone/service.py#L130

  V3 doesn't have separate public and admin factories. There is only one
  factory and we install only Version(public), so it is always going to
  return the public_endpoint configured in the conf file.

  Juno
  --
  In Juno it is a bit different:
  
https://github.com/openstack/keystone/blob/stable/juno/keystone/service.py#L114

  We install both Version(Public) and Version(Admin) at /v3.  The first
  takes precedence, so here we will always get the admin endpoint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1483860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482772] Re: Region filtering for endpoints does not work

2015-08-11 Thread Lin Hua Cheng
This was approved to be a no-spec required change:
http://eavesdrop.openstack.org/meetings/keystone/2015/keystone.2015-08-11-18.00.log.html

** Changed in: keystone
   Status: Invalid = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1482772

Title:
  Region filtering for endpoints does not work

Status in Keystone:
  Confirmed
Status in python-keystoneclient:
  New
Status in python-openstackclient:
  New

Bug description:
  When I run “openstack endpoint list --os-url http://192.168.33.10:5000/v3 
--os-identity-api-version=3 --service identity --interface public --region 
RegionTwo” i would expect that it only lists endpoints from RegionTwo. But i 
get the identity endpoint from RegionOne. Here is the output:
  
+--+---+--+--+-+---++
  | ID   | Region| Service Name | Service Type 
| Enabled | Interface | URL|
  
+--+---+--+--+-+---++
  | 4b3efc615c044fb4a2c70ca2e5e7bba9 | RegionOne | keystone | identity 
| True| public| http://192.168.33.10:5000/v2.0 |
  
+--+---+--+--+-+---++

  As this snippet from the debug output from openstackclient shows, the
  client sends the correct query to keystone. So I assume this is a
  filtering problem in keystone.

  DEBUG: requests.packages.urllib3.connectionpool GET 
/v3/endpoints?interface=publicservice_id=050872861656437184778a822032d8d6region=RegionTwo
 HTTP/1.1 200 506
  DEBUG: keystoneclient.session RESP: [200] content-length: 506 vary: 
X-Auth-Token keep-alive: timeout=5, max=96 server: Apache/2.4.7 (Ubuntu) 
connection: Keep-Alive date: Fri, 07 Aug 2015 19:37:08 GMT content-type: 
application/json x-openstack-request-id: 
req-72481573-7fff-4ae0-9a2f-33584b476bd3
  RESP BODY: {endpoints: [{region_id: RegionOne, links: {self: 
http://192.168.33.10:35357/v3/endpoints/4b3efc615c044fb4a2c70ca2e5e7bba9}, 
url: http://192.168.33.10:5000/v2.0;, region: RegionOne, enabled: 
true, interface: public, service_id: 050872861656437184778a822032d8d6, 
id: 4b3efc615c044fb4a2c70ca2e5e7bba9}], links: {self: 
http://192.168.33.10:35357/v3/endpoints?interface=publicservice_id=050872861656437184778a822032d8d6region=RegionTwo;,
 previous: null, next: null}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471810] Re: Support host type specific block volume attachment

2015-08-11 Thread Matt Riedemann
This is invalid for master (liberty) because it's already in the os-
brick library which nova is using in liberty.  I've marked this for
kilo.

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New = In Progress

** Changed in: nova
   Status: In Progress = Invalid

** Changed in: cinder
   Status: In Progress = Invalid

** Changed in: nova/kilo
 Assignee: (unassigned) = Markus Zoeller (markus_z) (mzoeller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471810

Title:
  Support host type specific block volume attachment

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  
  The IBM DS8000 storage subsystem supports different host types for 
Fibre-Channel. When LUNs are
  mapped to host ports, the user has to specify the LUN format to be used, as 
well as the Volume Group address type. If those properties are not set 
correctly, the host operating system will be unable to detect or use those LUNs 
(volumes).

  A LUN with LUN ID 1234, for example, will be addressed from AIX, or
  System z using LUN 0x40124034 (0x40LL40LL00..00). Linux on
  Intel addresses the LUN by 0x1234. That means, the storage
  subsystem is aware of the host architecture (platform, and Operating
  System).

  The Cinder driver thus needs to set the host type to 'System z' on the
  DS8000 storage subsystem when a Nova running on System z requests
  Cinder to attach a volume. Accordingly, the Cinder driver needs to set
  the host type to 'Intel - Linux' when a Nova running on an Intel
  compute node is requesting Cinder to attach a volume.

  The Cinder driver currently does not have any awareness of the host 
type/operating system when attaching a volume to a host. Nova currently creates 
a connector and passes it to Cinder when requesting Cinder to attach a volume. 
The connector only provides information such as the host's WWPNs. Nova should 
add the output of platform.machine() and sys.platform to
  the connector. Cinder will pass this information to the Cinder driver for the 
storage back-end. The Cinder driver can then determine (in the example of a 
DS8000) the correct host type to be used. 

  Required changes are relatively small: in ``nova/virt/libvirt/driver.py``: 
add output of ``platform.machine()`` and
  ``sys.platform`` to the connector when it is created in 
``get_volume_connector``.

  Note that similar changes have been released for Cinder already. When
  Cinder needs to attach a volume to its host/hypervisor, it also
  creates a connector and passes it to the Cinder driver. Those changes
  have been merged by the Cinder team already. They are addressed by
  https://review.openstack.org/192558
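
  As a rough illustration of the requested change to get_volume_connector
  (not the merged patch; the helper name is invented):

      import platform
      import sys

      def augment_connector(connector):
          """Add host architecture/OS hints to an existing connector dict."""
          connector = dict(connector)
          connector['platform'] = platform.machine()  # e.g. 'x86_64', 's390x'
          connector['os_type'] = sys.platform          # e.g. 'linux2'
          return connector

  The DS8000 driver could then map these values to the appropriate host type
  when creating the host port mapping.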

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1471810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483875] [NEW] FWaaS - firewall deleted unexpectedly by agent

2015-08-11 Thread Sean M. Collins
Public bug reported:

q-svc log entries

2015-08-11 18:16:06.041 WARNING
neutron_fwaas.services.firewall.fwaas_plugin [req-
9e82a535-e233-4190-8bb2-d40476291cd0 FWaaSExtensionTestJSON-1654170307
FWaaSExtensionTestJSON-582089387] Firewall 2d586bfd-a8ae-431d-b88b-
7169520de59d unexpectedly deleted by agent, status was ACTIVE

q-agt log:

2015-08-11 18:17:29.409 10414 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qrouter-543f9f00-3b3e-4683-8659-8fb6b4a986a8', 'ip', '-6', 'route', 'replace', 
'default', 'via', '2001:db8::2', 'dev', 'qg-bfc277d3-a2'] 
execute_rootwrap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:101
2015-08-11 18:17:29.449 DEBUG neutron.agent.l3.agent 
[req-5784cb5d-5ca4-4fa2-b804-e6a7acf8eec2 RoutersIpV6Test-1482608198 
RoutersIpV6Test-1725884989] Got routers updated notification 
:[u'8b0f2f15-c8b4-4bc2-9e74-d713b7acb917'] routers_updated 
/opt/stack/new/neutron/neutron/agent/l3/agent.py:371
2015-08-11 18:17:29.450 10414 DEBUG neutron.agent.l3.agent [-] Starting router 
update for 8b0f2f15-c8b4-4bc2-9e74-d713b7acb917, action None, priority 0 
_process_router_update /opt/stack/new/neutron/neutron/agent/l3/agent.py:442
2015-08-11 18:17:29.450 10414 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is 1b13de7d03104d96b4324b5328454e3d _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392
2015-08-11 18:17:29.452 DEBUG 
neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Updating firewall 
2d586bfd-a8ae-431d-b88b-7169520de59d for tenant 
0c0cac5e07754eb1be4d747a6865ccee) update_firewall 
/opt/stack/new/neutron-fwaas/neutron_fwaas/services/firewall/drivers/linux/iptables_fwaas.py:112
2015-08-11 18:17:29.452 DEBUG neutron.agent.linux.iptables_manager 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Attempted to remove chain 
iv42d586bfd which does not exist remove_chain 
/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py:170
2015-08-11 18:17:29.452 DEBUG neutron.agent.linux.iptables_manager 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Attempted to remove chain 
ov42d586bfd which does not exist remove_chain 
/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py:170
2015-08-11 18:17:29.452 DEBUG neutron.agent.linux.iptables_manager 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Attempted to remove chain 
iv62d586bfd which does not exist remove_chain 
/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py:170
2015-08-11 18:17:29.452 DEBUG neutron.agent.linux.iptables_manager 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Attempted to remove chain 
ov62d586bfd which does not exist remove_chain 
/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py:170
2015-08-11 18:17:29.453 DEBUG neutron.agent.linux.iptables_manager 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Attempted to remove chain 
fwaas-defau which does not exist remove_chain 
/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py:170
2015-08-11 18:17:29.453 DEBUG neutron.agent.linux.iptables_manager 
[req-c8ebb00d-2415-4b8d-b647-2a1efd1f0fbb None None] Attempted to remove chain 
fwaas-defau which does not exist remove_chain 
/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py:170


Kibana query:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVhckRvd25DbGFzcyAobmV1dHJvbi50ZXN0cy5hcGkudGVzdF9md2Fhc19leHRlbnNpb25zLkZXYWFTRXh0ZW5zaW9uVGVzdEpTT04pXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzOTMyMjE4Nzg0M30=

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

** Description changed:

  q-svc log entries
- 2015-08-11 18:16:06.041 WARNING neutron_fwaas.services.firewall.fwaas_plugin 
[req-9e82a535-e233-4190-8bb2-d40476291cd0 FWaaSExtensionTestJSON-1654170307 
FWaaSExtensionTestJSON-582089387] Firewall 2d586bfd-a8ae-431d-b88b-7169520de59d 
unexpectedly deleted by agent, status was ACTIVE
+ 
+ 2015-08-11 18:16:06.041 WARNING
+ neutron_fwaas.services.firewall.fwaas_plugin [req-
+ 9e82a535-e233-4190-8bb2-d40476291cd0 FWaaSExtensionTestJSON-1654170307
+ FWaaSExtensionTestJSON-582089387] Firewall 2d586bfd-a8ae-431d-b88b-
+ 7169520de59d unexpectedly deleted by agent, status was ACTIVE
  
  q-agt log:
  
- http://paste.openstack.org/show/412604/
+ 2015-08-11 18:17:29.409 10414 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qrouter-543f9f00-3b3e-4683-8659-8fb6b4a986a8', 'ip', '-6', 'route', 'replace', 
'default', 'via', '2001:db8::2', 'dev', 'qg-bfc277d3-a2'] 
execute_rootwrap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:101
+ 2015-08-11 18:17:29.449 DEBUG neutron.agent.l3.agent 
[req-5784cb5d-5ca4-4fa2-b804-e6a7acf8eec2 RoutersIpV6Test-1482608198 
RoutersIpV6Test-1725884989] Got 

[Yahoo-eng-team] [Bug 1483873] [NEW] It should be possible to override the URL used in previous/next links

2015-08-11 Thread Matt Dietz
Public bug reported:

The code for fetching the pagination links is hard coded to use WebOb's
Request path_url property. In setups where a proxy is used to forward
requests to neutron nodes, the href shown to the requesting tenant
doesn't match the one he/she made the request against.

https://github.com/openstack/neutron/blob/master/neutron/api/api_common.py#L56-L73

Ideally, one should be able to define a configuration variable that's
used in place of the request.path_url if defined
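
A minimal sketch of the requested behaviour, assuming a new (hypothetical)
oslo.config option; neither the option name nor the helper exists in neutron
today:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.StrOpt(
        'pagination_link_prefix',
        help='If set, used instead of request.path_url when building '
             'next/previous pagination links.'))

    def pagination_base_url(request):
        # Fall back to the WebOb request URL when no override is configured.
        return CONF.pagination_link_prefix or request.path_url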

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483873

Title:
  It should be possible to override the URL used in previous/next links

Status in neutron:
  New

Bug description:
  The code for fetching the pagination links is hard coded to use
  WebOb's Request path_url property. In setups where a proxy is used to
  forward requests to neutron nodes, the href shown to the requesting
  tenant doesn't match the one he/she made the request against.

  
https://github.com/openstack/neutron/blob/master/neutron/api/api_common.py#L56-L73

  Ideally, one should be able to define a configuration variable that's
  used in place of the request.path_url if defined

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437154] Re: instance 's host was not updated after live-migration if source compute host crash

2015-08-11 Thread Eli Qiao
another blue print is working on
https://blueprints.launchpad.net/nova/+spec/manager-restart-during-
migration

** Changed in: nova
   Status: Opinion = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437154

Title:
  instance 's host was not updated after live-migration if source
  compute host crash

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  I do a live-migration from host1 to host2.
  The live-migration completes successfully, but the instance's host has not 
yet been updated to host2 before host1's nova-compute service crashes (Ctrl+C 
to stop it).

  nova list shows that the instance is still active (actually, the migration
  is done and I can see it on host2 via virsh list):

  taget@liyong:~/devstack$ nova list
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State | Power 
State | Networks |
  
+--+---+++-+--+
  | 1d114104-9a62-49ba-b209-6a42beff4133 | test1 | ACTIVE | -  | 
NOSTATE | private_net=10.0.0.9 |

  Showing this instance, the instance's host is still host1 (because host1's
  nova-compute crashed and had no chance to update it yet).

  After that, a reboot of this instance fails because the instance
  cannot be found on host1 (which is indeed the case).

  Nova then sets it to ERROR status (but the instance is still running on
  host2).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483091] Re: Same name SecurityGroup could not work

2015-08-11 Thread yujie
** Changed in: openstack-manuals
   Status: Invalid = New

** Project changed: openstack-manuals = neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483091

Title:
  Same name SecurityGroup could not work

Status in neutron:
  New

Bug description:
  In icehouse, if two tenants create a security group with the same name
  respectively, then they could not create a vm in the dashboard using
  this security group, with the error says Multiple security_group
  matches found for name 'test', use an ID to be more specific. (HTTP
  409) (Request-ID: req-ece4dd00-d1a0-4c38-9587-394fa29610da).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472999] Re: filter doesn't handle unicode charaters

2015-08-11 Thread Masco Kaliyamoorthy
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472999

Title:
  filter doesn't handle unicode charaters

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (nova):
  New
Status in python-glanceclient:
  New

Bug description:
  1. go to project/instances
  2. insert 'ölk' into filter field
  3. enter filter
  4. 
  UnicodeEncodeError at /project/instances/

  'ascii' codec can't encode character u'\xf6' in position 0: ordinal
  not in range(128)

  Request Method:   GET
  Request URL:  http://localhost:8000/project/instances/
  Django Version:   1.8.2
  Exception Type:   UnicodeEncodeError
  Exception Value:  

  'ascii' codec can't encode character u'\xf6' in position 0: ordinal
  not in range(128)

  Exception Location:   /usr/lib64/python2.7/urllib.py in urlencode, line 1347
  Python Executable:/usr/bin/python
  Python Version:   2.7.10
  Python Path:  

  ['/home/mrunge/work/horizon',
   '/usr/lib64/python27.zip',
   '/usr/lib64/python2.7',
   '/usr/lib64/python2.7/plat-linux2',
   '/usr/lib64/python2.7/lib-tk',
   '/usr/lib64/python2.7/lib-old',
   '/usr/lib64/python2.7/lib-dynload',
   '/usr/lib64/python2.7/site-packages',
   '/usr/lib64/python2.7/site-packages/gtk-2.0',
   '/usr/lib/python2.7/site-packages',
   '/home/mrunge/work/horizon/openstack_dashboard']
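
  For illustration, a small Python 2 sketch of the failure and one possible
  workaround (the helper is hypothetical, not Horizon's actual fix):

      # -*- coding: utf-8 -*-
      import urllib

      def encode_params(params):
          """urllib.urlencode() rejects non-ASCII unicode, so encode first."""
          return urllib.urlencode(
              dict((k, v.encode('utf-8') if isinstance(v, unicode) else v)
                   for k, v in params.items()))

      print(encode_params({'name': u'ölk'}))  # prints: name=%C3%B6lk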

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464290] Re: UnboundLocalError in neutron/db/l3_db.py (Icehouse)

2015-08-11 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464290

Title:
  UnboundLocalError in neutron/db/l3_db.py (Icehouse)

Status in neutron:
  Expired

Bug description:
  Hi,

  working on my SaltStack-modules (outdated versions [0] and [1]) for
  managing subnets in Icehouse-Neutron I managed to cause this error in
  the neutron-server on Ubuntu trusty:

  2015-06-11 16:49:33.636 10605 DEBUG neutron.openstack.common.rpc.amqp 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] UNIQUE_ID is 
5fada601c2ca49c5a777f690b0426a45. _add_unique_id 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py:342
  2015-06-11 16:49:33.641 10605 ERROR neutron.api.v2.resource 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] add_router_interface failed
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 200, in 
_handle_action
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py, line 362, in 
add_router_interface
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource 'tenant_id': 
subnet['tenant_id'],
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource 
UnboundLocalError: local variable 'subnet' referenced before assignment
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource 
  2015-06-11 16:49:33.650 10605 INFO neutron.wsgi 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] 192.168.122.85 - - [11/Jun/2015 
16:49:33] PUT 
/v2.0/routers/8afd9ee7-dd37-47f3-b2e1-42805e984a61/add_router_interface.json 
HTTP/1.1 500 296 0.065534

  Installed neutron-packages:

  root@controller:~# dpkg -l neutron\* | grep ^ii
  ii  neutron-common  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-dhcp-agent  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - DHCP agent
  ii  neutron-l3-agent1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - l3 agent
  ii  neutron-metadata-agent  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - metadata agent
  ii  neutron-plugin-ml2  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-plugin-openvswitch-agent1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - Open vSwitch 
plug
  in agent  

   ii  neutron-server  1:2014.1.4-0ubuntu2  
 all  Neutron is a virtual network service for Openstack - server

  More details tomorrow, when I've added some more debugging to my code.

  Regards, Florian

  [0] 
https://github.com/fraunhoferfokus/openstack-formula/blob/master/_modules/neutron.py
  [1] 
https://github.com/fraunhoferfokus/openstack-formula/blob/master/_states/neutron_subnet.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463746] Re: vm status incorrect if hypervisor is broken

2015-08-11 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463746

Title:
  vm status incorrect if hypervisor is broken

Status in OpenStack Compute (nova):
  Expired

Bug description:
  If a nova-compute service is down (power failure), the instances shown
  in nova list are still in the active state, while nova service-list
  reports down for the corresponding hypervisor.

  nova list should check the hypervisor state and report unknown /
  undefined for instances running on a hypervisor where nova-compute
  is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483937] [NEW] version conflict encountered while running with stable/kilo branch

2015-08-11 Thread Su Zhang
Public bug reported:

I hit the following error while running unit tests under the glance stable/kilo 
branch.
This is the command line I used: ./run_tests.sh -f -V

This is the error information: 
error: python-keystoneclient 1.3.2 is installed but 
python-keystoneclient>=1.6.0 is required by set(['python-cinderclient'])

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1483937

Title:
  version conflict encountered while running with stable/kilo branch

Status in Glance:
  New

Bug description:
  I hit the following error while running unit tests under the glance stable/kilo 
branch.
  This is the command line I used: ./run_tests.sh -f -V

  This is the error information: 
  error: python-keystoneclient 1.3.2 is installed but 
python-keystoneclient>=1.6.0 is required by set(['python-cinderclient'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1483937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483939] [NEW] Allow host route injection of metadata server IP via DHCP

2015-08-11 Thread Marga Millet
Public bug reported:

Vendors implementing Neutron L3 API in their devices may not be able to provide 
metadata server access via the Neutron router. 
In such cases it is useful for the deployer to force metadata server access 
using host route injection as done for isolated network segments.
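
For comparison, the existing isolated-segment behaviour is driven by the DHCP
agent option below (dhcp_agent.ini); the report asks for a way to force the
same host-route injection even when a Neutron router is present:

    [DEFAULT]
    enable_isolated_metadata = True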

** Affects: neutron
 Importance: Undecided
 Assignee: Marga Millet (millet)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Marga Millet (millet)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483939

Title:
  Allow host route injection of metadata server IP via DHCP

Status in neutron:
  New

Bug description:
  Vendors implementing Neutron L3 API in their devices may not be able to 
provide metadata server access via the Neutron router. 
  In such cases it is useful for the deployer to force metadata server access 
using host route injection as done for isolated network segments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483943] [NEW] ChanceScheduler doesn't send notifier.info

2015-08-11 Thread Eli Qiao
Public bug reported:

If a user chooses ChanceScheduler as the scheduler driver, the nova scheduler 
won't send out notifications.
We need to align ChanceScheduler with FilterScheduler.
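
Roughly, FilterScheduler wraps its scheduling call with notifications like the
sketch below; the report asks ChanceScheduler to emit the same events (the
surrounding class is elided and attribute names are illustrative):

    def select_destinations(self, context, request_spec, filter_properties):
        self.notifier.info(context, 'scheduler.select_destinations.start',
                           dict(request_spec=request_spec))
        dests = self._schedule(context, request_spec, filter_properties)
        self.notifier.info(context, 'scheduler.select_destinations.end',
                           dict(request_spec=request_spec))
        return dests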

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: New


** Tags: scheduler

** Changed in: nova
 Assignee: (unassigned) = Eli Qiao (taget-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483943

Title:
  ChanceScheduler doesn't send notifier.info

Status in OpenStack Compute (nova):
  New

Bug description:
  If a user chooses ChanceScheduler as the scheduler driver, the nova scheduler 
won't send out notifications.
  We need to align ChanceScheduler with FilterScheduler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483955] [NEW] Horizon homepage shows internal server error

2015-08-11 Thread Ken Chen
Public bug reported:

I created a devstack today with Sahara installed. When I log in, Horizon 
reports a 500 internal server error as below:
Internal Server Error

The server encountered an internal error or misconfiguration and was
unable to complete your request.

Please contact the server administrator at [no address given] to inform
them of the time this error occurred, and the actions you performed just
before this error.

More information about this error may be available in the server error
log.



Apache/2.4.7 (Ubuntu) Server at 127.0.0.1 Port 80

I checked the horizon_error.log and it showed:

2015-08-12 03:25:56.402471 Internal Server Error: /admin/
2015-08-12 03:25:56.402502 Traceback (most recent call last):
2015-08-12 03:25:56.402507   File 
/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py, line 
137, in get_response
2015-08-12 03:25:56.402511 response = response.render()
2015-08-12 03:25:56.402515   File 
/usr/local/lib/python2.7/dist-packages/django/template/response.py, line 103, 
in render
2015-08-12 03:25:56.402518 self.content = self.rendered_content
2015-08-12 03:25:56.402522   File 
/usr/local/lib/python2.7/dist-packages/django/template/response.py, line 80, 
in rendered_content
2015-08-12 03:25:56.402527 content = template.render(context)
2015-08-12 03:25:56.402531   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 148, in 
render
2015-08-12 03:25:56.402535 return self._render(context)
2015-08-12 03:25:56.402538   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 142, in 
_render
2015-08-12 03:25:56.402542 return self.nodelist.render(context)
2015-08-12 03:25:56.402546   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
2015-08-12 03:25:56.402549 bit = self.render_node(node, context)
2015-08-12 03:25:56.402553   File 
/usr/local/lib/python2.7/dist-packages/django/template/debug.py, line 80, in 
render_node
2015-08-12 03:25:56.402556 return node.render(context)
2015-08-12 03:25:56.402559   File 
/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
126, in render
2015-08-12 03:25:56.402563 return compiled_parent._render(context)
2015-08-12 03:25:56.402566   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 142, in 
_render
2015-08-12 03:25:56.402570 return self.nodelist.render(context)
2015-08-12 03:25:56.402573   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
2015-08-12 03:25:56.402577 bit = self.render_node(node, context)
2015-08-12 03:25:56.402580   File 
/usr/local/lib/python2.7/dist-packages/django/template/debug.py, line 80, in 
render_node
2015-08-12 03:25:56.402583 return node.render(context)
2015-08-12 03:25:56.402587   File 
/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
65, in render
2015-08-12 03:25:56.402590 result = block.nodelist.render(context)
2015-08-12 03:25:56.402593   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
2015-08-12 03:25:56.402597 bit = self.render_node(node, context)
2015-08-12 03:25:56.402600   File 
/usr/local/lib/python2.7/dist-packages/django/template/debug.py, line 80, in 
render_node
2015-08-12 03:25:56.402604 return node.render(context)
2015-08-12 03:25:56.402607   File 
/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
65, in render
2015-08-12 03:25:56.402628 result = block.nodelist.render(context)
2015-08-12 03:25:56.402632   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
2015-08-12 03:25:56.402636 bit = self.render_node(node, context)
2015-08-12 03:25:56.402639   File 
/usr/local/lib/python2.7/dist-packages/django/template/debug.py, line 80, in 
render_node
2015-08-12 03:25:56.402643 return node.render(context)
2015-08-12 03:25:56.402646   File 
/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
150, in render
2015-08-12 03:25:56.402650 return template.render(context)
2015-08-12 03:25:56.402653   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 148, in 
render
2015-08-12 03:25:56.402656 return self._render(context)
2015-08-12 03:25:56.402660   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 142, in 
_render
2015-08-12 03:25:56.402663 return self.nodelist.render(context)
2015-08-12 03:25:56.402666   File 
/usr/local/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
2015-08-12 03:25:56.402670 bit = self.render_node(node, context)
2015-08-12 03:25:56.402673   File 
/usr/local/lib/python2.7/dist-packages/django/template/debug.py, line 80, in 
render_node
2015-08-12 03:25:56.402676 return node.render(context)
2015-08-12 03:25:56.402680  

[Yahoo-eng-team] [Bug 1483957] [NEW] icmp rule type should be in [0, 255]

2015-08-11 Thread huangpengtaohw
Public bug reported:

When I go to Access & Security to create a security group in the dashboard,
then manage rules and add a rule using a custom ICMP rule,
the type and code fields show the hint "Enter a value for ICMP type in the range 
(-1:255)".
I think "Enter a value for ICMP type in the range [0:255]" would be better.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483957

Title:
  icmp rule type should be in [0,255]

Status in neutron:
  New

Bug description:
  When I go to Access & Security to create a security group in the dashboard,
  then manage rules and add a rule using a custom ICMP rule,
  the type and code fields show the hint "Enter a value for ICMP type in the range 
(-1:255)".
  I think "Enter a value for ICMP type in the range [0:255]" would be better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483958] [NEW] the router initial status should not be ACTIVE

2015-08-11 Thread shihanzhang
Public bug reported:

When we create a router, its initial status is ACTIVE, but I think its
initial status should not be 'ACTIVE' before the router is bound to an L3
agent. I think it would be better to change its initial status to
'PENDING_CREATE', as FWaaS and VPNaaS do.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483958

Title:
  the router initial status should not be ACTIVE

Status in neutron:
  New

Bug description:
  When we create a router, its initial status is ACTIVE, but I think its
  initial status should not be 'ACTIVE' before the router is bound to an L3
  agent. I think it would be better to change its initial status to
  'PENDING_CREATE', as FWaaS and VPNaaS do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp