[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-27 Thread zouyee
** Changed in: trove
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE-level defect, deploying a fix should only require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459144] [NEW] Enhance VMware to support VirtualVmxnet3 as network type

2015-05-27 Thread sahid
Public bug reported:

Some devices may need to use VirtualVmxnet3 as a network type. We should
make sure VMware can handle that case.

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: Fix Committed


** Tags: juno-backport-potential kilo-backport-potential

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459144

Title:
  Enhance VMware to support VirtualVmxnet3 as network type

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Some devices may need to use VirtualVmxnet3 as a network type. We
  should make sure VMware can handle that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459144/+subscriptions



[Yahoo-eng-team] [Bug 1459179] [NEW] User heat has no access to domain default when using Keystone v3 with multi-domain-driver

2015-05-27 Thread Marcel Jordan
Public bug reported:

When using Keystone v3 with the multi-domain driver in Juno on CentOS, I
can't deploy a Heat stack, because the heat user has no access to the
default domain, which runs on SQL

default -> SQL -> service user and heat
dom -> LDAP -> AD user

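For context, a multi-domain layout like the one above normally requires domain-specific drivers to be enabled in Keystone; a minimal sketch of the Juno-era wiring (the paths and the LDAP driver value are illustrative assumptions, not taken from this report):

```ini
# /etc/keystone/keystone.conf
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.dom.conf -- per-domain override for "dom"
[identity]
driver = keystone.identity.backends.ldap.Identity
```

Any domain without an override file (here, "default") falls back to the default SQL driver, which is where the heat service user lives.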
 /var/log/heat/heat.log 
2015-05-27 11:38:42.502 13632 DEBUG heat.engine.stack_lock [-] Engine 
651cdcf1-49cb-4ca4-9436-35ff538666ed acquired lock on stack 
22a20e5a-901b-436c-9c8c-e603bc79015b acquire 
/usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:72
2015-05-27 11:38:42.503 13632 DEBUG keystoneclient.auth.identity.v3 [-] Making 
authentication request to http://172.16.89.1:5000/v3/auth/tokens get_auth_ref 
/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3.py:117
2015-05-27 11:38:42.504 13632 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): 172.16.89.1
2015-05-27 11:38:42.579 13632 DEBUG urllib3.connectionpool [-] POST 
/v3/auth/tokens HTTP/1.1 401 181 _make_request 
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
2015-05-27 11:38:42.580 13632 DEBUG keystoneclient.session [-] Request returned 
failure status: 401 request 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:345
2015-05-27 11:38:42.580 13632 DEBUG keystoneclient.v3.client [-] Authorization 
failed. get_raw_token_from_identity_service 
/usr/lib/python2.7/site-packages/keystoneclient/v3/client.py:267

 /var/log/keystone/keystone.log 
2015-05-27 11:38:42.265 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
2015-05-27 11:38:42.265 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
2015-05-27 11:38:42.265 8847 DEBUG keystone.middleware.core [-] RBAC: 
auth_context: {'is_delegated_auth': False, 'access_token_id': None, 'user_id': 
u'86396c4533a044a1ab106ccaeb7e883d', 'roles': [u'heat_stack_owner', u'admin'], 
'trustee_$
2015-05-27 11:38:42.266 8847 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.7/site-packages/keystone/common/wsgi.py:191
2015-05-27 11:38:42.267 8847 DEBUG keystone.common.controller [-] RBAC: 
Authorizing identity:validate_token() _build_policy_check_credentials 
/usr/lib/python2.7/site-packages/keystone/common/controller.py:55
2015-05-27 11:38:42.267 8847 DEBUG keystone.common.controller [-] RBAC: using 
auth context from the request environment _build_policy_check_credentials 
/usr/lib/python2.7/site-packages/keystone/common/controller.py:60
2015-05-27 11:38:42.270 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
2015-05-27 11:38:42.270 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
2015-05-27 11:38:42.270 8847 DEBUG keystone.policy.backends.rules [-] enforce 
identity:validate_token: {'is_delegated_auth': False, 'access_token_id': None, 
'user_id': u'86396c4533a044a1ab106ccaeb7e883d', 'roles': [u'heat_stack_owner', 
u$
2015-05-27 11:38:42.270 8847 DEBUG keystone.common.controller [-] RBAC: 
Authorization granted inner 
/usr/lib/python2.7/site-packages/keystone/common/controller.py:155
2015-05-27 11:38:42.273 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
2015-05-27 11:38:42.273 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
2015-05-27 11:38:42.274 8847 INFO eventlet.wsgi.server [-] 172.16.89.1 - - 
[27/May/2015 11:38:42] GET /v3/auth/tokens HTTP/1.1 200 7887 0.012976
2015-05-27 11:38:42.343 8849 DEBUG keystone.middleware.core [-] Auth token not 
in the request header. Will not build auth context. process_request 
/usr/lib/python2.7/site-packages/keystone/middleware/core.py:270
2015-05-27 11:38:42.345 8849 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.7/site-packages/keystone/common/wsgi.py:191
2015-05-27 11:38:42.441 8849 INFO eventlet.wsgi.server [-] 172.16.89.1 - - 
[27/May/2015 11:38:42] POST /v3/auth/tokens HTTP/1.1 201 7902 0.097828
2015-05-27 11:38:42.450 8852 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
2015-05-27 11:38:42.450 8852 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
2015-05-27 11:38:42.450 8852 DEBUG keystone.middleware.core [-] RBAC: 
auth_context: {'is_delegated_auth': False, 'access_token_id': None, 'user_id': 
u'c287350c73ef4410ad17326eee940c5f', 'roles': [u'heat_stack_owner', u'admin'], 
'trustee_$

[Yahoo-eng-team] [Bug 1459255] [NEW] Fix the docs since Federation is no longer an extension

2015-05-27 Thread Rodrigo Duarte
Public bug reported:

Currently we have documentation for enabling the Federation extension
[1]. Although some of those steps are no longer needed, others must
still be executed for the functionality to work properly: adding the
saml2 auth method and installing xmlsec1 and pysaml2. These steps
should be included in the main Federation doc [2] and the extension one
should be removed.

[1] 
https://github.com/openstack/keystone/blob/master/doc/source/extensions/federation.rst
[2] 
https://github.com/openstack/keystone/blob/master/doc/source/configure_federation.rst

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459255

Title:
  Fix the docs since Federation is no longer an extension

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Currently we have documentation for enabling the Federation extension
  [1]. Although some of those steps are no longer needed, others must
  still be executed for the functionality to work properly: adding the
  saml2 auth method and installing xmlsec1 and pysaml2. These steps
  should be included in the main Federation doc [2] and the extension
  one should be removed.

  [1] 
https://github.com/openstack/keystone/blob/master/doc/source/extensions/federation.rst
  [2] 
https://github.com/openstack/keystone/blob/master/doc/source/configure_federation.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459255/+subscriptions



[Yahoo-eng-team] [Bug 1453953] Re: keystoneclient cannot log non-ascii data

2015-05-27 Thread Dolph Mathews
From Ken Chen in https://bugs.launchpad.net/python-keystoneclient/+bug/1457279

--
In the keystoneclient/session.py file, in the _http_log_request method, we
have the following code:

    if data:
        string_parts.append("-d '%s'" % data)

    logger.debug(' '.join(string_parts))

However, if data is not ASCII, this might cause an error like:

    UnicodeEncodeError: 'ascii' codec can't encode character u'\xbb' in
    position 10: ordinal not in range(128)

This is also the cause of bug https://bugs.launchpad.net/horizon/+bug/1453953
--
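One defensive way to avoid the crash is to coerce the body to text before interpolating it into the log line; a minimal sketch (the `safe_log_arg` helper is illustrative, not the keystoneclient API):

```python
def safe_log_arg(data):
    # Coerce a request body to text for logging without raising
    # UnicodeEncodeError/UnicodeDecodeError on non-ascii payloads.
    if isinstance(data, bytes):
        return data.decode('utf-8', errors='replace')
    return str(data)

string_parts = ["curl -i -X POST"]
body = u'caf\xe9 \xbb'.encode('utf-8')  # non-ascii payload like the one above
string_parts.append("-d '%s'" % safe_log_arg(body))
print(' '.join(string_parts))
```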

** Summary changed:

- [data processing]  Unable to upload job binaries
+ keystoneclient cannot log non-ascii data

** Project changed: horizon => python-keystoneclient

** Tags removed: sahara

** Changed in: python-keystoneclient
   Status: New => Triaged

** Changed in: python-keystoneclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453953

Title:
  keystoneclient cannot log non-ascii data

Status in Python client library for Keystone:
  Triaged

Bug description:
  This bug was originally written against Sahara, but it appears to be a
  Horizon issue instead, so I'm reporting it here.

  When trying to upload the spark-example.jar from the Sahara edp-
  examples, it fails with the message "Danger: There was an error
  submitting the form. Please try again."

  In the logs, the stack trace looks like this:

  Internal Server Error: /project/data_processing/job_binaries/create-job-binary
  Traceback (most recent call last):
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
      response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/home/croberts/src/horizon/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/home/croberts/src/horizon/horizon/decorators.py", line 52, in dec
      return view_func(request, *args, **kwargs)
    File "/home/croberts/src/horizon/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/home/croberts/src/horizon/horizon/decorators.py", line 84, in dec
      return view_func(request, *args, **kwargs)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view
      return self.dispatch(request, *args, **kwargs)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 87, in dispatch
      return handler(request, *args, **kwargs)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/views/generic/edit.py", line 173, in post
      return self.form_valid(form)
    File "/home/croberts/src/horizon/horizon/forms/views.py", line 173, in form_valid
      exceptions.handle(self.request)
    File "/home/croberts/src/horizon/horizon/exceptions.py", line 364, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/home/croberts/src/horizon/horizon/forms/views.py", line 170, in form_valid
      handled = form.handle(self.request, form.cleaned_data)
    File "/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 183, in handle
      _("Unable to create job binary"))
    File "/home/croberts/src/horizon/horizon/exceptions.py", line 364, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 169, in handle
      bin_url = self.handle_internal(request, context)
    File "/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 216, in handle_internal
      _("Unable to upload job binary"))
    File "/home/croberts/src/horizon/horizon/exceptions.py", line 364, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 212, in handle_internal
      request.FILES["job_binary_file"].read())
    File "/home/croberts/src/horizon/openstack_dashboard/api/sahara.py", line 332, in job_binary_internal_create
      data=data)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/saharaclient/api/job_binary_internals.py", line 31, in create
      'job_binary_internal', dump_json=False)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/saharaclient/api/base.py", line 110, in _update
      resp = self.api.put(url, **kwargs)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/adapter.py", line 179, in put
      return self.request(url, 'PUT', **kwargs)
    File "/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/saharaclient/api/client.py", line

[Yahoo-eng-team] [Bug 1455344] Re: the deprecated compute_port option need to be removed

2015-05-27 Thread Dolph Mathews
Why was this marked invalid?

** Changed in: keystone
   Importance: Undecided => Low

** Changed in: keystone
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1455344

Title:
  the deprecated compute_port option need to be removed

Status in OpenStack Identity (Keystone):
  Fix Committed

Bug description:
  the compute_port option has been marked deprecated and should be
  removed in Liberty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1455344/+subscriptions



[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-27 Thread Valeriy Ponomaryov
** No longer affects: manila

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in OpenStack Magnum:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  Invalid
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE-level defect, deploying a fix should only require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions



[Yahoo-eng-team] [Bug 1458968] Re: stable/juno unit tests blocked: ContextualVersionConflict: (oslo.i18n 1.3.1 (/home/jenkins/workspace/periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-packa

2015-05-27 Thread Dolph Mathews
https://review.openstack.org/#/c/173123/ has merged.

** Changed in: glance
   Status: New = Invalid

** Changed in: keystonemiddleware
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1458968

Title:
  stable/juno unit tests blocked: ContextualVersionConflict: (oslo.i18n
  1.3.1 (/home/jenkins/workspace/periodic-glance-
  python27-juno/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in OpenStack Identity (Keystone) Middleware:
  Invalid

Bug description:
  stable/juno unit tests are failing on (multiple) dependency conflicts.
  Reproducible outside the gate simply running the py27 or py26 tox env
  locally:

  Tests in  glance.tests.unit.test_opts fail with:

  ContextualVersionConflict: (oslo.i18n 1.3.1 (/home/jenkins/workspace
  /periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))

  
  This isn't affecting stable/juno tempest runs of this stuff since devstack 
sets up libraries directly from tip of the stable branches, where requirements 
have been updated to avoid this.  Those fixes haven't been pushed out via 
releases to pypi, which is what the unit tests rely on.

  There are two paths of conflict:

  glance (stable/juno) (keystonemiddleware>=1.0.0,<1.4.0)
-> keystonemiddleware (1.3.1) (pycadf>=0.6.0)
-> pycadf (0.9.0)
-> CONFLICT oslo.config>=1.9.3  # Apache-2.0
-> CONFLICT oslo.i18n>=1.5.0  # Apache-2.0

  As per GR, we should be getting pycadf>=0.6.0,!=0.6.2,<0.7.0, but
keystonemiddleware's uncapped dep is pulling in the newer version.
  https://review.openstack.org/#/c/173123/ resolves the issue by adding the
proper stable/juno caps to keystonemiddleware stable/juno, but it looks like
those changes need to be released as keystonemiddleware 1.3.2
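The version check that raises ContextualVersionConflict can be reproduced in isolation with pkg_resources; a small sketch of why the installed 1.3.1 fails the declared spec:

```python
from pkg_resources import Requirement

# The spec declared by oslo.utils/pycadf vs. the version the juno caps install.
req = Requirement.parse('oslo.i18n>=1.5.0')
print('1.3.1' in req)  # False: the installed version violates the spec
print('1.5.0' in req)  # True
```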

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1458968/+subscriptions



[Yahoo-eng-team] [Bug 1459370] [NEW] neutron-db-migration crash at upgrade 26b54cf9024d -> 14be42f3d0a5, Add default security group table

2015-05-27 Thread 0x61
Public bug reported:

When I run this on an empty db:

```
neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
```

I have this error:

```
INFO  [alembic.migration] Running upgrade 26b54cf9024d -> 14be42f3d0a5, Add default security group table
Traceback (most recent call last):
  File "/usr/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 238, in main
    CONF.command.func(config, CONF.command.name)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 106, in do_upgrade
    do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 72, in do_alembic_command
    getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 125, in upgrade
    script.run_env()
  File "/usr/lib/python2.7/dist-packages/alembic/script.py", line 203, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/dist-packages/alembic/util.py", line 212, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/dist-packages/alembic/compat.py", line 58, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 109, in <module>
    run_migrations_online()
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 100, in run_migrations_online
    context.run_migrations()
  File "<string>", line 7, in run_migrations
  File "/usr/lib/python2.7/dist-packages/alembic/environment.py", line 688, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/dist-packages/alembic/migration.py", line 258, in run_migrations
    change(**kw)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/14be42f3d0a5_default_sec_group_table.py", line 62, in upgrade
    ins = table.insert(inline=True).from_select(['tenant_id',
AttributeError: 'NoneType' object has no attribute 'insert'
```

```
dpkg -l | grep neutron
ii  neutron-common  2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - common files
ii  neutron-dhcp-agent  2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - DHCP agent
ii  neutron-l3-agent2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - l3 agent
ii  neutron-lbaas-agent 2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - lbass agent
ii  neutron-metadata-agent  2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - metadata agent
ii  neutron-openvswitch-agent   2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - Open vSwitch agent
ii  neutron-plugin-openvswitch-agent2015.1.0-2~bpo8+1 
all  transitional dummy package for switching to Neutron OpenVswitch 
agent.
ii  neutron-server  2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - server
ii  python-neutron  2015.1.0-2~bpo8+1 
all  OpenStack virtual network service - Python library
ii  python-neutronclient2.4.0-1~bpo8+1
all  client API library for Neutron
```

file :
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/versions/14be42f3d0a5_default_sec_group_table.py?id=79c97120de9cff4d0992b5d41ff4bbf05e890f89

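The AttributeError at the bottom of the traceback is the generic symptom of a lookup that returned None: the migration's reflection of the security-group table evidently came back empty on this database. A stripped-down illustration (the names are illustrative, not the migration's actual code):

```python
def get_reflected_table(tables, name):
    # Stand-in for the migration's table lookup; alembic/SQLAlchemy
    # reflection yields None when the table is absent from the metadata.
    return tables.get(name)

table = get_reflected_table({}, 'securitygroups')  # empty schema: not found
try:
    table.insert()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'insert'
```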
** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459370

Title:
  neutron-db-migration crash at upgrade 26b54cf9024d -> 14be42f3d0a5,
  Add default security group table

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I run this on an empty db:

  ```
  neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
  ```

  I have this error:

  ```
  INFO  [alembic.migration] Running upgrade 26b54cf9024d -> 14be42f3d0a5, Add default security group table
  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in <module>
      sys.exit(main())
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 238, in main
      CONF.command.func(config, CONF.command.name)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 106, in do_upgrade
  

[Yahoo-eng-team] [Bug 1459382] [NEW] Fernet tokens can fail with LDAP identity backends

2015-05-27 Thread Lance Bragstad
Public bug reported:

It is possible for Keystone to fail to issue tokens when using an
external identity backend, like LDAP, if the user IDs are of a different
format than UUID. This is because the Fernet token formatter attempts to
convert the UUID to bytes before packing the payload. This is done to
save space and results in a shorter token.

When using an LDAP backend that doesn't use UUID format for the user
IDs, we get a ValueError when the ID is converted to UUID.bytes [0]. We
already have to do something similar with the default domain in the case
that it's not a UUID, and the same with federated user IDs [1], which is
probably what we should do in this case as well.

Related stacktrace [2].


[0] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L415
[1] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L509
[2] http://lists.openstack.org/pipermail/openstack/2015-May/012885.html
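The guard the report alludes to can be a simple try/except around the bytes conversion, falling back to the raw string for non-UUID IDs; a minimal sketch (the function name is illustrative, mirroring but not quoting the keystone code in [0]):

```python
import uuid

def attempt_convert_uuid_to_bytes(value):
    # Shrink a UUID-formatted ID to 16 raw bytes to keep the token short;
    # fall back to the original string for backends (e.g. LDAP) whose IDs
    # are not UUIDs.
    try:
        return uuid.UUID(value).bytes
    except ValueError:
        return value

print(len(attempt_convert_uuid_to_bytes('86396c4533a044a1ab106ccaeb7e883d')))  # 16
print(attempt_convert_uuid_to_bytes('cn=heat,ou=users'))  # unchanged
```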

** Affects: keystone
 Importance: High
 Assignee: Lance Bragstad (lbragstad)
 Status: In Progress


** Tags: fernet

** Tags added: fernet

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459382

Title:
  Fernet tokens can fail with LDAP identity backends

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  It is possible for Keystone to fail to issue tokens when using an
  external identity backend, like LDAP, if the user IDs are of a
  different format than UUID. This is because the Fernet token formatter
  attempts to convert the UUID to bytes before packing the payload. This
  is done to save space and results in a shorter token.

  When using an LDAP backend that doesn't use UUID format for the user
  IDs, we get a ValueError when the ID is converted to UUID.bytes [0].
  We already have to do something similar with the default domain in the
  case that it's not a UUID, and the same with federated user IDs [1],
  which is probably what we should do in this case as well.

  Related stacktrace [2].

  
  [0] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L415
  [1] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L509
  [2] http://lists.openstack.org/pipermail/openstack/2015-May/012885.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459382/+subscriptions



[Yahoo-eng-team] [Bug 1459386] [NEW] [data processing] All Create buttons are resulting in non-modal form

2015-05-27 Thread Chad Roberts
Public bug reported:

*critical for data processing*

In the Data Processing UI (Sahara), in each of the panels that have
Create X buttons to create each object, those create buttons are
currently resulting in a non-modal form that is broken.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459386

Title:
  [data processing] All Create buttons are resulting in non-modal form

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  *critical for data processing*

  In the Data Processing UI (Sahara), in each of the panels that have
  "Create X" buttons to create objects, those create buttons currently
  result in a non-modal form that is broken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459386/+subscriptions



[Yahoo-eng-team] [Bug 1457554] Re: host-evacuate-live doesn't limit number of servers evacuated simultaneously from a host

2015-05-27 Thread melanie witt
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457554

Title:
  host-evacuate-live doesn't limit number of servers evacuated
  simultaneously from a host

Status in OpenStack Compute (Nova):
  New

Bug description:
  Attempting to evacuate too many servers from a single host
  simultaneously could result in bandwidth starvation. Instances dirty
  their memory faster than they can be migrated, resulting in instances
  perpetually stuck in the migrating state.
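One way such a limit could be implemented client-side is to cap concurrency with a semaphore; a sketch under the assumption that migrate is a blocking per-server live-migration call (all names are hypothetical, not the novaclient API):

```python
import threading

def evacuate_host(servers, migrate, max_parallel=2):
    # Live-migrate servers off a host, at most max_parallel at a time,
    # so concurrent migrations don't starve each other of bandwidth.
    sem = threading.BoundedSemaphore(max_parallel)

    def worker(server):
        with sem:
            migrate(server)

    threads = [threading.Thread(target=worker, args=(s,)) for s in servers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

With max_parallel=1 this degrades to strictly serial evacuation, which is the conservative fallback when bandwidth is scarce.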

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457554/+subscriptions



[Yahoo-eng-team] [Bug 1459400] [NEW] Modal form is not working anymore

2015-05-27 Thread Kahou Lei
Public bug reported:

With the latest upstream, the modal form does not seem to be working anymore.

See attached

** Affects: horizon
 Importance: Undecided
 Assignee: Kahou Lei (kahou82)
 Status: New

** Attachment added: "Screen Shot 2015-05-27 at 12.13.15 PM.png"
   
https://bugs.launchpad.net/bugs/1459400/+attachment/4405768/+files/Screen%20Shot%202015-05-27%20at%2012.13.15%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459400

Title:
  Modal form is not working anymore

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  With the latest upstream, the modal form does not seem to be working
  anymore.

  See attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459400/+subscriptions



[Yahoo-eng-team] [Bug 1459399] [NEW] Nova + Ceph: resize-instance doesn't resize anything, lies about it

2015-05-27 Thread Nicolas Simonds
Public bug reported:

Resizing Ceph-backed instances seems to be completely non-functional

Steps to reproduce (in devstack with ceph enabled):

1.  Boot an instance against m1.nano
2.  Run "rbd -p vms ls -l" and note the size of the volume
3.  Resize the instance to m1.tiny, and confirm the resize
4.  Run "rbd -p vms ls -l" and note the size of the volume

Expected behavior:

Nova should report success, and the volume should report using ~1GB in
size

Actual behavior:

Nova reports success, but the image does not change size.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Resizing Ceph-backed instances seems to be completely non-functional
  
- Steps to reproduce:
+ Steps to reproduce (in devstack with ceph enabled):
  
  1.  Boot an instance against m1.nano
  2.  Run rbd -p vms ls -l and note the size of the volume
  3.  Resize the instance to m1.small, and confirm the resize
  4.  Run rbd -p vms ls -l and note the size of the volume
  
  Expected behavior:
  
  Nova should report success, and the volume should report using ~20GB in
  size
  
  Actual behavior:
  
  Nova reports success, but the image does not change size.

** Description changed:

  Resizing Ceph-backed instances seems to be completely non-functional
  
  Steps to reproduce (in devstack with ceph enabled):
  
  1.  Boot an instance against m1.nano
  2.  Run rbd -p vms ls -l and note the size of the volume
- 3.  Resize the instance to m1.small, and confirm the resize
+ 3.  Resize the instance to m1.tiny, and confirm the resize
  4.  Run rbd -p vms ls -l and note the size of the volume
  
  Expected behavior:
  
- Nova should report success, and the volume should report using ~20GB in
+ Nova should report success, and the volume should report using ~1GB in
  size
  
  Actual behavior:
  
  Nova reports success, but the image does not change size.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459399

Title:
  Nova + Ceph: resize-instance doesn't resize anything, lies about it

Status in OpenStack Compute (Nova):
  New

Bug description:
  Resizing Ceph-backed instances seems to be completely non-functional

  Steps to reproduce (in devstack with ceph enabled):

  1.  Boot an instance against m1.nano
  2.  Run rbd -p vms ls -l and note the size of the volume
  3.  Resize the instance to m1.tiny, and confirm the resize
  4.  Run rbd -p vms ls -l and note the size of the volume

  Expected behavior:

  Nova should report success, and the volume should report using ~1GB in
  size

  Actual behavior:

  Nova reports success, but the image does not change size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459399/+subscriptions



[Yahoo-eng-team] [Bug 1443598] Re: [OSSA 2015-008] backend_argument containing a password leaked in logs (CVE-2015-3646)

2015-05-27 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1443598

Title:
  [OSSA 2015-008] backend_argument containing a password leaked in logs
  (CVE-2015-3646)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The keystone.conf has an option backend_argument to set various
  options for the caching backend.  As documented, some of the potential
  values can contain a password.

  Snippet from
  http://docs.openstack.org/developer/keystone/developing.html#dogpile-
  cache-based-mongodb-nosql-backend

  [cache]
  # Global cache functionality toggle.
  enabled = True

  # Referring to specific cache backend
  backend = keystone.cache.mongo

  # Backend specific configuration arguments
  backend_argument = db_hosts:localhost:27017
  backend_argument = db_name:ks_cache
  backend_argument = cache_collection:cache
  backend_argument = username:test_user
  backend_argument = password:test_password

  As a result, passwords can be leaked to the keystone logs since the
  config option is not marked as secret.
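  The usual fix is to declare such options as secret in oslo.config so their
  values are masked in log output; the snippet below is a stdlib-only sketch
  of that masking idea, with illustrative names rather than Keystone's actual
  code:

```python
def mask_secret_options(options, secret_keys):
    """Return a copy of a config-option mapping with secret values
    replaced by '***', suitable for writing to a debug log."""
    return {
        key: '***' if key in secret_keys else value
        for key, value in options.items()
    }

backend_args = {
    'db_hosts': 'localhost:27017',
    'db_name': 'ks_cache',
    'username': 'test_user',
    'password': 'test_password',
}

safe = mask_secret_options(backend_args, secret_keys={'password'})
print(safe['password'])  # → ***
```

  The original mapping is left untouched; only the copy that goes to the log
  has the password redacted.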

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1443598/+subscriptions



[Yahoo-eng-team] [Bug 1459279] [NEW] Wrong assertion examples in doc

2015-05-27 Thread Rodrigo Duarte
Public bug reported:

The pure SAML assertion example in [1] is not a valid assertion as
generated by keystone, and the ECP-wrapped one is missing the two new
attributes: openstack_user_domain and openstack_project_domain.

[1] https://github.com/openstack/keystone-specs/blob/master/api/v3
/identity-api-v3-os-federation-ext.rst#generating-assertions

** Affects: keystone
 Importance: Undecided
 Assignee: Rodrigo Duarte (rodrigodsousa)
 Status: In Progress


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459279

Title:
  Wrong assertion examples in doc

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The pure SAML assertion example in [1] is not a valid assertion as
  generated by keystone, and the ECP-wrapped one is missing the two new
  attributes: openstack_user_domain and openstack_project_domain.

  [1] https://github.com/openstack/keystone-specs/blob/master/api/v3
  /identity-api-v3-os-federation-ext.rst#generating-assertions

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459279/+subscriptions



[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-27 Thread Tim Simmons
** No longer affects: designate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in OpenStack Magnum:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  Invalid
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions



[Yahoo-eng-team] [Bug 1304333] Re: [SRU] Instance left stuck in transitional POWERING state

2015-05-27 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 1:2014.1.4-0ubuntu2.1

---
nova (1:2014.1.4-0ubuntu2.1) trusty; urgency=medium

  * Ensure that compute manager restarts during instance power
operations don't leave instances stuck in transitional task
states (LP: #1304333):
- d/p/recover-from-power-state-on-compute.patch
  Cherry pick backport of upstream fix from OpenStack = Juno.
 -- Edward Hope-Morley edward.hope-mor...@canonical.com   Wed, 22 Apr 2015 
09:51:28 +0100

** Changed in: nova (Ubuntu Trusty)
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304333

Title:
  [SRU] Instance left stuck in transitional POWERING state

Status in OpenStack Compute (Nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  If a compute manager is stopped / fails during POWERING-ON or
  POWERING-OFF operations then the instance will be left stuck in a this
  transitional task_state.

  --- --- --- --- --- --- ---

  [Impact]

   * We are backporting this to Icehouse since nova currently fails to resolve
 instance state when service is restarted. It is not expected to impact
 normal operational behaviour in any way.

  [Test Case]

   * Deploy cloud incl. nova-compute and rabbitmq and create some
  instances.

   * Perform actions on those instances that cause state to change

   * Restart nova-compute and once restarted check that nova instances are in
 expected state.

  [Regression Potential]

   * None that I can see. This is hopefully a very low impact patch and I have
 tested in my local cloud environment with successful results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304333/+subscriptions



[Yahoo-eng-team] [Bug 1453906] Re: Implement Routing Networks in Neutron

2015-05-27 Thread Carl Baldwin
Gotta get this one out of here too, I guess.

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453906

Title:
  Implement Routing Networks in Neutron

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This feature request proposes to allow using private subnets and
  public subnets together on the same physical network. The private
  network will be used for router next-hops and other router
  communication.

  This will also allow having an L3 only routed network which spans L2
  networks. This will depend on dynamic routing integration with
  Neutron.

  https://blueprints.launchpad.net/neutron/+spec/routing-networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453906/+subscriptions



[Yahoo-eng-team] [Bug 1459361] [NEW] VM created even though ephemeral disk creation failed.

2015-05-27 Thread Ɓukasz Leszczuk
Public bug reported:

While creating a VM, qemu-nbd returned an error code and created a 5 GB
ephemeral disk instead of the requested 350 GB. Since nova-rootwrap
returned a non-zero exit code, I assume the VM creation should have failed.


1. Openstack version:
ii  nova-compute 1:2014.2-fuel6.0~mira19
OpenStack Compute - compute node base

2. Log files:
attached nova-compute.log

3. Reproduce steps:
it happened once, don't know how to reproduce

Expected result:
vm ends up in error state

Actual result:
vm started but with smaller disk than requested
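The expected behavior (fail the build when a disk-preparation command exits
non-zero) can be sketched as follows; the command and wrapper names are
illustrative, not Nova's actual rootwrap invocation:

```python
import subprocess

def run_or_fail(cmd):
    """Run a disk-preparation command; raise instead of letting the
    VM build continue if the command exits with a non-zero status."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(
            "disk preparation failed (rc=%d): %s"
            % (proc.returncode, proc.stderr.strip()))
    return proc.stdout

# a failing command should abort the build, not be silently ignored
try:
    run_or_fail(["false"])
    disk_created = True
except RuntimeError:
    disk_created = False
```

With this pattern the VM would end up in an error state instead of booting
with a smaller disk than requested.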

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: nova_error.txt
   
https://bugs.launchpad.net/bugs/1459361/+attachment/4405674/+files/nova_error.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459361

Title:
  VM created even though ephemeral disk creation failed.

Status in OpenStack Compute (Nova):
  New

Bug description:
  While creating a VM, qemu-nbd returned an error code and created a 5 GB
  ephemeral disk instead of the requested 350 GB. Since nova-rootwrap
  returned a non-zero exit code, I assume the VM creation should have failed.

  
  1. Openstack version:
  ii  nova-compute 1:2014.2-fuel6.0~mira19  
  OpenStack Compute - compute node base

  2. Log files:
  attached nova-compute.log

  3. Reproduce steps:
  it happened once, don't know how to reproduce

  Expected result:
  vm ends up in error state

  Actual result:
  vm started but with smaller disk than requested

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459361/+subscriptions



[Yahoo-eng-team] [Bug 1459343] [NEW] Port security enabled=True is not respected

2015-05-27 Thread Dmitry Ratushnyy
Public bug reported:

It is possible to send traffic through a port with
port_security_enabled=True.

Steps to reproduce.

1) Create three VMs on one network:
Guest os:
 ubuntu-14.04

destination VM  to ping (10.100.0.3)
router VM to send traffic through (10.100.0.2)
source VM that will ping destination VM(10.100.0.1)

2) On source VM add route to destination via router ( sudo ip route add 
10.100.0.3 via 10.100.0.2)
3) On router VM  set net.ipv4.ip_forward = 1 (sudo sysctl  
net.ipv4.ip_forward = 1)
4) On  destination VM add route to 'source' via router ( sudo ip route add 
10.100.0.1 via 10.100.0.2) 
5) Start to ping destination on source VM.  
5.1) Check traffic on all VMs

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: port-security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459343

Title:
  Port security enabled=True is not respected

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It is possible to send traffic through a port with
  port_security_enabled=True.

  Steps to reproduce.

  1) Create three VMs on one network:
  Guest os:
   ubuntu-14.04

  destination VM  to ping (10.100.0.3)
  router VM to send traffic through (10.100.0.2)
  source VM that will ping destination VM(10.100.0.1)

  2) On source VM add route to destination via router ( sudo ip route add 
10.100.0.3 via 10.100.0.2)
  3) On router VM  set net.ipv4.ip_forward = 1 (sudo sysctl  
net.ipv4.ip_forward = 1)
  4) On  destination VM add route to 'source' via router ( sudo ip route add 
10.100.0.1 via 10.100.0.2) 
  5) Start to ping destination on source VM.  
  5.1) Check traffic on all VMs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459343/+subscriptions



[Yahoo-eng-team] [Bug 1453921] Re: Implement Address Scopes

2015-05-27 Thread Carl Baldwin
I'm sorry to offend you with this.

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453921

Title:
  Implement Address Scopes

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Make address scopes a first class thing in Neutron and make Neutron
  routers aware of them.

  Described in blueprint address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453921/+subscriptions



[Yahoo-eng-team] [Bug 1459412] [NEW] ldap and fernet token gives ValueError('badly formed hexadecimal UUID string')

2015-05-27 Thread Hans Feldt
Public bug reported:

When playing with some keystone deployment alternatives I stumbled on a
keystone issue:

 2015-05-27 12:11:52.946 57 DEBUG keystone.common.ldap.core [-] LDAP search: 
 base=ou=Groups,dc=acme,dc=org scope=1 
 filterstr=(((objectClass=groupOfNames)(member=uid=john,ou=Users,dc=acme,dc=org))(objectClass=groupOfNames))
  attrs=['ou', 'cn', 'description'] attrsonly=0 search_s 
 /usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py:931
 2015-05-27 12:11:52.946 57 DEBUG keystone.common.ldap.core [-] LDAP unbind 
 unbind_s /usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py:904
 2015-05-27 12:11:52.946 57 DEBUG keystone.identity.core [-] ID Mapping - 
 Domain ID: default, Default Driver: True, Domains: False, UUIDs: False, 
 Compatible IDs: True _set_domain_id_and_mapping 
 /usr/lib/python2.7/dist-packages/keystone/identity/core.py:492
 2015-05-27 12:11:52.955 57 ERROR 
 keystone.token.providers.fernet.token_formatters [-] john
 2015-05-27 12:11:52.955 57 ERROR keystone.common.wsgi [-] badly formed 
 hexadecimal UUID string
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi Traceback (most recent 
 call last):
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 239, in 
 __call__
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi result = 
 method(context, **params)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/auth/controllers.py, line 397, in 
 authenticate_for_token
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi 
 parent_audit_id=token_audit_id)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/token/provider.py, line 344, in 
 issue_v3_token
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi parent_audit_id)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/core.py, 
 line 198, in issue_v3_token
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi 
 federated_info=federated_dict)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/token_formatters.py,
  line 133, in create_token
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi audit_ids)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/token_formatters.py,
  line 416, in assemble
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi b_user_id = 
 cls.convert_uuid_hex_to_bytes(user_id)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/token_formatters.py,
  line 239, in convert_uuid_hex_to_bytes
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi uuid_obj = 
 uuid.UUID(uuid_string)
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi   File 
 /usr/lib/python2.7/uuid.py, line 134, in __init__
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi raise 
 ValueError('badly formed hexadecimal UUID string')
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi ValueError: badly 
 formed hexadecimal UUID string
 2015-05-27 12:11:52.955 57 TRACE keystone.common.wsgi
 2015-05-27 12:11:52.958 57 INFO eventlet.wsgi.server [-] 172.17.0.26 - - 
 [27/May/2015 12:11:52] POST /v3/auth/tokens HTTP/1.1 500 490 0.029590

Switching to UUID tokens, it works. Switching to the SQL identity
backend with fernet tokens also works.

The combination of the LDAP identity backend and fernet tokens gives me
the above log for any request with name/password. Always reproducible.
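The traceback can be reproduced in isolation: the fernet formatter converts
the user ID to bytes via uuid.UUID, which only works for UUID-style IDs like
those the SQL identity backend generates, not for an LDAP user ID such as
'john'. A minimal sketch (the helper mirrors the conversion step from the
traceback; it is not Keystone's actual code):

```python
import uuid

def convert_uuid_hex_to_bytes(uuid_string):
    """Mirrors the conversion step in the fernet token formatter."""
    return uuid.UUID(uuid_string).bytes

# a UUID-style id, as produced by the SQL identity backend, converts fine
sql_style_id = uuid.uuid4().hex
assert len(convert_uuid_hex_to_bytes(sql_style_id)) == 16

# an LDAP user id such as 'john' is not a hexadecimal UUID, so the
# conversion raises ValueError, as in the traceback above
try:
    convert_uuid_hex_to_bytes('john')
except ValueError as exc:
    print(exc)
```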

I have a very minimalistic cloud setup with only 2 or 3 docker
containers. One with the SQL DB, one for Keystone and optionally one for
LDAP.

I use Ubuntu 15.04 as base image for my containers that includes Kilo.
I've patched keystone with the following changeset to make it work (with
LDAP):

commit 2c6db4a3bb9e1718744b0e5b03af050fd2866182
Author: Edmund Rhudy erh...@bloomberg.net
Date:   Thu May 21 12:42:40 2015 -0400

Make sure LDAP filter is constructed correctly

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459412

Title:
  ldap and fernet token gives ValueError('badly formed hexadecimal UUID
  string')

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When playing with some keystone deployment alternatives I stumbled on a
  keystone issue:

   2015-05-27 12:11:52.946 57 DEBUG keystone.common.ldap.core [-] LDAP search: 
base=ou=Groups,dc=acme,dc=org scope=1 
filterstr=(((objectClass=groupOfNames)(member=uid=john,ou=Users,dc=acme,dc=org))(objectClass=groupOfNames))
 attrs=['ou', 'cn', 'description'] attrsonly=0 search_s 

[Yahoo-eng-team] [Bug 1459427] [NEW] VPNaaS: Certificate support for IPSec

2015-05-27 Thread Paul Michali
Public bug reported:

Since Barbican provides certificate management/storage, and LBaaS has
successfully used the certificates, this RFE proposes to provide
certificate support for VPN IPSec site-to-site connections.

The expectation is that the user would use Barbican to create the
certificate, and then reference the certificate when creating an IPSec
connection.

This would require an REST/CLI API change to accept certificate ID vs
PSK, minor database change to store the certificate ID, *Swan driver
modifications to apply the certificate to the template, and
unit/functional test updates for these changes.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459427

Title:
  VPNaaS: Certificate support for IPSec

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Since Barbican provides certificate management/storage, and LBaaS has
  successfully used the certificates, this RFE proposes to provide
  certificate support for VPN IPSec site-to-site connections.

  The expectation is that the user would use Barbican to create the
  certificate, and then reference the certificate when creating an IPSec
  connection.

  This would require an REST/CLI API change to accept certificate ID vs
  PSK, minor database change to store the certificate ID, *Swan driver
  modifications to apply the certificate to the template, and
  unit/functional test updates for these changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459427/+subscriptions



[Yahoo-eng-team] [Bug 1459423] [NEW] VPNaaS: Allow multiple local subnets for IPSec

2015-05-27 Thread Paul Michali
Public bug reported:

Currently, VPNaaS IPsec site to site connections may be created with one
or more peer (right side) subnets specified (as CIDRs). However, for the
local (left) side, only a single subnet can be specified.

The reference OpenSwan/StrongSwan implementations will support multiple
subnets on the local side, and this RFE is proposing to provide that
support.  This requires the following changes:

REST API
===
Modify the API to not specify the local subnet on the VPN service create API, 
and instead, require the local subnet(s) to be specified on the IPSec 
connection API, in a similar fashion to what is done for remote CIDRs.

Validation can make sure that there is at least one local CIDR, and all
subnets in the connection are using the same IP version.

This involves a backward incompatible API change, so will go to v2.0,
and provide support for 1.0 in the code base.


NEUTRON CLIENT
==

The CLI client could change from:
neutron vpn-service-create ROUTER SUBNET
neutron ipsec-site-connection-create ...
--vpnservice-id VPNSERVICE
--ikepolicy-id IKEPOLICY
--ipsecpolicy-id IPSECPOLICY
--peer-address PEER_ADDRESS
--peer-id PEER_ID
--peer-cidr PEER_CIDRS
--psk PSK

to:
neutron vpn-service-create ROUTER
neutron ipsec-site-connection-create ...
--vpnservice-id VPNSERVICE
--ikepolicy-id IKEPOLICY
--ipsecpolicy-id IPSECPOLICY
--peer-address PEER_ADDRESS
--peer-id PEER_ID
--peer-cidr PEER_CIDRS
--local-cidr LOCAL_CIDRS
--psk PSK
   


DATABASE
=
The local CIDRs could be added to the IPSec connection table. Migration needed 
for this change.


DRIVER
==
Besides passing the local CIDR information from service to device driver (along 
with existing info), the device driver needs to apply this information to the 
*Swan template in the same manner as is done for peer CIDR information.


DOCS

Update the API reference pages for VPN service create and IPSec connection 
create. Update existing Wiki how-to pages.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459423

Title:
  VPNaaS: Allow multiple local subnets for IPSec

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, VPNaaS IPsec site to site connections may be created with
  one or more peer (right side) subnets specified (as CIDRs). However,
  for the local (left) side, only a single subnet can be specified.

  The reference OpenSwan/StrongSwan implementations will support
  multiple subnets on the local side, and this RFE is proposing to
  provide that support.  This requires the following changes:

  REST API
  ===
  Modify the API to not specify the local subnet on the VPN service create API, 
and instead, require the local subnet(s) to be specified on the IPSec 
connection API, in a similar fashion to what is done for remote CIDRs.

  Validation can make sure that there is at least one local CIDR, and
  all subnets in the connection are using the same IP version.

  This involves a backward incompatible API change, so will go to v2.0,
  and provide support for 1.0 in the code base.

  
  NEUTRON CLIENT
  ==

  The CLI client could change from:
  neutron vpn-service-create ROUTER SUBNET
  neutron ipsec-site-connection-create ...
  --vpnservice-id VPNSERVICE
  --ikepolicy-id IKEPOLICY
  --ipsecpolicy-id IPSECPOLICY
  --peer-address PEER_ADDRESS
  --peer-id PEER_ID
  --peer-cidr PEER_CIDRS
  --psk PSK

  to:
  neutron vpn-service-create ROUTER
  neutron ipsec-site-connection-create ...
  --vpnservice-id VPNSERVICE
  --ikepolicy-id IKEPOLICY
  --ipsecpolicy-id IPSECPOLICY
  --peer-address PEER_ADDRESS
   

[Yahoo-eng-team] [Bug 1459442] [NEW] JSCS Cleanup

2015-05-27 Thread Cindy Lu
Public bug reported:

We need to do some cleanup before we can use JSCS globally (turned on in
this patch: https://review.openstack.org/#/c/186154/).

We are using the JSCS rules listed by John Papa here:
https://github.com/johnpapa/angular-styleguide#jscs
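As an illustration of the kind of cleanup involved, assuming rules from the
John Papa set such as camelCase identifier checks and one variable
declaration per statement (the snippet is illustrative, not actual Horizon
code):

```javascript
// Before: flagged by JSCS (snake_case identifier, comma-chained vars):
// var first_word = 'angular', second_word = 'styleguide';

// After: camelCase identifiers, one declaration per line
var firstWord = 'angular';
var secondWord = 'styleguide';

function joinWords(a, b) {
  return a + '-' + b;
}

console.log(joinWords(firstWord, secondWord));
```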

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459442

Title:
  JSCS Cleanup

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We need to do some cleanup before we can use JSCS globally (turned on
  in this patch: https://review.openstack.org/#/c/186154/).

  We are using the JSCS rules listed by John Papa here:
  https://github.com/johnpapa/angular-styleguide#jscs

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459442/+subscriptions



[Yahoo-eng-team] [Bug 1458861] Re: Unable to retrieve instances after changing to multi-domain setup

2015-05-27 Thread Marcel Jordan
** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1458861

Title:
  Unable to retrieve instances after changing to multi-domain setup

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  After I changed keystone to the multi-domain driver, I get the following
  error message on the horizon dashboard when I want to display instances:
  Error: Unauthorized: Unable to retrieve instances

  Name  : openstack-nova-api
  Arch: noarch
  Version   : 2014.2.2

  /var/log/nova/nova.log
  2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] 
Identity response: {error: {message: Non-default domain is not supported 
(Disable debug mode to suppress these details.), code: 401, title: 
Unauthorized}}
  2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token
  2015-05-26 14:09:44.513 2175 INFO nova.osapi_compute.wsgi.server [-] 
10.0.0.10 GET 
/v2/1d524a0433474fa48eb376d913a80fc1/servers/detail?limit=21project_id=1d524a0433474fa48eb376d913a80fc1
 HTTP/1.1 status: 401 len: 258 time: 0.4322391
  2015-05-26 14:09:44.518 2175 WARNING keystonemiddleware.auth_token [-] Unable 
to find authentication token in headers

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1458861/+subscriptions



[Yahoo-eng-team] [Bug 1459115] [NEW] ngReorg broke modals along the way

2015-05-27 Thread Richard Jones
Public bug reported:

The horizon modal has been broken along the way during the ngReorg
refactoring. The error appears in the Javascript console when the user
navigates to almost any page that pops up the modal spinner (switching
between Project Overview and Admin Overview, for example).

The error is:

 Uncaught TypeError: Cannot read property 'modal' of undefined
   horizon.modals.modal_spinner@ (some location in compressed js)
   ...

It is recommended to turn off compression to figure out what is
actually going on.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459115

Title:
  ngReorg broke modals along the way

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The horizon modal has been broken along the way during the ngReorg
  refactoring. The error appears in the Javascript console when the user
  navigates to almost any page that pops up the modal spinner (switching
  between Project Overview and Admin Overview, for example).

  The error is:

   Uncaught TypeError: Cannot read property 'modal' of undefined
 horizon.modals.modal_spinner@ (some location in compressed js)
 ...

  It is recommended to turn off compression to figure out what is
  actually going on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459115/+subscriptions



[Yahoo-eng-team] [Bug 1459116] [NEW] miss testcase for the external authentication of DefaultDomain

2015-05-27 Thread Dave Chen
Public bug reported:

Among the external authentication methods, the classes defined include
'KerberosDomain', 'Domain' and 'DefaultDomain'. The *DefaultDomain* class
actually differs from 'KerberosDomain' and 'Domain'; they cover different use
cases. We have test cases for 'KerberosDomain' and 'Domain', but there is no
test case to verify 'DefaultDomain', so we need to add one.


[1] 
https://github.com/openstack/keystone/blob/master/keystone/auth/plugins/external.py#L66

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459116

Title:
  miss testcase for the external authentication of DefaultDomain

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Among the external authentication methods, the classes defined include
'KerberosDomain', 'Domain' and 'DefaultDomain'. The *DefaultDomain* class
actually differs from 'KerberosDomain' and 'Domain'; they cover different use
cases. We have test cases for 'KerberosDomain' and 'Domain', but there is no
test case to verify 'DefaultDomain', so we need to add one.

  
  [1] 
https://github.com/openstack/keystone/blob/master/keystone/auth/plugins/external.py#L66

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459116/+subscriptions



[Yahoo-eng-team] [Bug 1297871] Re: Use GB instead of MB for the swap part size in Ironic

2015-05-27 Thread Michael Davies
Quoting from IRC: devananda | mrda: that feels like a discussion on the
color of the trim of the bikeshed we already store farm equipment in


** Changed in: ironic
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297871

Title:
  Use GB instead of MB for the swap part size in Ironic

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Having different units for these parameters is not ideal: since we expect
the sizes of the root and ephemeral partitions in GB, we should also expect
the size of the swap partition to be in GB. Nova is still using root_gb and
swap_mb, but this is something that the nova Ironic driver should translate
before sending the request to the Ironic API.
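
  The translation asked for above is small; a minimal sketch (hypothetical
helper name, not the actual Ironic driver code) might look like:

```python
import math

def swap_mb_to_gb(swap_mb):
    """Convert Nova's swap_mb value to the whole-GB size Ironic expects.

    Rounds up so that a partial GB still yields a usable swap partition.
    """
    return int(math.ceil(swap_mb / 1024.0))
```

  For example, a flavor with a 2048 MB swap would be sent to Ironic as 2 GB,
and a 512 MB swap would round up to 1 GB.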

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1297871/+subscriptions



[Yahoo-eng-team] [Bug 1453791] Re: Lbaas Pool and Members from Different SubNets

2015-05-27 Thread ZongKai LI
** Package changed: neutron-lbaas (Ubuntu) => neutron

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453791

Title:
  Lbaas Pool and Members from Different SubNets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  There is no enforced mapping between a pool's subnet ID and its members.

  It is possible to assign the pool and its members IP addresses from
  different subnets.

  For example:

  A pool is created with subnet 135.254.189.0/24, and its members from
  Instances assigned to Another Subnet (172.21.184.0/24).

  Under the following reference,

  https://etherpad.openstack.org/p/neutron-lbaas-api-proposals

  For Create-Pool,

  Request
  POST /pools.json
  {
  'pool': {
  'tenant_id': 'someid',
  'name': 'some name',
  'subnet_id': 'id-of-subnet-where-members-reside',  --- the
subnet must match the subnet of the member instances
  'protocol': 'HTTP',
  'lb_method': 'ROUND_ROBIN'
  'admin_state_up': True,
  }
  }

  
  Validation is needed so that member instances can only be added to a pool
from the pool's own subnet.
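
  The missing check can be sketched with the stdlib ipaddress module
(hypothetical helper, not the actual neutron-lbaas code):

```python
import ipaddress

def member_belongs_to_pool_subnet(member_ip, pool_subnet_cidr):
    """Return True if the member address lies inside the pool's subnet."""
    return ipaddress.ip_address(member_ip) in ipaddress.ip_network(pool_subnet_cidr)
```

  With this check, a member at 172.21.184.10 would be rejected for a pool
created on 135.254.189.0/24.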

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453791/+subscriptions



[Yahoo-eng-team] [Bug 1459491] [NEW] Unexpected result when create server booted from volume

2015-05-27 Thread wangxiyuan
Public bug reported:

Environment:
flavor 1 --- 1G disk.
volume 128c8c78-ff3d-4636-8c5a-e27660741ec0 --- 2G, bootable,
created from image 774b174a-a15a-492d-978d-74c3292a116e
image 774b174a-a15a-492d-978d-74c3292a116e --- 13M

When booting from volume like this:
nova boot --flavor 1 --nic net-id=2746e15a-b35a-4316-9b9a-792224f84499 
--boot-volume 128c8c78-ff3d-4636-8c5a-e27660741ec0 test1
it will raise an error: FlavorDiskTooSmall

When booting from volume like this:
nova boot --flavor 2 --nic net-id=2746e15a-b35a-4316-9b9a-792224f84499 
--block-device 
id=774b174a-a15a-492d-978d-74c3292a116e,source=image,dest=volume,size=2,bootindex=0
 test2
it goes well.

But the second request describes the same boot as the first one, so either
the first or the second result is unexpected.

I think the second one should also raise the 'FlavorDiskTooSmall' error.

** Affects: nova
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

** Summary changed:

- unexpect result when boot from volume
+ Unexpected result when create server booted from volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459491

Title:
  Unexpected result when create server booted from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Environment:
  flavor 1 --- 1G disk.
  volume 128c8c78-ff3d-4636-8c5a-e27660741ec0 --- 2G, bootable,
  created from image 774b174a-a15a-492d-978d-74c3292a116e
  image 774b174a-a15a-492d-978d-74c3292a116e --- 13M

  When booting from volume like this:
  nova boot --flavor 1 --nic net-id=2746e15a-b35a-4316-9b9a-792224f84499 
--boot-volume 128c8c78-ff3d-4636-8c5a-e27660741ec0 test1
  it will raise an error: FlavorDiskTooSmall

  When booting from volume like this:
  nova boot --flavor 2 --nic net-id=2746e15a-b35a-4316-9b9a-792224f84499 
--block-device 
id=774b174a-a15a-492d-978d-74c3292a116e,source=image,dest=volume,size=2,bootindex=0
 test2
  it goes well.

  But the second request describes the same boot as the first one, so
  either the first or the second result is unexpected.

  I think the second one should also raise the 'FlavorDiskTooSmall' error.
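
  The consistency the reporter asks for amounts to applying one rule to both
boot paths; a simplified sketch (not Nova's actual validation code) is:

```python
class FlavorDiskTooSmall(Exception):
    """Raised when a requested boot disk exceeds the flavor's disk size."""

def check_boot_disk(flavor_disk_gb, requested_volume_gb):
    """Reject a boot request whose volume exceeds the flavor's disk size.

    Applying the same check to both --boot-volume and --block-device
    requests would make the two cases behave identically.
    """
    if requested_volume_gb > flavor_disk_gb:
        raise FlavorDiskTooSmall(
            "volume size %dG exceeds flavor disk %dG"
            % (requested_volume_gb, flavor_disk_gb))
```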

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459491/+subscriptions



[Yahoo-eng-team] [Bug 1459482] [NEW] Default policy too restrictive on getting user

2015-05-27 Thread Qiming Teng
Public bug reported:

For services that need to talk to many other services, Keystone has
provided the trust based authentication model. That is good.

When a user (e.g. USER) raises a service request, the actual job is
delegated to the service user (e.g. SERVICE). The SERVICE user will use
the trust mechanism for authentication in the calls that follow. When
creating a trust between USER and SERVICE, we need the user ID of the
SERVICE user; however, that is not possible today because keystone
restricts the get_user call to admins only.

A 'service' user may need to find out its own user ID given the user
name specified in the configuration file. The usage scenario is for a
requester to create a trust relationship with the service user so that
the service user can do jobs on the requester's behalf. Restricting
user_list or user_get to admin users only makes this very cumbersome,
even impossible.

** Affects: keystone
 Importance: Undecided
 Assignee: Qiming Teng (tengqim)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Qiming Teng (tengqim)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459482

Title:
  Default policy too restrictive on getting user

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  For services that need to talk to many other services, Keystone has
  provided the trust based authentication model. That is good.

  When a user (e.g. USER) raises a service request, the actual job is
  delegated to the service user (e.g. SERVICE). The SERVICE user will use
  the trust mechanism for authentication in the calls that follow. When
  creating a trust between USER and SERVICE, we need the user ID of the
  SERVICE user; however, that is not possible today because keystone
  restricts the get_user call to admins only.

  A 'service' user may need to find out its own user ID given the user
  name specified in the configuration file. The usage scenario is for a
  requester to create a trust relationship with the service user so that
  the service user can do jobs on the requester's behalf. Restricting
  user_list or user_get to admin users only makes this very cumbersome,
  even impossible.
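
  One possible relaxation is an owner rule in policy.json along these lines.
This is a sketch only: the rule name comes from the error path described
above, and whether the target user_id is visible to the policy check varies
across Keystone releases.

```json
{
    "admin_or_owner": "rule:admin_required or user_id:%(user_id)s",
    "identity:get_user": "rule:admin_or_owner"
}
```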

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459482/+subscriptions



[Yahoo-eng-team] [Bug 1459483] [NEW] able to validate a Fernet token with garbage at the end

2015-05-27 Thread Matt Fischer
Public bug reported:

I am able to verify Fernet tokens that contain garbage at the end, not
so with UUID tokens.

For example.

UUID:

curl -H X-Auth-Token:84db9247b27d4fe6bd0a09b7b39281e2
http://localhost:35357/v2.0/tokens/84db9247b27d4fe6bd0a09b7b39281e2

Works

curl -H X-Auth-Token:84db9247b27d4fe6bd0a09b7b39281e2 
http://localhost:35357/v2.0/tokens/84db9247b27d4fe6bd0a09b7b39281e2-GARBAGE
{"error": {"message": "Could not find token: 84db9247b27d4fe6bd0a09b7b39281e2-GARBAGE", "code": 404, "title": "Not Found"}}

Fernet on the other hand happily validates it even with garbage and even
inserts -GARBAGE into the ID.

curl -H X-Auth-Token
:gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-
TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8%3D
http://localhost:35357/v2.0/tokens
/gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-
TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8%3D

"token": {
    "audit_ids": [
        "WlVgiNv2RmOGaDa_4PpGGg"
    ],
    "expires": "2015-05-28T03:59:32.00Z",
    "id": "gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8=",
    "issued_at": "2015-05-28T01:59:32.00Z",
    "tenant": {
        "description": "Cloud Infra: Admin Tenant",
        "enabled": true,
        "id": "4764ba822ecb43e582794b875751924c",
        "name": "admin",
        "parent_id": null
    }
},


"token": {
    "audit_ids": [
        "WlVgiNv2RmOGaDa_4PpGGg"
    ],
    "expires": "2015-05-28T03:59:32.00Z",
    "id": "gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8=-GARBAGE",
    "issued_at": "2015-05-28T01:59:32.00Z",
    "tenant": {
        "description": "Cloud Infra: Admin Tenant",
        "enabled": true,
        "id": "4764ba822ecb43e582794b875751924c",
        "name": "admin",
        "parent_id": null
    }
},
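
A plausible mechanism (an assumption, not confirmed in this report) is
Python's lenient base64 decoding, which Fernet token parsing builds on: in
non-strict mode, data after the '=' padding is discarded, so a suffixed token
decodes to the same bytes as the clean one. A minimal stdlib demonstration,
independent of Keystone:

```python
import base64

clean = base64.urlsafe_b64encode(b"fernet token payload")  # ends in '=' padding
tampered = clean + b"-GARBAGE"

# Non-strict decoding ignores everything after the padding byte,
# so both forms decode to the identical plaintext.
assert base64.urlsafe_b64decode(clean) == base64.urlsafe_b64decode(tampered)
```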

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- able to verify a Fernet token with garbage at the end
+ able to validate a Fernet token with garbage at the end

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459483

Title:
  able to validate a Fernet token with garbage at the end

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I am able to verify Fernet tokens that contain garbage at the end, not
  so with UUID tokens.

  For example.

  UUID:

  curl -H X-Auth-Token:84db9247b27d4fe6bd0a09b7b39281e2
  http://localhost:35357/v2.0/tokens/84db9247b27d4fe6bd0a09b7b39281e2

  Works

  curl -H X-Auth-Token:84db9247b27d4fe6bd0a09b7b39281e2 
http://localhost:35357/v2.0/tokens/84db9247b27d4fe6bd0a09b7b39281e2-GARBAGE
  {"error": {"message": "Could not find token: 84db9247b27d4fe6bd0a09b7b39281e2-GARBAGE", "code": 404, "title": "Not Found"}}

  Fernet on the other hand happily validates it even with garbage and
  even inserts -GARBAGE into the ID.

  curl -H X-Auth-Token
  
:gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-
  TmIAgkHcy0TsCBioof-
  Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8%3D
  http://localhost:35357/v2.0/tokens
  
/gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-
  TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8%3D

  "token": {
      "audit_ids": [
          "WlVgiNv2RmOGaDa_4PpGGg"
      ],
      "expires": "2015-05-28T03:59:32.00Z",
      "id": "gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8=",
      "issued_at": "2015-05-28T01:59:32.00Z",
      "tenant": {
          "description": "Cloud Infra: Admin Tenant",
          "enabled": true,
          "id": "4764ba822ecb43e582794b875751924c",
          "name": "admin",
          "parent_id": null
      }
  },

  
  "token": {
      "audit_ids": [
          "WlVgiNv2RmOGaDa_4PpGGg"
      ],
      "expires": "2015-05-28T03:59:32.00Z",
      "id": "gABVZnaEJuVPaQwW5y84w1sZt9TvxJk4Cgh8dmeISr68a7yVnl0hIpOAJ8YWluXJwym96xauaj0M737GZLzwhiF44u5JJXIjSiqQFtH3bQDrlBS-TmIAgkHcy0TsCBioof-Rzu4NbuSqkzjD5BJSRJnRqI2Sg-G-kTbRdblC5JBuyJjdMj8=-GARBAGE",
 

[Yahoo-eng-team] [Bug 1453791] [NEW] Lbaas Pool and Members from Different SubNets

2015-05-27 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:


There is no enforced mapping between a pool's subnet ID and its members.

It is possible to assign the pool and its members IP addresses from different
subnets.

For example:

A pool is created with subnet 135.254.189.0/24, and its members from
Instances assigned to Another Subnet (172.21.184.0/24).

Under the following reference,

https://etherpad.openstack.org/p/neutron-lbaas-api-proposals

For Create-Pool,

Request
POST /pools.json
{
'pool': {
'tenant_id': 'someid',
'name': 'some name',
'subnet_id': 'id-of-subnet-where-members-reside',  --- the
subnet must match the subnet of the member instances
'protocol': 'HTTP',
'lb_method': 'ROUND_ROBIN'
'admin_state_up': True,
}
}


Validation is needed so that member instances can only be added to a pool from
the pool's own subnet.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

-- 
Lbaas Pool and Members from Different SubNets
https://bugs.launchpad.net/bugs/1453791
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1459468] [NEW] When doing resize action, CONF.allow_resize_on_same_host should check only once

2015-05-27 Thread Zhenyu Zheng
Public bug reported:

In the current implementation, when performing an instance resize action,
CONF.allow_resize_to_same_host is first checked in compute/api.py, which runs
on the controller node. If CONF.allow_resize_to_same_host is True, nothing is
added to filter_properties['ignore_hosts']; if it is set to False, the source
host is added to filter_properties['ignore_hosts'] and will be ignored when
performing select_destinations.

The value of CONF.allow_resize_to_same_host is then checked again in
compute/manager.py, which runs on the destination host already selected by
the scheduler.

This leads to a problem: if CONF.allow_resize_to_same_host is set to True on
the controller node but set to False (or not set) on the compute node, the
scheduler may decide that the original compute node is the best one for the
resize, yet when that compute node performs the resize action it will throw
an exception.

The value of CONF.allow_resize_to_same_host should only be checked once, on
the controller node (compute/api.py), letting the scheduler judge which host
is best; the compute node should simply perform the action once it has been
selected.

** Affects: nova
 Importance: Undecided
 Assignee: Zhenyu Zheng (zhengzhenyu)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459468

Title:
  When doing resize action, CONF.allow_resize_on_same_host should check
  only once

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the current implementation, when performing an instance resize action,
  CONF.allow_resize_to_same_host is first checked in compute/api.py, which
  runs on the controller node. If CONF.allow_resize_to_same_host is True,
  nothing is added to filter_properties['ignore_hosts']; if it is set to
  False, the source host is added to filter_properties['ignore_hosts'] and
  will be ignored when performing select_destinations.

  The value of CONF.allow_resize_to_same_host is then checked again in
  compute/manager.py, which runs on the destination host already selected by
  the scheduler.

  This leads to a problem: if CONF.allow_resize_to_same_host is set to True
  on the controller node but set to False (or not set) on the compute node,
  the scheduler may decide that the original compute node is the best one
  for the resize, yet when that compute node performs the resize action it
  will throw an exception.

  The value of CONF.allow_resize_to_same_host should only be checked once,
  on the controller node (compute/api.py), letting the scheduler judge which
  host is best; the compute node should simply perform the action once it
  has been selected.
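
  Checking the option once on the controller reduces the logic to something
like this sketch (hypothetical helper name, not Nova's actual code):

```python
def build_resize_filter_properties(allow_resize_to_same_host, source_host):
    """Build scheduler filter properties for a resize request.

    When resize-to-same-host is disallowed, the source host is excluded
    up front; the chosen compute node then never needs to re-check the flag.
    """
    filter_properties = {'ignore_hosts': []}
    if not allow_resize_to_same_host:
        filter_properties['ignore_hosts'].append(source_host)
    return filter_properties
```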

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459468/+subscriptions



[Yahoo-eng-team] [Bug 1459467] [NEW] port update multiple fixed IPs anticipating allocation fails with mac address error

2015-05-27 Thread Kevin Benton
Public bug reported:

A port update with multiple fixed IP specifications, where one entry gives
only a subnet ID and another gives a fixed IP that conflicts with the address
picked for the subnet-only entry, will result in a DB duplicate entry, which
is presented to the user as a MAC address error.

~$ neutron port-update 7521786b-6c7f-4385-b5e1-fb9565552696 --fixed-ips 
type=dict 
{subnet_id=ca9dd2f0-cbaf-4997-9f59-dee9a39f6a7d,ip_address=42.42.42.42}
Unable to complete operation for network 0897a051-bf56-43c1-9083-3ac38ffef84e. 
The mac address None is in use.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459467

Title:
  port update multiple fixed IPs anticipating allocation fails with mac
  address error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A port update with multiple fixed IP specifications, where one entry gives
  only a subnet ID and another gives a fixed IP that conflicts with the
  address picked for the subnet-only entry, will result in a DB duplicate
  entry, which is presented to the user as a MAC address error.

  ~$ neutron port-update 7521786b-6c7f-4385-b5e1-fb9565552696 --fixed-ips 
type=dict 
{subnet_id=ca9dd2f0-cbaf-4997-9f59-dee9a39f6a7d,ip_address=42.42.42.42}
  Unable to complete operation for network 
0897a051-bf56-43c1-9083-3ac38ffef84e. The mac address None is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459467/+subscriptions



[Yahoo-eng-team] [Bug 1459477] [NEW] gate-horizon-dsvm-integration intermittently fails with 'selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {method:css selec

2015-05-27 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/48/185848/1/gate/gate-horizon-dsvm-
integration/0c6efa4/console.html#_2015-05-27_10_11_32_954

2015-05-27 10:11:32.921 | 2015-05-27 10:11:32.905 | Traceback (most recent call 
last):
2015-05-27 10:11:32.923 | 2015-05-27 10:11:32.906 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_image_create_delete.py,
 line 29, in test_image_create_delete
2015-05-27 10:11:32.924 | 2015-05-27 10:11:32.908 | 
images_page.create_image(self.IMAGE_NAME)
2015-05-27 10:11:32.926 | 2015-05-27 10:11:32.909 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/imagespage.py,
 line 76, in create_image
2015-05-27 10:11:32.927 | 2015-05-27 10:11:32.910 | 
self.create_image_form.name.text = name
2015-05-27 10:11:32.930 | 2015-05-27 10:11:32.913 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/imagespage.py,
 line 63, in create_image_form
2015-05-27 10:11:32.931 | 2015-05-27 10:11:32.915 | 
self.CREATE_IMAGE_FORM_FIELDS)
2015-05-27 10:11:32.933 | 2015-05-27 10:11:32.916 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/forms.py,
 line 242, in __init__
2015-05-27 10:11:32.934 | 2015-05-27 10:11:32.918 | super(FormRegion, 
self).__init__(driver, conf, src_elem)
2015-05-27 10:11:32.936 | 2015-05-27 10:11:32.919 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/forms.py,
 line 221, in __init__
2015-05-27 10:11:32.938 | 2015-05-27 10:11:32.921 | src_elem = 
self._get_element(*self._default_form_locator)
2015-05-27 10:11:32.939 | 2015-05-27 10:11:32.923 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/baseregion.py,
 line 101, in _get_element
2015-05-27 10:11:32.941 | 2015-05-27 10:11:32.924 | return 
self.src_elem.find_element(*locator)
2015-05-27 10:11:32.942 | 2015-05-27 10:11:32.925 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/webdriver.py,
 line 29, in find_element
2015-05-27 10:11:32.943 | 2015-05-27 10:11:32.927 | web_el = 
super(WrapperFindOverride, self).find_element(by, value)
2015-05-27 10:11:32.945 | 2015-05-27 10:11:32.928 |   File 
/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py,
 line 664, in find_element
2015-05-27 10:11:32.947 | 2015-05-27 10:11:32.930 | {'using': by, 'value': 
value})['value']
2015-05-27 10:11:32.948 | 2015-05-27 10:11:32.931 |   File 
/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py,
 line 175, in execute
2015-05-27 10:11:32.949 | 2015-05-27 10:11:32.933 | 
self.error_handler.check_response(response)
2015-05-27 10:11:32.951 | 2015-05-27 10:11:32.934 |   File 
/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py,
 line 166, in check_response
2015-05-27 10:11:32.952 | 2015-05-27 10:11:32.936 | raise 
exception_class(message, screen, stacktrace)
2015-05-27 10:11:32.954 | 2015-05-27 10:11:32.937 | 
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate 
element: {"method":"css selector","selector":"div.modal-dialog"}

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwic2VsZW5pdW0uY29tbW9uLmV4Y2VwdGlvbnMuTm9TdWNoRWxlbWVudEV4Y2VwdGlvbjogTWVzc2FnZTogVW5hYmxlIHRvIGxvY2F0ZSBlbGVtZW50OiB7XFxcIm1ldGhvZFxcXCI6XFxcImNzcyBzZWxlY3RvclxcXCIsXFxcInNlbGVjdG9yXFxcIjpcXFwiZGl2Lm1vZGFsLWRpYWxvZ1xcXCJ9XCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svaG9yaXpvblwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDMyNzc2MTI5NzI3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

328 hits in 7 days, check and gate, master branch only.

This is a different failure trace than what's in bug 1436903.
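
Failures like this usually mean the test races the modal's render: a single
find_element call fires before the spinner's div.modal-dialog exists. The
usual remedy is an explicit wait; a generic poll-until helper (a sketch, not
Horizon's actual integration-test code) illustrates the idea:

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll predicate() until it returns a truthy value or timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```

In a test this would wrap the element lookup, e.g.
wait_for(lambda: driver.find_elements_by_css_selector('div.modal-dialog')),
instead of calling find_element once and hoping the modal is already there.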

** Affects: horizon
 Importance: Undecided
 Status: Confirmed

** Changed in: horizon
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459477

Title:
  gate-horizon-dsvm-integration intermittently fails with
  'selenium.common.exceptions.NoSuchElementException: Message: Unable to
  locate element: {method:css selector,selector:div.modal-
  dialog}'

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  http://logs.openstack.org/48/185848/1/gate/gate-horizon-dsvm-
  integration/0c6efa4/console.html#_2015-05-27_10_11_32_954

  2015-05-27 10:11:32.921 | 2015-05-27 10:11:32.905 | Traceback (most recent 
call last):
  2015-05-27 10:11:32.923 | 2015-05-27 10:11:32.906 |   File 
/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_image_create_delete.py,
 line 29, in 

[Yahoo-eng-team] [Bug 1459496] [NEW] task_state is not 'None' when vm stay in 'ERROR' state

2015-05-27 Thread Rui Chen
Public bug reported:

Compute instance task states represent what is happening to the instance
at the current moment. When the instance is in the 'ERROR' state, a
task_state of 'spawning' confuses users: the 'spawning' job has already
run and failed, so the task_state should be set to 'None'.

1. Version of Nova

$ git log -1
commit 4cf6ef68199183697a0209751575f88fe5b2a733
Merge: f40619b 70ba331
Author: Jenkins jenk...@review.openstack.org
Date:   Wed May 27 22:14:34 2015 +

Merge improve speed of some ec2 keypair tests

2. Log files

stack@devstack:/home/devstack/logs$ nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------------------------------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks                                               |
+--------------------------------------+---------------+--------+------------+-------------+--------------------------------------------------------+
| 5fa49075-f0a0-4806-bdf3-0cedd09c7c6f | chenrui_again | ERROR  | spawning   | NOSTATE     |                                                        |
| 19920850-86b0-4904-8431-bf1ed6f9cea7 | chenrui_vm    | ACTIVE | -          | Running     | private=fd6b:c8ae:7d0d:0:f816:3eff:fe96:bbfa, 10.0.0.3 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------------------------------------------+


2015-05-28 10:42:14.618 4705 WARNING nova.network.neutronv2.api [-] [instance: 
5fa49075-f0a0-4806-bdf3-0cedd09c7c6f] Neutron error: No more fixed IPs in 
network: ecf5d5d3-6198-4c95-84d4-db633fb09526
2015-05-28 10:42:14.619 4705 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager Traceback (most recent 
call last):
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/compute/manager.py, line 1535, in _allocate_network_async
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 667, in 
allocate_for_instance
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager 
self._delete_ports(neutron, instance, created_port_ids)
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 659, in 
allocate_for_instance
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager security_group_ids, 
available_macs, dhcp_opts)
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 321, in _create_port
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager raise 
exception.NoMoreFixedIps(net=network_id)
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager NoMoreFixedIps: No 
fixed IP addresses available for network: ecf5d5d3-6198-4c95-84d4-db633fb09526
2015-05-28 10:42:14.619 4705 TRACE nova.compute.manager
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py, line 
115, in wait
listener.cb(fileno)
  File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 
214, in main
result = function(*args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 1552, in 
_allocate_network_async
six.reraise(*exc_info)
  File /opt/stack/nova/nova/compute/manager.py, line 1535, in 
_allocate_network_async
dhcp_options=dhcp_options)
  File /opt/stack/nova/nova/network/neutronv2/api.py, line 667, in 
allocate_for_instance
self._delete_ports(neutron, instance, created_port_ids)
  File /usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 
85, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /opt/stack/nova/nova/network/neutronv2/api.py, line 659, in 
allocate_for_instance
security_group_ids, available_macs, dhcp_opts)
  File /opt/stack/nova/nova/network/neutronv2/api.py, line 321, in 
_create_port
raise exception.NoMoreFixedIps(net=network_id)
NoMoreFixedIps: No fixed IP addresses available for network: 
ecf5d5d3-6198-4c95-84d4-db633fb09526
Removing descriptor: 19

3.  Reproduce steps:

* create a neutron network and subnet, there is only 1 fixed ip in the subnet.
   neutron subnet-create --allocation-pool start=100.100.1.5,end=100.100.1.5 
ecf5d5d3-6198-4c95-84d4-db633fb09526 100.100.1.1/24
* boot a instance with the network_id.

Expected result:
* booting failed, instance is 'ERROR' state and task_state is 'None'

Actual result:
* booting failed, instance is 'ERROR' state and task_state is 'spawning'

** 

[Yahoo-eng-team] [Bug 1459446] [NEW] can't update dns for an ipv6 subnet

2015-05-27 Thread Doug Fish
Public bug reported:

It's not possible to update ipv6 subnet info using Horizon. To recreate:

Setup: create a new network (Admin->System->Networks->Create Network)
create an IPv6 subnet in that network
(new network Detail->Create Subnet)
Network Address: fdc5:f49e:fe9e::/64
IP Version: IPv6
Gateway IP: fdc5:f49e:fe9e::1
click Create

To view the problem: edit the subnet
(Admin->System->Networks[detail]->Edit Subnet->Subnet Details)
and attempt to add a DNS name server:
fdc5:f49e:fe9e::3

An error is returned: Error: Failed to update subnet
fdc5:f49e:fe9e::/64: Cannot update read-only attribute ipv6_ra_mode

however, it's possible to make the update using
neutron subnet-update --dns-nameserver [ip] [id]

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459446

Title:
  can't update dns for an ipv6 subnet

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It's not possible to update IPv6 subnet info using Horizon. To
  recreate:

  Setup: create a new network (Admin->System->Networks->Create Network)
  create an IPv6 subnet in that network
  (new network Detail->Create Subnet)
  Network Address: fdc5:f49e:fe9e::/64
  IP Version: IPv6
  Gateway IP: fdc5:f49e:fe9e::1
  click create

  To view the problem: edit the subnet
  (Admin->System->Networks[detail]->Edit Subnet->Subnet Details)
  and attempt to add a DNS name server:
  fdc5:f49e:fe9e::3

  An error is returned: Error: Failed to update subnet
  fdc5:f49e:fe9e::/64: Cannot update read-only attribute ipv6_ra_mode

  However, it's possible to make the same update using the CLI:
  neutron subnet-update --dns-nameserver [ip] [id]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459021] Re: nova vmware unit tests failing with oslo.vmware 0.13.0

2015-05-27 Thread OpenStack Infra
** Changed in: nova
   Status: Invalid => In Progress

** Changed in: nova
 Assignee: (unassigned) => Matthew Gilliard (matthew-gilliard-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459021

Title:
  nova vmware unit tests failing with oslo.vmware 0.13.0

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  In Progress

Bug description:
  http://logs.openstack.org/68/184968/2/check/gate-nova-python27/e3dadf7/console.html#_2015-05-26_20_45_35_734

  2015-05-26 20:45:35.734 | {4} nova.tests.unit.virt.vmwareapi.test_vm_util.VMwareVMUtilTestCase.test_create_vm_invalid_guestid [0.058940s] ... FAILED
  2015-05-26 20:45:35.735 | 
  2015-05-26 20:45:35.735 | Captured traceback:
  2015-05-26 20:45:35.735 | ~~~
  2015-05-26 20:45:35.736 |     Traceback (most recent call last):
  2015-05-26 20:45:35.736 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py", line 1201, in patched
  2015-05-26 20:45:35.736 |         return func(*args, **keywargs)
  2015-05-26 20:45:35.737 |       File "nova/tests/unit/virt/vmwareapi/test_vm_util.py", line 796, in test_create_vm_invalid_guestid
  2015-05-26 20:45:35.737 |         'folder', config_spec, 'res-pool')
  2015-05-26 20:45:35.737 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 422, in assertRaises
  2015-05-26 20:45:35.738 |         self.assertThat(our_callable, matcher)
  2015-05-26 20:45:35.738 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
  2015-05-26 20:45:35.738 |         mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  2015-05-26 20:45:35.738 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 483, in _matchHelper
  2015-05-26 20:45:35.739 |         mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.739 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
  2015-05-26 20:45:35.739 |         mismatch = self.exception_matcher.match(exc_info)
  2015-05-26 20:45:35.740 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
  2015-05-26 20:45:35.740 |         mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.740 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in match
  2015-05-26 20:45:35.741 |         reraise(*matchee)
  2015-05-26 20:45:35.741 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
  2015-05-26 20:45:35.741 |         result = matchee()
  2015-05-26 20:45:35.742 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 969, in __call__
  2015-05-26 20:45:35.742 |         return self._callable_object(*self._args, **self._kwargs)
  2015-05-26 20:45:35.742 |       File "nova/virt/vmwareapi/vm_util.py", line 1280, in create_vm
  2015-05-26 20:45:35.742 |         task_info = session._wait_for_task(vm_create_task)
  2015-05-26 20:45:35.743 |       File "nova/virt/vmwareapi/driver.py", line 714, in _wait_for_task
  2015-05-26 20:45:35.743 |         return self.wait_for_task(task_ref)
  2015-05-26 20:45:35.743 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py", line 381, in wait_for_task
  2015-05-26 20:45:35.744 |         return evt.wait()
  2015-05-26 20:45:35.744 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
  2015-05-26 20:45:35.744 |         return hubs.get_hub().switch()
  2015-05-26 20:45:35.745 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
  2015-05-26 20:45:35.745 |         return self.greenlet.switch()
  2015-05-26 20:45:35.745 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
  2015-05-26 20:45:35.745 |         self.f(*self.args, **self.kw)
  2015-05-26 20:45:35.746 |       File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py", line 423, in _poll_task
  2015-05-26 20:45:35.746 |         raise task_ex
  2015-05-26 20:45:35.746
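The test expects create_vm to surface the fault raised while waiting on the vCenter task; with oslo.vmware 0.13.0 that wait now goes through a polling loop, which changed what the test observes. The assertion pattern itself can be sketched with a mock (all names here are illustrative stand-ins, not the real nova/oslo.vmware signatures):

```python
from unittest import mock

def create_vm(session, task_ref):
    # Stand-in for nova's vm_util.create_vm: it waits on the vCenter
    # task and lets any task fault propagate to the caller.
    return session._wait_for_task(task_ref)

# Simulate the task poller re-raising the vSphere fault for a bad guestId.
session = mock.Mock()
session._wait_for_task.side_effect = ValueError("invalid guestId")

try:
    create_vm(session, "task-123")
    outcome = "no exception"
except ValueError as exc:
    outcome = "raised: %s" % exc

print(outcome)  # raised: invalid guestId
```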