[Yahoo-eng-team] [Bug 1355621] [NEW] nova floating-ip-create needs pool name

2014-08-12 Thread Choonho Son
Public bug reported:

floating-ip-create's help shows floating-ip-pool as optional, but it is
actually required to specify a pool name.

#
# help menu
#
[root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
usage: nova floating-ip-create [<floating-ip-pool>]

Allocate a floating IP for the current tenant.

Positional arguments:
  <floating-ip-pool>  Name of Floating IP Pool. (Optional)

#
# error log
#
[root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)
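Until the help text or the server behavior is fixed, a client-side fallback can paper over this. A minimal sketch, assuming a client object exposing list_pools() and create_ip(pool) — these are illustrative stand-ins, not the real novaclient API:

```python
def create_floating_ip(client, pool=None):
    """Allocate a floating IP, defaulting to the first available pool.

    `client` is assumed to expose list_pools() and create_ip(pool);
    both names are hypothetical stand-ins, not novaclient methods.
    """
    if pool is None:
        pools = client.list_pools()
        if not pools:
            raise RuntimeError("No floating IP pool available")
        pool = pools[0]  # fall back to the first pool instead of a 404
    return client.create_ip(pool)


class FakeClient:
    """Minimal stub standing in for a nova client."""

    def list_pools(self):
        return ["public"]

    def create_ip(self, pool):
        return {"pool": pool, "ip": "203.0.113.10"}


print(create_floating_ip(FakeClient())["pool"])  # falls back to "public"
```

With this shape, omitting the pool argument behaves the way the help text already promises.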

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355621

Title:
  nova floating-ip-create needs pool name

Status in OpenStack Compute (Nova):
  New

Bug description:
  floating-ip-create's help shows floating-ip-pool as optional, but it is
  actually required to specify a pool name.

  #
  # help menu
  #
  [root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
  usage: nova floating-ip-create [<floating-ip-pool>]

  Allocate a floating IP for the current tenant.

  Positional arguments:
<floating-ip-pool>  Name of Floating IP Pool. (Optional)

  #
  # error log
  #
  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
  ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355620] [NEW] nova floating-ip-create needs pool name

2014-08-12 Thread Choonho Son
Public bug reported:

floating-ip-create's help shows floating-ip-pool as optional, but it is
actually required to specify a pool name.

#
# help menu
#
[root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
usage: nova floating-ip-create [<floating-ip-pool>]

Allocate a floating IP for the current tenant.

Positional arguments:
  <floating-ip-pool>  Name of Floating IP Pool. (Optional)

#
# error log
#
[root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355620

Title:
  nova floating-ip-create needs pool name

Status in OpenStack Compute (Nova):
  New

Bug description:
  floating-ip-create's help shows floating-ip-pool as optional, but it is
  actually required to specify a pool name.

  #
  # help menu
  #
  [root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
  usage: nova floating-ip-create [<floating-ip-pool>]

  Allocate a floating IP for the current tenant.

  Positional arguments:
<floating-ip-pool>  Name of Floating IP Pool. (Optional)

  #
  # error log
  #
  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
  ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355620/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355627] [NEW] There should be a parameter to define MTU for DVR interfaces in l3_agent.ini config file

2014-08-12 Thread Sarada
Public bug reported:

There should be a parameter to define MTU for DVR interfaces in
l3_agent.ini config file


Since configuring the MTU as 8900 for DVR router interfaces gives better 
performance, it would be good to provide an option to configure the MTU for 
DVR interfaces in the l3_agent.ini config file. 

By default the DVR interface MTU is configured as 1500. Since we have
seen the significant improvements in performance with MTU 8900 for DVR
interfaces, it would be good to provide an attribute/parameter in the
l3_agent config file to enable the user to provide the desired MTU for
the DVR interfaces.
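For illustration only, such an option might look like the following fragment of l3_agent.ini. The name network_device_mtu is borrowed from neutron's interface-driver config of that era; whether it applies to all DVR interfaces is exactly the open question here, so treat this as a sketch, not documented behavior:

```ini
# l3_agent.ini (sketch; option name and scope are assumptions)
[DEFAULT]
# Desired MTU for router interfaces created by the L3/DVR agent.
network_device_mtu = 8900
```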

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355627

Title:
  There should be a parameter to define MTU for DVR interfaces in the
  l3_agent.ini config file

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There should be a parameter to define MTU for DVR interfaces in
  l3_agent.ini config file

  
  Since configuring the MTU as 8900 for DVR router interfaces gives better 
performance, it would be good to provide an option to configure the MTU for 
DVR interfaces in the l3_agent.ini config file. 

  By default the DVR interface MTU is configured as 1500. Since we have
  seen the significant improvements in performance with MTU 8900 for DVR
  interfaces, it would be good to provide an attribute/parameter in the
  l3_agent config file to enable the user to provide the desired MTU for
  the DVR interfaces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355634] [NEW] reschedule error after creating an instance from a remote volume failed

2014-08-12 Thread HenryShen
Public bug reported:


Boot an instance from a remote volume created from a specified image. When 
the spawn process fails for some reason, the reschedule process fails to 
recreate the instance.
To simulate the spawn failure, an exception was raised just before the call 
to self.driver.spawn() in _build_and_run_instance() in compute/manager.py.
Then execute: nova boot --flavor 1 --block-device 
id=my_image_id,source=image,dest=volume,bus=virtio,shutdown=removed,device=/dev/vda,bootindex=0,size=1
 --nic net-id=my_net_id; creating the instance failed.
cinder list showed that the state of the remote volume is in-use.
It seems the reschedule failed because the volume wasn't freed.
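The expected cleanup order can be sketched abstractly (all names here are illustrative stand-ins, not Nova's internals): before handing the instance back to the scheduler, the failed build should free the volumes it attached, otherwise the retry finds them in-use.

```python
class VolumeInUse(Exception):
    pass


def failing_spawn():
    # Stands in for self.driver.spawn() raising during the build.
    raise RuntimeError("spawn failed")


def build_instance(spawn, volume_state):
    """Sketch of a build attempt that cleans up before reschedule.

    `volume_state` is an in-memory stand-in for the Cinder volume;
    skipping the cleanup in the except block reproduces the bug.
    """
    volume_state["status"] = "in-use"  # volume attached for the boot
    try:
        spawn()
    except Exception:
        volume_state["status"] = "available"  # free it before retrying
        raise


def reschedule(volume_state):
    if volume_state["status"] != "available":
        raise VolumeInUse("volume not freed; reschedule fails")
    return "rescheduled"


vol = {"status": "available"}
try:
    build_instance(failing_spawn, vol)
except RuntimeError:
    pass
print(reschedule(vol))  # succeeds only because the cleanup ran
```

The reported behavior corresponds to the except-block cleanup never running, leaving the volume in-use for the rescheduled attempt.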

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355634

Title:
  reschedule error after creating an instance from a remote volume
  failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  
Boot an instance from a remote volume created from a specified image. When 
the spawn process fails for some reason, the reschedule process fails to 
recreate the instance.
To simulate the spawn failure, an exception was raised just before the call 
to self.driver.spawn() in _build_and_run_instance() in compute/manager.py.
Then execute: nova boot --flavor 1 --block-device 
id=my_image_id,source=image,dest=volume,bus=virtio,shutdown=removed,device=/dev/vda,bootindex=0,size=1
 --nic net-id=my_net_id; creating the instance failed.
cinder list showed that the state of the remote volume is in-use.
It seems the reschedule failed because the volume wasn't freed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355649] [NEW] L3 scheduler additions to support DVR migration fails

2014-08-12 Thread Robert Collins
Public bug reported:

I have a running cloud (off of trunk) which I'm upgrading, and is failing to 
apply neutron migrations:
+ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade 31d7f831a591 -> 5589aa32bf80, L3 
scheduler additions to support DVR
Traceback (most recent call last):
  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
sys.exit(main())
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/cli.py,
 line 175, in main
CONF.command.func(config, CONF.command.name)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/cli.py,
 line 85, in do_upgrade_downgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/cli.py,
 line 63, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/command.py,
 line 125, in upgrade
script.run_env()
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/script.py, 
line 203, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/util.py, 
line 215, in load_python_file
module = load_module_py(module_id, path)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/compat.py, 
line 58, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py,
 line 125, in <module>
run_migrations_online()
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py,
 line 109, in run_migrations_online
options=build_options())
  File "<string>", line 7, in run_migrations
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/environment.py,
 line 689, in run_migrations
self.get_context().run_migrations(**kw)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/migration.py,
 line 263, in run_migrations
change(**kw)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/5589aa32bf80_l3_dvr_scheduler.py,
 line 54, in upgrade
sa.PrimaryKeyConstraint('router_id')
  File "<string>", line 7, in create_table
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/operations.py,
 line 713, in create_table
self._table(name, *columns, **kw)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/ddl/impl.py,
 line 149, in create_table
self._exec(schema.CreateTable(table))
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/ddl/impl.py,
 line 76, in _exec
conn.execute(construct, *multiparams, **params)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 729, in execute
return meth(self, multiparams, params)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py,
 line 69, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 783, in _execute_ddl
compiled
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 958, in _execute_context
context)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1160, in _handle_dbapi_exception
exc_info
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py,
 line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 951, in _execute_context
context)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py,
 line 436, in do_execute
cursor.execute(statement, parameters)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/MySQLdb/cursors.py,
 line 205, in execute
self.errorhandler(self, exc, value)
  File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/MySQLdb/connections.py,
 line 36, in defaulterrorhandler
raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 
'csnat_l3_agent_bindings' already exists") '\nCREATE TABLE 
csnat_l3_agent_bindings (\n\trouter_id VARCHAR(36) NOT NULL, \n\tl3_agent_id 
VARCHAR(36) NOT NULL, \n\thost_id VARCHAR(255), \n\tcsnat_gw_port_id 
VARCHAR(36), \n\tPRIMARY 
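A generic way to make such a migration resilient is to check for the table before creating it. This is a sketch using stdlib sqlite3 purely for illustration, not the actual alembic fix (an alembic migration would apply the equivalent guard before op.create_table); whether neutron should guard this particular table is for review:

```python
import sqlite3


def create_table_if_missing(conn, name, ddl):
    """Create a table only if it does not already exist.

    Mirrors the existence check a migration could perform before
    issuing CREATE TABLE, instead of crashing on error 1050.
    """
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (name,),
    ).fetchone()
    if row is None:
        conn.execute(ddl)
        return True   # table created
    return False      # table already there: skip, don't crash


conn = sqlite3.connect(":memory:")
ddl = ("CREATE TABLE csnat_l3_agent_bindings ("
       "router_id VARCHAR(36) NOT NULL PRIMARY KEY, "
       "l3_agent_id VARCHAR(36) NOT NULL, "
       "host_id VARCHAR(255), csnat_gw_port_id VARCHAR(36))")
print(create_table_if_missing(conn, "csnat_l3_agent_bindings", ddl))  # True
print(create_table_if_missing(conn, "csnat_l3_agent_bindings", ddl))  # False
```

Running the guarded create twice is then idempotent, which is the property the failing upgrade lacks.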

[Yahoo-eng-team] [Bug 1355655] [NEW] Attempt to assign a role to a non existent user should fail

2014-08-12 Thread troy_chen
Public bug reported:

Running tempest tests, I get the following error:
===
StringException: Traceback (most recent call last):
   File 
"/usr/lib/python2.7/dist-packages/tempest/api/identity/admin/test_roles.py", 
line 143, in test_assign_user_role_for_non_existent_user 
 tenant['id'], 'junk-user-id-999', role['id'])
   File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 393, in 
assertRaises 
 self.assertThat(our_callable, matcher)
   File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat 
 raise mismatch_error
MismatchError: <bound method IdentityClientJSON.assign_user_role of 
<tempest.services.identity.json.identity_client.IdentityClientJSON object at 
0x7f9183c2f250>> returned ({'status': '200', 'content-length': '78', 'vary': 
'X-Auth-Token', 'date': 'Tue, 12 Aug 2014 08:00:39 GMT', 'content-type': 
'application/json', 'x-distribution': 'Ubuntu'}, {u'id': 
u'd4a5fe216f92439789389f968c6e50d6', u'name': u'role1552687157'})


Testing shows that assigning a role to a user that does not exist
succeeds.
See attachment Screenshot by postman
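The expected fix, sketched abstractly (the in-memory `users`/`assignments` containers are illustrative stand-ins for keystone's backend, not its actual driver interface): validate the user before storing the assignment, so an unknown user yields a 404 rather than a 200.

```python
class NotFound(Exception):
    """Stands in for the 404 the identity API should return."""
    pass


def assign_user_role(users, assignments, tenant_id, user_id, role_id):
    """Store a role assignment only for a user that exists.

    `users` is a set of known user ids; `assignments` collects
    (tenant, user, role) tuples. Both are hypothetical stand-ins.
    """
    if user_id not in users:
        raise NotFound("user %s not found" % user_id)
    assignments.append((tenant_id, user_id, role_id))
    return {"tenant": tenant_id, "user": user_id, "role": role_id}


users = {"real-user-id"}
assignments = []
assign_user_role(users, assignments, "t1", "real-user-id", "r1")
try:
    assign_user_role(users, assignments, "t1", "junk-user-id-999", "r1")
except NotFound:
    print("404 as expected")  # what the tempest test asserts
```

With this check in place, the tempest assertRaises in the traceback above would pass.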

** Affects: keystone
 Importance: Undecided
 Status: Incomplete

** Attachment added: postman
   
https://bugs.launchpad.net/bugs/1355655/+attachment/4175195/+files/%7BF25C4498-06C3-484D-B9E7-0533D1E98110%7D.bmp

** Changed in: keystone
   Status: New => Fix Committed

** Attachment removed: postman
   
https://bugs.launchpad.net/keystone/+bug/1355655/+attachment/4175195/+files/%7BF25C4498-06C3-484D-B9E7-0533D1E98110%7D.bmp

** Changed in: keystone
   Status: Fix Committed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355655

Title:
  Attempt to assign a role to a non existent user should fail

Status in OpenStack Identity (Keystone):
  Incomplete

Bug description:
  Running tempest tests, I get the following error:
  ===
  StringException: Traceback (most recent call last):
     File 
"/usr/lib/python2.7/dist-packages/tempest/api/identity/admin/test_roles.py", 
line 143, in test_assign_user_role_for_non_existent_user 
   tenant['id'], 'junk-user-id-999', role['id'])
     File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 393, 
in assertRaises 
   self.assertThat(our_callable, matcher)
     File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 406, 
in assertThat 
   raise mismatch_error
  MismatchError: <bound method IdentityClientJSON.assign_user_role of 
<tempest.services.identity.json.identity_client.IdentityClientJSON object at 
0x7f9183c2f250>> returned ({'status': '200', 'content-length': '78', 'vary': 
'X-Auth-Token', 'date': 'Tue, 12 Aug 2014 08:00:39 GMT', 'content-type': 
'application/json', 'x-distribution': 'Ubuntu'}, {u'id': 
u'd4a5fe216f92439789389f968c6e50d6', u'name': u'role1552687157'})
  

  Testing shows that assigning a role to a user that does not exist
succeeds.
  See attachment Screenshot by postman

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355661] [NEW] evacuate scheduler can support it?

2014-08-12 Thread troy_chen
Public bug reported:

Hello everyone

The host-evacuate/evacuate functions can migrate all instances on a down 
host to another compute node.
example:
nova host-evacuate --target_host compute1 compute2

Can the scheduler reasonably distribute all the evacuated virtual machines
across multiple servers?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355661

Title:
  evacuate scheduler can support it?

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hello everyone

  The host-evacuate/evacuate functions can migrate all instances on a down 
host to another compute node.
  example:
  nova host-evacuate --target_host compute1 compute2

  Can the scheduler reasonably distribute all the evacuated virtual machines
  across multiple servers?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288231] Re: Page layout of #content_body is build with padding

2014-08-12 Thread Timur Sufiev
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1288231

Title:
  Page layout of #content_body is build with padding

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  #content_body is aligned on the page with padding measured from the
  left page edge, which is pretty inflexible.

  It would be better for #content_body to be aligned to the div#sidebar
  element.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355715] [NEW] Adding a role member gives duplicate entry error whereas assigning role member to an user gives role not found error.

2014-08-12 Thread Ajaya Agrawal
Public bug reported:

The above problem occurs if a role "Member" is already present.

logs:
openstack role add member

DEBUG: keystoneclient.session REQ: curl -i --insecure -X POST 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles -H "User-Agent: 
python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: 
1ffd2d4d966a47ad871525b986f7171e" -d '{"role": {"name": "member"}}'
INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
DEBUG: requests.packages.urllib3.connectionpool "POST /v2.0/OS-KSADM/roles 
HTTP/1.1" 409 120
DEBUG: keystoneclient.session RESP: [409] {'date': 'Tue, 12 Aug 2014 10:02:22 
GMT', 'content-type': 'application/json', 'content-length': '120', 'vary': 
'X-Auth-Token'}
RESP BODY: {"error": {"message": "Conflict occurred attempting to store role - 
Duplicate Entry", "code": 409, "title": "Conflict"}}

openstack role add --project agr --user aj member

DEBUG: keystoneclient.session REQ: curl -i --insecure -X GET 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles/member -H "User-Agent: 
python-keystoneclient" -H "X-Auth-Token: 661cb74bee6e44ffacd084a3cc013e61"
INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
DEBUG: requests.packages.urllib3.connectionpool "GET 
/v2.0/OS-KSADM/roles/member HTTP/1.1" 404 88
DEBUG: keystoneclient.session RESP: [404] {'date': 'Tue, 12 Aug 2014 10:03:32 
GMT', 'content-type': 'application/json', 'content-length': '88', 'vary': 
'X-Auth-Token'}
RESP BODY: {"error": {"message": "Could not find role: member", "code": 404, 
"title": "Not Found"}}

DEBUG: keystoneclient.session Request returned failure status: 404
DEBUG: keystoneclient.session REQ: curl -i --insecure -X GET 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles -H "User-Agent: 
python-keystoneclient" -H "X-Auth-Token: 661cb74bee6e44ffacd084a3cc013e61"
INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
DEBUG: requests.packages.urllib3.connectionpool "GET /v2.0/OS-KSADM/roles 
HTTP/1.1" 200 540
DEBUG: keystoneclient.session RESP: [200] {'date': 'Tue, 12 Aug 2014 10:03:32 
GMT', 'content-type': 'application/json', 'content-length': '540', 'vary': 
'X-Auth-Token'}
RESP BODY: {"roles": [{"id": "1c56c40303c940ef9498ca9e21706a5a", "name": 
"admin"}, {"id": "284081712d604180a9362221395dc18b", "name": "ResellerAdmin"}, 
{"id": "4f840799baa44197ab46573462770add", "name": "Member"}, {"id": 
"76a88370b9294b09b3ec30dbe2e6c7f0", "name": "heat_stack_owner"}, {"id": 
"778d9bb6a2194eb2b7ef076821168e23", "name": "service"}, {"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": 
"a82e17d760704ac180f2ebf3bf2efc3e", "name": "anotherrole"}, {"id": 
"ed6031e3725f42a4a050d6987ccee574", "name": "heat_stack_user"}]}
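The logs suggest the create path treats "member" as a duplicate of the existing "Member" (a case-insensitive uniqueness check, likely from the database collation) while the lookup path is case-sensitive and so finds nothing. A consistent policy, sketched with a hypothetical helper rather than keystone code, would make the lookup match the uniqueness rule:

```python
def find_role(roles, name):
    """Case-insensitive role lookup, matching the case-insensitive
    uniqueness the backend apparently enforces on role names.

    `roles` maps role id -> role name (an in-memory stand-in for
    the keystone role store).
    """
    wanted = name.lower()
    for role_id, role_name in roles.items():
        if role_name.lower() == wanted:
            return role_id
    return None


roles = {"4f840799baa44197ab46573462770add": "Member",
         "1c56c40303c940ef9498ca9e21706a5a": "admin"}
print(find_role(roles, "member"))  # finds the existing "Member" role
```

With both paths agreeing on case handling, "role add member" would either succeed on both calls or 404/409 consistently.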

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- Adding a role member gives duplicate entry error while assigning role 
member to an user gives role not found error.
+ Adding a role member gives duplicate entry error whereas assigning role 
member to an user gives role not found error.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355715

Title:
  Adding a role member gives duplicate entry error whereas assigning
  role member to an user gives role not found error.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The above problem occurs if a role "Member" is already present.

  logs:
  openstack role add member

  DEBUG: keystoneclient.session REQ: curl -i --insecure -X POST 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles -H "User-Agent: 
python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: 
1ffd2d4d966a47ad871525b986f7171e" -d '{"role": {"name": "member"}}'
  INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
  DEBUG: requests.packages.urllib3.connectionpool "POST /v2.0/OS-KSADM/roles 
HTTP/1.1" 409 120
  DEBUG: keystoneclient.session RESP: [409] {'date': 'Tue, 12 Aug 2014 10:02:22 
GMT', 'content-type': 'application/json', 'content-length': '120', 'vary': 
'X-Auth-Token'}
  RESP BODY: {"error": {"message": "Conflict occurred attempting to store role 
- Duplicate Entry", "code": 409, "title": "Conflict"}}

  openstack role add --project agr --user aj member

  DEBUG: keystoneclient.session REQ: curl -i --insecure -X GET 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles/member -H "User-Agent: 
python-keystoneclient" -H "X-Auth-Token: 661cb74bee6e44ffacd084a3cc013e61"
  INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
  DEBUG: requests.packages.urllib3.connectionpool "GET 
/v2.0/OS-KSADM/roles/member HTTP/1.1" 404 88
  DEBUG: keystoneclient.session RESP: [404] {'date': 'Tue, 12 Aug 2014 10:03:32 
GMT', 'content-type': 'application/json', 'content-length': '88', 'vary': 
'X-Auth-Token'}
  RESP BODY: {"error": {"message": "Could not find role: member", "code": 404, 
"title": "Not Found"}}

  DEBUG: keystoneclient.session Request returned failure status: 404
  DEBUG: 

[Yahoo-eng-team] [Bug 1355601] Re: User project can't be listed after the project is created

2014-08-12 Thread Hong-Guang
Tested on a wrong version; I changed its status to Invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355601

Title:
  User project can't be listed after the project is created

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Testing step:
  1:login as admin
  2:create a new user and assign admin role
  3:login as this new user
  4:create a new project
  5:search this new project and return empty

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355748] [NEW] Integration tests - Chrome and IE webdrivers

2014-08-12 Thread Daniel Korn
Public bug reported:

Currently the integration tests only use Selenium's FirefoxDriver, making 
Firefox the sole browser being tested.
Selenium WebDriver also supports ChromeDriver, InternetExplorerDriver, and a 
few other less interesting browsers (IMHO at least). 
The integration tests should run on these browsers as well, to discover 
browser-specific issues.

Useful info from Selenium Wiki:

1. Chrome Driver: https://code.google.com/p/selenium/wiki/ChromeDriver
2. Internet Explorer Driver: 
https://code.google.com/p/selenium/wiki/InternetExplorerDriver
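One possible shape for this (a sketch, not Horizon's test harness): map a configurable browser name to the selenium webdriver class to instantiate. The class paths below follow selenium 2.x conventions; treat them as assumptions.

```python
# Browser name -> (module, class) for the selenium webdriver to use.
DRIVERS = {
    "firefox": ("selenium.webdriver", "Firefox"),
    "chrome": ("selenium.webdriver", "Chrome"),
    "ie": ("selenium.webdriver", "Ie"),
}


def resolve_driver(browser):
    """Return (module, class) for the requested browser.

    A test runner could then instantiate it with
    getattr(importlib.import_module(mod), cls)().
    """
    try:
        return DRIVERS[browser.lower()]
    except KeyError:
        raise ValueError("unsupported browser: %s" % browser)


print(resolve_driver("Chrome"))  # ('selenium.webdriver', 'Chrome')
```

The integration test config would then expose a single browser setting instead of hard-coding FirefoxDriver.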

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: chrome chromedriver ie integration-tests internetexplorerdriver 
test-browsers webdrivers

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355748

Title:
  Integration tests - Chrome and IE webdrivers

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently the integration tests only use Selenium's FirefoxDriver, making 
Firefox the sole browser being tested.
  Selenium WebDriver also supports ChromeDriver, InternetExplorerDriver, and a 
few other less interesting browsers (IMHO at least). 
  The integration tests should run on these browsers as well, to discover 
browser-specific issues.

  Useful info from Selenium Wiki:

  1. Chrome Driver: https://code.google.com/p/selenium/wiki/ChromeDriver
  2. Internet Explorer Driver: 
https://code.google.com/p/selenium/wiki/InternetExplorerDriver

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355759] [NEW] L2populationRpcCallBackTunnelMixin get_agent_ports yields (None, {})

2014-08-12 Thread YAMAMOTO Takashi
Public bug reported:

L2populationRpcCallBackTunnelMixin.get_agent_ports yields (None, {}) for 
unknown networks, which is useless for consumers.
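A sketch of the suggested behavior (illustrative, not the actual neutron method signature): skip networks the agent does not know instead of yielding (None, {}) placeholders that every consumer must filter out.

```python
def get_agent_ports(fdb_entries, local_vlan_map):
    """Yield (vlan_mapping, agent_ports) per network.

    Networks absent from `local_vlan_map` are skipped entirely,
    rather than being reported as the useless (None, {}) pair.
    Both arguments are simplified stand-ins for the agent state.
    """
    for network_id, values in fdb_entries.items():
        mapping = local_vlan_map.get(network_id)
        if mapping is None:
            continue  # unknown network: nothing useful to report
        yield (mapping, values.get("ports", {}))


fdb = {"net-a": {"ports": {"host1": ["port1"]}},
       "net-unknown": {"ports": {}}}
vlans = {"net-a": "vlan-100"}
print(list(get_agent_ports(fdb, vlans)))  # only net-a survives
```

Consumers can then iterate the generator directly without a None check on every item.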

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355759

Title:
  L2populationRpcCallBackTunnelMixin get_agent_ports yields (None, {})

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  L2populationRpcCallBackTunnelMixin.get_agent_ports yields (None, {}) for 
unknown networks, which is useless for consumers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351029] Re: Add OS-FEDERATION to scoped federation tokens

2014-08-12 Thread wanghong
This bug has been fixed in https://review.openstack.org/111070

** Changed in: keystone
 Assignee: wanghong (w-wanghong) => (unassigned)

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1351029

Title:
  Add OS-FEDERATION to scoped federation tokens

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Currently, when a federated user gets a token, it has an OS-FEDERATION
  section under 'user', which contains information about the idp and
  protocol.

  However when the same user uses the unscoped token to get a scoped
  token, we should put the same information in there as well. This will
  help support revocation events for federated tokens, i.e. revoking all
  tokens based on IDP id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1351029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355777] [NEW] support for ipv6 nameservers

2014-08-12 Thread Pierre-Antoine Haidar-Bachminska
Public bug reported:

The current git version of nova does not fully support IPv6 nameservers, 
despite it being possible to set them during subnet creation.
This patch adds this support to nova (git) and its interfaces.template. It is 
currently deployed and used in our infrastructure based on Icehouse (Nova 
2.17.0).
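The patch itself is attached; for illustration only, the kind of Debian-style fragment nova's interfaces.template renders, extended with an IPv6 nameserver. All addresses are documentation placeholders, and the exact stanza nova emits may differ:

```
# /etc/network/interfaces fragment (illustrative; RFC 3849 addresses)
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
    gateway 2001:db8::1
    dns-nameservers 2001:db8::53
```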

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: dns_ipv6.patch
   
https://bugs.launchpad.net/bugs/1355777/+attachment/4175385/+files/dns_ipv6.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355777

Title:
  support for ipv6 nameservers

Status in OpenStack Compute (Nova):
  New

Bug description:
  The current git version of nova does not fully support IPv6 nameservers, 
despite it being possible to set them during subnet creation.
  This patch adds this support to nova (git) and its interfaces.template. It is 
currently deployed and used in our infrastructure based on Icehouse (Nova 
2.17.0).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355780] [NEW] q-l3 service is failing due to unsupported RPC version 1.3 in the agent

2014-08-12 Thread omrim
Public bug reported:

Because the q-l3 agent fails to load, my CI tests failed.

The stack trace is:
TRACE neutron Traceback (most recent call last):
TRACE neutron   File "/usr/local/bin/neutron-l3-agent", line 10, in <module>
TRACE neutron sys.exit(main())
TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1787, 
in main
TRACE neutron manager=manager)
TRACE neutron   File /opt/stack/neutron/neutron/service.py, line 264, in 
create
TRACE neutron periodic_fuzzy_delay=periodic_fuzzy_delay)
TRACE neutron   File /opt/stack/neutron/neutron/service.py, line 197, in 
__init__
TRACE neutron self.manager = manager_class(host=host, *args, **kwargs)
TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1706, 
in __init__
TRACE neutron super(L3NATAgentWithStateReport, self).__init__(host=host, 
conf=conf)
TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 430, 
in __init__
TRACE neutron self.plugin_rpc.get_service_plugin_list(self.context))
TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 142, 
in get_service_plugin_list
TRACE neutron version='1.3')
TRACE neutron   File /opt/stack/neutron/neutron/common/log.py, line 36, in 
wrapper
TRACE neutron return method(*args, **kwargs)
TRACE neutron   File /opt/stack/neutron/neutron/common/rpc.py, line 170, in 
call
TRACE neutron context, msg, rpc_method='call', **kwargs)
TRACE neutron   File /opt/stack/neutron/neutron/common/rpc.py, line 196, in 
__call_rpc_method
TRACE neutron return func(context, msg['method'], **msg['args'])
TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, line 
152, in call
TRACE neutron retry=self.retry)
TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 90, 
in _send
TRACE neutron timeout=timeout, retry=retry)
TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 404, in send
TRACE neutron retry=retry)
TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 395, in _send
TRACE neutron raise result
TRACE neutron RemoteError: Remote error: UnsupportedVersion Endpoint does not 
support RPC version 1.3
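
The UnsupportedVersion error comes from oslo.messaging's endpoint version
compatibility check: an endpoint advertises a target version, and a call made
with version='1.3' is rejected unless the endpoint's target is the same major
version with an equal or higher minor. The sketch below is a simplified model
of that rule, not the real oslo.messaging code:

```python
# Simplified model of oslo.messaging's RPC version compatibility check.
# An endpoint advertising target version "1.2" rejects a call made with
# version="1.3", which is the failure mode in the trace above.

def is_compatible(requested, target):
    """True if an endpoint with `target` version can serve a call made
    with `requested` version (same major, target minor >= requested)."""
    r_major, r_minor = (int(p) for p in requested.split('.'))
    t_major, t_minor = (int(p) for p in target.split('.'))
    return r_major == t_major and t_minor >= r_minor

# The l3 agent calls get_service_plugin_list with version='1.3'; a server
# whose L3 RPC endpoint still advertises 1.2 raises UnsupportedVersion.
print(is_compatible('1.3', '1.2'))  # False -> UnsupportedVersion
print(is_compatible('1.3', '1.4'))  # True  -> call is served
```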

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355780

Title:
  q-l3 service is failing due to unsupported RPC version 1.3 in the
  agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Because the q-l3 agent fails to load, my CI tests fail.

  The stack trace is:
  TRACE neutron Traceback (most recent call last):
  TRACE neutron   File "/usr/local/bin/neutron-l3-agent", line 10, in <module>
  TRACE neutron sys.exit(main())
  TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 
1787, in main
  TRACE neutron manager=manager)
  TRACE neutron   File /opt/stack/neutron/neutron/service.py, line 264, in 
create
  TRACE neutron periodic_fuzzy_delay=periodic_fuzzy_delay)
  TRACE neutron   File /opt/stack/neutron/neutron/service.py, line 197, in 
__init__
  TRACE neutron self.manager = manager_class(host=host, *args, **kwargs)
  TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 
1706, in __init__
  TRACE neutron super(L3NATAgentWithStateReport, self).__init__(host=host, 
conf=conf)
  TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 
430, in __init__
  TRACE neutron self.plugin_rpc.get_service_plugin_list(self.context))
  TRACE neutron   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 
142, in get_service_plugin_list
  TRACE neutron version='1.3')
  TRACE neutron   File /opt/stack/neutron/neutron/common/log.py, line 36, in 
wrapper
  TRACE neutron return method(*args, **kwargs)
  TRACE neutron   File /opt/stack/neutron/neutron/common/rpc.py, line 170, in 
call
  TRACE neutron context, msg, rpc_method='call', **kwargs)
  TRACE neutron   File /opt/stack/neutron/neutron/common/rpc.py, line 196, in 
__call_rpc_method
  TRACE neutron return func(context, msg['method'], **msg['args'])
  TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, line 
152, in call
  TRACE neutron retry=self.retry)
  TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 90, 
in _send
  TRACE neutron timeout=timeout, retry=retry)
  TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 404, in send
  TRACE neutron retry=retry)
  TRACE neutron   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 395, in _send
  TRACE neutron raise result
  TRACE neutron 

[Yahoo-eng-team] [Bug 1354454] Re: First attempt at inline edit does not work

2014-08-12 Thread Sam Betts
Unable to replicate on horizon/master, marking as Invalid

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1354454

Title:
  First attempt at inline edit does not work

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  After opening a browser and bringing up the dashboard, navigate to the
  Admin > Projects panel and use inline edit to change one of the
  project names.  The new name does not seem to stick the first time you
  try.  If you try again it will work the second time and every time
  after that.  To make the problem happen again it seems you need to
  close the browser and reopen it.  I was able to reproduce this on
  Chrome, Firefox, and IE 10 so I don't think it's browser specific.

  I think this is causing one of the projects selenium tests to fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1354454/+subscriptions



[Yahoo-eng-team] [Bug 1355661] Re: evacuate scheduler can support it?

2014-08-12 Thread Thang Pham
Troy,

Please post your question to the mailing list
(openst...@lists.openstack.org), as opposed to an actual bug.  This is a
question and not a bug.

Regards,
Thang

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355661

Title:
  evacuate scheduler can support it?

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hello everyone

  The host-evacuate/evacuate functions can migrate all instances from a downed 
compute node to another compute node.
  Example:
  nova host-evacuate --target_host compute1 compute2

  Can the scheduler reasonably distribute all of the evacuated virtual
  machines across multiple target servers?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355661/+subscriptions



[Yahoo-eng-team] [Bug 1355009] Re: Client for test Federation on Icehouse Keystone

2014-08-12 Thread Dolph Mathews
Closing this because it's not a bug (there's nothing to reproduce), but
subscribing Marek and Steve who should be able to help you out.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355009

Title:
  Client for test Federation on Icehouse Keystone

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Hi,

  I'm studying federation in the Keystone Icehouse version, so I created the 
entries for the IdP and mapped the attributes using the OS-FEDERATION API, 
according to: 
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-federation-ext.md

  In this version (Icehouse), Keystone acts more like a proxy (in contrast to 
the Grizzly version, where federated operations such as certificate handshakes 
were performed in the federated middleware). Now, Shibboleth/Apache configured 
as the SP is used for that.

  So, my question is: how can I test federated authentication with
  Keystone? In other words, how could I simulate a user trying to access
  a resource and authenticating against an IdP? Does OpenStack have a
  specific client for that (like the federated Swift client in the
  Grizzly version)?

  Thanks.
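
One common approach is to request an unscoped federated token against the
protocol-specific OS-FEDERATION auth URL once the SP's Shibboleth/Apache front
end has established the session. The sketch below only builds that URL; the
endpoint layout is taken from the OS-FEDERATION extension document linked
above, and the IdP/protocol names are placeholders:

```python
def federation_auth_url(base, idp_id, protocol_id):
    """Build the v3 OS-FEDERATION auth URL for a given identity provider
    and protocol (layout per the OS-FEDERATION extension document)."""
    return ('%s/v3/OS-FEDERATION/identity_providers/%s'
            '/protocols/%s/auth' % (base.rstrip('/'), idp_id, protocol_id))

# 'testshib' and 'saml2' are example names, not values from this bug report.
url = federation_auth_url('http://keystone:5000', 'testshib', 'saml2')
print(url)
```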

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355009/+subscriptions



[Yahoo-eng-team] [Bug 1355857] [NEW] HyperV: resize of instance fails when trying migration across host

2014-08-12 Thread Mayank
Public bug reported:

I have a devstack setup and two Hyper-V hosts. Both hosts are in the same 
domain, and the compute service and live migration are enabled on both.
When I try to resize a provisioned instance, it succeeds if the instance stays 
on the same host. However, when the resize has to migrate the instance across 
hosts, it fails with the following error:

Compute.log
2014-08-12 14:03:57.533 2992 DEBUG nova.virt.hyperv.migrationops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] migrate_disk_and_power_off called 
migrate_disk_and_power_off C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py:114
2014-08-12 14:03:57.533 2992 DEBUG nova.virt.hyperv.vmops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] Power off instance power_off C:\Program 
Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py:425
2014-08-12 14:03:57.908 2992 DEBUG nova.virt.hyperv.vmutils 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] WMI job succeeded: Turning Off 
Virtual Machine, Elapsed=00.217830:000 _wait_for_job C:\Program 
Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py:481
2014-08-12 14:03:57.908 2992 DEBUG nova.virt.hyperv.vmutils 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Successfully changed vm state 
of instance-05d6 to 3 set_vm_state C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py:394
2014-08-12 14:03:57.908 2992 DEBUG nova.virt.hyperv.vmops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Successfully changed state of 
VM instance-05d6 to: 3 _set_vm_state C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py:440
2014-08-12 14:03:59.096 2992 DEBUG nova.virt.hyperv.migrationops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Migration target host: 
10.1.4.214 _migrate_disk_files C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py:53
2014-08-12 14:04:04.753 2992 DEBUG nova.virt.hyperv.pathutils 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Creating directory: 
\\10.1.4.214\C$$\OpenStack\Instances\instance-05d6 _check_create_dir 
C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\pathutils.py:96
2014-08-12 14:07:20.177 2992 ERROR nova.compute.manager 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] Setting instance vm_state to ERROR
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] Traceback (most recent call last):
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py, 
line 5780, in _error_out_instance_on_exception
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] yield
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py, 
line 3569, in resize_instance
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] block_device_info)
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\driver.py,
 line 191, in migrate_disk_and_power_off
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] block_device_info)
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py,
 line 126, in migrate_disk_and_power_off
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] 
self._migrate_disk_files(instance_name, disk_files, dest)
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py,
 line 86, in _migrate_disk_files
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] dest_path)
2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   

[Yahoo-eng-team] [Bug 1355875] [NEW] VMware: ESX deprecation break VC driver

2014-08-12 Thread Gary Kotton
Public bug reported:

The ESX deprecation
https://github.com/openstack/nova/commit/1deb31f85a8f5d1e261b2cf1eddc537a5da7f60b
breaks devstack:

2014-08-12 07:53:45.453 ERROR nova.openstack.common.threadgroup [-] 
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup Traceback (most 
recent call last):
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 125, in wait
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup x.wait()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 47, in wait
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/service.py, line 490, in run_service
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
service.start()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/service.py, line 164, in start
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/compute/manager.py, line 1058, in init_host
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
self.driver.init_host(host=self.host)
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/virt/driver.py, line 150, in init_host
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup raise 
NotImplementedError()
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
NotImplementedError
2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
nicira@Ubuntu1404Server:/opt/stack/nova$
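
The trace ends in the abstract nova.virt.driver.ComputeDriver.init_host, which
raises NotImplementedError; any concrete driver left on the compute_driver
path must override it. A stripped-down illustration of that contract (not the
actual nova class hierarchy):

```python
class ComputeDriver:
    """Minimal stand-in for nova.virt.driver.ComputeDriver."""
    def init_host(self, host):
        # The abstract base raises, which is exactly the trace above when
        # a driver fails to provide its own implementation.
        raise NotImplementedError()

class VMwareVCDriver(ComputeDriver):
    """Sketch of a driver overriding init_host; the real VC driver would
    establish its vCenter session here."""
    def init_host(self, host):
        self._host = host

driver = VMwareVCDriver()
driver.init_host('compute-1')  # succeeds; the base class alone would raise
```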

** Affects: nova
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => Critical

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355875

Title:
  VMware: ESX deprecation break VC driver

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The ESX deprecation
  https://github.com/openstack/nova/commit/1deb31f85a8f5d1e261b2cf1eddc537a5da7f60b
  breaks devstack:

  2014-08-12 07:53:45.453 ERROR nova.openstack.common.threadgroup [-] 
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 125, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup x.wait()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 47, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2014-08-12 07:53:45.453 

[Yahoo-eng-team] [Bug 1355882] [NEW] get_floating_ip_pools for neutron v2 API inconsistent with nova network API

2014-08-12 Thread Salvatore Orlando
Public bug reported:

Commit e00bdd7aa8c1ac9f1ae5057eb2f774f34a631845 changed
get_floating_ip_pools so that it now returns a list of names rather
than a list whose elements are of the form {'name': 'pool_name'}.

The implementation of this method in nova.network.neutron_v2.api has not
been adjusted, which causes
tempest.api.compute.floating_ips.test_list_floating_ips.FloatingIPDetailsTestJSON
to always fail with neutron.

The fix is straightforward.
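
The incompatibility is easy to see by comparing the two return shapes; a
caller written against the old format breaks on the new one. A minimal
illustration in plain Python (a defensive helper for the sake of the example,
not code from nova):

```python
# Old format: list of dicts; new format (after the commit above): list of names.
old_style = [{'name': 'public'}, {'name': 'private'}]
new_style = ['public', 'private']

def pool_names(pools):
    """Normalize either return shape to a flat list of pool names."""
    return [p['name'] if isinstance(p, dict) else p for p in pools]

print(pool_names(old_style))  # ['public', 'private']
print(pool_names(new_style))  # ['public', 'private']
```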

** Affects: nova
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: neutron-full-job

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355882

Title:
  get_floating_ip_pools for neutron v2 API inconsistent with nova
  network API

Status in OpenStack Compute (Nova):
  New

Bug description:
  Commit e00bdd7aa8c1ac9f1ae5057eb2f774f34a631845 changed
  get_floating_ip_pools so that it now returns a list of names rather
  than a list whose elements are of the form {'name': 'pool_name'}.

  The implementation of this method in nova.network.neutron_v2.api has
  not been adjusted, which causes
  tempest.api.compute.floating_ips.test_list_floating_ips.FloatingIPDetailsTestJSON
  to always fail with neutron.

  The fix is straightforward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355882/+subscriptions



[Yahoo-eng-team] [Bug 1355879] [NEW] javelin check_server is not compatible with Neutron

2014-08-12 Thread Jakub Libosvar
Public bug reported:

If Neutron is used in OpenStack and a created server is checked by
javelin, the check fails because javelin is designed to work with nova
network only:

2014-08-11 11:37:32.966 | 2014-08-11 11:37:32.966 563 DEBUG tempest.cmd.javelin 
[-] Created client for user {'id': u'b9daacf50a03427a973ddff1a8140abe', 
'tenant_id': u'c45c6aed5cb64937b8cce399b2ca40c4', 'name': 'javelin', 'tenant': 
'javelin', 'pass': 'gungnir'} client_for_user 
/opt/stack/old/tempest/tempest/cmd/javelin.py:81
2014-08-11 11:37:33.103 | 2014-08-11 11:37:33.103 563 INFO 
tempest.common.rest_client [-] Request (main): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
2014-08-11 11:37:33.177 | 2014-08-11 11:37:33.176 563 INFO 
tempest.common.rest_client [req-105bd99d-7db6-4332-a45c-36cca487ca05 None] 
Request (main): 200 GET 
http://127.0.0.1:8774/v2/c45c6aed5cb64937b8cce399b2ca40c4/servers 0.073s
2014-08-11 11:37:33.318 | 2014-08-11 11:37:33.318 563 INFO 
tempest.common.rest_client [req-ddb4937a-92da-4e1a-90f0-037dab9f0078 None] 
Request (main): 200 GET 
http://127.0.0.1:8774/v2/c45c6aed5cb64937b8cce399b2ca40c4/servers/a35df94a-108c-498e-bdb6-b2d7d03438f7
 0.138s
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 CRITICAL tempest [-] 
KeyError: 'private'
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest Traceback 
(most recent call last):
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
"/usr/local/bin/javelin2", line 10, in <module>
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest 
sys.exit(main())
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
/opt/stack/old/tempest/tempest/cmd/javelin.py, line 575, in main
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest 
checker.check()
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
/opt/stack/old/tempest/tempest/cmd/javelin.py, line 195, in check
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest 
self.check_servers()
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
/opt/stack/old/tempest/tempest/cmd/javelin.py, line 246, in check_servers
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest addr = 
found['addresses']['private'][0]['addr']
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest KeyError: 
'private'
2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest 


Also, even if the private address were obtained correctly, the subsequent ping 
attempts would fail, because the address needs to be pinged from the correct 
namespace.
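
The KeyError arises because nova-network keys the server's addresses under the
fixed network name 'private', while with Neutron the key is whatever the
tenant network happens to be called. A defensive lookup along these lines (a
sketch, not the actual javelin fix) falls back to the first available network:

```python
def first_address(server):
    """Return the first address of a server dict regardless of the
    network name; javelin currently hardcodes 'private' instead."""
    addresses = server.get('addresses', {})
    if 'private' in addresses:          # nova-network default network name
        return addresses['private'][0]['addr']
    for addrs in addresses.values():    # any Neutron tenant network
        if addrs:
            return addrs[0]['addr']
    return None

# 'javelin-net' is a hypothetical Neutron tenant network name.
nova_net = {'addresses': {'private': [{'addr': '10.0.0.3'}]}}
neutron = {'addresses': {'javelin-net': [{'addr': '10.1.0.5'}]}}
print(first_address(nova_net))  # 10.0.0.3
print(first_address(neutron))   # 10.1.0.5
```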

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355879

Title:
  javelin check_server is not compatible with Neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If Neutron is used in OpenStack and a created server is checked by
  javelin, the check fails because javelin is designed to work with nova
  network only:

  2014-08-11 11:37:32.966 | 2014-08-11 11:37:32.966 563 DEBUG 
tempest.cmd.javelin [-] Created client for user {'id': 
u'b9daacf50a03427a973ddff1a8140abe', 'tenant_id': 
u'c45c6aed5cb64937b8cce399b2ca40c4', 'name': 'javelin', 'tenant': 'javelin', 
'pass': 'gungnir'} client_for_user 
/opt/stack/old/tempest/tempest/cmd/javelin.py:81
  2014-08-11 11:37:33.103 | 2014-08-11 11:37:33.103 563 INFO 
tempest.common.rest_client [-] Request (main): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
  2014-08-11 11:37:33.177 | 2014-08-11 11:37:33.176 563 INFO 
tempest.common.rest_client [req-105bd99d-7db6-4332-a45c-36cca487ca05 None] 
Request (main): 200 GET 
http://127.0.0.1:8774/v2/c45c6aed5cb64937b8cce399b2ca40c4/servers 0.073s
  2014-08-11 11:37:33.318 | 2014-08-11 11:37:33.318 563 INFO 
tempest.common.rest_client [req-ddb4937a-92da-4e1a-90f0-037dab9f0078 None] 
Request (main): 200 GET 
http://127.0.0.1:8774/v2/c45c6aed5cb64937b8cce399b2ca40c4/servers/a35df94a-108c-498e-bdb6-b2d7d03438f7
 0.138s
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 CRITICAL tempest [-] 
KeyError: 'private'
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest Traceback 
(most recent call last):
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
"/usr/local/bin/javelin2", line 10, in <module>
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest 
sys.exit(main())
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
/opt/stack/old/tempest/tempest/cmd/javelin.py, line 575, in main
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest 
checker.check()
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE tempest   File 
/opt/stack/old/tempest/tempest/cmd/javelin.py, line 195, in check
  2014-08-11 11:37:33.331 | 2014-08-11 11:37:33.330 563 TRACE 

[Yahoo-eng-team] [Bug 1355902] [NEW] neutron HTTP exceptions have invalid format

2014-08-12 Thread Elena Ezhova
Public bug reported:

The Python neutron client for the v2 API expects the neutron API to send
exceptions as a structure containing a type, a message and details.
While this is so for exceptions.NeutronException and
netaddr.AddrFormatError, in the case of webob.exc.HTTPException only the
exception message is returned. This leads to an error when the exception's
'message' or 'details' attributes are extracted in the neutronclient
session, and an AttributeError is raised instead:

ash@precise64:~/devstack$ neutron firewall-rule-create --source-ip-address 
10.2.0.453 --protocol tcp --action allow  /* create a rule with an invalid 
source IP address */
AttributeError: 'unicode' object has no attribute 'get'

That is why neutron should send the type and details of an exception in the
body of the message.
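
To keep the client happy, every error response body should carry the same
three-field structure. Below is a sketch of serializing an HTTPException-style
error into that shape; the field names follow the structure the client
expects, but the class and helper here are illustrative stand-ins, not
neutron's actual fault-handling code:

```python
def fault_body(exc):
    """Serialize an exception into the {type, message, detail} structure
    the neutron client expects (sketch, not neutron's real serializer)."""
    return {'NeutronError': {
        'type': exc.__class__.__name__,
        'message': str(exc),
        'detail': getattr(exc, 'detail', ''),
    }}

class HTTPBadRequest(Exception):
    # Stand-in for webob.exc.HTTPBadRequest, which carries a 'detail'.
    detail = 'Invalid input for source_ip_address'

body = fault_body(HTTPBadRequest('Bad request'))
print(body['NeutronError']['type'])  # HTTPBadRequest
```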

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355902

Title:
  neutron HTTP exceptions have invalid format

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Python neutron client for the v2 API expects the neutron API to
  send exceptions as a structure containing a type, a message and
  details. While this is so for exceptions.NeutronException and
  netaddr.AddrFormatError, in the case of webob.exc.HTTPException only
  the exception message is returned. This leads to an error when the
  exception's 'message' or 'details' attributes are extracted in the
  neutronclient session, and an AttributeError is raised instead:

  ash@precise64:~/devstack$ neutron firewall-rule-create --source-ip-address 
10.2.0.453 --protocol tcp --action allow  /* create a rule with an invalid 
source IP address */
  AttributeError: 'unicode' object has no attribute 'get'

  That is why neutron should send the type and details of an exception in
  the body of the message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355902/+subscriptions



[Yahoo-eng-team] [Bug 1355655] Re: Attempt to assign a role to a non existent user should fail

2014-08-12 Thread Dolph Mathews
Leaving this as Opinion for the moment, because this was actually by
design (although, I personally disagree with the behavior illustrated
above). Going to mention this at the Keystone meeting today.

** Changed in: keystone
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355655

Title:
  Attempt to assign a role to a non existent user should fail

Status in OpenStack Identity (Keystone):
  Opinion

Bug description:
  Running the tempest tests, I get the following error:
  ===
  StringException: Traceback (most recent call last): 
     File 
/usr/lib/python2.7/dist-packages/tempest/api/identity/admin/test_roles.py, 
line 143, in test_assign_user_role_for_non_existent_user 
   tenant ['id'], 'junk-user-id-999', role ['id']) 
     File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 393, 
in assertRaises 
   self.assertThat (our_callable, matcher) 
     File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 406, 
in assertThat 
   raise mismatch_error 
  MismatchError: <bound method IdentityClientJSON.assign_user_role of 
<tempest.services.identity.json.identity_client.IdentityClientJSON object at 
0x7f9183c2f250>> returned ({'status': '200', 'content-length': '78', 'vary': 
'X-Auth-Token', 'date': 'Tue, 12 Aug 2014 08:00:39 GMT', 'content-type': 
'application/json', 'x-distribution': 'Ubuntu'}, {u'id': 
u'd4a5fe216f92439789389f968c6e50d6', u'name': u'role1552687157'})
  

  Testing shows that assigning a role to a user that does not exist
  succeeds.
  See the attached screenshot from Postman.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355909] [NEW] cloud-init with puppet doesn't work in trusty

2014-08-12 Thread Daniel Roschka
Public bug reported:

When running cloud-init with a puppet configuration in the user-data, a
bunch of problems occur when cloud-init tries to install and start puppet:

Problem 1: Enabling of puppet fails
cloud-init isn't able to enable the puppet service, because none of the options 
in _autostart_puppet in cc_puppet.py is valid (see: 
https://github.com/number5/cloud-init/blob/master/cloudinit/config/cc_puppet.py#L35-L48).
 On Ubuntu 12.04 LTS, /etc/default/puppet was created when installing puppet, 
which is why this issue did not occur there.

Problem 2: (Re)Starting of puppet fails
I worked around Problem 1 by including the following bootcmd in the user-data:
bootcmd:
 - echo START=yes >> /etc/default/puppet
Even then puppet doesn't get installed correctly: when cloud-init tries to 
start puppet (using "service puppet start"), puppet is already running 
(it was started during the installation), so "service puppet start" returns 1, 
causing cloud-init to fail.

Problem 3: puppet is still not enabled
Manually restarting puppet works, but puppet won't do anything useful, because:
Aug 12 15:41:39 ip-10-128-24-151 puppet-agent[26304]: Skipping run of Puppet 
configuration client; administratively disabled (Reason: 'Disabled by default 
on new installations');
Aug 12 15:41:39 ip-10-128-24-151 puppet-agent[26304]: Use 'puppet agent 
--enable' to re-enable.


Please fix these issues so that setting up puppet with cloud-init works again.
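
The three problems suggest an _autostart_puppet-style helper along the
following lines: enable the service for the sysvinit layout Trusty uses, clear
the "administratively disabled" agent lock, and use restart rather than start.
The paths and commands are the standard Debian/Ubuntu ones, but treat this as
a sketch of a possible fix rather than the actual cloud-init change:

```python
import os
import subprocess

def autostart_and_enable_puppet(run=subprocess.call):
    """Enable puppet on boot, clear the 'administratively disabled'
    agent lock, and (re)start the service. `run` is injectable so the
    sketch can be exercised without touching the real system."""
    if os.path.exists('/etc/default/puppet'):
        # Debian/Ubuntu sysvinit layout: flip START=no to START=yes.
        run(['sed', '-i', 's/^START=.*/START=yes/', '/etc/default/puppet'])
    elif os.path.exists('/bin/systemctl'):
        run(['/bin/systemctl', 'enable', 'puppet.service'])
    # Trusty installs puppet with the agent administratively disabled
    # ("Disabled by default on new installations"), so re-enable it.
    run(['puppet', 'agent', '--enable'])
    # 'restart' succeeds whether or not the agent is already running,
    # unlike 'start', which returns 1 when it is.
    run(['service', 'puppet', 'restart'])
```

A dry run can record the issued commands by passing a fake `run` callable
instead of subprocess.call.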

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1355909

Title:
  cloud-init with puppet doesn't work in trusty

Status in Init scripts for use on cloud images:
  New

Bug description:
  When running cloud-init with puppet configuration in the user-data, a
  number of problems occur when cloud-init tries to install and start
  puppet:

  Problem 1: Enabling of puppet fails
  cloud-init isn't able to enable the puppet service, because none of the
  options in _autostart_puppet in cc_puppet.py is valid (see:
  https://github.com/number5/cloud-init/blob/master/cloudinit/config/cc_puppet.py#L35-L48).
  On Ubuntu 12.04 LTS, /etc/default/puppet was created when puppet was
  installed, which is why this issue did not occur there.

  Problem 2: (Re)Starting of puppet fails
  I worked around Problem 1 by including the following bootcmd in the
  user-data:
  bootcmd:
   - echo START=yes > /etc/default/puppet
  Even then puppet doesn't get installed correctly, because when cloud-init
  tries to start puppet (using "service puppet start"), puppet is already
  running (it was started during the installation) and "service puppet start"
  returns 1 as its return code, causing cloud-init to fail.

  Problem 3: puppet is still not enabled
  Manually restarting puppet works, but puppet won't do anything useful, 
because:
  Aug 12 15:41:39 ip-10-128-24-151 puppet-agent[26304]: Skipping run of Puppet 
configuration client; administratively disabled (Reason: 'Disabled by default 
on new installations');
  Aug 12 15:41:39 ip-10-128-24-151 puppet-agent[26304]: Use 'puppet agent 
--enable' to re-enable.

  
  Please fix those issues so that setting up puppet with cloud-init works again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1355909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288178] Re: Sync new policy from oslo

2014-08-12 Thread Doug Hellmann
I don't see anything obvious that needs to be done in oslo for this to
be closed out, so I'm going to remove the bug from the oslo project. If
I'm wrong, please add it back with a  quick description of the work
needed. Thanks!

** No longer affects: oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288178

Title:
  Sync new policy from oslo

Status in Cinder:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Triaged
Status in Tuskar:
  Fix Committed

Bug description:
  Oslo changed its common policy code some time ago, using an Enforcer
  class to replace the old check function. In order to sync the common
  policy code to nova, we have to rewrite the nova policy code and the
  related unit tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1288178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354829] Re: sudo: 3 incorrect password attempts in gate-neutron-python26

2014-08-12 Thread Matt Riedemann
** Also affects: openstack-ci
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Invalid

** Changed in: neutron
   Status: New => Invalid

** Changed in: tempest
   Status: New => Invalid

** Summary changed:

- sudo: 3 incorrect password attempts in gate-neutron-python26
+ sudo: 3 incorrect password attempts in host setup

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354829

Title:
  sudo: 3 incorrect password attempts in host setup

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Core Infrastructure:
  New
Status in Tempest:
  Invalid

Bug description:
  http://logs.openstack.org/86/110186/1/check/gate-neutron-
  python26/9dec53b/console.html

  2014-08-10 06:51:57.341 | Started by user anonymous
  2014-08-10 06:51:57.343 | Building remotely on bare-centos6-rax-dfw-1380820 
in workspace /home/jenkins/workspace/gate-neutron-python26
  2014-08-10 06:51:57.458 | [gate-neutron-python26] $ /bin/bash -xe 
/tmp/hudson1812648831707400818.sh
  2014-08-10 06:51:57.540 | + rpm -ql libffi-devel
  2014-08-10 06:51:57.543 | /tmp/hudson1812648831707400818.sh: line 2: rpm: 
command not found
  2014-08-10 06:51:57.543 | + sudo yum install -y libffi-devel
  2014-08-10 06:51:57.549 | sudo: no tty present and no askpass program 
specified
  2014-08-10 06:51:57.551 | Sorry, try again.
  2014-08-10 06:51:57.552 | sudo: no tty present and no askpass program 
specified
  2014-08-10 06:51:57.552 | Sorry, try again.
  2014-08-10 06:51:57.553 | sudo: no tty present and no askpass program 
specified
  2014-08-10 06:51:57.553 | Sorry, try again.
  2014-08-10 06:51:57.553 | sudo: 3 incorrect password attempts
  2014-08-10 06:51:57.571 | Build step 'Execute shell' marked build as failure

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcInN1ZG86IDMgaW5jb3JyZWN0IHBhc3N3b3JkIGF0dGVtcHRzXCIgQU5EIGZpbGVuYW1lOiBcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA3NjcyNjY4NDc3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1354829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355715] Re: Adding a role member gives duplicate entry error whereas assigning role member to an user gives role not found error.

2014-08-12 Thread Dolph Mathews
In one request, you're referencing a role by name ("member", which is
valid), and in another request, you're trying to get a role by ID - and
there's certainly no role with id=member (IDs are generally UUIDs).

GET /v3/roles?name=member should allow you to find the role ID by name.
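
A minimal client-side sketch of that lookup. The helper name is hypothetical and the endpoint is the one from the logs below; any session-like object can be injected in place of the requests module.

```python
def find_role_id(name, token, endpoint="http://127.0.0.1:35357", session=None):
    """Resolve a role name to its ID via GET /v3/roles?name=<name>."""
    if session is None:
        import requests  # only needed when no session object is injected
        session = requests
    resp = session.get(endpoint + "/v3/roles",
                       params={"name": name},
                       headers={"X-Auth-Token": token})
    resp.raise_for_status()
    roles = resp.json()["roles"]
    if not roles:
        raise LookupError("no role named %r" % name)
    return roles[0]["id"]
```

The returned ID can then be used with the role-assignment calls that reject bare names.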

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355715

Title:
  Adding a role member gives duplicate entry error whereas assigning
  role member to an user gives role not found error.

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The above problem occurs if a role "Member" is already present.

  logs:
  openstack role add member

  DEBUG: keystoneclient.session REQ: curl -i --insecure -X POST 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles -H User-Agent: 
python-keystoneclient -H Content-Type: application/json -H X-Auth-Token: 
1ffd2d4d966a47ad871525b986f7171e -d '{role: {name: member}}'
  INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
  DEBUG: requests.packages.urllib3.connectionpool POST /v2.0/OS-KSADM/roles 
HTTP/1.1 409 120
  DEBUG: keystoneclient.session RESP: [409] {'date': 'Tue, 12 Aug 2014 10:02:22 
GMT', 'content-type': 'application/json', 'content-length': '120', 'vary': 
'X-Auth-Token'}
  RESP BODY: {error: {message: Conflict occurred attempting to store role 
- Duplicate Entry, code: 409, title: Conflict}}

  openstack role add --project agr --user aj member

  DEBUG: keystoneclient.session REQ: curl -i --insecure -X GET 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles/member -H User-Agent: 
python-keystoneclient -H X-Auth-Token: 661cb74bee6e44ffacd084a3cc013e61
  INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
  DEBUG: requests.packages.urllib3.connectionpool GET 
/v2.0/OS-KSADM/roles/member HTTP/1.1 404 88
  DEBUG: keystoneclient.session RESP: [404] {'date': 'Tue, 12 Aug 2014 10:03:32 
GMT', 'content-type': 'application/json', 'content-length': '88', 'vary': 
'X-Auth-Token'}
  RESP BODY: {error: {message: Could not find role: member, code: 404, 
title: Not Found}}

  DEBUG: keystoneclient.session Request returned failure status: 404
  DEBUG: keystoneclient.session REQ: curl -i --insecure -X GET 
http://127.0.0.1:35357/v2.0/OS-KSADM/roles -H User-Agent: 
python-keystoneclient -H X-Auth-Token: 661cb74bee6e44ffacd084a3cc013e61
  INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): 127.0.0.1
  DEBUG: requests.packages.urllib3.connectionpool GET /v2.0/OS-KSADM/roles 
HTTP/1.1 200 540
  DEBUG: keystoneclient.session RESP: [200] {'date': 'Tue, 12 Aug 2014 10:03:32 
GMT', 'content-type': 'application/json', 'content-length': '540', 'vary': 
'X-Auth-Token'}
  RESP BODY: {roles: [{id: 1c56c40303c940ef9498ca9e21706a5a, name: 
admin}, {id: 284081712d604180a9362221395dc18b, name: ResellerAdmin}, 
{id: 4f840799baa44197ab46573462770add, name: Member}, {id: 
76a88370b9294b09b3ec30dbe2e6c7f0, name: heat_stack_owner}, {id: 
778d9bb6a2194eb2b7ef076821168e23, name: service}, {id: 
9fe2ff9ee4384b1894a90878d3e92bab, name: _member_}, {id: 
a82e17d760704ac180f2ebf3bf2efc3e, name: anotherrole}, {id: 
ed6031e3725f42a4a050d6987ccee574, name: heat_stack_user}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355919] [NEW] By default when caching is on objects will be cached forever

2014-08-12 Thread David Stanek
Public bug reported:

The `cache_time` setting for the assignments, catalogs and tokens is
currently set to None by default.  This means that if caching is enabled
for one of those subsystems and the operator did not specify their own
timeout the data will not automatically expire.

We are doing invalidation when data changes, at least in some cases.
I'm not sure that it's safe to say that anytime the data changes we are
correctly invalidating the key.  We should strive to do this as it's the
right thing to do, but we should also have a default timeout so that
anything we miss will not slip through.

I believe 10 minutes is a reasonable default for most things, so I'll
provide a patch with that as the value.  I read `cache_time` as the
maximum acceptable amount of time to serve stale data.  Usually this is
determined based on the information being cached, but we currently only
have the ability to set it at the subsystem level.
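
A toy cache makes the proposed behaviour concrete: with the fix, `cache_time=None` falls back to a finite default instead of meaning "never expire". The class and constant names below are illustrative, not keystone's actual dogpile.cache wiring; 600 seconds mirrors the 10-minute default suggested above.

```python
import time

DEFAULT_CACHE_TIME = 600  # proposed 10-minute fallback

class SimpleCache:
    """Toy cache: with the proposed fix, cache_time=None falls back to a
    finite default instead of meaning 'never expire'."""

    def __init__(self, cache_time=None):
        self.cache_time = cache_time
        self._store = {}

    def set(self, key, value, now=time.time):
        ttl = (self.cache_time if self.cache_time is not None
               else DEFAULT_CACHE_TIME)
        self._store[key] = (value, now() + ttl)

    def get(self, key, now=time.time):
        value, expires = self._store.get(key, (None, 0))
        return value if now() < expires else None
```

Injecting `now` keeps expiry behaviour testable without real waiting.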

** Affects: keystone
 Importance: Medium
 Assignee: David Stanek (dstanek)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355919

Title:
  By default when caching is on objects will be cached forever

Status in OpenStack Identity (Keystone):
  Confirmed

Bug description:
  The `cache_time` setting for the assignments, catalogs and tokens is
  currently set to None by default.  This means that if caching is
  enabled for one of those subsystems and the operator did not specify
  their own timeout the data will not automatically expire.

  We are doing invalidation when data changes, at least in some cases.
  I'm not sure that it's safe to say that anytime the data changes we are
  correctly invalidating the key.  We should strive to do this as it's
  the right thing to do, but we should also have a default timeout so
  that anything we miss will not slip through.

  I believe 10 minutes is a reasonable default for most things, so I'll
  provide a patch with that as the value.  I read `cache_time` as the
  maximum acceptable amount of time to serve stale data.  Usually this is
  determined based on the information being cached, but we currently only
  have the ability to set it at the subsystem level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355922] [NEW] instance fault not created when boot process fails

2014-08-12 Thread Andrew Laski
Public bug reported:

If the build process makes it to build_and_run_instance in the compute
manager no instance faults are recorded for failures after that point.
The instance will be set to an ERROR state appropriately, but no
information is stored to return to the user.

** Affects: nova
 Importance: Medium
 Assignee: Andrew Laski (alaski)
 Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Andrew Laski (alaski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355922

Title:
  instance fault not created when boot process fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the build process makes it to build_and_run_instance in the compute
  manager no instance faults are recorded for failures after that point.
  The instance will be set to an ERROR state appropriately, but no
  information is stored to return to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355921] [NEW] [libvirt] When guest configured for threads, poor VCPU accounting

2014-08-12 Thread Jon Grimm
Public bug reported:

Noticed while testing:  https://blueprints.launchpad.net/nova/+spec
/virt-driver-vcpu-topology


I have a host advertising 16 VCPUs (2 sockets, each with 8 cores). Each core
happens to have 8 threads. (This is on a beefy POWER8 system.) With the
above blueprint, I can now create a 1 socket, 2 core, 8 thread guest.

All works fine, except that I noticed 'Free VCPUS: 0' even though I'm
really only using two cores. I'd think I would see 14 free VCPUs in
this scenario.

Guest lscpu output: 
[root@bare-precise ~]# lscpu
Architecture:  ppc64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Big Endian
CPU(s):16
On-line CPU(s) list:   0-15
Thread(s) per core:8
Core(s) per socket:2
Socket(s): 1
NUMA node(s):  1
Model: IBM pSeries (emulated by qemu)
L1d cache: 64K
L1i cache: 32K
NUMA node0 CPU(s): 0-15

Resulting tracker
2014-08-12 12:17:18.874 96650 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 0
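
The arithmetic in question can be sketched as follows; whether guest threads should count against host vCPUs is exactly the open accounting question, and the function name is illustrative rather than nova's resource-tracker code.

```python
def free_vcpus(total_vcpus, guest_topologies, count_threads=True):
    """Free host vCPUs after placing guests with (sockets, cores, threads)
    topologies; count_threads toggles the two accounting policies."""
    used = sum(sockets * cores * (threads if count_threads else 1)
               for sockets, cores, threads in guest_topologies)
    return total_vcpus - used
```

With the 1 socket x 2 cores x 8 threads guest above, thread-based accounting yields the observed 0 free vCPUs, while core-based accounting yields the expected 14.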

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355921

Title:
  [libvirt] When guest configured for threads, poor VCPU accounting

Status in OpenStack Compute (Nova):
  New

Bug description:
  Noticed while testing:  https://blueprints.launchpad.net/nova/+spec
  /virt-driver-vcpu-topology

  
  I have a host advertising 16 VCPUs (2 sockets, each with 8 cores). Each
  core happens to have 8 threads. (This is on a beefy POWER8 system.) With
  the above blueprint, I can now create a 1 socket, 2 core, 8 thread guest.

  All works fine, except that I noticed 'Free VCPUS: 0' even though I'm
  really only using two cores. I'd think I would see 14 free VCPUs in
  this scenario.

  Guest lscpu output: 
  [root@bare-precise ~]# lscpu
  Architecture:  ppc64
  CPU op-mode(s):32-bit, 64-bit
  Byte Order:Big Endian
  CPU(s):16
  On-line CPU(s) list:   0-15
  Thread(s) per core:8
  Core(s) per socket:2
  Socket(s): 1
  NUMA node(s):  1
  Model: IBM pSeries (emulated by qemu)
  L1d cache: 64K
  L1i cache: 32K
  NUMA node0 CPU(s): 0-15

  Resulting tracker
  2014-08-12 12:17:18.874 96650 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355928] [NEW] Deadlock in reservation commit

2014-08-12 Thread Matthew Booth
Public bug reported:

Details in http://logs.openstack.org/46/104146/15/check/check-tempest-
dsvm-full/d235389/, specifically in n-cond logs:

2014-08-12 14:58:57.099 ERROR nova.quota 
[req-7efe48be-f5b4-4343-898a-5b4b32694530 AggregatesAdminTestJSON-719157131 
AggregatesAdminTestJSON-1908648657] Failed to commit reservations 
[u'5bdde344-b26f-4e0a-9aa7-d91d775b6df0', 
u'5f757426-8f4e-454f-aedb-1186771f85fd', 
u'819aeaf6-9faf-4da5-a16d-ce1c571c4975']
2014-08-12 14:58:57.099 21994 TRACE nova.quota Traceback (most recent call 
last):
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/opt/stack/new/nova/nova/quota.py, line 1326, in commit
2014-08-12 14:58:57.099 21994 TRACE nova.quota user_id=user_id)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/opt/stack/new/nova/nova/quota.py, line 569, in commit
2014-08-12 14:58:57.099 21994 TRACE nova.quota user_id=user_id)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/opt/stack/new/nova/nova/db/api.py, line 1148, in reservation_commit
2014-08-12 14:58:57.099 21994 TRACE nova.quota user_id=user_id)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 167, in wrapper
2014-08-12 14:58:57.099 21994 TRACE nova.quota return f(*args, **kwargs)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 205, in wrapped
2014-08-12 14:58:57.099 21994 TRACE nova.quota return f(*args, **kwargs)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 3302, in 
reservation_commit
2014-08-12 14:58:57.099 21994 TRACE nova.quota for reservation in 
reservation_query.all():
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2241, in all
2014-08-12 14:58:57.099 21994 TRACE nova.quota return list(self)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2353, in 
__iter__
2014-08-12 14:58:57.099 21994 TRACE nova.quota return 
self._execute_and_instances(context)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2368, in 
_execute_and_instances
2014-08-12 14:58:57.099 21994 TRACE nova.quota result = 
conn.execute(querycontext.statement, self._params)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 662, in 
execute
2014-08-12 14:58:57.099 21994 TRACE nova.quota params)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 761, in 
_execute_clauseelement
2014-08-12 14:58:57.099 21994 TRACE nova.quota compiled_sql, 
distilled_params
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 874, in 
_execute_context
2014-08-12 14:58:57.099 21994 TRACE nova.quota context)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1024, in 
_handle_dbapi_exception
2014-08-12 14:58:57.099 21994 TRACE nova.quota exc_info
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 196, in 
raise_from_cause
2014-08-12 14:58:57.099 21994 TRACE nova.quota reraise(type(exception), 
exception, tb=exc_tb)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 867, in 
_execute_context
2014-08-12 14:58:57.099 21994 TRACE nova.quota context)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 324, in 
do_execute
2014-08-12 14:58:57.099 21994 TRACE nova.quota cursor.execute(statement, 
parameters)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in execute
2014-08-12 14:58:57.099 21994 TRACE nova.quota self.errorhandler(self, exc, 
value)
2014-08-12 14:58:57.099 21994 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
2014-08-12 14:58:57.099 21994 TRACE nova.quota raise errorclass, errorvalue
2014-08-12 14:58:57.099 21994 TRACE nova.quota OperationalError: 
(OperationalError) (1213, 'Deadlock found when trying to get lock; try 
restarting transaction') 'SELECT reservations.created_at AS 
reservations_created_at, reservations.updated_at AS reservations_updated_at, 
reservations.deleted_at AS reservations_deleted_at, reservations.deleted AS 
reservations_deleted, reservations.id AS reservations_id, reservations.uuid AS 
reservations_uuid, reservations.usage_id AS reservations_usage_id, 
reservations.project_id AS reservations_project_id, reservations.user_id AS 
reservations_user_id, 
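
The usual mitigation for MySQL error 1213 ("try restarting transaction") is to re-run the whole transaction. A minimal sketch of that pattern follows; the decorator and exception names are stand-ins for illustration, not nova's actual implementation.

```python
import functools
import time

class DBDeadlock(Exception):
    """Stand-in for the driver's deadlock error (MySQL error 1213)."""

def retry_on_deadlock(fn, retries=3, delay=0.0, sleep=time.sleep):
    """Re-run a transaction the engine aborted with a deadlock."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        for attempt in range(retries):
            try:
                return fn(*args, **kwargs)
            except DBDeadlock:
                if attempt == retries - 1:
                    raise  # give up after the configured number of attempts
                sleep(delay)
    return wrapped
```

The wrapped function must be safe to re-execute from the top, which is why the retry has to wrap the whole transaction rather than a single statement.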

[Yahoo-eng-team] [Bug 1355929] [NEW] test_postgresql_opportunistically fails in stable/havana due to: ERROR: source database "template1" is being accessed by other users

2014-08-12 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/22/112422/1/check/gate-nova-
python26/621e0ae/console.html

This is probably a latent bug in the nova unit tests for postgresql in
stable/havana, or it's due to slow nodes for the py26 jobs.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6ICBzb3VyY2UgZGF0YWJhc2UgXFxcInRlbXBsYXRlMVxcXCIgaXMgYmVpbmcgYWNjZXNzZWQgYnkgb3RoZXIgdXNlcnNcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgYnVpbGRfYnJhbmNoOlwic3RhYmxlL2hhdmFuYVwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI2XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc4NjA5ODg1MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

3 hits in 7 days, check queue only but multiple changes and all
failures.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: db testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355929

Title:
  test_postgresql_opportunistically fails in stable/havana due to:
  ERROR:  source database "template1" is being accessed by other users

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/22/112422/1/check/gate-nova-
  python26/621e0ae/console.html

  This is probably a latent bug in the nova unit tests for postgresql in
  stable/havana, or it's due to slow nodes for the py26 jobs.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6ICBzb3VyY2UgZGF0YWJhc2UgXFxcInRlbXBsYXRlMVxcXCIgaXMgYmVpbmcgYWNjZXNzZWQgYnkgb3RoZXIgdXNlcnNcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgYnVpbGRfYnJhbmNoOlwic3RhYmxlL2hhdmFuYVwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI2XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc4NjA5ODg1MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  3 hits in 7 days, check queue only but multiple changes and all
  failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293784] Re: Need better default for nova_url

2014-08-12 Thread John Davidge
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293784

Title:
  Need better default for nova_url

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  While looking into https://bugs.launchpad.net/tripleo/+bug/1293782, we
  noticed that it seems the default for nova_url in neutron.conf is
  http://127.0.0.1:8774

  From common/config.py:
  cfg.StrOpt('nova_url',
     default='http://127.0.0.1:8774',
     help=_('URL for connection to nova')),

  Is this really a sane default? Wouldn't http://127.0.0.1:8774/v2 be
  more correct?
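
A tiny helper shows the shape of the reporter's suggestion; the '/v2' suffix is their proposal, not a confirmed fix, and the function name is hypothetical.

```python
def nova_api_url(base="http://127.0.0.1:8774", version="v2"):
    """Build the versioned endpoint the bug suggests as a saner default."""
    return "%s/%s" % (base.rstrip("/"), version)
```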

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293784/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273478] Re: NetworkInfoAsyncWrapper __str__ can cause deadlock when called from a log message

2014-08-12 Thread Doug Hellmann
** No longer affects: oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273478

Title:
  NetworkInfoAsyncWrapper __str__ can cause deadlock when called from a
  log message

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  CPython logging library generates the string representation of the
  message to log under a lock.

  def handle(self, record):
  
  Conditionally emit the specified logging record.

  Emission depends on filters which may have been added to the handler.
  Wrap the actual emission of the record with acquisition/release of
  the I/O thread lock. Returns whether the filter passed the record for
  emission.
  
  rv = self.filter(record)
  if rv:
  self.acquire()
  try:
  self.emit(record)
  finally:
  self.release()
  return rv

  
  Nova will use the __str__ method of the NetworkInfoAsyncWrapper when logging 
a message as in:

  nova/virt/libvirt/driver.py:to_xml()

  LOG.debug(_('Start to_xml instance=%(instance)s '
  'network_info=%(network_info)s '
  'disk_info=%(disk_info)s '
  'image_meta=%(image_meta)s rescue=%(rescue)s'
  'block_device_info=%(block_device_info)s'),
{'instance': instance, 'network_info': network_info,
 'disk_info': disk_info, 'image_meta': image_meta,
 'rescue': rescue, 'block_device_info': block_device_info})

  Currently this causes the __str__ method to be called under the
  logging lock:

File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3058, in to_xml
  'rescue': rescue, 'block_device_info': block_device_info})
File /usr/lib/python2.7/logging/__init__.py, line 1412, in debug
  self.logger.debug(msg, *args, **kwargs)
File /usr/lib/python2.7/logging/__init__.py, line 1128, in debug
  self._log(DEBUG, msg, args, **kwargs)
File /usr/lib/python2.7/logging/__init__.py, line 1258, in _log
  self.handle(record)
File /usr/lib/python2.7/logging/__init__.py, line 1268, in handle
  self.callHandlers(record)
File /usr/lib/python2.7/logging/__init__.py, line 1308, in callHandlers
  hdlr.handle(record)
File /usr/lib/python2.7/logging/__init__.py, line 748, in handle
  self.emit(record)
File /usr/lib/python2.7/logging/handlers.py, line 414, in emit
  logging.FileHandler.emit(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 930, in emit
  StreamHandler.emit(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 846, in emit
  msg = self.format(record)
File /usr/lib/python2.7/logging/__init__.py, line 723, in format
  return fmt.format(record)
File /usr/lib/python2.7/dist-packages/nova/openstack/common/log.py, line 
517, in format
  return logging.Formatter.format(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 464, in format
  record.message = record.getMessage()
File /usr/lib/python2.7/logging/__init__.py, line 328, in getMessage
  msg = msg % self.args
File /usr/lib/python2.7/dist-packages/nova/network/model.py, line 383, in 
__str__
  return self._sync_wrapper(fn, *args, **kwargs)

  This then waits for an eventlet to complete. This eventlet may itself
  attempt to use a log message as it executes.

  This sequence of operations can produce a deadlock between a greenlet
  thread waiting for the async operation to finish and the async job
  itself, if it decides to log a message.
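
One way to avoid the hazard is to render such objects eagerly, before the logging call, so their __str__ never runs while the handler's I/O lock is held. The sketch below illustrates the idea; the class and function names are stand-ins, not nova's actual code.

```python
import logging

LOG = logging.getLogger(__name__)

class AsyncThing:
    """Stand-in for NetworkInfoAsyncWrapper: __str__ blocks on async work."""
    def __init__(self, wait):
        self._wait = wait
    def __str__(self):
        self._wait()  # may block; dangerous if run under the logging lock
        return "network-info"

def log_safely(thing):
    # Render the potentially blocking object *before* the logging call,
    # so its __str__ never executes while the handler's I/O lock is held.
    rendered = str(thing)
    LOG.debug("Start to_xml network_info=%s", rendered)
    return rendered
```

The trade-off is that the string is built even when DEBUG logging is disabled, which is the price of keeping the blocking work outside the lock.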

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355939] [NEW] [Django 1.7] horizon table summation can raise TypeError

2014-08-12 Thread Akihiro Motoki
Public bug reported:

https://review.openstack.org/#/c/111932/

With Django 1.7, the unit tests fail with the following error.
This is part of the work towards Django 1.7 support.

average: lambda data: sum(data, 0.0) / len(data)

TypeError: unsupported operand type(s) for +: 'float' and 'str'

With Django 1.6, the template code that looked up the variable behind
get_summation was catching the TypeError exception:

try: # method call (assuming no args required)
current = current()
except TypeError: # arguments *were* required
# GOTCHA: This will also catch any TypeError
# raised in the function itself.
current = settings.TEMPLATE_STRING_IF_INVALID  # invalid

With Django 1.7, the code has been refined to catch the exception only
when the function really requires arguments (which get_summation()
doesn't):

try:  # method call (assuming no args required)
current = current()
except TypeError:
try:
getcallargs(current)
except TypeError:  # arguments *were* required
current = settings.TEMPLATE_STRING_IF_INVALID  # invalid
else:
raise

So instead of blindly relying on sum(), I introduced safe_sum() and
safe_average() functions which mimic the behaviour we got with Django
1.6 by returning an empty string when we have invalid input data.
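
A sketch of what such helpers could look like, returning an empty string (Django's TEMPLATE_STRING_IF_INVALID default) for invalid input; this mirrors the behaviour described above rather than quoting the actual patch.

```python
def safe_sum(data):
    """Sum numeric data; return '' (Django's TEMPLATE_STRING_IF_INVALID
    default) when the input mixes types."""
    try:
        return sum(data, 0.0)
    except TypeError:
        return ""

def safe_average(data):
    """Average numeric data; '' on mixed types or empty input."""
    try:
        return sum(data, 0.0) / len(data)
    except (TypeError, ZeroDivisionError):
        return ""
```

Catching the TypeError inside the helper restores the Django 1.6 behaviour without relying on the template engine to swallow it.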

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: django1.7

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355939

Title:
  [Django 1.7] horizon table summation can raise TypeError

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/#/c/111932/

  With Django 1.7, the unit tests fail with the following error.
  This is part of the work towards Django 1.7 support.

  average: lambda data: sum(data, 0.0) / len(data)

  TypeError: unsupported operand type(s) for +: 'float' and 'str'

  With Django 1.6, the template code that looked up the variable behind
  get_summation was catching the TypeError exception:

  try: # method call (assuming no args required)
  current = current()
  except TypeError: # arguments *were* required
  # GOTCHA: This will also catch any TypeError
  # raised in the function itself.
  current = settings.TEMPLATE_STRING_IF_INVALID  # invalid

  With Django 1.7, the code has been refined to catch the exception only
  when the function really requires arguments (which get_summation()
  doesn't):

  try:  # method call (assuming no args required)
      current = current()
  except TypeError:
      try:
          getcallargs(current)
      except TypeError:  # arguments *were* required
          current = settings.TEMPLATE_STRING_IF_INVALID  # invalid
      else:
          raise

  So instead of blindly relying on sum(), I introduced safe_sum() and
  safe_average() functions, which mimic the behaviour we got with Django
  1.6 by returning an empty string when we have invalid input data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276207] Re: vmware driver does not validate server certificates

2014-08-12 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276207

Title:
  vmware driver does not validate server certificates

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  The VMware driver establishes connections to vCenter over HTTPS, yet
  the vCenter server certificate is not verified as part of the
  connection process.  I know this because my vCenter server is using a
  self-signed certificate, which always fails certificate verification.
  As a result, someone could use a man-in-the-middle attack to spoof the
  vcenter host to nova.

  The vmware driver has a dependency on Suds, which I believe also does
  not validate certificates because hartsock and I noticed it uses
  urllib.

  For reference, here is a link on secure connections in OpenStack:
  https://wiki.openstack.org/wiki/SecureClientConnections

  Assuming Suds is fixed to provide an option for certificate
  verification, the next step would be to modify the vmware driver to
  provide an option to override invalid certificates (such as
  self-signed). In other parts of OpenStack, there are options to bypass
  the certificate check with an insecure option set, or you could put
  the server's certificate in the CA store.
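  As an illustration of the "insecure flag or CA store" pattern described
  above, here is a hedged sketch using Python's ssl module. This is not
  the actual driver code, and the function name and parameters are
  hypothetical:

```python
import ssl


def make_vcenter_context(insecure=False, ca_file=None):
    """Build an SSL context for an HTTPS connection to vCenter.

    By default the server certificate is verified against the system CA
    store (or against ca_file, e.g. a self-signed certificate that the
    operator has chosen to trust).  Passing insecure=True explicitly
    bypasses verification, mirroring the 'insecure' option other
    OpenStack clients expose.
    """
    if insecure:
        # Explicit opt-out: no certificate verification at all.
        return ssl._create_unverified_context()
    # Verifies the chain and the hostname by default.
    return ssl.create_default_context(cafile=ca_file)
```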

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276207/+subscriptions



[Yahoo-eng-team] [Bug 1356003] [NEW] Tempest test failure for FloatingIPDetailsTestJSON.test_list_floating_ip_pools

2014-08-12 Thread David Shrewsbury
Public bug reported:

http://logs.openstack.org/39/94439/17/check/check-tempest-dsvm-full-
icehouse/54f9d43

Traceback (most recent call last):
  File "tempest/test.py", line 128, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "tempest/api/compute/floating_ips/test_list_floating_ips.py", line 78, in test_list_floating_ip_pools
    resp, floating_ip_pools = self.client.list_floating_ip_pools()
  File "tempest/services/compute/json/floating_ips_client.py", line 113, in list_floating_ip_pools
    self.validate_response(schema.floating_ip_pools, resp, body)
  File "tempest/common/rest_client.py", line 578, in validate_response
    raise exceptions.InvalidHTTPResponseBody(msg)
InvalidHTTPResponseBody: HTTP response body is invalid json or xml
Details: HTTP response body is invalid ({u'name': u'public'} is not of type 'string'

Failed validating 'type' in schema['properties']['floating_ip_pools']['items']['properties']['name']:
    {'type': 'string'}

On instance['floating_ip_pools'][0]['name']:
    {u'name': u'public'})


http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiSFRUUCByZXNwb25zZSBib2R5IGlzIGludmFsaWQganNvbiBvciB4bWxcIiIsInRpbWVmcmFtZSI6IjM2MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc4NzEyNTY4OTl9

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356003

Title:
  Tempest test failure for
  FloatingIPDetailsTestJSON.test_list_floating_ip_pools

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/39/94439/17/check/check-tempest-dsvm-full-
  icehouse/54f9d43

  Traceback (most recent call last):
    File "tempest/test.py", line 128, in wrapper
      return f(self, *func_args, **func_kwargs)
    File "tempest/api/compute/floating_ips/test_list_floating_ips.py", line 78, in test_list_floating_ip_pools
      resp, floating_ip_pools = self.client.list_floating_ip_pools()
    File "tempest/services/compute/json/floating_ips_client.py", line 113, in list_floating_ip_pools
      self.validate_response(schema.floating_ip_pools, resp, body)
    File "tempest/common/rest_client.py", line 578, in validate_response
      raise exceptions.InvalidHTTPResponseBody(msg)
  InvalidHTTPResponseBody: HTTP response body is invalid json or xml
  Details: HTTP response body is invalid ({u'name': u'public'} is not of type 'string'

  Failed validating 'type' in schema['properties']['floating_ip_pools']['items']['properties']['name']:
      {'type': 'string'}

  On instance['floating_ip_pools'][0]['name']:
      {u'name': u'public'})

  
  
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiSFRUUCByZXNwb25zZSBib2R5IGlzIGludmFsaWQganNvbiBvciB4bWxcIiIsInRpbWVmcmFtZSI6IjM2MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc4NzEyNTY4OTl9

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356003/+subscriptions



[Yahoo-eng-team] [Bug 1356011] [NEW] param type Json badness

2014-08-12 Thread Kevin Fox
Public bug reported:

When launching a template that has a parameter with type Json, the CLI
works, but the Dashboard displays a random other field duplicated rather
than displaying the correct parameter, and cannot submit properly.
Removing the Json-type parameter lets the stack display and submit
properly. The Dashboard needs to know how to handle the Json type.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356011

Title:
  param type Json badness

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching a template that has a parameter with type Json, the CLI
  works, but the Dashboard displays a random other field duplicated
  rather than displaying the correct parameter, and cannot submit
  properly. Removing the Json-type parameter lets the stack display and
  submit properly. The Dashboard needs to know how to handle the Json
  type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1356011/+subscriptions



[Yahoo-eng-team] [Bug 1356021] [NEW] Tempest tests for router interfaces need updating to support DVR

2014-08-12 Thread Brian Haley
*** This bug is a duplicate of bug 1355537 ***
https://bugs.launchpad.net/bugs/1355537

Public bug reported:

These Tempest tests are failing when the "check experimental" Jenkins
job is run, since that job enables DVR in devstack:

tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps
test_cross_tenant_traffic[compute,gate,network,smoke] FAIL
test_in_tenant_traffic[compute,gate,network,smoke]FAIL

tempest/scenario/test_security_groups_basic_ops.py._verify_network_details()
has this check:

if i['device_owner'] == 'network:router_interface']

But a DVR router has device_owner
'network:router_interface_distributed', so the loop returns []

Something like this will catch both:

if i['device_owner'].startswith('network:router')]

tempest/common/isolated_creds.py has a similar check that needs
updating.

A quick check with the above change saw the test pass in my environment.
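The proposed change can be sketched as follows (the port dicts below are
illustrative, not real Neutron data):

```python
def is_router_interface(port):
    # Matches both 'network:router_interface' (legacy routers) and
    # 'network:router_interface_distributed' (DVR routers).
    return port['device_owner'].startswith('network:router')


ports = [
    {'device_owner': 'network:router_interface'},
    {'device_owner': 'network:router_interface_distributed'},
    {'device_owner': 'network:dhcp'},
]
router_ports = [p for p in ports if is_router_interface(p)]
```

One caveat worth checking before adopting the prefix match: it would also
match 'network:router_gateway' ports, so the tests may need to confirm
gateway ports cannot appear in the list being filtered.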

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) = Brian Haley (brian-haley)

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356021

Title:
  Tempest tests for router interfaces need updating to support DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  These Tempest tests are failing when the "check experimental" Jenkins
  job is run, since that job enables DVR in devstack:

  tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps
  test_cross_tenant_traffic[compute,gate,network,smoke] FAIL
  test_in_tenant_traffic[compute,gate,network,smoke]FAIL

  tempest/scenario/test_security_groups_basic_ops.py._verify_network_details()
  has this check:

  if i['device_owner'] == 'network:router_interface']

  But a DVR router has device_owner
  'network:router_interface_distributed', so the loop returns []

  Something like this will catch both:

  if i['device_owner'].startswith('network:router')]

  tempest/common/isolated_creds.py has a similar check that needs
  updating.

  A quick check with the above change saw the test pass in my
  environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356021/+subscriptions



[Yahoo-eng-team] [Bug 1356035] [NEW] cannot launch volumes backup details after deleting volumes corresponding volume.

2014-08-12 Thread Amogh
Public bug reported:

1. Login to Devstack as admin user.
2. Create a backup test1_bak from a volume test1.
3. Delete the volume test1 from Volumes page.
4. Go to Volume backups and try to click on Volume Backup test1_bak to see 
the details. Notice the error. PFA the screenshot.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Volume_backup_Details_Error.PNG
   
https://bugs.launchpad.net/bugs/1356035/+attachment/4175948/+files/Volume_backup_Details_Error.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356035

Title:
  cannot launch volumes backup details after deleting volumes
  corresponding volume.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Login to Devstack as admin user.
  2. Create a backup test1_bak from a volume test1.
  3. Delete the volume test1 from Volumes page.
  4. Go to Volume backups and try to click on Volume Backup test1_bak to see 
the details. Notice the error. PFA the screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1356035/+subscriptions



[Yahoo-eng-team] [Bug 1060451] Re: detach volume has no effect with HpSanISCSIDriver

2014-08-12 Thread Gary W. Smith
** Changed in: horizon
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1060451

Title:
  detach volume has no effect with HpSanISCSIDriver

Status in Cinder:
  Incomplete
Status in devstack - openstack dev environments:
  Expired
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  How to reproduce:

  Get the latest devstack (folsom, as of 10/2/2012).
  Once ./stack.sh starts, in the dashboard under the demo project:
  1. Create a VM from a default image.
  2. Create a volume.
  3. Attach the volume to the instance by clicking Edit Attachments.
  4. Once the instance is in-use, detach the volume from the instance by
     clicking Edit Attachments, then click Detach Volume for the volume.

  Checking screen -r for
  horizon
  n-vol
  n-cpu

  doesn't show any activity in the logs.

  However, if I run the nova command

  nova volume-detach <id of my vm> <id of my volume>

  it works fine.
  I think this is more of a GUI problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1060451/+subscriptions



[Yahoo-eng-team] [Bug 1349452] Re: apparent deadlock on lock_bridge in n-cpu

2014-08-12 Thread Davanum Srinivas (DIMS)
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349452

Title:
  apparent deadlock on lock_bridge in n-cpu

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  It's not clear if n-cpu is dying trying to acquire the lock
  lock_bridge or if it's just hanging.

  http://logs.openstack.org/08/109108/1/check/check-tempest-dsvm-
  full/4417111/logs/screen-n-cpu.txt.gz

  The logs for n-cpu stop about 15 minutes before the rest of the test
  run, and all tests doing things that require the hypervisor executed
  after that point fail with different errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349452/+subscriptions



[Yahoo-eng-team] [Bug 1356051] [NEW] Cannot load 'instance' in the base class

2014-08-12 Thread Thang Pham
Public bug reported:

I tried the following on VMware using the VMwareVCDriver with nova-
network:

1. Create an instance
2. Create and associate a floating IP with the instance

It failed and printed out the following messages in n-api logs:

2014-08-12 13:54:29.578 ERROR nova.api.openstack 
[req-86d8f466-cfae-42ac-8340-9eac36d6fc71 demo demo] Caught error: Cannot load 
'instance' in the base class
2014-08-12 13:54:29.578 TRACE nova.api.openstack Traceback (most recent call 
last):
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 124, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
2014-08-12 13:54:29.578 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
2014-08-12 13:54:29.578 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py, line 
565, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return self._app(env, 
start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack resp = self.call_func(req, 
*args, **self.kwargs)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2014-08-12 13:54:29.578 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 908, in __call__
2014-08-12 13:54:29.578 TRACE nova.api.openstack content_type, body, accept)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 974, in _process_stack
2014-08-12 13:54:29.578 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1058, in dispatch
2014-08-12 13:54:29.578 TRACE nova.api.openstack return method(req=request, 
**action_args)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py, line 146, 
in index
2014-08-12 13:54:29.578 TRACE nova.api.openstack 
self._normalize_ip(floating_ip)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py, line 117, 
in _normalize_ip
2014-08-12 13:54:29.578 TRACE nova.api.openstack floating_ip['instance'] = 
fixed_ip['instance']
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/objects/base.py, line 447, in __getitem__
2014-08-12 13:54:29.578 TRACE nova.api.openstack return getattr(self, name)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/objects/base.py, line 67, in getter
2014-08-12 13:54:29.578 TRACE nova.api.openstack self.obj_load_attr(name)
2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/objects/base.py, line 375, in obj_load_attr
2014-08-12 13:54:29.578 TRACE nova.api.openstack _(Cannot load '%s' in the 
base class) % attrname)
2014-08-12 13:54:29.578 TRACE nova.api.openstack NotImplementedError: Cannot 
load 'instance' in the base class
2014-08-12 13:54:29.579 INFO nova.api.openstack 
[req-86d8f466-cfae-42ac-8340-9eac36d6fc71 demo demo] 

[Yahoo-eng-team] [Bug 1356058] [NEW] Various extensions don't respect content header when returning a 202

2014-08-12 Thread Vish Ishaya
Public bug reported:

Various nova extension commands return a text response ("202 Accepted
[..]") even when provided with an "Accept: application/json" header. For
other 202 responses, either an empty body or a JSON-formatted response
is standard. The implementation should be consistent with other 202s
from Nova and other OpenStack services.

This seems to be due to returning a webob exception instead of a response. The 
affected extensions are:
$ grep HTTPAccepted nova/api/openstack/compute/contrib/*.py
nova/api/openstack/compute/contrib/cloudpipe_update.py:return 
webob.exc.HTTPAccepted()
nova/api/openstack/compute/contrib/fixed_ips.py:return 
webob.exc.HTTPAccepted()
nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
nova/api/openstack/compute/contrib/os_networks.py:return 
exc.HTTPAccepted()
nova/api/openstack/compute/contrib/os_networks.py:return 
exc.HTTPAccepted()
nova/api/openstack/compute/contrib/os_tenant_networks.py:response = 
exc.HTTPAccepted()
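The consistent alternative is to build a real 202 response whose body
honours the client's Accept header, rather than returning the exception
object. Since webob may not be available everywhere, the sketch below
emulates the idea with a plain tuple; the real fix would construct a
webob.Response(status=202) with the negotiated content type:

```python
import json


def accepted_response(accept_header):
    """Return (status, content_type, body) for a 202 that honours the
    client's Accept header, instead of HTTPAccepted's fixed text body.

    Illustrative sketch only; in nova the handler would return a
    webob.Response rather than this tuple.
    """
    if accept_header == 'application/json':
        # Empty JSON object instead of the "202 Accepted" text body.
        return ('202 Accepted', 'application/json', json.dumps({}))
    return ('202 Accepted', 'text/plain', '202 Accepted')
```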

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356058

Title:
  Various extensions don't respect content header when returning a 202

Status in OpenStack Compute (Nova):
  New

Bug description:
  Various nova extension commands return a text response ("202 Accepted
  [..]") even when provided with an "Accept: application/json" header.
  For other 202 responses, either an empty body or a JSON-formatted
  response is standard. The implementation should be consistent with
  other 202s from Nova and other OpenStack services.

  This seems to be due to returning a webob exception instead of a response. 
The affected extensions are:
  $ grep HTTPAccepted nova/api/openstack/compute/contrib/*.py
  nova/api/openstack/compute/contrib/cloudpipe_update.py:return 
webob.exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/fixed_ips.py:return 
webob.exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/os_networks.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/os_networks.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/os_tenant_networks.py:response 
= exc.HTTPAccepted()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356058/+subscriptions



[Yahoo-eng-team] [Bug 1356120] [NEW] PortNotFound in update_device_up for DVR

2014-08-12 Thread Armando Migliaccio
Public bug reported:

An example of a failure has been observed here:

http://logs.openstack.org/80/113580/3/experimental/check-tempest-dsvm-
neutron-
dvr/a0e0c32/logs/screen-q-svc.txt.gz?level=TRACE#_2014-08-13_00_13_00_674

More triaging needed but I suspect this is caused by interleaved
create/delete requests of router resources.

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) = Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356120

Title:
  PortNotFound in update_device_up for DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  An example of a failure has been observed here:

  http://logs.openstack.org/80/113580/3/experimental/check-tempest-dsvm-
  neutron-
  dvr/a0e0c32/logs/screen-q-svc.txt.gz?level=TRACE#_2014-08-13_00_13_00_674

  More triaging needed but I suspect this is caused by interleaved
  create/delete requests of router resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356120/+subscriptions



[Yahoo-eng-team] [Bug 1356121] [NEW] Lock wait timeout traces for DVR routers

2014-08-12 Thread Armando Migliaccio
Public bug reported:

This has been observed here:

http://logs.openstack.org/80/113580/3/experimental/check-tempest-dsvm-
neutron-
dvr/a0e0c32/logs/screen-q-svc.txt.gz?level=TRACE#_2014-08-13_00_13_58_408

This is most likely caused by a long-running DB transaction, but this
needs more triaging.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356121

Title:
  Lock wait timeout traces for DVR routers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This has been observed here:

  http://logs.openstack.org/80/113580/3/experimental/check-tempest-dsvm-
  neutron-
  dvr/a0e0c32/logs/screen-q-svc.txt.gz?level=TRACE#_2014-08-13_00_13_58_408

  This is most likely caused by a long-running DB transaction, but this
  needs more triaging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356121/+subscriptions



[Yahoo-eng-team] [Bug 1356127] [NEW] VPNaaS: Cisco validator incorrectly checking public IP of router

2014-08-12 Thread Paul Michali
Public bug reported:

The IPSec site connection validation checks that the router has a GW
specified. It incorrectly accesses this information via the VPN service
table.

As a result, all validations fail.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: cisco vnaas

** Changed in: neutron
 Assignee: (unassigned) = Paul Michali (pcm)

** Changed in: neutron
   Status: New = In Progress

** Description changed:

  The IPSec site connection validation checks that the router has a GW
  specified. It incorrectly accesses this information via the VPN service
  table.
+ 
+ As a result, all validations fail.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356127

Title:
  VPNaaS: Cisco validator incorrectly checking public IP of router

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The IPSec site connection validation checks that the router has a GW
  specified. It incorrectly accesses this information via the VPN
  service table.

  As a result, all validations fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356127/+subscriptions



[Yahoo-eng-team] [Bug 1356157] [NEW] make nova floating-ip-delete atomic with neutron

2014-08-12 Thread Aaron Rosen
Public bug reported:

The infra guys were noticing an issue where they were leaking floating
IP addresses. One of the ways this would occur is that they called nova
floating-ip-delete, which first disassociates the floating IP in neutron
and then deletes it. Because it makes two calls to neutron, if the first
one succeeds and the second fails, the instance is left no longer
associated with the floating IP. They have retry logic, but they base it
on the instance, and when they go to retry cleaning up the instance the
floating IP is no longer on the instance, so they never delete it.

This patch fixes the issue by directly calling delete_floating_ip
instead of disassociating first when using neutron, since neutron allows
this. I looked into doing the same thing for nova-network, but the code
is written to prevent it. This makes the operation atomic. I know it is
somewhat hackish that we do this in the API layer, but we do that in a
few other places too, for what it's worth.
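The difference can be sketched with a stub client (all names below are
hypothetical; the actual patch touches nova's API layer, not a helper
like this):

```python
def delete_floating_ip(client, fip_id, uses_neutron=True):
    """Delete a floating IP, atomically when the backend allows it.

    With neutron, deleting a floating IP implicitly disassociates it,
    so a single delete call avoids the window where disassociation
    succeeded but deletion failed, leaking the address.
    """
    if uses_neutron:
        client.delete_floatingip(fip_id)         # one call: atomic
    else:
        client.disassociate_floating_ip(fip_id)  # two calls: the address
        client.release_floating_ip(fip_id)       # leaks if this one fails
```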

** Affects: nova
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: In Progress


** Tags: network

** Changed in: nova
   Importance: Undecided = High

** Changed in: nova
 Assignee: (unassigned) = Aaron Rosen (arosen)

** Tags added: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356157

Title:
  make nova floating-ip-delete atomic with neutron

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The infra guys were noticing an issue where they were leaking floating
  IP addresses. One of the ways this would occur is that they called
  nova floating-ip-delete, which first disassociates the floating IP in
  neutron and then deletes it. Because it makes two calls to neutron, if
  the first one succeeds and the second fails, the instance is left no
  longer associated with the floating IP. They have retry logic, but
  they base it on the instance, and when they go to retry cleaning up
  the instance the floating IP is no longer on the instance, so they
  never delete it.

  This patch fixes the issue by directly calling delete_floating_ip
  instead of disassociating first when using neutron, since neutron
  allows this. I looked into doing the same thing for nova-network, but
  the code is written to prevent it. This makes the operation atomic. I
  know it is somewhat hackish that we do this in the API layer, but we
  do that in a few other places too, for what it's worth.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356157/+subscriptions



[Yahoo-eng-team] [Bug 1355921] Re: [libvirt] When guest configured for threads, poor VCPU accounting

2014-08-12 Thread ugvddm
Hi Jon:

I have tested your issue, but I can't reproduce it. I used the flavor
below to create a VM with 3 VCPUs on a host with 8 VCPUs:

++--+
| Property   | Value|
++--+
| OS-FLV-DISABLED:disabled   | False|
| OS-FLV-EXT-DATA:ephemeral  | 0|
| disk   | 1|
| extra_specs| {hw:cpu_cores: 1, hw:cpu_sockets: 3} |
| id | 4c8ffddf-1a07-4aea-bb44-687fc9c6ae46 |
| name   | m1.tiny  |
| os-flavor-access:is_public | True |
| ram| 512  |
| rxtx_factor| 1.0  |
| swap   |  |
| vcpus  | 3|
++--+

Then I can see that the extra_specs I set are applied to the VM:
/usr/bin/kvm-spice -S -M pc-1.1 -enable-kvm -m 512 -smp 
3,sockets=3,cores=1,threads=1 -name instance-0004 -uuid 
b7198295-3667-4abe-b9d4-07fb5e977550 -smbios type=1,manufacturer=OpenStack 
Foundation,product=OpenStack ..

In addition, I see this log line from nova-compute.log:
AUDIT nova.compute.resource_tracker [-] Free VCPUS: 5


** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355921

Title:
  [libvirt] When guest configured for threads, poor VCPU accounting

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Noticed while testing:
  https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology

  
  I have a host advertising 16 VCPUs (2 sockets, each with 8 cores). Each
  core happens to have 8 threads. (This is on a beefy POWER8 system.) With
  the above blueprint, I can now create a 1-socket, 2-core, 8-thread guest.

  All works fine, except that I noticed 'Free VCPUS: 0' even though I'm
  really only using two cores. I'd think I would see 14 free VCPUs in
  this scenario.

  Guest lscpu output: 
  [root@bare-precise ~]# lscpu
  Architecture:  ppc64
  CPU op-mode(s):32-bit, 64-bit
  Byte Order:Big Endian
  CPU(s):16
  On-line CPU(s) list:   0-15
  Thread(s) per core:8
  Core(s) per socket:2
  Socket(s): 1
  NUMA node(s):  1
  Model: IBM pSeries (emulated by qemu)
  L1d cache: 64K
  L1i cache: 32K
  NUMA node0 CPU(s): 0-15

  Resulting resource tracker log:
  2014-08-12 12:17:18.874 96650 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356167] [NEW] add monitoring on resume_instance

2014-08-12 Thread warewang
Public bug reported:

The existing monitoring fires only at the end of resume_instance. I think we
should add monitoring at both the beginning and the end; that makes it much
more convenient to diagnose problems.
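
One way to sketch the proposed change is a wrapper that emits an event at
both ends of the operation. The `notify` helper and event names here are
illustrative stand-ins, not Nova's actual notifier API:

```python
import functools

EVENTS = []

def notify(event):
    # Illustrative stand-in for a real notification/monitoring emitter.
    EVENTS.append(event)

def monitored(op_name):
    """Emit <op>.start before and <op>.end after the wrapped operation."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            notify(op_name + ".start")   # proposed: also monitor the beginning
            try:
                return fn(*args, **kwargs)
            finally:
                notify(op_name + ".end") # existing: monitor the end
        return wrapper
    return decorator

@monitored("resume_instance")
def resume_instance(instance):
    return "resumed %s" % instance
```

With a start event recorded, a hang inside the operation leaves a dangling
`.start` with no matching `.end`, which is exactly what makes debugging easier.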

** Affects: nova
 Importance: Undecided
 Assignee: warewang (wangguangcai)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => warewang (wangguangcai)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356167

Title:
  add monitoring on resume_instance

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The existing monitoring fires only at the end of resume_instance. I think we
  should add monitoring at both the beginning and the end; that makes it much
  more convenient to diagnose problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317588] Re: _network_filter_hook unit test failed in different components

2014-08-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1317588

Title:
  _network_filter_hook unit test failed in different components

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  The unit test test_network_filter_hook_nonadmin_context has a wrong
  assertion on its last line:
  self.assertEqual(conditions.__str__(), "%s OR %s" % (txt, txt))

  This failed the unit tests for bug 1308958 once changes were made to the
  _network_filter_hook function.

  Any other component that tests itself with a similar assertion will also
  fail.
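
  A sketch of why exact string comparison is fragile, and one way to make
  such an assertion order-insensitive. Plain strings stand in here for the
  rendered SQLAlchemy clause objects the real test compares:

```python
def assert_same_or_clause(actual, expected):
    """Compare two 'A OR B' clause strings without depending on operand order."""
    def parts(s):
        return sorted(part.strip() for part in s.split(" OR "))
    assert parts(actual) == parts(expected), (actual, expected)

txt = "networks.tenant_id = :tenant_id_1"
shared = "networks.shared = true"

# Exact string equality breaks as soon as the hook reorders the operands...
a = "%s OR %s" % (txt, shared)
b = "%s OR %s" % (shared, txt)
assert a != b

# ...while an order-insensitive comparison still passes.
assert_same_or_clause(a, b)
print("order-insensitive comparison passed")
```

  Comparing the logical structure rather than the rendered string is what
  keeps such a test stable when _network_filter_hook changes.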

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1317588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312953] Re: neutron router-interface-add fails

2014-08-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312953

Title:
  neutron router-interface-add fails

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  "Invalid input for operation: IP address 10.0.0.1 is not a valid IP
  for the defined subnet"

  This was found on my devstack when I enabled neutron.

  It looks like neutron tries to use the default NETWORK_GATEWAY, i.e.
  10.0.0.1, for the subnet 192.168.78.0/24.
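
  The mismatch is easy to check for up front. This is a stdlib sketch of
  the validation Neutron performs, not Neutron's actual code:

```python
import ipaddress

def gateway_valid(gateway, cidr):
    """Return True if the gateway address falls inside the subnet's CIDR."""
    return ipaddress.ip_address(gateway) in ipaddress.ip_network(cidr)

# The default NETWORK_GATEWAY (10.0.0.1) is not inside 192.168.78.0/24,
# which is exactly the "not a valid IP for the defined subnet" rejection.
print(gateway_valid("10.0.0.1", "192.168.78.0/24"))     # False
print(gateway_valid("192.168.78.1", "192.168.78.0/24")) # True
```

  Setting NETWORK_GATEWAY to an address inside the configured subnet avoids
  the error.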

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315805] Re: VPNAAS: Ipsec_site_connection status going in PENDING_CREATE sometimes

2014-08-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1315805

Title:
  VPNAAS: Ipsec_site_connection status  going in PENDING_CREATE
  sometimes

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Create a site-to-site connection.
  On checking the status of the ipsec_site_connection on both sides, one side
  shows DOWN and the other side shows PENDING_CREATE.
  The frequency of occurrence has decreased in the Icehouse GA release; a
  similar issue was also observed in the Icehouse-2 release.
  Occurrence: rarely.

  Release: 1:2.3.4-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1315805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210046] Re: missing PROXY section in metaplugin.ini

2014-08-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210046

Title:
  missing PROXY section in metaplugin.ini

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  A PROXY section is registered in the last line of
  neutron/neutron/plugins/metaplugin/common/config.py:
  cfg.CONF.register_opts(proxy_plugin_opts, 'PROXY')

  A corresponding PROXY section should be added to
  neutron/etc/neutron/plugins/metaplugin/metaplugin.ini.
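
  Why the section matters can be sketched with stdlib configparser;
  oslo.config behaves analogously, in that options registered under a group
  named PROXY are looked up in a [PROXY] ini section. The option name below
  is a placeholder, not one of the real proxy_plugin_opts:

```python
import configparser

# Options registered under a group named "PROXY" are resolved from the
# [PROXY] section of the ini file.
cfg_text = """
[PROXY]
# placeholder option name; the real ones are defined by proxy_plugin_opts
admin_user = demo
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)
print(parser.get("PROXY", "admin_user"))  # demo

# Without the section, the lookup fails, which is the gap this bug reports.
empty = configparser.ConfigParser()
empty.read_string("")
try:
    empty.get("PROXY", "admin_user")
except configparser.NoSectionError:
    print("missing [PROXY] section")
```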

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270943] Re: Hypervisor crashes after instance is spawned

2014-08-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1270943

Title:
  Hypervisor crashes after instance is spawned

Status in OpenStack Neutron (virtual network service):
  Expired
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I am running Grizzly on Ubuntu 13.04 (so the network service ==
  Quantum).  Nova runs Quantum with LibvirtHybridOVSBridgeDriver and
  LinuxOVSInterfaceDriver, while Quantum is configured to use GRE
  tunnels.  Further, Quantum runs on a dedicated node and VLAN.

  Starting in mid-December, new Compute nodes that came online were
  unable to spin new VMs.  At the moment the nova-compute.log indicated
  that the instance had spawned successfully, the hypervisor system
  crashed with the following console dump message (last screen's worth):
  http://pastebin.com/004MYzvR.

  The installation of the Compute packages are controlled by puppet:

  1:2013.1-0ubuntu2     nova-common
  1:2013.1-0ubuntu2     nova-compute
  1:2013.1-0ubuntu2     nova-compute-kvm
  1.9.0-0ubuntu1        openvswitch-common
  1.9.0-0ubuntu1        openvswitch-datapath-dkms
  1.9.0-0ubuntu1        openvswitch-datapath-source
  1.9.0-0ubuntu1        openvswitch-switch
  1:1.0.3-0ubuntu1      python-cinderclient
  1:2013.1.4-0ubuntu1   python-glance
  1:0.9.0-0ubuntu1.2    python-glanceclient
  1:2013.1.4-0ubuntu1.1 python-keystone
  1:0.2.3-0ubuntu2.2    python-keystoneclient
  1:2013.1-0ubuntu2     python-nova
  1:2.13.0-0ubuntu1     python-novaclient
  1:1.1.0-0ubuntu1      python-oslo.config
  1:2013.1-0ubuntu2     python-quantum
  1:2.2.0-0ubuntu1      python-quantumclient
  1:1.3.0-0ubuntu1      python-swiftclient
  1:2013.1-0ubuntu2     quantum-common
  1:2013.1-0ubuntu2     quantum-plugin-openvswitch
  1:2013.1-0ubuntu2     quantum-plugin-openvswitch-agent

  The kernel being used is *not* controlled by Puppet and ends up being
  whatever the latest and greatest version is in raring-updates.  The
  kernels in use: 3.8.0.34.52.  I tried upgrading to 3.8.0.35.53 when it
  became available, but that had no effect.

  I'm lost.  No idea how to debug this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1270943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355921] Re: [libvirt] When guest configured for threads, poor VCPU accounting

2014-08-12 Thread Jon Grimm
Hi there. You didn't actually set up a configuration that would create
a VM with threads, which was the condition I wrote the bug against:

  -smp 3,sockets=3,cores=1,threads=1

Thanks!

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355921

Title:
  [libvirt] When guest configured for threads, poor VCPU accounting

Status in OpenStack Compute (Nova):
  New

Bug description:
  Noticed while testing:
  https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology

  
  I have a host advertising 16 VCPUs (2 sockets, each with 8 cores). Each
  core happens to have 8 threads. (This is on a beefy POWER8 system.) With
  the above blueprint, I can now create a 1-socket, 2-core, 8-thread guest.

  All works fine, except that I noticed 'Free VCPUS: 0' even though I'm
  really only using two cores. I'd think I would see 14 free VCPUs in
  this scenario.

  Guest lscpu output: 
  [root@bare-precise ~]# lscpu
  Architecture:  ppc64
  CPU op-mode(s):32-bit, 64-bit
  Byte Order:Big Endian
  CPU(s):16
  On-line CPU(s) list:   0-15
  Thread(s) per core:8
  Core(s) per socket:2
  Socket(s): 1
  NUMA node(s):  1
  Model: IBM pSeries (emulated by qemu)
  L1d cache: 64K
  L1i cache: 32K
  NUMA node0 CPU(s): 0-15

  Resulting resource tracker log:
  2014-08-12 12:17:18.874 96650 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp