[Yahoo-eng-team] [Bug 1382354] [NEW] can not create router when gateway ip is not in subnet

2014-10-17 Thread xhzhf
Public bug reported:

When we create a subnet, Neutron does not check whether the gateway IP is in
the subnet's IP range.
Next, when we create a router and the subnet's gateway IP is assigned to the
router, Neutron does check whether the IP is in the subnet, and the operation
fails.

Solution: when creating a subnet, Neutron should validate the gateway IP.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382354

Title:
  can not create router when gateway ip is not in subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When we create a subnet, Neutron does not check whether the gateway IP is in
  the subnet's IP range.
  Next, when we create a router and the subnet's gateway IP is assigned to the
  router, Neutron does check whether the IP is in the subnet, and the
  operation fails.

  Solution: when creating a subnet, Neutron should validate the gateway IP.
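
  As an illustration only (not Neutron code), the kind of validation being
  proposed could look like the sketch below, using netaddr (a library Neutron
  already uses); gateway_in_subnet() here is a hypothetical helper that simply
  checks CIDR membership:

  import netaddr

  def gateway_in_subnet(cidr, gateway_ip):
      """Return True if the gateway address falls inside the subnet CIDR."""
      return netaddr.IPAddress(gateway_ip) in netaddr.IPNetwork(cidr)

  print(gateway_in_subnet('10.0.0.0/24', '10.0.0.1'))     # True
  print(gateway_in_subnet('10.0.0.0/24', '192.168.0.1'))  # False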

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381295] Re: Live migration fails when called via RPC API with admin context

2014-10-17 Thread Joe Cropper
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381295

Title:
  Live migration fails when called via RPC API with admin context

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Glance:
  New

Bug description:
  When trying to live migrate a VM by calling the compute RPC API
  directly (i.e., not via the novaclient) coupled with the elevated
  admin context [1], the destination compute service tries to call
  glance to retrieve the image [2].  However, the destination compute
  service erroneously raises an exception [4].

  This problem was introduced via the following patch:
  https://review.openstack.org/#/c/121692

  It also appears that a similar problem exists within nova too [3].

  #

  [1]
  from nova import compute
  ctxt = context.get_admin_context()
  self.compute_api = compute.API()
  self.compute_api.live_migrate(
      ctxt.elevated(), inst, False, False, host_dict)

  #

  [2]
  def _create_glance_client(context, host, port, use_ssl, version=1):
      """Instantiate a new glanceclient.Client object."""
      params = {}
      if use_ssl:
          scheme = 'https'
          # https specific params
          params['insecure'] = CONF.glance.api_insecure
          params['ssl_compression'] = False
          if CONF.ssl.cert_file:
              params['cert_file'] = CONF.ssl.cert_file
          if CONF.ssl.key_file:
              params['key_file'] = CONF.ssl.key_file
          if CONF.ssl.ca_file:
              params['cacert'] = CONF.ssl.ca_file
      else:
          scheme = 'http'

      if CONF.auth_strategy == 'keystone':
          # NOTE(isethi): Glanceclient <= 0.9.0.49 accepts only
          # keyword 'token', but later versions accept both the
          # header 'X-Auth-Token' and 'token'
          params['token'] = context.auth_token
          params['identity_headers'] = generate_identity_headers(context)
          # ^ would return {'X-Auth-Token': None, ...}
      if utils.is_valid_ipv6(host):
          # if so, it is ipv6 address, need to wrap it with '[]'
          host = '[%s]' % host
      endpoint = '%s://%s:%s' % (scheme, host, port)
      return glanceclient.Client(str(version), endpoint, **params)
      # ^ params == {'identity_headers': {'X-Auth-Token': None, ...}, ...}
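
  The annotations above are the reporter's; the crash comes from the None
  value inside identity_headers. A minimal, self-contained sketch of one
  possible mitigation (an assumption for illustration, not the merged fix) is
  to drop None-valued headers before they reach the HTTP layer, since httplib
  cannot encode a None header value:

  def filter_identity_headers(headers):
      """Drop None-valued headers; httplib cannot encode a None value."""
      return dict((k, v) for k, v in headers.items() if v is not None)

  # An elevated admin context has no auth token, so the generated headers
  # look roughly like this:
  print(filter_identity_headers({'X-Auth-Token': None, 'X-Roles': 'admin'}))
  # -> {'X-Roles': 'admin'}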

  #

  [3]
  novaclient.client.py:
  def http_log_req(self, method, url, kwargs):
      if not self.http_log_debug:
          return

      string_parts = ['curl -i']

      if not kwargs.get('verify', True):
          string_parts.append(' --insecure')

      string_parts.append(" '%s'" % url)
      string_parts.append(' -X %s' % method)

      headers = copy.deepcopy(kwargs['headers'])
      self._redact(headers, ['X-Auth-Token'])   # <-- here
      # because dict ordering changes from 2 to 3
      keys = sorted(headers.keys())
      for name in keys:
          value = headers[name]
          header = ' -H "%s: %s"' % (name, value)
          string_parts.append(header)

      if 'data' in kwargs:
          data = json.loads(kwargs['data'])
          self._redact(data, ['auth', 'passwordCredentials', 'password'])
          string_parts.append(" -d '%s'" % json.dumps(data))
      self._logger.debug("REQ: %s" % "".join(string_parts))

  #

  [4]
  2014-10-14 00:42:10.699 31346 INFO nova.compute.manager [-] [instance: 
aa68237f-e669-4025-b16e-f4b50926f7a5] During the sync_power process the 
instance has moved from host cmo-comp5.ibm.com to host cmo-comp4.ibm.com
  2014-10-14 00:42:10.913 31346 INFO nova.compute.manager 
[req-7be58838-3ec2-43d4-afd1-23d6b3d5e3de None] [instance: 
aa68237f-e669-4025-b16e-f4b50926f7a5] Post operation of migration started
  2014-10-14 00:42:11.148 31346 ERROR oslo.messaging.rpc.dispatcher 
[req-7be58838-3ec2-43d4-afd1-23d6b3d5e3de ] Exception during message handling: 
'NoneType' object has no attribute 'encode'
  2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 134, 
in _dispatch_and_reply
  2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 177, 
in _dispatch
  2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher return 

[Yahoo-eng-team] [Bug 1359808] Re: extended_volumes slows down the nova instance list by 40..50%

2014-10-17 Thread Attila Fazekas
This bug is about the number of queries made; you do not really need to
measure anything to see that issuing 4096 queries in a loop is worse than
issuing only one (or a few batched ones).

for id in ids:
    SELECT attr FROM table WHERE id = :id;

vs.

SELECT attr FROM table WHERE id IN (:id_list);


MySQL's default maximum query size is 16777216 bytes, so you probably can't
specify significantly more than ~256k UUIDs in a single SELECT statement; the
PostgreSQL limit is higher.
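
A rough SQLAlchemy sketch of the two access patterns (illustrative only; the
table and column names below are made up for the example, not nova's schema):

import sqlalchemy as sa

def bdms_one_query_per_instance(conn, bdm_table, instance_uuids):
    # One round trip per UUID -- the pattern this bug complains about.
    results = []
    for uuid in instance_uuids:
        query = sa.select([bdm_table]).where(
            bdm_table.c.instance_uuid == uuid)
        results.extend(conn.execute(query).fetchall())
    return results

def bdms_single_query(conn, bdm_table, instance_uuids, chunk=1000):
    # A single IN (...) query per chunk; chunking keeps each statement
    # well under the server's maximum query size.
    results = []
    for i in range(0, len(instance_uuids), chunk):
        query = sa.select([bdm_table]).where(
            bdm_table.c.instance_uuid.in_(instance_uuids[i:i + chunk]))
        results.extend(conn.execute(query).fetchall())
    return results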


** Changed in: nova
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359808

Title:
  extended_volumes slows down the nova instance list by 40..50%

Status in OpenStack Compute (Nova):
  New

Bug description:
  When listing ~4096 instances, the nova API (n-api) service shows high CPU
  usage (100%) because it issues an individual SELECT for every server's
  block_device_mapping. This adds ~20-25 seconds to the response time.

  Please use a more efficient way of getting the block_device_mapping when
  multiple instances are queried.

  This line initiates the individual SELECT:
https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382360] [NEW] ml2 plugin wrongly calls _filter_nets_l3 from get_networks

2014-10-17 Thread Isaku Yamahata
Public bug reported:

Commit 0156ec175cc047826b211727d43d5d14a3e1f2d2 (change-id
I47e01a11afaf6e6bcf06da7bd713fd39b05600ff), which fixes bug 1132849, removed
the calls to the _filter_nets_l3 method, but the fix somehow missed the ml2
plugin.

** Affects: neutron
 Importance: Undecided
 Assignee: Isaku Yamahata (yamahata)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382360

Title:
  ml2 plugin wrongly calls _filter_nets_l3 from get_networks

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Commit 0156ec175cc047826b211727d43d5d14a3e1f2d2 (change-id
  I47e01a11afaf6e6bcf06da7bd713fd39b05600ff), which fixes bug 1132849, removed
  the calls to the _filter_nets_l3 method, but the fix somehow missed the ml2
  plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382377] [NEW] Missing test for Node Group Templates panels

2014-10-17 Thread Lin Hua Cheng
Public bug reported:


The panel has tests only for the index and details views, but the create and
copy workflows do not have any tests.

We should add the missing tests for these workflows.

** Affects: horizon
 Importance: Wishlist
 Status: New

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382377

Title:
  Missing test for Node Group Templates panels

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  The panel has tests only for the index and details views, but the create
  and copy workflows do not have any tests.

  We should add the missing tests for these workflows.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382390] [NEW] nova-api should shutdown gracefully

2014-10-17 Thread Tiantian Gao
Public bug reported:

In Icehouse, an awesome feature was implemented: graceful shutdown of nova
services, which makes sure in-flight RPC requests are completed before the
process is killed.

But nova-api does not support graceful shutdown yet, which can cause problems
when upgrading. For example, if a request to create an instance is in
progress, killing nova-api may leave quotas out of sync or leave odd database
records behind. Especially in large-scale deployments, with hundreds of
requests per second, killing nova-api will interrupt many in-flight greenlets.

In nova/wsgi.py, when stopping the WSGI service, we first shrink the greenlet
pool size to 0 and then kill the eventlet WSGI server. The workaround is quick
and easy: wait for all greenlets in the pool to finish, then kill the WSGI
server. The code looks like below:


diff --git a/nova/wsgi.py b/nova/wsgi.py
index ba52872..3c89297 100644
--- a/nova/wsgi.py
+++ b/nova/wsgi.py
@@ -212,6 +212,9 @@ class Server(object):
         if self._server is not None:
             # Resize pool to stop new requests from being processed
             self._pool.resize(0)
+            num = self._pool.running()
+            LOG.info(_("Waiting WSGI server to finish %d requests." % num))
+            self._pool.waitall()
             self._server.kill()

     def wait(self):

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382390

Title:
  nova-api should shutdown gracefully

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Icehouse, an awesome feature was implemented: graceful shutdown of nova
  services, which makes sure in-flight RPC requests are completed before the
  process is killed.

  But nova-api does not support graceful shutdown yet, which can cause
  problems when upgrading. For example, if a request to create an instance is
  in progress, killing nova-api may leave quotas out of sync or leave odd
  database records behind. Especially in large-scale deployments, with
  hundreds of requests per second, killing nova-api will interrupt many
  in-flight greenlets.

  In nova/wsgi.py, when stopping the WSGI service, we first shrink the
  greenlet pool size to 0 and then kill the eventlet WSGI server. The
  workaround is quick and easy: wait for all greenlets in the pool to finish,
  then kill the WSGI server. The code looks like below:

  
  diff --git a/nova/wsgi.py b/nova/wsgi.py
  index ba52872..3c89297 100644
  --- a/nova/wsgi.py
  +++ b/nova/wsgi.py
  @@ -212,6 +212,9 @@ class Server(object):
           if self._server is not None:
               # Resize pool to stop new requests from being processed
               self._pool.resize(0)
  +            num = self._pool.running()
  +            LOG.info(_("Waiting WSGI server to finish %d requests." % num))
  +            self._pool.waitall()
               self._server.kill()

       def wait(self):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382438] [NEW] metadata should provide the username of the user who created the instance

2014-10-17 Thread Andre Naehring
Public bug reported:

The metadata provided to instances should contain the username of the user
who created the instance. At the moment this can only be submitted via
user_data, but we think it would be useful to place it in the corresponding
JSON.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382438

Title:
  metadata should provide the username of the user who created the
  instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  The metadata provided to instances should contain the username of the user
  who created the instance. At the moment this can only be submitted via
  user_data, but we think it would be useful to place it in the corresponding
  JSON.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364814] Re: Neutron multiple api workers can't send cast message to agent when use zeromq

2014-10-17 Thread James Page
** Tags added: zmq

** Also affects: oslo.messaging (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364814

Title:
  Neutron multiple api workers can't send cast message to agent when use
  zeromq

Status in OpenStack Neutron (virtual network service):
  Opinion
Status in Messaging API for OpenStack:
  In Progress
Status in “oslo.messaging” package in Ubuntu:
  New

Bug description:
  When I set api_workers > 0 in the Neutron configuration and delete or add a
  router interface, the Neutron L3 agent can't receive the message from the
  Neutron server.
  In this situation, the L3 agent's report_state can still cast to the Neutron
  server, and the agent can also receive messages from the Neutron server when
  the call method is used.

  Obviously the Neutron server can use the cast method to send messages to the
  L3 agent, so why does the routers_updated cast fail? This also occurs with
  other Neutron agents.

  Then I made a test: I wrote some code where the Neutron server starts (or in
  l3_router_plugins) that sends periodic cast messages to the L3 agent
  directly. The L3 agent's rpc-zmq-receiver log file shows that it receives
  those messages from the Neutron server.

  By the way, everything works well when api_workers = 0.

  Test environment:
  neutron(master) + oslo.messaging(master) + zeromq

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382440] [NEW] Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device

2014-10-17 Thread Keiichi KII
Public bug reported:

Overview:
On Icehouse (2014.1.2) with iscsi_use_multipath=true, detaching an iSCSI
multipath volume doesn't work properly. When we use different targets (IQNs)
associated with the same portal for different multipath devices, all of the
targets are deleted via disconnect_volume().

This problem is not yet fixed upstream; however, the attached patch fixes it.

Steps to Reproduce:

We can easily reproduce this issue without any special storage
system in the following Steps:

  1. configure iscsi_use_multipath=True in nova.conf on the compute node.
  2. configure volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
     in cinder.conf on the cinder node.
  3. create an instance.
  4. create 3 volumes and attach them to the instance.
  5. detach one of these volumes.
  6. check multipath -ll and iscsiadm --mode session.

Detail:

This problem was introduced with the following patch which modified
attaching and detaching volume operations for different targets
associated with different portals for the same multipath device.

  commit 429ac4dedd617f8c1f7c88dd8ece6b7d2f2accd0
  Author: Xing Yang xing.y...@emc.com
  Date:   Mon Jan 6 17:27:28 2014 -0500

Fixed a problem in iSCSI multipath

We found out that:

 # Do a discovery to find all targets.
 # Targets for multiple paths for the same multipath device
 # may not be the same.
 out = self._run_iscsiadm_bare(['-m',
   'discovery',
   '-t',
   'sendtargets',
   '-p',
   iscsi_properties['target_portal']],
   check_exit_code=[0, 255])[0] \
       or ""

 ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
...
 # If no other multipath device attached has the same iqn
 # as the current device
 if not in_use:
 # disconnect if no other multipath devices with same iqn
 self._disconnect_mpath(iscsi_properties, ips_iqns)
 return
 elif multipath_device not in devices:
 # delete the devices associated w/ the unused multipath
 self._delete_mpath(iscsi_properties, multipath_device, ips_iqns)

When we use different targets (IQNs) associated with the same portal for
different multipath devices, ips_iqns contains all of the targets on the
compute node, taken from the result of
iscsiadm -m discovery -t sendtargets -p <that same portal>.
Then _delete_mpath() deletes all of the targets in ips_iqns via
/sys/block/sdX/device/delete.

For example, we create an instance and attach 3 volumes to the instance:

  # iscsiadm --mode session
  tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
  tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
  tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
  # multipath -ll
  330030001 dm-7 IET,VIRTUAL-DISK
  size=4.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
 `- 23:0:0:1 sdd 8:48 active ready running
  330010001 dm-5 IET,VIRTUAL-DISK
  size=2.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
 `- 21:0:0:1 sdb 8:16 active ready running
  330020001 dm-6 IET,VIRTUAL-DISK
  size=3.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
 `- 22:0:0:1 sdc 8:32 active ready running

Then we detach one of these volumes:

  # nova volume-detach 95f959cd-d180-4063-ae03-9d21dbd7cc50 5c526ffa-
ba88-4fe2-a570-9e35c4880d12

As a result of detaching the volume, the compute node still has 3 iSCSI
sessions, and the instance fails to access the remaining attached multipath
devices:

  # iscsiadm --mode session
  tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
  tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
  tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
  # multipath -ll
  330030001 dm-7 ,
  size=4.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=enabled
 `- #:#:#:# -   #:# failed faulty running
  330020001 dm-6 ,
  size=3.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=enabled
 `- #:#:#:# -   #:# failed faulty running
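
One way to avoid the over-deletion described above (a sketch only, not
necessarily what the attached patch does; filter_ips_iqns is a hypothetical
helper) is to consider for deletion only the portal/IQN pairs that belong to
the volume being detached, so targets that merely share the portal are left
alone:

def filter_ips_iqns(ips_iqns, target_iqn):
    """Keep only the (portal, iqn) pairs matching the detached volume."""
    return [(ip, iqn) for ip, iqn in ips_iqns if iqn == target_iqn]

discovered = [
    ('192.168.0.55:3260,1', 'iqn.2010-10.org.openstack:volume-aaaa'),
    ('192.168.0.55:3260,1', 'iqn.2010-10.org.openstack:volume-bbbb'),
]
print(filter_ips_iqns(discovered, 'iqn.2010-10.org.openstack:volume-aaaa'))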

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: "Patch to fix removing wrong iSCSI multipath device issue"
   
https://bugs.launchpad.net/bugs/1382440/+attachment/4238782/+files/fix-removing-wrong-device-problem-in-iscsi-multipath.patch

-- 
You received this bug notification because you are a member of Yahoo!

[Yahoo-eng-team] [Bug 1382448] [NEW] ml2 extension manager doesn't pass db entry to extend_xxx_dict

2014-10-17 Thread Isaku Yamahata
Public bug reported:

The extension driver isn't passed the db entry when extending the result
dict: extend_{network, subnet, port}_dict() are passed only the result dict,
not the db entry. In order to extend the result dict, the db entry is
necessary.

** Affects: neutron
 Importance: Undecided
 Assignee: Isaku Yamahata (yamahata)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382448

Title:
  ml2 extension manager doesn't pass db entry to extend_xxx_dict

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The extension driver isn't passed the db entry when extending the result
  dict: extend_{network, subnet, port}_dict() are passed only the result
  dict, not the db entry. In order to extend the result dict, the db entry is
  necessary.
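
  An illustrative sketch only (the class and method signature below are
  assumptions about the proposed change, not the current ml2 API): an
  extension driver that also receives the db object can copy driver-specific
  columns into the API result.

  class ExampleExtensionDriver(object):
      def extend_network_dict(self, session, network_db, result):
          # With the db entry available, values stored by the driver can be
          # reflected in the returned network dict.
          result['example:attr'] = getattr(network_db, 'example_attr', None)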

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382331] Re: test_password_change.py : AttributeError: 'AdminPage' object has no attribute 'go_to_settings_page'

2014-10-17 Thread Julie Pichon
I'm sorry, if you're modifying the code this is not an expected use case
anymore. As you've discovered you'll likely have to make changes
elsewhere too.

You could write a new test case for this and submit it back to the
community [1] once you have it working, if you like. Thank you!

[1] https://wiki.openstack.org/wiki/How_To_Contribute

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382331

Title:
  test_password_change.py   : AttributeError: 'AdminPage' object has no
  attribute 'go_to_settings_page'

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  
  test_password_change.py fails for user admin

  
  ==
  ERROR: 
openstack_dashboard.test.integration_tests.tests.test_password_change.TestPasswordChange.test_password_change
  --
  _StringException: Traceback (most recent call last):
File 
/opt/stack/horizon/openstack_dashboard/test/integration_tests/tests/test_password_change.py,
 line 25, in test_password_change
  settings_page = self.home_pg.go_to_settings_page()
  AttributeError: 'AdminPage' object has no attribute 'go_to_settings_page'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268573] Re: Incorrect quota value in database leads to dashboard crash

2014-10-17 Thread Julie Pichon
*** This bug is a duplicate of bug 1370869 ***
https://bugs.launchpad.net/bugs/1370869

** This bug has been marked a duplicate of bug 1370869
   Cannot display project overview page due to cannot convert float infinity 
to integer error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1268573

Title:
  Incorrect quota value in database leads to dashboard crash

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  We have an incorrect value in project quota because of this bug:
  https://bugs.launchpad.net/nova/+bug/1268569

  mysql> select resource,in_use from quota_usages;
  +------------------+--------+
  | resource         | in_use |
  +------------------+--------+
  | security_groups  |      0 |
  | instances        |      0 |
  | ram              |      0 |
  | cores            |      0 |
  | fixed_ips        |      0 |
  | floating_ips     |     -1 |
  +------------------+--------+
  6 rows in set (0.00 sec)

  This causes a total crash of the dashboard/project page with a 500 error.
  For some reason the dashboard treats -1 as infinity, and it fails to
  display the quota diagrams.
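
  For reference, the error message in the template error below is Python's
  OverflowError raised when an infinite float is converted to an int; since
  the dashboard maps the -1 quota to infinity (as noted above), that value
  presumably reaches the widthratio tag. Minimal sketch (illustrative only,
  Horizon's exact code path may differ):

  unlimited = float('inf')   # how the dashboard treats the -1 quota value
  int(unlimited)             # OverflowError: cannot convert float infinity to integer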

  
  Debug info:
  Environment:

  Request Method: GET
  Request URL: http://172.16.0.6/dashboard/project/

  Django Version: 1.4.8
  Python Version: 2.6.6
  Installed Applications:
  ('openstack_dashboard',
   'django.contrib.contenttypes',
   'django.contrib.auth',
   'django.contrib.sessions',
   'django.contrib.messages',
   'django.contrib.staticfiles',
   'django.contrib.humanize',
   'compressor',
   'horizon',
   'openstack_dashboard.dashboards.project',
   'openstack_dashboard.dashboards.admin',
   'openstack_dashboard.dashboards.settings',
   'openstack_auth',
   'openstack_dashboard.dashboards.router')
  Installed Middleware:
  ('django.middleware.common.CommonMiddleware',
   'django.middleware.csrf.CsrfViewMiddleware',
   'django.contrib.sessions.middleware.SessionMiddleware',
   'django.contrib.auth.middleware.AuthenticationMiddleware',
   'django.contrib.messages.middleware.MessageMiddleware',
   'horizon.middleware.HorizonMiddleware',
   'django.middleware.doc.XViewMiddleware',
   'django.middleware.locale.LocaleMiddleware',
   'django.middleware.clickjacking.XFrameOptionsMiddleware')

  Template error:
  In template /usr/lib/python2.6/site-packages/horizon/templates/horizon/common/_limit_summary.html, error at line 27
    cannot convert float infinity to integer

    17 :   </div>
    18 :
    19 :   <div class="d3_quota_bar">
    20 :     <div class="d3_pie_chart" data-used="{% widthratio usage.limits.totalRAMUsed usage.limits.maxTotalRAMSize 100 %}"></div>
    21 :     <strong>{% trans "RAM" %} <br />
    22 :     {% blocktrans with used=usage.limits.totalRAMUsed|mb_float_format available=usage.limits.maxTotalRAMSize|mb_float_format %}Used <span> {{ used }} </span> of <span> {{ available }} </span>{% endblocktrans %}
    23 :     </strong>
    24 :   </div>
    25 :
    26 :   <div class="d3_quota_bar">
    27 :     <div class="d3_pie_chart" data-used=" {% widthratio usage.limits.totalFloatingIpsUsed usage.limits.maxTotalFloatingIps 100 %} "></div>
    28 :     <strong>{% trans "Floating IPs" %} <br />
    29 :     {% blocktrans with used=usage.limits.totalFloatingIpsUsed|intcomma available=usage.limits.maxTotalFloatingIps|intcomma %}Used <span> {{ used }} </span> of <span> {{ available }} </span>{% endblocktrans %}
    30 :     </strong>
    31 :   </div>
    32 :
    33 :   <div class="d3_quota_bar">
    34 :     <div class="d3_pie_chart" data-used="{% widthratio usage.limits.totalSecurityGroupsUsed usage.limits.maxSecurityGroups 100 %}"></div>
    35 :     <strong>{% trans "Security Groups" %} <br />
    36 :     {% blocktrans with used=usage.limits.totalSecurityGroupsUsed|intcomma available=usage.limits.maxSecurityGroups|intcomma%}Used <span> {{ used }} </span> of <span> {{ available }} </span>{% endblocktrans %}
    37 :     </strong>

  Traceback:
  File /usr/lib/python2.6/site-packages/django/core/handlers/base.py in 
get_response
136. response = response.render()
  File /usr/lib/python2.6/site-packages/django/template/response.py in render
104. self._set_content(self.rendered_content)
  File /usr/lib/python2.6/site-packages/django/template/response.py in 
rendered_content
81. content = template.render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py in render
140. return self._render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py in _render
134. return self.nodelist.render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py in render
823. bit = self.render_node(node, context)
  File /usr/lib/python2.6/site-packages/django/template/debug.py in 
render_node
74. return node.render(context)
  File /usr/lib/python2.6/site-packages/django/template/loader_tags.py in 
render
123. return compiled_parent._render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py in _render
134. return 

[Yahoo-eng-team] [Bug 1229475] Re: terminate_instance(): RuntimeError: Second simultaneous read on fileno 16 detected

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229475

Title:
  terminate_instance(): RuntimeError: Second simultaneous read on fileno
  16 detected

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/59/47659/6/check/gate-tempest-devstack-vm-
  neutron/fc83d44/logs/screen-n-cpu.txt.gz#_2013-09-23_18_25_06_484


  2013-09-23 18:25:06.484 ERROR nova.openstack.common.rpc.amqp 
[req-3a80ff5a-817b-4eb8-be67-7f180bef8a6e demo demo] Exception during message 
handling
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py, line 461, in 
_process_data
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp **args)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in 
dispatch
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 354, in decorated_function
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/exception.py, line 90, in wrapped
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp payload)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/exception.py, line 73, in wrapped
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 244, in decorated_function
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp pass
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 230, in decorated_function
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 295, in decorated_function
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 272, in decorated_function
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 259, in decorated_function
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1793, in terminate_instance
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp 
do_terminate_instance(instance, bdms)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/lockutils.py, line 246, in inner
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1785, in 
do_terminate_instance
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp 
reservations=reservations)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/hooks.py, line 105, in inner
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp rv = 
f(*args, **kwargs)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1758, in _delete_instance
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp 
user_id=user_id)
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1727, in _delete_instance
  2013-09-23 18:25:06.484 3476 TRACE nova.openstack.common.rpc.amqp 
self.conductor_api.instance_info_cache_delete(context, db_inst)
  2013-09-23 18:25:06.484 3476 TRACE 

[Yahoo-eng-team] [Bug 1232965] Re: Can't use nova when configuring neutron.agent.firewall.NoopFirewallDriver in neutron plugins

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
 Milestone: None => 2.3.0-2.3.4

** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1232965

Title:
  Can't use nova when configuring
  neutron.agent.firewall.NoopFirewallDriver in neutron plugins

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Neutron:
  Fix Released

Bug description:
  OS : RHEL6.4
  OpenStack version : Havana

  If firewall_driver = neutron.agent.firewall.NoopFirewallDriver is set in
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, some functions of
  Nova are lost.

  [root@oxianghui v2_0]# nova list
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-7c2bc0a7-e413-48e9-9865-d743d5ab0497)

  The error log:

  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py, line 112, in 
__call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/request.py, line 1296, in send
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/request.py, line 1260, in 
call_application
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py, 
line 539, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
self.app(env, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/routes/middleware.py, line 131, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 130, in __call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 195, in call_func
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 912, in 
__call__
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack content_type, body, 
accept)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 997, in 
_process_stack
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack request, 
action_args)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 885, in 
post_process_extensions
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack **action_args)
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/security_groups.py,
 line 583, in detail
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack 
self._extend_servers(req, list(resp_obj.obj['servers']))
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/security_groups.py,
 line 533, in _extend_servers
  2013-09-29 07:23:08.200 7666 TRACE nova.api.openstack 

[Yahoo-eng-team] [Bug 1245700] Re: nova boot via invalid neutron port raises 500

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245700

Title:
  nova boot via invalid neutron port raises 500

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  2013-10-28 15:46:38.470 ERROR nova.api.openstack 
[req-e405f8e2-95a1-4748-8f9f-c642080e16b3 demo demo] Caught error: local 
variable 'port' referenced before assignment
  2013-10-28 15:46:38.470 TRACE nova.api.openstack Traceback (most recent call 
last):
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 119, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2013-10-28 15:46:38.470 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2013-10-28 15:46:38.470 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return resp(environ, 
start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 571, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return self.app(env, 
start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return resp(environ, 
start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return resp(environ, 
start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return resp(environ, 
start_response)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 939, in __call__
  2013-10-28 15:46:38.470 TRACE nova.api.openstack content_type, body, 
accept)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 998, in _process_stack
  2013-10-28 15:46:38.470 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1079, in dispatch
  2013-10-28 15:46:38.470 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/servers.py, line 924, in create
  2013-10-28 15:46:38.470 TRACE nova.api.openstack legacy_bdm=legacy_bdm)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/hooks.py, line 105, in inner
  2013-10-28 15:46:38.470 TRACE nova.api.openstack rv = f(*args, **kwargs)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/compute/api.py, line 1218, in create
  2013-10-28 15:46:38.470 TRACE nova.api.openstack legacy_bdm=legacy_bdm)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/compute/api.py, line 859, in _create_instance
  2013-10-28 15:46:38.470 TRACE nova.api.openstack block_device_mapping, 
auto_disk_config, reservation_id)
  2013-10-28 15:46:38.470 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/compute/api.py, line 673, in 
_validate_and_build_base_options
  2013-10-28 15:46:38.470 TRACE nova.api.openstack 
self._check_requested_networks(context, 

[Yahoo-eng-team] [Bug 1218190] Re: Use assertEqual instead of assertEquals in unitttest

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
 Milestone: None => 2.3.0-2.3.4

** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1218190

Title:
  Use assertEqual instead of assertEquals in unitttest

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  I noticed that [keystone, python-keystoneclient, python-neutronclient]
  configure tox.ini with a py33 test environment. However, assertEquals is
  deprecated in Python 3 (though fine with Python 2), so I think it is better
  to change all occurrences of assertEquals to assertEqual.
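
  A tiny before/after illustration (not project code):

  import unittest

  class ExampleTest(unittest.TestCase):
      def test_sum(self):
          # self.assertEquals(1 + 1, 2)   # deprecated alias, noisy on py33
          self.assertEqual(1 + 1, 2)      # preferred spelling

  if __name__ == '__main__':
      unittest.main()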

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1218190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1208734] Re: Drop openstack.common.exception

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
 Milestone: None => 2.3.0-2.3.4

** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1208734

Title:
  Drop openstack.common.exception

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in Python client library for Neutron:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Taskflow for task-oriented systems.:
  Fix Released

Bug description:
  The library openstack.common.exceptions is deprecated in Oslo and
  should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1208734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255876] Re: need to ignore swap files from getting into repository

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1255876

Title:
  need to ignore swap files from getting into repository

Status in OpenStack Telemetry (Ceilometer):
  Invalid
Status in Heat Orchestration Templates and tools:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The Oslo library incubator:
  Won't Fix
Status in Python client library for Ceilometer:
  Fix Committed
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in Python client library for Swift:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Invalid

Bug description:
  Swap files need to be ignored so they do not get into the repository.
  Currently the ignore pattern implemented in .gitignore is *.swp; however,
  vim generates more variants than that, so the pattern could be improved to *.sw?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1255876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217100] Re: fix i18n messages

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1217100

Title:
  fix i18n messages

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  As new features are added, i18n support for messages is broken **again**;
  for example,
  https://github.com/openstack/neutron/blob/master/neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py#L112
  and there are more.

  Messages can be divided into 4 kinds:
  1) log messages -- important
  2) exception messages -- important
  3) print messages -- important
  4) option help messages -- needs discussion

  I think all of them need i18n support, because these messages are exposed
  to end users/operators.
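
  A minimal illustration of the convention (gettext.gettext stands in here
  for the project's _ helper; it is not the neutron import path):

  import gettext
  import logging

  _ = gettext.gettext
  LOG = logging.getLogger(__name__)

  port_id = 'a1b2c3'
  LOG.error(_("Failed to create port %s"), port_id)   # log message, translatable
  print(_("Port %s created") % port_id)               # print message, translatable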

  If the neutron-drivers think this is invalid, please change the bug status
  and I will stop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1217100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178273] Re: unable to unset gateway_ip on existing subnet

2014-10-17 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

** Changed in: python-neutronclient
 Milestone: None => 2.2.1-2.2.6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1178273

Title:
  unable to unset gateway_ip on existing subnet

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  A subnet can be configured without a gateway_ip if the gateway_ip is
  set to null when the subnet is initially created. However it is not
  possible to change the gateway_ip to null for an existing subnet.

  Trying to unset a subnet's gateway_ip/set it to null, results in a
  'failed to detect a valid IP address from None' QuantumError.

  This can be easily corrected with the attached diff

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1178273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281083] Re: Firewall policy update should validate rules as list of uuids

2014-10-17 Thread Akihiro Motoki
It was a bug in Neutron and was fixed in Neutron.

** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281083

Title:
  Firewall policy update should validate rules as list of uuids

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Firewall policy update should validate rules as list of uuids,
  otherwise malformed request will result in 500 Internal server error
  returned to the client.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382546] [NEW] Tab bar is unnecessary in detail pages with a single tab

2014-10-17 Thread Akihiro Motoki
Public bug reported:

Tab bar is used even when a detail page has only one tab. It is
unnecessary.

Tab styling is now improved in https://review.openstack.org/#/c/128247/
and once this is merged, the tab bar in a detail page with a single tab
may be a bit surprising.

The red square in the attached image is an example.
(Note that the blue square in the image is a different issue, but it needs to
be improved too.)

** Affects: horizon
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New

** Attachment added: スクリーンショット_2014-10-17_21_32_35.png
   
https://bugs.launchpad.net/bugs/1382546/+attachment/4238950/+files/%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%BC%E3%83%B3%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88_2014-10-17_21_32_35.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382546

Title:
  Tab bar is unnecessary in detail pages with a single tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Tab bar is used even when a detail page has only one tab. It is
  unnecessary.

  Tab styling is now improved in
  https://review.openstack.org/#/c/128247/ and once this is merged, the
  tab bar in a detail page with a single tab may be a bit surprising.

  The red square in the attached image is an example.
  (Note that the blue square in the image is a different issue, but it needs
  to be improved too.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382564] [NEW] memcache servicegroup driver does not log connection issues

2014-10-17 Thread Attila Fazekas
Public bug reported:

servicegroup_driver = mc
memcached_servers = blabla  # blabla does not exist

Neither the n-cpu nor the n-api log indicates any connection issue or gives
any clue that the join was unsuccessful; n-cpu logs the same two DEBUG lines
regardless of success.

The services are reported as down by nova service-list, as expected.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382564

Title:
  memcache servicegroup driver does not logs connection issues

Status in OpenStack Compute (Nova):
  New

Bug description:
  servicegroup_driver = mc
  memcached_servers = blabla  # blabla does not exist

  Neither the n-cpu nor the n-api log indicates any connection issue or gives
  any clue that the join was unsuccessful; n-cpu logs the same two DEBUG
  lines regardless of success.

  The services are reported as down by nova service-list, as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382573] [NEW] Uncaught GreenletExit in ServiceLauncher if wait called after greenlet kill

2014-10-17 Thread Ihar Hrachyshka
Public bug reported:

This is similar to bug 1282206 that fixed the same issue for
ProcessLauncher.

The failure shows up in gate (Icehouse, Juno) as follows:

ft1.1683: 
tests.unit.test_service.ServiceRestartTest.test_service_restart_StringException:
 Traceback (most recent call last):
  File tests/unit/test_service.py, line 252, in test_service_restart
ready = self._spawn_service()
  File tests/unit/test_service.py, line 244, in _spawn_service
launcher.wait(ready_callback=ready_event.set)
  File openstack/common/service.py, line 196, in wait
status, signo = self._wait_for_exit_or_signal(ready_callback)
  File openstack/common/service.py, line 182, in _wait_for_exit_or_signal
self.stop()
  File openstack/common/service.py, line 128, in stop
self.services.stop()
  File openstack/common/service.py, line 479, in stop
self.tg.stop()
  File openstack/common/threadgroup.py, line 125, in stop
self._stop_threads()
  File openstack/common/threadgroup.py, line 98, in _stop_threads
x.stop()
  File openstack/common/threadgroup.py, line 44, in stop
self.thread.kill()
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 238, in kill
return kill(self, *throw_args)
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 292, in kill
g.throw(*throw_args)
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 212, in main
result = function(*args, **kwargs)
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 278, in just_raise
raise greenlet.GreenletExit()
GreenletExit
Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

traceback-1: {{{
Traceback (most recent call last):
  File tests/unit/test_service.py, line 93, in _reap_pid
if self.pid:
AttributeError: 'ServiceRestartTest' object has no attribute 'pid'
}}}

Traceback (most recent call last):
  File tests/unit/test_service.py, line 252, in test_service_restart
ready = self._spawn_service()
  File tests/unit/test_service.py, line 244, in _spawn_service
launcher.wait(ready_callback=ready_event.set)
  File openstack/common/service.py, line 196, in wait
status, signo = self._wait_for_exit_or_signal(ready_callback)
  File openstack/common/service.py, line 182, in _wait_for_exit_or_signal
self.stop()
  File openstack/common/service.py, line 128, in stop
self.services.stop()
  File openstack/common/service.py, line 479, in stop
self.tg.stop()
  File openstack/common/threadgroup.py, line 125, in stop
self._stop_threads()
  File openstack/common/threadgroup.py, line 98, in _stop_threads
x.stop()
  File openstack/common/threadgroup.py, line 44, in stop
self.thread.kill()
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 238, in kill
return kill(self, *throw_args)
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 292, in kill
g.throw(*throw_args)
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 212, in main
result = function(*args, **kwargs)
  File 
/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 278, in just_raise
raise greenlet.GreenletExit()
GreenletExit

Logs: http://logs.openstack.org/82/129182/1/check/gate-oslo-incubator-
python26/002df95/testr_results.html.gz
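
A minimal sketch of the kind of guard that fixed ProcessLauncher in bug
1282206, applied here at the call site; this is not the actual oslo-incubator
patch, and the _safe_wait helper is made up for illustration:

import greenlet

def _safe_wait(launcher, ready_callback=None):
    try:
        return launcher.wait(ready_callback=ready_callback)
    except greenlet.GreenletExit:
        # wait() was entered after the service greenthread had already been
        # killed; treat that as a normal shutdown instead of letting the
        # exception escape to the caller.
        return None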

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382573

Title:
  Uncaught GreenletExit in ServiceLauncher if wait called after greenlet
  kill

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is similar to bug 1282206 that fixed the same issue for
  ProcessLauncher.

  The failure shows up in gate (Icehouse, Juno) as follows:

  ft1.1683: 
tests.unit.test_service.ServiceRestartTest.test_service_restart_StringException:
 Traceback (most recent call last):
File tests/unit/test_service.py, line 252, in test_service_restart
  ready = self._spawn_service()
File tests/unit/test_service.py, line 244, in _spawn_service
  launcher.wait(ready_callback=ready_event.set)
File openstack/common/service.py, line 196, in wait
  status, signo = self._wait_for_exit_or_signal(ready_callback)
File openstack/common/service.py, line 182, in _wait_for_exit_or_signal
  self.stop()
File openstack/common/service.py, line 128, in 

[Yahoo-eng-team] [Bug 1382568] [NEW] get_multi not used for get_all in mc servicegroup driver

2014-10-17 Thread Attila Fazekas
Public bug reported:

The MemcachedDriver get_all method calls is_up for every record, issuing a
request for a single key each time, instead of using the more efficient
get_multi
(https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L1049),
which can retrieve multiple records with a single query.
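
A rough sketch of the suggested optimization, assuming python-memcached and a
made-up key layout ('<topic>:<host>') purely for illustration:

import memcache

def get_all_alive(memcached_servers, hosts, topic='compute'):
    client = memcache.Client(memcached_servers)
    keys = ['%s:%s' % (topic, host) for host in hosts]
    # One round trip for every service in the group instead of one get()
    # per is_up() call.
    found = client.get_multi(keys)
    return dict((host, '%s:%s' % (topic, host) in found) for host in hosts)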

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382568

Title:
  get_multi not used for get_all in mc servicegroup driver

Status in OpenStack Compute (Nova):
  New

Bug description:
  The MemcachedDriver get_all method calls is_up for every record, issuing a
  request for a single key each time, instead of using the more efficient
  get_multi
  (https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L1049),
  which can retrieve multiple records with a single query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381295] Re: Live migration fails when called via RPC API with admin context

2014-10-17 Thread Matt Riedemann
Assuming the null value is for the x-auth-token key, the glanceclient
http connection doesn't enforce that an auth token is set; it just uses
it if provided and puts it in the header:

http://git.openstack.org/cgit/openstack/python-
glanceclient/tree/glanceclient/common/http.py?id=0.14.1#n52

And in the nova.image.glance._create_glance_client code, if
auth_strategy isn't keystone (which it is by default, but you don't have
to use keystone), then the token isn't set on the client connection, and
if you're not using SSL that's how you could get into this state (since
the glanceclient http connection doesn't require an auth token):

http://git.openstack.org/cgit/openstack/nova/tree/nova/image/glance.py?id=2014.2#n150

In this case, are you using auth_strategy=keystone in your nova.conf?  I
guess it doesn't matter since we still have an exposure here, and we
definitely don't test any non-keystone auth strategies in the community
CI.
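
For reference, a minimal sketch of the kind of guard that would avoid the
AttributeError in glanceclient's safe_header, assuming it keeps its current
(name, value) signature; this is not the merged fix:

import hashlib

def safe_header(name, value):
    # Hash only real tokens; hashing a None value is what raises the
    # AttributeError today.
    if name.lower() == 'x-auth-token' and value is not None:
        return name, '{SHA1}%s' % hashlib.sha1(value).hexdigest()
    return name, value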

** Changed in: python-glanceclient
   Status: New => Triaged

** Changed in: python-glanceclient
   Importance: Undecided => High

** Changed in: python-glanceclient
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381295

Title:
  safe_header raises AttributeError if X-Auth-Token is None

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  Triaged

Bug description:
  When trying to live migrate a VM by calling the compute RPC API
  directly (i.e., not via the novaclient) coupled with the elevated
  admin context [1], the destination compute service tries to call
  glance to retrieve the image [2].  However, the destination compute
  service erroneously raises an exception [4].

  This problem was introduced via the following patch:
  https://review.openstack.org/#/c/121692

  It also appears that a similar problem exists within nova too [3].

  #

  [1]
  from nova import compute
  ctxt = context.get_admin_context()
  self.compute_api = compute.API()
  self.compute_api.live_migrate(
  ctxt.elevated(), inst, False, False, host_dict)

  #

  [2]
  def _create_glance_client(context, host, port, use_ssl, version=1):
  """Instantiate a new glanceclient.Client object."""
  params = {}
  if use_ssl:
  scheme = 'https'
  # https specific params
  params['insecure'] = CONF.glance.api_insecure
  params['ssl_compression'] = False
  if CONF.ssl.cert_file:
  params['cert_file'] = CONF.ssl.cert_file
  if CONF.ssl.key_file:
  params['key_file'] = CONF.ssl.key_file
  if CONF.ssl.ca_file:
  params['cacert'] = CONF.ssl.ca_file
  else:
  scheme = 'http'

  if CONF.auth_strategy == 'keystone':
  # NOTE(isethi): Glanceclient >= 0.9.0.49 accepts only
  # keyword 'token', but later versions accept both the
  # header 'X-Auth-Token' and 'token'
  params['token'] = context.auth_token
  params['identity_headers'] = generate_identity_headers(context)
  # ^ would return {'X-Auth-Token': None, }
  if utils.is_valid_ipv6(host):
  # if so, it is ipv6 address, need to wrap it with '[]'
  host = '[%s]' % host
  endpoint = '%s://%s:%s' % (scheme, host, port)
  return glanceclient.Client(str(version), endpoint, **params)
  # ^ params == {'identity_headers': {'X-Auth-Token': None, }, ...}

  #

  [3]
  novaclient.client.py:
  def http_log_req(self, method, url, kwargs):
  if not self.http_log_debug:
  return

  string_parts = ['curl -i']

  if not kwargs.get('verify', True):
  string_parts.append(' --insecure')

  string_parts.append(" '%s'" % url)
  string_parts.append(' -X %s' % method)

  headers = copy.deepcopy(kwargs['headers'])
  self._redact(headers, ['X-Auth-Token'])  # here
  # because dict ordering changes from 2 to 3
  keys = sorted(headers.keys())
  for name in keys:
  value = headers[name]
  header = ' -H "%s: %s"' % (name, value)
  string_parts.append(header)

  if 'data' in kwargs:
  data = json.loads(kwargs['data'])
  self._redact(data, ['auth', 'passwordCredentials', 'password'])
  string_parts.append(" -d '%s'" % json.dumps(data))
  self._logger.debug("REQ: %s" % "".join(string_parts))

  #

  [4]
  2014-10-14 00:42:10.699 31346 INFO nova.compute.manager [-] [instance: 

[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-10-17 Thread Louis Taylor
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Glance:
  Confirmed
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
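
  A toy example (hypothetical, not taken from any of the affected projects) of
  the kind of assertion that only passes with a fixed hash seed:

  def test_keys_in_expected_order(self):
      data = {'b': 2, 'a': 1, 'c': 3}
      # Flaky under a random PYTHONHASHSEED: dict iteration order is not
      # stable across runs.
      self.assertEqual(['a', 'b', 'c'], list(data.keys()))
      # Deterministic alternative: compare sorted keys (or sets) instead.
      self.assertEqual(['a', 'b', 'c'], sorted(data.keys()))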

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382579] Re: check-neutron-dsvm-functional dies with Length too long

2014-10-17 Thread Clark Boylan
Moved this bug to neutron as it appears to be a valid test run failure
with neutron's test suite.

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382579

Title:
  check-neutron-dsvm-functional dies with Length too long

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Could be seen on this job:
  
http://logs.openstack.org/08/129208/1/check/check-neutron-dsvm-functional/650dd00/console.html
  dsvm-functional installdeps: -r/opt/stack/new/neutron/requirements.txt, 
-r/opt/stack/new/neutron/test-requirements.txt
  dsvm-functional develop-inst: /opt/stack/new/neutron
  dsvm-functional runtests: PYTHONHASHSEED='3217326230'
  dsvm-functional runtests: commands[0] | python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest --testr-args=
  running testr
  Length too long: 20339241
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit} --list 
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpRymoqb
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpMeUYgP
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmp6xlXix
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpV42F6X
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmppzphnl
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpgoQS1K
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpDwh5_K
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpdPzTrB
  error: testr failed (3)
  ERROR: InvocationError: 
'/opt/stack/new/neutron/.tox/dsvm-functional/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args='
  ___ summary 

  ERROR:   dsvm-functional: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382585] [NEW] eventlet.wsgi and glance.wsgi logs not separate

2014-10-17 Thread Stuart McLaren
Public bug reported:


The log output from both glance.common.wsgi and eventlet.wsgi has the same
prefix:


2014-10-17 15:01:43.096 3640 INFO glance.wsgi.server [-] Started child 3679
2014-10-17 15:01:43.097 3679 INFO glance.wsgi.server [-] (3679) wsgi starting 
up on http://0.0.0.0:9292/


It would be better to have them separated, e.g.:

2014-10-17 15:02:08.409 3722 INFO glance.common.wsgi [-] Started child 3729
2014-10-17 15:02:08.409 3729 INFO eventlet.wsgi.server [-] (3729) wsgi starting 
up on http://0.0.0.0:9292/
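
A minimal sketch of one way to separate the two streams, assuming the server
is started through eventlet.wsgi.server directly; the LoggerWriter shim below
is made up for illustration:

import logging

import eventlet.wsgi

class LoggerWriter(object):
    # eventlet.wsgi expects a file-like object for its 'log' argument; route
    # its writes to a logger named 'eventlet.wsgi.server' so its lines are
    # distinguishable from glance.common.wsgi output.
    def __init__(self, logger):
        self.logger = logger

    def write(self, msg):
        self.logger.info(msg.rstrip())

def run_server(sock, application):
    log_writer = LoggerWriter(logging.getLogger('eventlet.wsgi.server'))
    eventlet.wsgi.server(sock, application, log=log_writer)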

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1382585

Title:
  eventlet.wsgi and glance.wsgi logs not separate

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  
  The log output from both glance.common.wsgi and eventlet.wsgi has the same
  prefix:

  
  2014-10-17 15:01:43.096 3640 INFO glance.wsgi.server [-] Started child 3679
  2014-10-17 15:01:43.097 3679 INFO glance.wsgi.server [-] (3679) wsgi starting 
up on http://0.0.0.0:9292/

  
  It would be better to have them separated, e.g.:

  2014-10-17 15:02:08.409 3722 INFO glance.common.wsgi [-] Started child 3729
  2014-10-17 15:02:08.409 3729 INFO eventlet.wsgi.server [-] (3729) wsgi 
starting up on http://0.0.0.0:9292/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1382585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382581] Re: ServerRescueNegativeTestJSON:tearDownClass Failed to delete volume XXX within the required time (196 s).

2014-10-17 Thread Clark Boylan
Removed openstack-infra and added cinder, nova, tempest because this
looks like a legit failure to remove a volume from an instance.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382581

Title:
  ServerRescueNegativeTestJSON:tearDownClass Failed to delete volume XXX
  within the required time (196 s).

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  See also http://logs.openstack.org/11/129211/1/check/check-tempest-
  dsvm-postgres-full/22c6043/console.html

  ==
  Failed 1 tests - output below:
  ==

  tearDownClass 
(tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON)
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):
File tempest/test.py, line 293, in tearDownClass
  cls.resource_cleanup()
File tempest/api/compute/servers/test_server_rescue_negative.py, line 
60, in resource_cleanup
  cls.delete_volume(cls.volume['id'])
File tempest/api/compute/base.py, line 355, in delete_volume
  cls._delete_volume(cls.volumes_extensions_client, volume_id)
File tempest/api/compute/base.py, line 295, in _delete_volume
  volumes_client.wait_for_resource_deletion(volume_id)
File tempest/common/rest_client.py, line 578, in 
wait_for_resource_deletion
  raise exceptions.TimeoutException(message)
  TimeoutException: Request timed out
  Details: (ServerRescueNegativeTestJSON:tearDownClass) Failed to delete 
volume 15641df8-eb69-4bf4-a67b-159e922f7739 within the required time (196 s).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1382581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382630] [NEW] access_ip_* not updated on reschedules when they should be

2014-10-17 Thread Chris Behrens
Public bug reported:

For virt drivers that require networks to be reallocated on nova
reschedules, the access_ip_v[4|6] fields on Instance are not updated.

This bug was introduced when the new build_instances path was added.
This new path updates access_ip_* before the instance goes ACTIVE, and
it only updates them when they are not already set. The old path only
updated the access_ip_* fields when the instance went ACTIVE.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382630

Title:
  access_ip_* not updated on reschedules when they should be

Status in OpenStack Compute (Nova):
  New

Bug description:
  For virt drivers that require networks to be reallocated on nova
  reschedules, the access_ip_v[4|6] fields on Instance are not updated.

  This bug was introduced when the new build_instances path was added.
  This new path updates access_ip_* before the instance goes ACTIVE, and
  it only updates them when they are not already set. The old path only
  updated the access_ip_* fields when the instance went ACTIVE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382650] [NEW] context selection advanced not implemented

2014-10-17 Thread David Lyle
Public bug reported:

There is a link that is never enabled in the context selection box. It
was intended to let users with large project lists select from a large
number of projects.

The link should be a redirect to the Identity -> Projects page, which
should allow the user to select the desired project to rescope their
token to, rather than building a redundant view.

** Affects: horizon
 Importance: Medium
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: keystone ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382650

Title:
  context selection advanced not implemented

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is a link that is never enabled in the context selection box. It
  was intended to let users with large project lists select from a large
  number of projects.

  The link should be a redirect to the Identity -> Projects page, which
  should allow the user to select the desired project to rescope their
  token to, rather than building a redundant view.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342961] Re: Exception during message handling: Pool FOO could not be found

2014-10-17 Thread Attila Fazekas
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The $subject style exception appears in both successful and failed jobs.

  message:"Exception during message handling" AND message:"Pool" AND
  message:"could not be found" AND filename:"logs/screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 232, in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 512, 
in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db 
= self._get_resource(context, Pool, pool_id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 218, 
in _get_resource
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
PoolNotFound: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342961] Re: Exception during message handling: Pool FOO could not be found

2014-10-17 Thread Attila Fazekas
Restoring it to New since it still happens very frequently. Tempest is just
using 4 clients in parallel and spends most of its time in sleep.

Is there any way to make neutron handle the load, for example by
increasing the number of workers?
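
For reference, a sketch of the relevant neutron.conf knobs (rpc_workers is
only available on recent releases):

[DEFAULT]
# Number of separate API worker processes spawned by neutron-server.
api_workers = 4
# Number of separate RPC worker processes (Juno and later).
rpc_workers = 2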

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  The $subject style exception appears in both successful and failed jobs.

  message:"Exception during message handling" AND message:"Pool" AND
  message:"could not be found" AND filename:"logs/screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 232, in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 512, 
in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db 
= self._get_resource(context, Pool, pool_id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 218, 
in _get_resource
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
PoolNotFound: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382293] Re: Unable to start nova-compute service in Juno

2014-10-17 Thread Joe Gordon
This sounds like an RPM issue, not an upstream nova issue. Please file a
bug with Fedora.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382293

Title:
  Unable to start nova-compute service in Juno

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  [DEFAULT]
  rabbit_user = guest
  rabbit_port = 5672
  rabbit_host = 10.40.123.146
  rabbit_password = 
  my_ip=x.x.x.x
  host=controller
  verbose=true
  rpc_backend = nova.openstack.common.rpc.impl_kombu
  virt_type = qemu
  vnc_enabled = True
  vncserver_listen = 0.0.0.0
  vncserver_proxyclient_address = x.x.x.x
  novncproxy_base_url = http://x.x.x.x:6080/vnc_auto.html
  [hyperv]
  [zookeeper]
  [osapi_v3]
  [conductor]
  [keymgr]
  [cells]
  [database]
  connection = mysql://nova:NOVA_DBPASS@x.x.x.x/nova
  [image_file_url]
  [baremetal]
  [rpc_notifier2]
  [matchmaker_redis]
  [ssl]
  [trusted_computing]
  [upgrade_levels]
  [matchmaker_ring]
  [vmware]
  [spice]
  [keystone_authtoken]
  auth_uri = http://controller:5000/v2.0
  identity_uri = http://controller:35357
  admin_tenant_name = service
  admin_user = nova
  admin_password = 
  [glance]
  host = x.x.x.x

  
  Log is as below *** 

  2014-10-16 21:45:15.262 6145 INFO nova.openstack.common.periodic_task [-] 
Skipping periodic task _periodic_update_dns because its interval is negative
  2014-10-16 21:45:15.299 6145 INFO nova.virt.driver [-] Loading compute driver 
'libvirt.LibvirtDriver'
  2014-10-16 21:45:15.363 6145 INFO nova.openstack.common.rpc.common 
[req-c5b16578-860c-4a6c-89bb-b73e5eaa8297 None None] Connected to AMQP server 
on 10.40.123.146:5672
  2014-10-16 21:45:15.378 6145 INFO nova.openstack.common.rpc.common 
[req-c5b16578-860c-4a6c-89bb-b73e5eaa8297 None None] Connected to AMQP server 
on 10.40.123.146:5672
  2014-10-16 21:45:15.435 6145 AUDIT nova.service [-] Starting compute node 
(version 2013.2.4-1.fc20)
  2014-10-16 21:45:15.524 6145 ERROR nova.openstack.common.threadgroup [-] 
Remote error: UnsupportedVersion Endpoint does not support RPC version 1.50
  [u'Traceback (most recent call last):\n', u'  File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 134, 
in _dispatch_and_reply\nincoming.message))\n', u'  File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 186, 
in _dispatch\nraise UnsupportedVersion(version)\n', u'UnsupportedVersion: 
Endpoint does not support RPC version 1.50\n'].
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 173, in wait
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/event.py, line 121, in wait
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 293, in switch
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 212, in main
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/service.py, line 66, 
in run_service
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 154, in start
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-10-16 21:45:15.524 6145 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 782, in 
init_host
  2014-10-16 21:45:15.524 

[Yahoo-eng-team] [Bug 1382305] Re: conductor and compute fail to work

2014-10-17 Thread Joe Gordon
This sounds like a support request for a misconfiguration. Please use
https://ask.openstack.org/en/questions/

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382305

Title:
  conductor and compute fail to work

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  OpenStack works normally at the start, but after a while something goes
  wrong with qpid:

  ***the compute log:
  2014-10-17 17:57:46.262 9250 WARNING nova.openstack.common.loopingcall [-] 
task bound method DbDriver._report_state of 
nova.servicegroup.drivers.db.DbDriver object at 0x2d4e2d0 run outlasted 
interval by 110.00 sec
  2014-10-17 17:58:46.263 9250 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager._heal_instance_info_cache: Timed out waiting for a 
reply to message ID c48bd88c14fd4201bc39b0efdaaa43cc
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py, line 
198, in run_periodic_tasks
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 5348, in 
_heal_instance_info_cache
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
context, self.host, expected_attrs=[], use_slave=True)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/nova/objects/base.py, line 153, in wrapper
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
args, kwargs)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/nova/conductor/rpcapi.py, line 341, in 
object_class_action
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
objver=objver, args=args, kwargs=kwargs)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py, line 152, in 
call
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
retry=self.retry)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/oslo/messaging/transport.py, line 90, in 
_send
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
timeout=timeout, retry=retry)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
404, in send
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
retry=retry)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
393, in _send
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
result = self._waiter.wait(msg_id, timeout)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
281, in wait
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
reply, ending = self._poll_connection(msg_id, timeout)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
231, in _poll_connection
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task % 
msg_id)
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
MessagingTimeout: Timed out waiting for a reply to message ID 
c48bd88c14fd4201bc39b0efdaaa43cc
  2014-10-17 17:58:46.263 9250 TRACE nova.openstack.common.periodic_task 
  2014-10-17 17:59:46.281 9250 WARNING nova.openstack.common.loopingcall [-] 
task bound method DbDriver._report_state of 
nova.servicegroup.drivers.db.DbDriver object at 0x2d4e2d0 run outlasted 
interval by 110.02 sec
  2014-10-17 18:00:46.325 9250 WARNING nova.openstack.common.loopingcall [-] 
task bound method DbDriver._report_state of 
nova.servicegroup.drivers.db.DbDriver object at 0x2d4e2d0 run outlasted 
interval by 50.04 sec
  2014-10-17 18:01:46.326 9250 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager._instance_usage_audit: Timed out waiting for a 
reply to message ID c5869d814e724f9086e2ede76b5e5356
  2014-10-17 18:01:46.326 9250 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-10-17 18:01:46.326 9250 TRACE nova.openstack.common.periodic_task   File 

[Yahoo-eng-team] [Bug 1316556] Re: vmware: boot from image (create volume) is failing

2014-10-17 Thread Thang Pham
This is because the time it takes to create the volume is longer than
the default timeout value.  You can increase the timeout value in
/etc/nova/nova.conf to something greater, e.g. 3600.

block_device_allocate_retries = 3600
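
(For reference, block_device_allocate_retries counts attempts rather than
seconds; each attempt is separated by block_device_allocate_retries_interval
seconds, which defaults to 3, so the effective timeout is roughly the product
of the two.)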

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316556

Title:
  vmware: boot from image (create volume) is failing

Status in OpenStack Compute (Nova):
  Invalid
Status in “nova” package in Ubuntu:
  In Progress

Bug description:
  Nova fails to boot an instance from the image using create volume.

  Run time environment details:
    cinder is configured with VMDK driver
    nova is configured with Vmware Vc driver

  While nova is trying to provision an instance by creating a volume
  from the given VMDK image and booting from it, it fails to create the
  instance, even though the volume is created properly after a certain
  amount of time. This failure occurs especially when the volume
  creation takes more than 180 seconds.

  Exception thrown by nova compute is: Volume f36bf0ce-ef0d-4200-b15b-
  cf2de3689bbb did not finish being created even after we waited 221
  seconds or 180 attempts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382562] Re: security groups remote_group fails with CIDR in address pairs

2014-10-17 Thread Jeremy Stanley
Thanks Kevin. In that case I've tagged it as a security hardening
opportunity (removes a foot-cannon), and switched the advisory task to
won't-fix.

** Information type changed from Public Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Tags added: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382562

Title:
  security groups remote_group fails with CIDR in address pairs

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Add a CIDR to the allowed address pairs of a host. RPC calls from the
  agents will now run into this issue when retrieving the security group
  members' IPs. I haven't confirmed it because I came across this while
  working on other code, but I think this may stop all members of the
  security groups referencing that group from getting their rules over
  the RPC channel.

  
File neutron/api/rpc/handlers/securitygroups_rpc.py, line 75, in 
security_group_info_for_devices
  return self.plugin.security_group_info_for_ports(context, ports)
File neutron/db/securitygroups_rpc_base.py, line 202, in 
security_group_info_for_ports
  return self._get_security_group_member_ips(context, sg_info)
File neutron/db/securitygroups_rpc_base.py, line 209, in 
_get_security_group_member_ips
  ethertype = 'IPv%d' % netaddr.IPAddress(ip).version
File 
/home/administrator/code/neutron/.tox/py27/local/lib/python2.7/site-packages/netaddr/ip/__init__.py,
 line 281, in __init__
  % self.__class__.__name__)
  ValueError: IPAddress() does not support netmasks or subnet prefixes! See 
documentation for details.
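
  A minimal sketch of a guard for that call site (hypothetical, not the merged
  fix), handling both plain addresses and the CIDRs that allowed address pairs
  may now contain:

  import netaddr

  def _ip_version(ip_or_cidr):
      # IPNetwork accepts both '10.0.0.1' and '10.0.0.0/24', so it is safe
      # for member entries that carry a prefix length, unlike IPAddress().
      return 'IPv%d' % netaddr.IPNetwork(ip_or_cidr).version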

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353664] Re: Translation settings under usersettings page does not work

2014-10-17 Thread liaonanhai
** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1353664

Title:
  Translation settings under usersettings page does not work

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Translation settings on the user settings page do not work for the
  following languages: hi, sr, zh-tw. After choosing such a language,
  website strings fall back to English.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1353664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354996] Re: when there is no default route entry in the router namespace, vpnaas does not work!

2014-10-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354996

Title:
  when there is no default route entry in the router namespace, vpnaas
  does not work!

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Hi,
  in my situation there are two subnets on different OpenStack platforms,
  and I want to connect them by using vpnaas.
  
10.0.1.0/24===192.0.100.15192.0.100.15[+S=C]...192.0.100.20---192.0.100.20192.0.100.20[+S=C]===20.0.2.0/24
  Since I created the external network with the --no-gateway option, the
  routing tables in the routers' namespaces are like below:
 router on openstack1: 
 10.0.1.0/24 dev qr-6ed9ea58-dd  proto kernel  scope link  src 10.0.1.1
 192.0.100.0/24 dev qg-d2d9942f-4d  proto kernel  scope link  src 
192.0.100.15

  router on openstack2
 192.0.100.0/24 dev qg-fd0f7863-40  proto kernel  scope link  src 
192.0.100.20
 20.0.2.0/24 dev qr-ce203452-50  proto kernel  scope link  src 20.0.2.1

  When the traffic from subnet 10.0.1.0/24 has 20.0.2.0/24 as its
  destination, there is no matching routing entry, so the traffic will be
  dropped and won't be forwarded through the VPN tunnel. So I think a
  static default route entry like "default dev qg-d2d9942f-4d scope link"
  should be added, even though the external network has no gateway.
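
  For illustration, the workaround amounts to adding an on-link default route
  inside the router's namespace (the qrouter namespace name is abbreviated
  here; the interface name is taken from the example above):

  # run on the node hosting the router namespace
  ip netns exec qrouter-<router-id> ip route add default dev qg-d2d9942f-4d scope link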

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308984] Re: Floating IP addresses ordered in a weird way

2014-10-17 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1308984

Title:
  Floating IP addresses ordered in a weird way

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The floating IPs are ordered according to UUID instead of IP; more
  information in the patch.

  ---
  commit 83a10bf02a5079513741039860208e277e1d12e4
  Author: Ian Kumlien ian.kuml...@gmail.com
  Date:   Thu Apr 17 13:49:32 2014 +0200

  Sorting floating IPs according to IP.
  
  While using a lot of manually allocated floating IPs we wondered why
  the IP list wasn't sorted. While looking at it we found that the UI
  actually does sort the IPs, but according to the UUID instead of the
  actual IP address.
  
  This change fixes this so that it's sorted according to IP.
  
  Found-By: Marko Bocevski marko.bocev...@gmail.com

  diff --git 
a/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py
 
b/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py
  index c4ebbd1..d884dee 100644
  --- 
a/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py
  +++ 
b/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py
  @@ -69,7 +69,7 @@ class AssociateIPAction(workflows.Action):
   exceptions.handle(self.request,
 _('Unable to retrieve floating IP addresses.'),
 redirect=redirect)
  -options = sorted([(ip.id, ip.ip) for ip in ips if not ip.port_id])
  +options = sorted([(ip.ip, ip.ip) for ip in ips if not ip.port_id])
   if options:
   options.insert(0, ("", _("Select an IP address")))
   else:
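
  Worth noting: sorting on the dotted-quad string is lexicographic, so e.g.
  10.0.0.10 sorts before 10.0.0.2. A numeric key avoids that; a sketch (not
  part of the submitted patch), assuming netaddr is available to Horizon:

  import netaddr

  options = sorted(((ip.ip, ip.ip) for ip in ips if not ip.port_id),
                   key=lambda opt: netaddr.IPAddress(opt[0]))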

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1308984/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp