[Yahoo-eng-team] [Bug 1445863] [NEW] Unable to pass the parameter hostname to nova-api, when creating an instance.

2015-04-18 Thread javeme
Public bug reported:

When we create an instance, there is no way to pass a hostname parameter to
nova-api.
Currently the display_name is used as the hostname[1], but this is clearly not
good practice because the two are independent. In addition, a hostname must
conform to the RFC 952 and RFC 1123 specifications, while a display name need not.
So we need to accept a hostname through the REST API and set it on the instance.
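For illustration, a minimal RFC 952/1123 validity check (a sketch; `is_valid_hostname` is a hypothetical helper, not nova code) shows why an arbitrary display name cannot double as a hostname:

```python
import re

# RFC 952 / RFC 1123: labels of letters, digits and hyphens, not starting
# or ending with a hyphen, each label at most 63 characters.
_LABEL_RE = re.compile(r'^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$')


def is_valid_hostname(name):
    """Return True if name satisfies RFC 952/1123 (hypothetical helper)."""
    if not name or len(name) > 255:
        return False
    return all(_LABEL_RE.match(label) for label in name.split('.'))
```

A display name such as 'my server!' is perfectly legal, but it fails this check, while 'web-01.example.com' passes; hence the two fields should be independent.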

change method API.create()  [nova/compute/api.py]
def create(self, context, instance_type,
   image_href, kernel_id=None, ramdisk_id=None,
   min_count=None, max_count=None,
   display_name=None, display_description=None,
   key_name=None, key_data=None, security_group=None,
   availability_zone=None, user_data=None, metadata=None,
   injected_files=None, admin_password=None,
   block_device_mapping=None, access_ip_v4=None,
   access_ip_v6=None, requested_networks=None, config_drive=None,
   auto_disk_config=None, scheduler_hints=None, legacy_bdm=True,
   shutdown_terminate=False, check_server_group_quota=False)
into:
def create(self, context, instance_type,
   image_href, kernel_id=None, ramdisk_id=None,
   min_count=None, max_count=None,
   display_name=None, display_description=None,
   key_name=None, key_data=None, security_group=None,
   availability_zone=None, user_data=None, metadata=None,
   injected_files=None, admin_password=None,
   block_device_mapping=None, access_ip_v4=None,
   access_ip_v6=None, requested_networks=None, config_drive=None,
   auto_disk_config=None, scheduler_hints=None, legacy_bdm=True,
   shutdown_terminate=False, check_server_group_quota=False,
   hostname=None)

P.S.
[1] nova/compute/api.py, class API, method _populate_instance_names():
def _populate_instance_names(self, instance, num_instances):
    """Populate instance display_name and hostname."""
    display_name = instance.get('display_name')
    if instance.obj_attr_is_set('hostname'):
        hostname = instance.get('hostname')
    else:
        hostname = None

    if display_name is None:
        display_name = self._default_display_name(instance.uuid)
        instance.display_name = display_name

    if hostname is None and num_instances == 1:
        # NOTE(russellb) In the multi-instance case, we're going to
        # overwrite the display_name using the
        # multi_instance_display_name_template.  We need the default
        # display_name set so that it can be used in the template, though.
        # Only set the hostname here if we're only creating one instance.
        # Otherwise, it will be built after the template based
        # display_name.
        hostname = display_name
        instance.hostname = utils.sanitize_hostname(hostname)
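The proposed precedence can be sketched in isolation: fall back to display_name only when no hostname was requested. The sanitizer below is a rough stand-in for nova's utils.sanitize_hostname(), not its actual implementation:

```python
import re


def sanitize_hostname(hostname):
    """Rough stand-in for nova's utils.sanitize_hostname()."""
    hostname = re.sub(r'[ _]', '-', hostname)       # spaces/underscores -> hyphens
    hostname = re.sub(r'[^\w.-]+', '', hostname)    # drop anything else illegal
    return hostname.strip('.-').lower()


def choose_hostname(requested_hostname, display_name):
    """Prefer an explicitly requested hostname over the display name."""
    return sanitize_hostname(requested_hostname or display_name)
```

So choose_hostname(None, 'My Server') degrades gracefully to 'my-server', while an explicit request such as choose_hostname('db01', 'My Server') is honored as 'db01'.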

** Affects: nova
 Importance: Undecided
 Assignee: javeme (javaloveme)
 Status: New


[Yahoo-eng-team] [Bug 1445858] [NEW] [VMware]Using ShutdownGuest instead of PowerOffVM_Task, if vmtools was installed.

2015-04-18 Thread javeme
Public bug reported:

PowerOffVM_Task is a dangerous operation, while ShutdownGuest is safer.
Use ShutdownGuest instead of PowerOffVM_Task when VMware Tools is installed.

now:
def power_off_instance(session, instance, vm_ref=None):
    """Power off the specified instance."""

    if vm_ref is None:
        vm_ref = get_vm_ref(session, instance)

    LOG.debug("Powering off the VM", instance=instance)
    try:
        poweroff_task = session._call_method(session.vim,
                                             "PowerOffVM_Task", vm_ref)
        session._wait_for_task(poweroff_task)
        LOG.debug("Powered off the VM", instance=instance)
    except vexc.InvalidPowerStateException:
        LOG.debug("VM already powered off", instance=instance)
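The proposed behavior could look like the following sketch: attempt a guest shutdown when VMware Tools is running, and fall back to a hard power-off. The session/helper names mirror the snippet above, but the tools check and the fallback policy are assumptions, not the actual driver code:

```python
def power_off_instance(session, vm_ref, tools_running=False):
    """Soft-shutdown sketch: prefer ShutdownGuest, fall back to hard off.

    tools_running would come from the VM's guest tools status property
    in a real driver (an assumption for this sketch).
    """
    if tools_running:
        try:
            # ShutdownGuest is not a task; the guest OS shuts itself down.
            session._call_method(session.vim, "ShutdownGuest", vm_ref)
            return "soft"
        except Exception:
            # Tools misbehaving: fall through to the hard power-off.
            pass
    task = session._call_method(session.vim, "PowerOffVM_Task", vm_ref)
    session._wait_for_task(task)
    return "hard"
```

A real implementation would also need to wait for the guest to actually reach the powered-off state (ShutdownGuest returns immediately), with a timeout before falling back to PowerOffVM_Task.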

** Affects: nova
 Importance: Undecided
 Assignee: javeme (javaloveme)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445858

Title:
  [VMware]Using ShutdownGuest instead of PowerOffVM_Task, if vmtools was
  installed.

Status in OpenStack Compute (Nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364328] Re: Horizon Ceilometer throws 'ValueError' message, if the user specifies alphanumeric characters for generating the Usage report

2015-04-18 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1364328

Title:
  Horizon Ceilometer throws 'ValueError' message, if the user specifies
  alphanumeric characters for generating the Usage report

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  In 'Resource Usage Overview' summary specify the alphanumeric
  characters in 'Limit Project Count' field and click on Generate
  Report.

  A 'ValueError' is thrown on the metering page.
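The crash pattern is an unguarded int() conversion of the 'Limit Project Count' input; a minimal sketch of a guarded parse (function name and default are hypothetical, not Horizon code):

```python
def parse_project_count(raw, default=20):
    """Return a positive project count, or default on bad input.

    An unguarded int(raw) is exactly what raises ValueError when the
    user types alphanumeric text such as '10a'.
    """
    try:
        count = int(raw)
    except (TypeError, ValueError):
        return default
    return count if count > 0 else default
```

In Horizon the equivalent guard would normally live in a Django form field (e.g. an IntegerField with min_value), so invalid input produces a validation message instead of a 500.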

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1364328/+subscriptions



[Yahoo-eng-team] [Bug 1441386] Re: keystone-manage domain_config_upload command yield "'CacheRegion' object has no attribute 'expiration_time'"

2015-04-18 Thread Morgan Fainberg
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
Milestone: None => kilo-rc2

** Changed in: keystone/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441386

Title:
  keystone-manage domain_config_upload command yield "'CacheRegion'
  object has no attribute 'expiration_time'"

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  New

Bug description:
  Steps to reproduce the error:

  1. Install devstack
  2. enable domain-specific driver feature
  
   domain_specific_drivers_enabled=true
   domain_config_dir=/etc/keystone/domains

  3. create a domain-specific conf file in /etc/keystone/domains/ (e.g. /etc/keystone/domains/keystone.acme.conf)
  4. run 'keystone-manage domain_config_upload --domain-name acme' and you'll 
see a traceback similar to this

  keystone-manage domain_config_upload --domain-name acme
  4959 DEBUG keystone.notifications [-] Callback: `keystone.identity.core.Manager._domain_deleted` subscribed to event `identity.domain.deleted`. register_event_callback /opt/stack/keystone/keystone/notifications.py:292
  4959 DEBUG oslo_db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py:509
  4959 CRITICAL keystone [-] AttributeError: 'CacheRegion' object has no attribute 'expiration_time'
  4959 TRACE keystone Traceback (most recent call last):
  4959 TRACE keystone   File "/usr/local/bin/keystone-manage", line 6, in <module>
  4959 TRACE keystone     exec(compile(open(__file__).read(), __file__, 'exec'))
  4959 TRACE keystone   File "/opt/stack/keystone/bin/keystone-manage", line 44, in <module>
  4959 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  4959 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 600, in main
  4959 TRACE keystone     CONF.command.cmd_class.main()
  4959 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 543, in main
  4959 TRACE keystone     status = dcu.run()
  4959 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 513, in run
  4959 TRACE keystone     self.read_domain_configs_from_files()
  4959 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 481, in read_domain_configs_from_files
  4959 TRACE keystone     os.path.join(conf_dir, fname), domain_name)
  4959 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 399, in upload_config_to_database
  4959 TRACE keystone     self.resource_manager.get_domain_by_name(domain_name))
  4959 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1040, in decorate
  4959 TRACE keystone     should_cache_fn)
  4959 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/dogpile/cache/region.py", line 629, in get_or_create
  4959 TRACE keystone     expiration_time = self.expiration_time
  4959 TRACE keystone AttributeError: 'CacheRegion' object has no attribute 'expiration_time'
  4959 TRACE keystone
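The attribute error means the dogpile CacheRegion was used before anyone called configure() on it; expiration_time only exists after configuration. A pure-Python stand-in (not keystone or dogpile code) that reproduces the failure mode:

```python
class CacheRegion(object):
    """Minimal stand-in for dogpile.cache.region.CacheRegion."""

    def configure(self, expiration_time=600):
        # dogpile only creates this attribute during configure().
        self.expiration_time = expiration_time
        return self

    def get_or_create(self, key, creator):
        _ = self.expiration_time  # AttributeError if never configured
        return creator()


region = CacheRegion()
try:
    region.get_or_create('domain', lambda: 'acme')   # unconfigured: blows up
except AttributeError as exc:
    print(exc)  # 'CacheRegion' object has no attribute 'expiration_time'

region.configure(expiration_time=600)
print(region.get_or_create('domain', lambda: 'acme'))
```

The keystone-manage code path evidently creates the region but skips the configuration step that the WSGI startup path performs.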

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441386/+subscriptions



[Yahoo-eng-team] [Bug 1440493] Re: Crash with python-memcached==1.5.4

2015-04-18 Thread Morgan Fainberg
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone
Milestone: None => liberty-1

** Also affects: keystone/liberty
   Importance: High
 Assignee: Alexander Makarov (amakarov)
   Status: Fix Committed

** Changed in: keystone/kilo
Milestone: None => kilo-rc2

** Changed in: keystone/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1440493

Title:
  Crash with python-memcached==1.5.4

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  New
Status in Keystone liberty series:
  Fix Committed
Status in OpenStack Identity  (Keystone) Middleware:
  New

Bug description:
  There's some magic going on at line:
  
https://github.com/openstack/keystone/blob/2014.2.2/keystone/common/cache/_memcache_pool.py#L46

  This magic is broken because python-memcached added a super(...) initialization at
  https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L218
  https://github.com/linsomniac/python-memcached/commit/45403325e0249ff0f61d6ae449a7daeeb7e852e5

  Due to this change, keystone can no longer work with the latest
  python-memcached version:

  Traceback (most recent call last):
    File "keystone/common/wsgi.py", line 223, in __call__
      result = method(context, **params)
    File "keystone/identity/controllers.py", line 76, in create_user
      self.assignment_api.get_project(default_project_id)
    File "dogpile/cache/region.py", line 1040, in decorate
      should_cache_fn)
    File "dogpile/cache/region.py", line 651, in get_or_create
      async_creator) as value:
    File "dogpile/core/dogpile.py", line 158, in __enter__
      return self._enter()
    File "dogpile/core/dogpile.py", line 91, in _enter
      value = value_fn()
    File "dogpile/cache/region.py", line 604, in get_value
      value = self.backend.get(key)
    File "dogpile/cache/backends/memcached.py", line 149, in get
      value = self.client.get(key)
    File "keystone/common/cache/backends/memcache_pool.py", line 35, in _run_method
      with self.client_pool.acquire() as client:
    File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "keystone/common/cache/_memcache_pool.py", line 97, in acquire
      conn = self.get(timeout=self._connection_get_timeout)
    File "eventlet/queue.py", line 293, in get
      return self._get()
    File "keystone/common/cache/_memcache_pool.py", line 155, in _get
      conn = ConnectionPool._get(self)
    File "keystone/common/cache/_memcache_pool.py", line 120, in _get
      conn = self._create_connection()
    File "keystone/common/cache/_memcache_pool.py", line 149, in _create_connection
      return _MemcacheClient(self.urls, **self._arguments)
    File "memcache.py", line 228, in __init__
      super(Client, self).__init__()
  TypeError: super(type, obj): obj must be an instance or subtype of type
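The pitfall is reproducible without memcached at all: keystone's "magic" builds _MemcacheClient by reusing Client's methods in a class that does not actually inherit from Client, so the explicit super(Client, self) call added in python-memcached 1.5.4 receives an object that is not a Client instance. A standalone sketch:

```python
class Base(object):
    def __init__(self):
        # The kind of explicit super() call python-memcached 1.5.4 added.
        super(Base, self).__init__()


# "Magic" clone: reuse Base's __init__ in a class that does NOT inherit Base,
# analogous to rebuilding memcache.Client without its original bases.
Clone = type('Clone', (object,), {'__init__': Base.__dict__['__init__']})

Base()     # fine: self really is a Base
try:
    Clone()
except TypeError as exc:
    print(exc)  # super(type, obj): obj must be an instance or subtype of type
```

The usual fix is to make the rebuilt class a genuine subclass so the super() chain resolves, rather than copying the parent's namespace.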

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1440493/+subscriptions



[Yahoo-eng-team] [Bug 1441827] Re: Cannot set per protocol remote_id_attribute

2015-04-18 Thread Morgan Fainberg
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
Milestone: None => kilo-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441827

Title:
  Cannot set per protocol remote_id_attribute

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  New

Bug description:
  Set up Federation with SSSD.  It worked OK with

  [federation]
  remote_id_attribute=

  but not with

  [kerberos]
  remote_id_attribute=

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441827/+subscriptions



[Yahoo-eng-team] [Bug 1445830] [NEW] unit test in IpsetManagerTestCaseHashArgs fails

2015-04-18 Thread Thomas Goirand
Public bug reported:

Hi,

When building Neutron Kilo RC1, there is a single unit test failure
(which is not bad, but it would be super nice to get that last one fixed
for the final release). Below is the trace. The full build log is
available here:

https://kilo-jessie.pkgs.mirantis.com/job/neutron/29/consoleFull

If you wish to rebuild the package yourself in Jessie, here are the
instructions:
http://openstack.alioth.debian.org

FAIL: 
neutron.tests.unit.agent.linux.test_ipset_manager.IpsetManagerTestCaseHashArgs.test_set_members_adding_more_than_5
--
_StringException: Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ipset_manager.py", line 135, in test_set_members_adding_more_than_5
    self.verify_mock_calls()
  File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ipset_manager.py", line 43, in verify_mock_calls
    self.execute.assert_has_calls(self.expected_calls, any_order=False)
  File "/usr/lib/python2.7/dist-packages/mock.py", line 872, in assert_has_calls
    'Actual: %r' % (calls, self.mock_calls)
AssertionError: Calls not found.
AssertionError: Calls not found.
Expected: [call(['ipset', 'create', '-exist', 'IPv4fake_sgid', 'hash:ip', 
'family', 'inet', 'hashsize', '2048', 'maxelem', '131072'], run_as_root=True, 
process_input=None), call(['ipset', 'restore', '-exist'], run_as_root=True, 
process_input='create IPv4fake_sgid-new hash:ip family inet hashsize 2048 
maxelem 131072\nadd IPv4fake_sgid-new 10.0.0.1'), call(['ipset', 'swap', 
'IPv4fake_sgid-new', 'IPv4fake_sgid'], run_as_root=True, process_input=None), 
call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None), call(['ipset', 'restore', '-exist'], run_as_root=True, 
process_input='create IPv4fake_sgid-new hash:ip family inet hashsize 2048 
maxelem 131072\nadd IPv4fake_sgid-new 10.0.0.1\nadd IPv4fake_sgid-new 
10.0.0.2\nadd IPv4fake_sgid-new 10.0.0.3'), call(['ipset', 'swap', 
'IPv4fake_sgid-new', 'IPv4fake_sgid'], run_as_root=True, process_input=None), 
call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None)]
Actual: [call(['ipset', 'create', '-exist', 'IPv4fake_sgid', 'hash:ip', 
'family', 'inet', 'hashsize', '2048', 'maxelem', '131072'], run_as_root=True, 
process_input=None),
 call(['ipset', 'restore', '-exist'], run_as_root=True, process_input='create 
IPv4fake_sgid-new hash:ip family inet hashsize 2048 maxelem 131072\nadd 
IPv4fake_sgid-new 10.0.0.1'),
 call(['ipset', 'swap', 'IPv4fake_sgid-new', 'IPv4fake_sgid'], 
run_as_root=True, process_input=None),
 call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None),
 call(['ipset', 'add', '-exist', 'IPv4fake_sgid', '10.0.0.3'], 
run_as_root=True, process_input=None),
 call(['ipset', 'add', '-exist', 'IPv4fake_sgid', '10.0.0.2'], 
run_as_root=True, process_input=None)]

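For context, the diverging call sequences reflect a size-based strategy: a few changed members are applied with individual `ipset add` calls, while larger sets are rebuilt via `ipset restore` and swapped in atomically. A generic sketch of such a planner (threshold, set name, and command shapes are illustrative, not neutron's actual constants):

```python
def plan_update(set_name, current, desired, bulk_threshold=5):
    """Pick per-member 'add' calls vs a bulk 'restore' + 'swap' rebuild.

    bulk_threshold is assumed from the test name
    (test_set_members_adding_more_than_5); neutron's real constant and
    exact condition may differ.
    """
    to_add = sorted(set(desired) - set(current))
    if len(desired) > bulk_threshold:
        # Rebuild a shadow set via `ipset restore`, then swap it in
        # atomically and destroy the leftover shadow set.
        return [['ipset', 'restore', '-exist'],
                ['ipset', 'swap', '%s-new' % set_name, set_name],
                ['ipset', 'destroy', '%s-new' % set_name]]
    return [['ipset', 'add', '-exist', set_name, ip] for ip in to_add]
```

The failing assertion above is exactly this fork: the test expected the restore/swap sequence but the code under test emitted individual add calls.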

[Yahoo-eng-team] [Bug 1445827] [NEW] unit test failures: Glance insist on ordereddict

2015-04-18 Thread Thomas Goirand
Public bug reported:

There's no python-ordereddict package anymore in Debian, as OrderedDict
is included in Python 2.7's standard library. I have therefore patched
requirements.txt to remove ordereddict. However, even after this, I get
some bad unit test errors about it. This must be fixed upstream, because
there's no way (modern) downstream distributions can fix it (as the
ordereddict Python package will *not* come back).

Below are the tracebacks for the five failed unit tests.

FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_api_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 143, in test_list_api_opts
    expected_opt_groups, expected_opt_names)
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in _test_entry_point
    list_fn = ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
    self.require(env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in require
    items = working_set.resolve(reqs, env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in resolve
    raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_cache_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 288, in test_list_cache_opts
    expected_opt_groups, expected_opt_names)
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in _test_entry_point
    list_fn = ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
    self.require(env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in require
    items = working_set.resolve(reqs, env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in resolve
    raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_manage_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 301, in test_list_manage_opts
    expected_opt_groups, expected_opt_names)
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in _test_entry_point
    list_fn = ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
    self.require(env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in require
    items = working_set.resolve(reqs, env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in resolve
    raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_registry_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 192, in test_list_registry_opts
    expected_opt_groups, expected_opt_names)
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in _test_entry_point
    list_fn = ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
    self.require(env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in require
    items = working_set.resolve(reqs, env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in resolve
    raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_scrubber_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 241, in test_list_scrubber_opts
    expected_opt_groups, expected_opt_names)
  File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in _test_entry_point
    list_fn = ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
    self.require(env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in require
    items = working_set.resolve(reqs, env, ins
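The usual upstream remedy for these failures is a conditional import, so the external ordereddict distribution is only required on Python < 2.7; a sketch:

```python
try:
    # Python >= 2.7: OrderedDict ships in the standard library.
    from collections import OrderedDict
except ImportError:
    # Python 2.6 fallback only; the external package is gone from
    # modern distributions such as Debian Jessie.
    from ordereddict import OrderedDict

d = OrderedDict([('b', 2), ('a', 1)])
print(list(d))  # ['b', 'a'] -- insertion order preserved
```

The entry-point failures above would also need ordereddict dropped from the egg's declared requirements, since pkg_resources checks those independently of what the code imports.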

[Yahoo-eng-team] [Bug 1441393] Re: Keystone and Ceilometer unit tests fail with pymongo 3.0

2015-04-18 Thread Thierry Carrez
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
Milestone: None => kilo-rc2

** Changed in: keystone/kilo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441393

Title:
  Keystone and Ceilometer unit tests fail with pymongo 3.0

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer icehouse series:
  Invalid
Status in Ceilometer juno series:
  Invalid
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in Keystone juno series:
  Fix Released
Status in Keystone kilo series:
  In Progress
Status in OpenStack Messaging and Notifications Service (Zaqar):
  New

Bug description:
  
  pymongo 3.0 was released 2015-04-07. This causes keystone tests to fail:

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 357, in test_correct_read_preference
      region.set(random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 345, in test_incorrect_read_preference
      random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 168, in client
      self.api.get_cache_collection()
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1441393/+subscriptions



[Yahoo-eng-team] [Bug 1445728] [NEW] [Launch Instance Fix] Scroll doesn't track with collapse expand transfer tables

2015-04-18 Thread Travis Tripp
Public bug reported:

When I collapse/expand a transfer table row, it often pushes most of
the content out of the bottom of the page.  The page doesn't scroll with
it, making for a very unpleasant experience when collapsing or expanding
more than a couple of rows.  It would be better if the scroll tracked
the collapse/expand.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ng-subteam

** Tags added: ng-subteam

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1445728

Title:
  [Launch Instance Fix] Scroll doesn't track with collapse expand
  transfer tables

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1445728/+subscriptions



[Yahoo-eng-team] [Bug 1445729] [NEW] [Launch Instance Fix] Fields in error need more obvious outline around them

2015-04-18 Thread Travis Tripp
Public bug reported:

During UX review, feedback was given that fields with errors should have
more obvious line / glow around them.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ng-subteam

** Tags added: ng-subteam

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1445729

Title:
  [Launch Instance Fix] Fields in error need more obvious outline around
  them

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1445729/+subscriptions
