[Yahoo-eng-team] [Bug 1462577] [NEW] Admin --> Metadata Definition --> Update Associations Action Javascript console error.

2015-06-05 Thread Travis Tripp
Public bug reported:


Error: [ngRepeat:dupes] Duplicates in a repeater are not allowed. Use 'track
by' expression to specify unique keys. Repeater: resource_type in
resource_types | filter:searchResource, Duplicate key: string:e, Duplicate
value: e
http://errors.angularjs.org/1.3.7/ngRepeat/dupes?p0=resource_type%20in%20resource_types%20%20%7C%20filter%3AsearchResource&p1=string%3Ae&p2=e
at REGEX_STRING_REGEXP (angular.js:63)
at ngRepeatAction (angular.js:24474)
at Object.$watchCollectionAction [as fn] (angular.js:14090)
at Scope.$get.Scope.$digest (angular.js:14223)
at Scope.$get.Scope.$apply (angular.js:14486)
at HTMLInputElement. (angular.js:22945)
at HTMLInputElement.jQuery.event.dispatch (jquery.js:5095)
at HTMLInputElement.jQuery.event.add.elemData.handle (jquery.js:4766)
(anonymous function) @ angular.js:11592
$get @ angular.js:8542
$get.Scope.$digest @ angular.js:14241
$get.Scope.$apply @ angular.js:14486
(anonymous function) @ angular.js:22945
jQuery.event.dispatch @ jquery.js:5095
jQuery.event.add.elemData.handle @ jquery.js:4766

jquery.js:7093 GET http://localhost:8005/static/dashboard/css/None 404 (NOT FOUND)
curCSS @ jquery.js:7093
jQuery.extend.cssHooks.opacity.get @ jquery.js:6952
jQuery.extend.css @ jquery.js:7059
Tween.propHooks._default.get @ jquery.js:9253
Tween.cur @ jquery.js:9209
Tween.init @ jquery.js:9200
Tween @ jquery.js:9189
deferred.promise.createTween @ jquery.js:8936
tweeners.* @ jquery.js:8821
createTween @ jquery.js:8884
defaultPrefilter @ jquery.js:9175
Animation @ jquery.js:8969
jQuery.fn.extend.animate.doAnimation @ jquery.js:9305
jQuery.extend.dequeue @ jquery.js:3948
(anonymous function) @ jquery.js:3991
jQuery.extend.each @ jquery.js:657
jQuery.fn.jQuery.each @ jquery.js:266
jQuery.fn.extend.queue @ jquery.js:3984
jQuery.fn.extend.animate @ jquery.js:9316
jQuery.each.jQuery.fn.(anonymous function) @ jquery.js:9442
(anonymous function) @ horizon.tables_inline_edit.js:243
jQuery.each.jQuery.event.special.(anonymous function).handle @ jquery.js:5460
jQuery.event.dispatch @ jquery.js:5095
jQuery.event.add.elemData.handle @ jquery.js:4766
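
The ngRepeat:dupes failure above is AngularJS refusing to repeat over a collection that contains duplicate values. As the error text itself suggests, adding a 'track by' expression gives each repeated item a unique key. A hypothetical sketch of the fix (the actual Horizon template markup may differ) would be:

```html
<!-- hypothetical sketch; tracking by $index tolerates duplicate values -->
<li ng-repeat="resource_type in resource_types | filter:searchResource track by $index">
  {{ resource_type }}
</li>
```

Note that 'track by' must come last in the repeat expression, after any filters.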

** Affects: horizon
 Importance: Undecided
 Assignee: Travis Tripp (travis-tripp)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462577

Title:
  Admin --> Metadata Definition --> Update Associations Action
  Javascript console error.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:

  Error: [ngRepeat:dupes] Duplicates in a repeater are not allowed. Use 'track
  by' expression to specify unique keys. Repeater: resource_type in
  resource_types | filter:searchResource, Duplicate key: string:e, Duplicate
  value: e
  http://errors.angularjs.org/1.3.7/ngRepeat/dupes?p0=resource_type%20in%20resource_types%20%20%7C%20filter%3AsearchResource&p1=string%3Ae&p2=e
  at REGEX_STRING_REGEXP (angular.js:63)
  at ngRepeatAction (angular.js:24474)
  at Object.$watchCollectionAction [as fn] (angular.js:14090)
  at Scope.$get.Scope.$digest (angular.js:14223)
  at Scope.$get.Scope.$apply (angular.js:14486)
  at HTMLInputElement. (angular.js:22945)
  at HTMLInputElement.jQuery.event.dispatch (jquery.js:5095)
  at HTMLInputElement.jQuery.event.add.elemData.handle (jquery.js:4766)
  (anonymous function) @ angular.js:11592
  $get @ angular.js:8542
  $get.Scope.$digest @ angular.js:14241
  $get.Scope.$apply @ angular.js:14486
  (anonymous function) @ angular.js:22945
  jQuery.event.dispatch @ jquery.js:5095
  jQuery.event.add.elemData.handle @ jquery.js:4766

  jquery.js:7093 GET http://localhost:8005/static/dashboard/css/None 404 (NOT FOUND)
  curCSS @ jquery.js:7093
  jQuery.extend.cssHooks.opacity.get @ jquery.js:6952
  jQuery.extend.css @ jquery.js:7059
  Tween.propHooks._default.get @ jquery.js:9253
  Tween.cur @ jquery.js:9209
  Tween.init @ jquery.js:9200
  Tween @ jquery.js:9189
  deferred.promise.createTween @ jquery.js:8936
  tweeners.* @ jquery.js:8821
  createTween @ jquery.js:8884
  defaultPrefilter @ jquery.js:9175
  Animation @ jquery.js:8969
  jQuery.fn.extend.animate.doAnimation @ jquery.js:9305
  jQuery.extend.dequeue @ jquery.js:3948
  (anonymous function) @ jquery.js:3991
  jQuery.extend.each @ jquery.js:657
  jQuery.fn.jQuery.each @ jquery.js:266
  jQuery.fn.extend.queue @ jquery.js:3984
  jQuery.fn.extend.animate @ jquery.js:9316
  jQuery.each.jQuery.fn.(anonymous function) @ jquery.js:9442
  (anonymous function) @ horizon.tables_inline_edit.js:243
  jQuery.each.jQuery.event.special.(anonymous function).handle @ jquery.js:5460
  jQuery.event.dispatch @ jquery.js:5095
  jQuery.event.add.elemData.handle @ jquery.js:4766

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1399454] Re: Nexus VXLAN gateway: 4K VLANs limitation

2015-06-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399454

Title:
  Nexus VXLAN gateway: 4K VLANs limitation

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  With the Nexus VXLAN gateway, each Compute host still has the 4K VLANs
  limitation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399454/+subscriptions



[Yahoo-eng-team] [Bug 1408477] Re: Live Migration Fails with NFS-Backed Cinder Volumes

2015-06-05 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408477

Title:
  Live Migration Fails with NFS-Backed Cinder Volumes

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  I am running RDO Icehouse-4 (reports version as 2.17.0) on CentOS 7
  using QEMU-KVM / Libvirt

  When issuing the 'live-migrate' command to move a virtual machine with
  NFS-backed Cinder volume to another Hypervisor, I receive the
  following error in the nova-compute.log file on the source Hypervisor:

  2015-01-02 10:33:36.291 7376 ERROR oslo.messaging._drivers.common [-]
  ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7
  /site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in
  _dispatch_and_reply\nincoming.message))\n', '  File
  "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
  line 176, in _dispatch\nreturn self._do_dispatch(endpoint, method,
  ctxt, args)\n', '  File "/usr/lib/python2.7/site-
  packages/oslo/messaging/rpc/dispatcher.py", line 122, in
  _do_dispatch\nresult = getattr(endpoint, method)(ctxt,
  **new_args)\n', '  File "/usr/lib/python2.7/site-
  packages/nova/exception.py", line 88, in wrapped\npayload)\n', '
  File "/usr/lib/python2.7/site-
  packages/nova/openstack/common/excutils.py", line 68, in __exit__\n
  six.reraise(self.type_, self.value, self.tb)\n', '  File
  "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in
  wrapped\nreturn f(self, context, *args, **kw)\n', '  File
  "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 303,
  in decorated_function\ne, sys.exc_info())\n', '  File
  "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
  line 68, in __exit__\nsix.reraise(self.type_, self.value,
  self.tb)\n', '  File "/usr/lib/python2.7/site-
  packages/nova/compute/manager.py", line 290, in decorated_function\n
  return function(self, context, *args, **kwargs)\n', '  File
  "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4483,
  in check_can_live_migrate_source\ndest_check_data)\n', '  File
  "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
  4296, in check_can_live_migrate_source\nraise
  exception.InvalidSharedStorage(reason=reason, path=source)\n',
  'InvalidSharedStorage: openstack-compute01-test.domain.local is not on
  shared storage: Live migration can not be used without shared
  storage.\n']

  I have seen various references to similar bugs, but those didn't
  appear to affect the combination of Icehouse and QEMU-KVM / Libvirt.
  Is anyone else experiencing this issue in Icehouse?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408477/+subscriptions



[Yahoo-eng-team] [Bug 1420044] Re: Eventlet timeout when running test_l3_agent functional test

2015-06-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420044

Title:
  Eventlet timeout when running test_l3_agent functional test

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  logstash: http://logs.openstack.org/84/153384/3/gate/gate-neutron-
  dsvm-functional/7033858/console.html

  There was an eventlet timeout during the above test run on a patch
  that didn't change anything but some file locations. The timeout
  happened via a eventlet.green.subprocess.Popen.communicate call. Log
  snip:

  
  2015-02-09 22:07:05.960 | 2015-02-09 22:07:05.866 | Traceback (most 
recent call last):
  2015-02-09 22:07:05.961 | 2015-02-09 22:07:05.867 |   File 
"neutron/tests/functional/agent/test_l3_agent.py", line 520, in 
test_dvr_router_lifecycle_without_ha_with_snat_with_fips
  2015-02-09 22:07:05.961 | 2015-02-09 22:07:05.868 | 
self._dvr_router_lifecycle(enable_ha=False, enable_snat=True)
  2015-02-09 22:07:05.962 | 2015-02-09 22:07:05.870 |   File 
"neutron/tests/functional/agent/test_l3_agent.py", line 571, in 
_dvr_router_lifecycle
  2015-02-09 22:07:05.962 | 2015-02-09 22:07:05.871 | 
self._delete_router(self.agent, router.router_id)
  2015-02-09 22:07:05.962 | 2015-02-09 22:07:05.872 |   File 
"neutron/tests/functional/agent/test_l3_agent.py", line 116, in _delete_router
  2015-02-09 22:07:05.963 | 2015-02-09 22:07:05.873 | 
agent._router_removed(router_id)
  2015-02-09 22:07:05.963 | 2015-02-09 22:07:05.875 |   File 
"neutron/agent/l3/agent.py", line 404, in _router_removed
  2015-02-09 22:07:05.964 | 2015-02-09 22:07:05.876 | 
self.process_router(ri)
  2015-02-09 22:07:05.964 | 2015-02-09 22:07:05.877 |   File 
"neutron/common/utils.py", line 345, in call
  2015-02-09 22:07:05.965 | 2015-02-09 22:07:05.879 | self.logger(e)
  2015-02-09 22:07:05.965 | 2015-02-09 22:07:05.880 |   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
  2015-02-09 22:07:05.966 | 2015-02-09 22:07:05.881 | 
six.reraise(self.type_, self.value, self.tb)
  2015-02-09 22:07:05.966 | 2015-02-09 22:07:05.883 |   File 
"neutron/common/utils.py", line 342, in call
  2015-02-09 22:07:05.967 | 2015-02-09 22:07:05.884 | return 
func(*args, **kwargs)
  2015-02-09 22:07:05.967 | 2015-02-09 22:07:05.886 |   File 
"neutron/agent/l3/agent.py", line 599, in process_router
  2015-02-09 22:07:05.968 | 2015-02-09 22:07:05.887 | 
self._process_external(ri)
  2015-02-09 22:07:05.968 | 2015-02-09 22:07:05.888 |   File 
"neutron/agent/l3/agent.py", line 564, in _process_external
  2015-02-09 22:07:05.969 | 2015-02-09 22:07:05.889 | 
self._process_external_gateway(ri)
  2015-02-09 22:07:05.969 | 2015-02-09 22:07:05.890 |   File 
"neutron/agent/l3/agent.py", line 502, in _process_external_gateway
  2015-02-09 22:07:05.970 | 2015-02-09 22:07:05.892 | 
self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
  2015-02-09 22:07:05.970 | 2015-02-09 22:07:05.893 |   File 
"neutron/agent/l3/agent.py", line 895, in external_gateway_removed
  2015-02-09 22:07:05.971 | 2015-02-09 22:07:05.894 | 
self.process_router_floating_ip_addresses(ri, interface_name)
  2015-02-09 22:07:05.971 | 2015-02-09 22:07:05.895 |   File 
"neutron/agent/l3/agent.py", line 779, in process_router_floating_ip_addresses
  2015-02-09 22:07:05.972 | 2015-02-09 22:07:05.897 | 
self._remove_floating_ip(ri, device, ip_cidr)
  2015-02-09 22:07:05.972 | 2015-02-09 22:07:05.898 |   File 
"neutron/agent/l3/agent.py", line 738, in _remove_floating_ip
  2015-02-09 22:07:05.973 | 2015-02-09 22:07:05.899 | 
self.floating_ip_removed_dist(ri, ip_cidr)
  2015-02-09 22:07:05.973 | 2015-02-09 22:07:05.901 |   File 
"neutron/agent/l3/dvr.py", line 227, in floating_ip_removed_dist
  2015-02-09 22:07:05.973 | 2015-02-09 22:07:05.902 | 
ns_ip.del_veth(fip_2_rtr_name)
  2015-02-09 22:07:05.974 | 2015-02-09 22:07:05.903 |   File 
"neutron/agent/linux/ip_lib.py", line 153, in del_veth
  2015-02-09 22:07:05.975 | 2015-02-09 22:07:05.904 | self._as_root('', 
'link', ('del', name))
  2015-02-09 22:07:05.975 | 2015-02-09 22:07:05.905 |   File 
"neutron/agent/linux/ip_lib.py", line 83, in _as_root
  2015-02-09 22:07:05.976 | 2015-02-09 22:07:05.924 | 
log_fail_as_error=self.log_fail_as_error)
  2015-02-09 22:07:05.977 | 2015-02-09 22:07:05.925 |   File 
"neutron/agent/linux/ip_lib.py", line 95, in _execute
  2015-02-09 22:07:05.979 | 2015-02-09 22:07:05.926 | 
log_fail_as_error=log_fail_as_error)
  2015-02-09 22:07:05.980 | 2015-02-09 22:07:05.928 |   File 
"neutron/agent/linux/utils.py", line 6

[Yahoo-eng-team] [Bug 1451860] Re: Attached volume migration failed due to incorrect argument order passed to swap_volume

2015-06-05 Thread Kevin Carter
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451860

Title:
  Attached volume migration failed due to incorrect argument order
  passed to swap_volume

Status in OpenStack Compute (Nova):
  Fix Committed
Status in Ansible playbooks for deploying OpenStack:
  Fix Released

Bug description:
  Steps to reproduce:
  1. create a volume in cinder
  2. boot a server from an image in nova
  3. attach the volume to the server
  4. run 'cinder migrate --force-host-copy True 3fa956b6-ba59-46df-8a26-97fcbc18fc82 openstack-wangp11-02@pool_backend_1#Pool_1'

  Log from nova-compute (see the attachment for details):

  2015-05-05 00:33:31.768 ERROR root [req-b8424cde-e126-41b0-a27a-ef675e0c207f admin admin] Original exception being dropped:
  ['Traceback (most recent call last):\n',
   '  File "/opt/stack/nova/nova/compute/manager.py", line 351, in decorated_function\n    return function(self, context, *args, **kwargs)\n',
   '  File "/opt/stack/nova/nova/compute/manager.py", line 4982, in swap_volume\n    context, old_volume_id, instance_uuid=instance.uuid)\n',
   "AttributeError: 'unicode' object has no attribute 'uuid'\n"]

  
  According to my debug results:

  # here: the parameters as passed to swap_volume
  def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
      return self.manager.swap_volume(ctxt, instance, old_volume_id,
                                      new_volume_id)

  # the swap_volume function definition
  @wrap_exception()
  @reverts_task_state
  @wrap_instance_fault
  def swap_volume(self, context, old_volume_id, new_volume_id, instance):
      """Swap volume for an instance."""
      context = context.elevated()

      bdm = objects.BlockDeviceMapping.get_by_volume_id(
          context, old_volume_id, instance_uuid=instance.uuid)
      connector = self.driver.get_volume_connector(instance)
  
  You can see that the arguments are passed in the order "self, ctxt,
  instance, old_volume_id, new_volume_id", while the function is defined
  as "self, context, old_volume_id, new_volume_id, instance".

  This causes the "'unicode' object has no attribute 'uuid'" error when
  the code tries to access instance.uuid.


  BTW: this problem was introduced in
  https://review.openstack.org/#/c/172152

  affect both Kilo and master

  Thanks
  Peter
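
The mismatch described above can be reproduced in isolation. The sketch below uses hypothetical stand-in functions (not the actual nova code) to show how forwarding positional arguments in the caller's order puts a volume-id string into the instance slot:

```python
# Hypothetical stand-ins for the nova methods quoted above: the RPC-side
# wrapper forwards its arguments positionally, but the manager declares
# them in a different order, so a volume-id string lands in `instance`.

class Instance(object):
    def __init__(self, uuid):
        self.uuid = uuid

def manager_swap_volume(context, old_volume_id, new_volume_id, instance):
    # manager-side signature: the instance object comes last
    return instance.uuid

def rpc_swap_volume(ctxt, instance, old_volume_id, new_volume_id):
    # buggy dispatch: the caller's order, not the manager's declared
    # order, so `new_volume_id` (a string) ends up bound to `instance`
    return manager_swap_volume(ctxt, instance, old_volume_id, new_volume_id)

try:
    rpc_swap_volume('ctxt', Instance('some-uuid'), u'old-vol', u'new-vol')
except AttributeError as exc:
    print(exc)  # no attribute 'uuid' ('str' on py3, 'unicode' on py2)
```

Passing the arguments by keyword (or fixing the forwarding order) avoids the silent mis-binding.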

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451860/+subscriptions



[Yahoo-eng-team] [Bug 1462547] [NEW] Launch Instance NG HTML does not check for block device mapping nova extension

2015-06-05 Thread Travis Tripp
Public bug reported:

The new angular launch instance does not look at the
allowCreateVolumeFromImage property to determine whether or not to show
"create from volume". That property is set based on whether the nova
extension that supports it is enabled.

** Affects: horizon
 Importance: Undecided
 Assignee: Travis Tripp (travis-tripp)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462547

Title:
  Launch Instance NG HTML does not check for block device mapping nova
  extension

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The new angular launch instance does not look at the
  allowCreateVolumeFromImage property to determine whether or not to
  show "create from volume". That property is set based on whether the
  nova extension that supports it is enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462547/+subscriptions



[Yahoo-eng-team] [Bug 1462544] [NEW] Create Image: Uncaught TypeError: $form.on is not a function

2015-06-05 Thread Tyr Johanson
Public bug reported:

on master June 5 2015

Project -> Images -> Create Image dialog

Stacktrace

horizon.datatables.disable_actions_on_submit@   horizon.tables.js:185
(anonymous function)@   horizon.modals.js:26
jQuery.extend.each  @   jquery.js:657
jQuery.fn.jQuery.each   @   jquery.js:266
horizon.modals.initModal@   horizon.modals.js:25
(anonymous function)@   horizon.modals.js:177
jQuery.event.dispatch   @   jquery.js:5095
jQuery.event.add.elemData.handle@   jquery.js:4766
jQuery.event.trigger@   jquery.js:5007
jQuery.event.trigger@   jquery-migrate.js:493
(anonymous function)@   jquery.js:5691
jQuery.extend.each  @   jquery.js:657
jQuery.fn.jQuery.each   @   jquery.js:266
jQuery.fn.extend.trigger@   jquery.js:5690
horizon.modals.success  @   horizon.modals.js:48
horizon.modals._request.$.ajax.success  @   horizon.modals.js:342
jQuery.Callbacks.fire   @   jquery.js:3048
jQuery.Callbacks.self.fireWith  @   jquery.js:3160
done@   jquery.js:8235
jQuery.ajaxTransport.send.callback  @   jquery.js:8778

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462544

Title:
  Create Image: Uncaught TypeError: $form.on is not a function

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  on master June 5 2015

  Project -> Images -> Create Image dialog

  Stacktrace
  
  horizon.datatables.disable_actions_on_submit  @   horizon.tables.js:185
  (anonymous function)  @   horizon.modals.js:26
  jQuery.extend.each@   jquery.js:657
  jQuery.fn.jQuery.each @   jquery.js:266
  horizon.modals.initModal  @   horizon.modals.js:25
  (anonymous function)  @   horizon.modals.js:177
  jQuery.event.dispatch @   jquery.js:5095
  jQuery.event.add.elemData.handle  @   jquery.js:4766
  jQuery.event.trigger  @   jquery.js:5007
  jQuery.event.trigger  @   jquery-migrate.js:493
  (anonymous function)  @   jquery.js:5691
  jQuery.extend.each@   jquery.js:657
  jQuery.fn.jQuery.each @   jquery.js:266
  jQuery.fn.extend.trigger  @   jquery.js:5690
  horizon.modals.success@   horizon.modals.js:48
  horizon.modals._request.$.ajax.success@   horizon.modals.js:342
  jQuery.Callbacks.fire @   jquery.js:3048
  jQuery.Callbacks.self.fireWith@   jquery.js:3160
  done  @   jquery.js:8235
  jQuery.ajaxTransport.send.callback@   jquery.js:8778

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462544/+subscriptions



[Yahoo-eng-team] [Bug 1462503] [NEW] Need a specific error message for disk too small boot failures

2015-06-05 Thread Doug Fish
Public bug reported:

When using a compressed image format it's possible to have late boot
failures because the expanded image can't fit on disk. Horizon should
show a specific error message for this failure.

To create:
use Admin->System->Image->Create Image
Download a largish compressed image such as:
http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img
Format QCOW2
Copy Data: true
Public: True

This disk is a compressed image which takes 241M, but expands to 2.2G
ubuntu@unbuntu-drf-3:/opt/stack/data/glance/cache$ qemu-img info 
a22f195e-e64a-4170-90ff-e76f49a3d6f8
image: a22f195e-e64a-4170-90ff-e76f49a3d6f8
file format: qcow2
virtual size: 2.2G (2361393152 bytes)
disk size: 241M
cluster_size: 65536


Launch an Instance based on this image
Project->Compute->Instances->Launch Instance
Choose m1.nano, m1.tiny, or some other flavor with not enough disk
Boot From Image (creates new volume)

It fails with an obscure error message.

We should catch the exception related to this specific circumstance and
show a relevant error message.
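
One way to produce the specific message would be a pre-flight comparison of the image's expanded (virtual) size against the flavor's disk. The check below is only an illustrative sketch, not Horizon's actual code:

```python
# Illustrative sketch (not Horizon's actual code): compare an image's
# expanded virtual size against the chosen flavor's disk before booting,
# so the user gets a specific message instead of a late, obscure failure.

GiB = 1024 ** 3

def check_flavor_disk(image_virtual_size, flavor_disk_gb):
    """Raise with a specific message if the expanded image cannot fit."""
    if image_virtual_size > flavor_disk_gb * GiB:
        raise ValueError(
            "Image expands to %.1f GB but the flavor only provides "
            "%d GB of disk" % (image_virtual_size / float(GiB),
                               flavor_disk_gb))

# The Ubuntu image above: 241M compressed, 2361393152 bytes virtual.
check_flavor_disk(2361393152, 20)     # plenty of room: no error
try:
    check_flavor_disk(2361393152, 1)  # m1.tiny-sized disk: fails early
except ValueError as exc:
    print(exc)
```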

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462503

Title:
  Need a specific error message for disk too small boot failures

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using a compressed image format it's possible to have late boot
  failures because the expanded image can't fit on disk. Horizon should
  show a specific error message for this failure.

  To create:
  use Admin->System->Image->Create Image
  Download a largish compressed image such as:
  
http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img
  Format QCOW2
  Copy Data: true
  Public: True

  This disk is a compressed image which takes 241M, but expands to 2.2G
  ubuntu@unbuntu-drf-3:/opt/stack/data/glance/cache$ qemu-img info 
a22f195e-e64a-4170-90ff-e76f49a3d6f8
  image: a22f195e-e64a-4170-90ff-e76f49a3d6f8
  file format: qcow2
  virtual size: 2.2G (2361393152 bytes)
  disk size: 241M
  cluster_size: 65536

  
  Launch an Instance based on this image
  Project->Compute->Instances->Launch Instance
  Choose m1.nano, m1.tiny, or some other flavor with not enough disk
  Boot From Image (creates new volume)

  It fails with an obscure error message.

  We should catch the exception related to this specific circumstance
  and show a relevant error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462503/+subscriptions



[Yahoo-eng-team] [Bug 1462493] [NEW] when cinder isn't using default quota class, quotas cannot be updated through Horizon

2015-06-05 Thread Doug Fish
Public bug reported:

To re-create:
edit cinder.conf so that
use_default_quota_class=false
restart cinder.

Attempt to edit cinder-related values through Horizon, such as
Volumes, Volume Snapshots, or Total Size of Volumes and Snapshots (GB).

Although the edits appear to succeed, they actually don't.

** Affects: horizon
 Importance: Low
 Status: New

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462493

Title:
  when cinder isn't using default quota class, quotas cannot be updated
  through Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To re-create:
  edit cinder.conf so that
  use_default_quota_class=false
  restart cinder.

  Attempt to edit cinder-related values through Horizon, such as
  Volumes, Volume Snapshots, or Total Size of Volumes and Snapshots (GB).

  Although the edits appear to succeed, they actually don't.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462493/+subscriptions



[Yahoo-eng-team] [Bug 1462484] [NEW] Port Details VNIC type value is not translatable

2015-06-05 Thread Doug Fish
Public bug reported:

On port details, the Binding/VNIC type value is not translatable. To recreate
the problem:
- create a pseudo translation:

./run_tests.sh --makemessages
./run_tests.sh --pseudo de
./run_tests.sh --compilemessages

start the dev server, login and change to German/Deutsch (de)

Navigate to
Project->Network->Networks->[Detail]->[Port Detail]

notice at the bottom of the panel the VNIC type is not translated.

The 3 VNIC types should be translated when displayed in Horizon
https://github.com/openstack/neutron/blob/master/neutron/extensions/portbindings.py#L73
but neutron will expect these to be provided in English on API calls.

Note that the mapping is already correct on Edit Port - the translations
just need to be applied on the details panel.
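
Since the API values must stay in English while only the display strings are localized, the details panel presumably needs the same value-to-label mapping the Edit Port form already uses. A rough sketch, with a placeholder standing in for Django's translation helper:

```python
# Rough sketch (hypothetical names, not Horizon's actual code). `_` is a
# placeholder for django.utils.translation.ugettext_lazy; the dict keys
# are the raw values neutron expects on API calls.
_ = lambda s: s

VNIC_TYPE_LABELS = {
    'normal': _('Normal'),
    'direct': _('Direct'),
    'macvtap': _('MacVTap'),
}

def vnic_type_label(api_value):
    # fall back to the raw API value for unknown types
    return VNIC_TYPE_LABELS.get(api_value, api_value)
```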

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462484

Title:
  Port Details VNIC type value is not translatable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On port details, the Binding/VNIC type value is not translatable. To
  recreate the problem:
  - create a pseudo translation:

  ./run_tests.sh --makemessages
  ./run_tests.sh --pseudo de
  ./run_tests.sh --compilemessages

  start the dev server, login and change to German/Deutsch (de)

  Navigate to
  Project->Network->Networks->[Detail]->[Port Detail]

  notice at the bottom of the panel the VNIC type is not translated.

  The 3 VNIC types should be translated when displayed in Horizon
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/portbindings.py#L73
  but neutron will expect these to be provided in English on API calls.

  Note that the mapping is already correct on Edit Port - the
  translations just need to be applied on the details panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462484/+subscriptions



[Yahoo-eng-team] [Bug 1462482] [NEW] Unstable test glance.tests.functional.artifacts.test_artifacts.TestArtifacts.test_delete_artifact_with_dependency

2015-06-05 Thread Inessa Vasilevskaya
Public bug reported:

L402: 400 BadRequest may occur either because of 'depends_on' or because
of 'depends_on_list' property.

It is incorrect to imply that any one of them will always cause the
error (see L404).
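
One way to stabilize the test is to accept either cause instead of pinning one. The helper below is hypothetical (not the actual glance test code):

```python
# Hypothetical helper (not the actual glance test code): accept a 400
# whose message blames either dependency property, since either one may
# trigger the error.

def assert_bad_request_reason(error_message):
    acceptable = ('depends_on_list', 'depends_on')
    if not any(prop in error_message for prop in acceptable):
        raise AssertionError("unexpected 400 reason: %s" % error_message)

assert_bad_request_reason("blocked by property 'depends_on'")       # ok
assert_bad_request_reason("blocked by property 'depends_on_list'")  # ok
```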

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462482

Title:
  Unstable test
  
glance.tests.functional.artifacts.test_artifacts.TestArtifacts.test_delete_artifact_with_dependency

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  L402: 400 BadRequest may occur either because of 'depends_on' or
  because of 'depends_on_list' property.

  It is incorrect to imply that any one of them will always cause the
  error (see L404).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1462482/+subscriptions



[Yahoo-eng-team] [Bug 1462479] [NEW] unable to download blob by link after v3 POST into blob_list property

2015-06-05 Thread Inessa Vasilevskaya
Public bug reported:

Steps to reproduce:

1. create an artifact with a bloblist property (for example, screenshots)
Let's assume that the created artifact has 
id=da4ef381-bfb3-4ff2-9da2-576d5b31f8a5.

2. perform a PUT/POST into the property

http://localhost:9292/v3/artifacts/myartifact/v2.0/da4ef381-bfb3-4ff2-9da2-576d5b31f8a5/screenshots

(make sure content-type=application/octet-stream)

3. Try to retrieve blob by the url in download_link

download_link =
/artifacts/myartifact/v2.0/da4ef381-bfb3-4ff2-9da2-576d5b31f8a5/screenshots/0/download

GET
http://localhost:9292/v3/artifacts/myartifact/v2.0/da4ef381-bfb3-4ff2-9da2-576d5b31f8a5/screenshots/0/download
results in:


 
  400 Bad Request
 
 
  400 Bad Request
  Index is required


 


** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

** Tags added: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462479

Title:
  unable to download blob by link after v3 POST into blob_list property

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Steps to reproduce:

  1. create an artifact with a bloblist property (for example, screenshots)
  Let's assume that the created artifact has 
id=da4ef381-bfb3-4ff2-9da2-576d5b31f8a5.

  2. perform a PUT/POST into the property

  
http://localhost:9292/v3/artifacts/myartifact/v2.0/da4ef381-bfb3-4ff2-9da2-576d5b31f8a5/screenshots

  (make sure content-type=application/octet-stream)

  3. Try to retrieve blob by the url in download_link

  download_link =
  
/artifacts/myartifact/v2.0/da4ef381-bfb3-4ff2-9da2-576d5b31f8a5/screenshots/0/download

  GET
  
http://localhost:9292/v3/artifacts/myartifact/v2.0/da4ef381-bfb3-4ff2-9da2-576d5b31f8a5/screenshots/0/download
  results in:

  
   
400 Bad Request
   
   
400 Bad Request
Index is required


   
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1462479/+subscriptions



[Yahoo-eng-team] [Bug 1462467] [NEW] exceptions.Duplicate not processed on v3 PUT/POST into blob property

2015-06-05 Thread Inessa Vasilevskaya
Public bug reported:

Any second PUT/POST into blob property leads to HTTP 500:

 File "/home/ina/projects/glance/glance/api/v3/artifacts.py", line 300, in 
upload
setattr(artifact, attr, (data, size))
  File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 27, 
in setter
return self.set_type_specific_property(attr, value)
  File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 99, 
in set_type_specific_property
setattr(self.base, prop_name, value)
  File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 27, 
in setter
return self.set_type_specific_property(attr, value)
  File "/home/ina/projects/glance/glance/artifacts/location.py", line 76, in 
set_type_specific_property
blob_proxy.upload_to_store(data, size)
  File "/home/ina/projects/glance/glance/artifacts/location.py", line 144, in 
upload_to_store
context=self.context)
  File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/backend.py",
 line 364, in add_to_backend
return store_add_to_backend(image_id, data, size, store, context)
  File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/backend.py",
 line 339, in store_add_to_backend
context=context)
  File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/capabilities.py",
 line 226, in op_checker
return store_op_fun(store, *args, **kwargs)
  File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/_drivers/filesystem.py",
 line 589, in add
raise exceptions.Duplicate(image=filepath)
Duplicate: Image 
/opt/stack/data/glance/images/e7b0e397-4948-4d6d-a3f8-265f2a396a79.image_file 
already exists
2015-06-05 20:28:34.031 22433 INFO eventlet.wsgi.server 
[req-0bf9bc64-2ef7-4ae4-9eaa-e56b88819f9a - - - - -] 127.0.0.1 - - [05/Jun/2015 
20:28:34] "PUT 
/v3/artifacts/myartifact/v2.0/e7b0e397-4948-4d6d-a3f8-265f2a396a79/image_file 
HTTP/1.1" 500 139 0.018206
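
The missing handling can be sketched as follows — a minimal, hedged illustration of catching the store's Duplicate error at the API layer and mapping it to HTTP 409 Conflict rather than letting it escape as a 500. All names here (`Duplicate`, `store_add`, `upload`) are stand-ins, not the actual Glance code:

```python
class Duplicate(Exception):
    """Stand-in for glance_store.exceptions.Duplicate."""


def store_add(image_id, store):
    # Mimics the filesystem driver: it refuses to overwrite an existing blob.
    if image_id in store:
        raise Duplicate("Image %s already exists" % image_id)
    store[image_id] = b"blob-data"
    return 201


def upload(image_id, store):
    # The API layer should translate Duplicate into a client error (409),
    # instead of leaking it as an unhandled 500.
    try:
        return store_add(image_id, store)
    except Duplicate:
        return 409


store = {}
print(upload("e7b0e397", store))  # first PUT succeeds -> 201
print(upload("e7b0e397", store))  # second PUT conflicts -> 409, not 500
```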

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462467

Title:
  exceptions.Duplicate not processed on v3 PUT/POST into blob property

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Any second PUT/POST into blob property leads to HTTP 500:

   File "/home/ina/projects/glance/glance/api/v3/artifacts.py", line 300, in 
upload
  setattr(artifact, attr, (data, size))
File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 27, 
in setter
  return self.set_type_specific_property(attr, value)
File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 99, 
in set_type_specific_property
  setattr(self.base, prop_name, value)
File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 27, 
in setter
  return self.set_type_specific_property(attr, value)
File "/home/ina/projects/glance/glance/artifacts/location.py", line 76, in 
set_type_specific_property
  blob_proxy.upload_to_store(data, size)
File "/home/ina/projects/glance/glance/artifacts/location.py", line 144, in 
upload_to_store
  context=self.context)
File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/backend.py",
 line 364, in add_to_backend
  return store_add_to_backend(image_id, data, size, store, context)
File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/backend.py",
 line 339, in store_add_to_backend
  context=context)
File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/capabilities.py",
 line 226, in op_checker
  return store_op_fun(store, *args, **kwargs)
File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/glance_store/_drivers/filesystem.py",
 line 589, in add
  raise exceptions.Duplicate(image=filepath)
  Duplicate: Image 
/opt/stack/data/glance/images/e7b0e397-4948-4d6d-a3f8-265f2a396a79.image_file 
already exists
  2015-06-05 20:28:34.031 22433 INFO eventlet.wsgi.server 
[req-0bf9bc64-2ef7-4ae4-9eaa-e56b88819f9a - - - - -] 127.0.0.1 - - [05/Jun/2015 
20:28:34] "PUT 
/v3/artifacts/myartifact/v2.0/e7b0e397-4948-4d6d-a3f8-265f2a396a79/image_file 
HTTP/1.1" 500 139 0.018206

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1462467/+subscriptions



[Yahoo-eng-team] [Bug 1462462] [NEW] Invalid values of Artifact Blobs lead to HTTP 500

2015-06-05 Thread Alexander Tivelkov
Public bug reported:

An attempt to PATCH or PUT an invalid value into the artifact's blob
property leads to an unhandled exception (i.e. an HTTP 500 error code).

Example:
PUT /v3/artifacts/images/v1//file sent with an application/json request
type and some JSON body instead of the proper application/octet-stream leads to
HTTP 500 instead of the proper HTTP 400
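
A defensive check at the blob-upload entry point would reject such requests up front. A hedged sketch, not the actual Glance handler (the function name is illustrative):

```python
def validate_blob_request(content_type):
    # Blob uploads must be raw bytes; any other content type is a client
    # error (400), never a server error (500).
    if content_type != "application/octet-stream":
        return 400, "Blob content must be application/octet-stream"
    return 200, "ok"


status, msg = validate_blob_request("application/json")
print(status)  # 400, instead of an unhandled exception -> 500
```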

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462462

Title:
  Invalid values of Artifact Blobs lead to HTTP 500

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  An attempt to PATCH or PUT an invalid value into the artifact's blob
  property leads to an unhandled exception (i.e. an HTTP 500 error code).

  Example:
  PUT /v3/artifacts/images/v1//file sent with an application/json request
type and some JSON body instead of the proper application/octet-stream leads to
HTTP 500 instead of the proper HTTP 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1462462/+subscriptions



[Yahoo-eng-team] [Bug 1462455] [NEW] Glance v3 api returns 500 on get_all_artifacts with data_api=registry

2015-06-05 Thread Inessa Vasilevskaya
Public bug reported:

If glance-api is configured with

data_api = glance.db.registry.api

Any call to list artifacts (for example, curl
http://localhost:9292/v3/artifacts/myartifact) will return HTTP 500 with
the following stacktrace:

  File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py",
 line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py",
 line 195, in call_func
return self.func(req, *args, **kwargs)
  File "/home/ina/projects/glance/glance/common/wsgi.py", line 873, in __call__
request, **action_args)
  File "/home/ina/projects/glance/glance/common/wsgi.py", line 897, in dispatch
return method(*args, **kwargs)
  File "/home/ina/projects/glance/glance/api/v3/artifacts.py", line 118, in list
**kwargs)
  File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 65, 
in list
items = self.base.list(*args, **kwargs)
  File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 65, 
in list
items = self.base.list(*args, **kwargs)
  File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 65, 
in list
items = self.base.list(*args, **kwargs)
  File "/home/ina/projects/glance/glance/db/__init__.py", line 100, in list
sort_keys=sort_keys, sort_dirs=sort_dirs, show_level=show_level)
  File "/home/ina/projects/glance/glance/db/registry/api.py", line 57, in 
wrapper
return func(client, *args, **kwargs)
TypeError: artifact_get_all() got an unexpected keyword argument 'sort_dirs'
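
One way to guard against this kind of signature drift between the DB API and the registry client is to drop keyword arguments the target function does not accept. A hedged sketch using the standard library; `artifact_get_all` below is a stand-in that lacks `sort_dirs`, mirroring the bug, and `call_compatible` is not actual Glance code:

```python
import inspect


def artifact_get_all(client, sort_keys=None, show_level=None):
    # Stand-in for the registry-side function that lacks a 'sort_dirs'
    # parameter, mirroring the TypeError in the traceback above.
    return {"sort_keys": sort_keys, "show_level": show_level}


def call_compatible(func, client, **kwargs):
    # Drop keyword arguments the callee does not declare, so a signature
    # mismatch degrades gracefully instead of raising TypeError (HTTP 500).
    accepted = set(inspect.signature(func).parameters)
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return func(client, **filtered)


result = call_compatible(artifact_get_all, None,
                         sort_keys=["name"], sort_dirs=["asc"], show_level=1)
print(result)  # 'sort_dirs' is silently dropped instead of raising
```

The real fix is of course to teach `artifact_get_all` about `sort_dirs`; the filter above only shows how the 500 could be avoided in the meantime.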

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462455

Title:
  Glance v3 api returns 500 on get_all_artifacts with data_api=registry

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  If glance-api is configured with

  data_api = glance.db.registry.api

  Any call to list artifacts (for example, curl
  http://localhost:9292/v3/artifacts/myartifact) will return HTTP 500
  with the following stacktrace:

File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py",
 line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File 
"/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py",
 line 195, in call_func
  return self.func(req, *args, **kwargs)
File "/home/ina/projects/glance/glance/common/wsgi.py", line 873, in 
__call__
  request, **action_args)
File "/home/ina/projects/glance/glance/common/wsgi.py", line 897, in 
dispatch
  return method(*args, **kwargs)
File "/home/ina/projects/glance/glance/api/v3/artifacts.py", line 118, in 
list
  **kwargs)
File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 65, 
in list
  items = self.base.list(*args, **kwargs)
File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 65, 
in list
  items = self.base.list(*args, **kwargs)
File "/home/ina/projects/glance/glance/artifacts/domain/proxy.py", line 65, 
in list
  items = self.base.list(*args, **kwargs)
File "/home/ina/projects/glance/glance/db/__init__.py", line 100, in list
  sort_keys=sort_keys, sort_dirs=sort_dirs, show_level=show_level)
File "/home/ina/projects/glance/glance/db/registry/api.py", line 57, in 
wrapper
  return func(client, *args, **kwargs)
  TypeError: artifact_get_all() got an unexpected keyword argument 'sort_dirs'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1462455/+subscriptions



[Yahoo-eng-team] [Bug 1402632] Re: issue with glance python client in Icehouse

2015-06-05 Thread nikhil komawar
https://review.openstack.org/#/c/171180/

** Project changed: glance => python-glanceclient

** Changed in: python-glanceclient
   Status: Invalid => Fix Committed

** Changed in: python-glanceclient
   Importance: Undecided => Low

** Changed in: python-glanceclient
Milestone: None => 0.19.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1402632

Title:
  issue with glance python client in Icehouse

Status in Python client library for Glance:
  Fix Committed
Status in Ubuntu:
  New

Bug description:
  I installed the glance client (0.15.0) on Ubuntu 14.04.1 LTS via pip
  install python-glanceclient. When I try to do something with this
  client I almost always get back:

  root@anyhost:~/anydir# glance image-list
  "1" is not a supported API version. Example values are "1" or "2".

  The environment (OS_USERNAME, OS_PASSWORD, ...) seems to be fine because
  I'm able to list everything from keystone with those credentials.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1402632/+subscriptions



[Yahoo-eng-team] [Bug 1462426] [NEW] DNSMasq's Dhcp-option-force not supported as dhcp-option is

2015-06-05 Thread Mauro S M Rodrigues
Public bug reported:

Use case:
Ironic is one part of OpenStack that regularly uses the ability to set DHCP 
options on a port in order to add options like the tftp-server address where 
the deploy images are stored.
Change https://review.openstack.org/#/c/185987/ aims to support Power systems' 
petitboot netboot, and for that we plan to set some options to be sent on every 
BOOTPREQUEST even if the client doesn't ask for them, like options 208-211 from 
RFC 5071; that can be achieved by using dnsmasq's dhcp-option-force.

Problems: 
dhcp-option-force is not supported the same way the usual dhcp-option is: we 
can't set it in a separate file (using --opts-file), so we need to add support 
for it in the conf-file (preferably in a per-subnet file)
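
For reference, the dnsmasq feature in question looks roughly like this in a configuration file — a hedged, illustrative fragment (the tag name and option values are made up; options 208 and 210 are among the RFC 5071 PXELINUX options mentioned above):

```
# dnsmasq.conf fragment (illustrative): always send options 208/210
# (RFC 5071 PXELINUX magic and path prefix), even when the client
# does not request them in its parameter request list.
dhcp-option-force=tag:subnet-a,208,f1:00:74:7e
dhcp-option-force=tag:subnet-a,210,/tftpboot/pxelinux/
```

Per the report, these lines are only honored in the main configuration file, not in the per-network options file Neutron normally writes, which is why explicit support is needed.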

** Affects: neutron
 Importance: Undecided
 Assignee: Mauro S M Rodrigues (maurosr)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Mauro S M Rodrigues (maurosr)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462426

Title:
  DNSMasq's Dhcp-option-force not supported as dhcp-option is

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Use case:
  Ironic is one part of OpenStack that regularly uses the ability to set DHCP 
options on a port in order to add options like the tftp-server address where 
the deploy images are stored.
  Change https://review.openstack.org/#/c/185987/ aims to support Power 
systems' petitboot netboot, and for that we plan to set some options to be sent 
on every BOOTPREQUEST even if the client doesn't ask for them, like options 
208-211 from RFC 5071; that can be achieved by using dnsmasq's dhcp-option-force.

  Problems: 
  dhcp-option-force is not supported the same way the usual dhcp-option is: we 
can't set it in a separate file (using --opts-file), so we need to add support 
for it in the conf-file (preferably in a per-subnet file)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462426/+subscriptions



[Yahoo-eng-team] [Bug 1462424] [NEW] VMware: stable icehouse unable to spawn VM

2015-06-05 Thread Gary Kotton
Public bug reported:

Unable to boot VM due to patch
https://github.com/openstack/nova/commit/539d632fdea1696dc74fd2fb05921466f804e19e

This is with VC 6.

The reason is:
nova-scheduler.log.1:2015-06-02 16:01:49.280 1174 ERROR 
nova.scheduler.filter_scheduler [req-18c26579-09e7-4287-b401-27ac3505e7c3 
bf28f7d47bf348d6ab6bcf31f0f96c92 04ad461fb68d4b80b2911b3fe0f6b1f9] [instance: 
5b3cca48-a295-4aa0-9176-798c174aeb3f] Error from last host: icehouse (node 
domain-c9(compute)): [u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1379, in 
_build_instance\nset_access_ip=set_access_ip)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 410, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1797, in 
_spawn\nLOG.exception(_(\'Instance failed to spawn\'), 
instance=instance)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1794, in _spawn\n
block_device_info)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 629, in 
spawn\nadmin_password, network_info, block_device_info)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 689, in 
spawn\n_power_on_vm()\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 685, in 
_power_on_vm\nself._session._wait_for_task(power_on_task)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 966, in 
_wait_for_task\nret_val = done.wait()\n', u'  File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait\n
return hubs.get_hub().switch()\n', u'  File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch\n  
  return self.greenlet.switch()\n', u"AttributeError: TaskInfo instance has no 
attribute 'name'\n"]

** Affects: nova
 Importance: Critical
 Status: New

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462424

Title:
  VMware: stable icehouse unable to spawn VM

Status in OpenStack Compute (Nova):
  New

Bug description:
  Unable to boot VM due to patch
  
https://github.com/openstack/nova/commit/539d632fdea1696dc74fd2fb05921466f804e19e

  This is with VC 6.

  The reason is:
  nova-scheduler.log.1:2015-06-02 16:01:49.280 1174 ERROR 
nova.scheduler.filter_scheduler [req-18c26579-09e7-4287-b401-27ac3505e7c3 
bf28f7d47bf348d6ab6bcf31f0f96c92 04ad461fb68d4b80b2911b3fe0f6b1f9] [instance: 
5b3cca48-a295-4aa0-9176-798c174aeb3f] Error from last host: icehouse (node 
domain-c9(compute)): [u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1379, in 
_build_instance\nset_access_ip=set_access_ip)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 410, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1797, in 
_spawn\nLOG.exception(_(\'Instance failed to spawn\'), 
instance=instance)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1794, in _spawn\n
block_device_info)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 629, in 
spawn\nadmin_password, network_info, block_device_info)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 689, in 
spawn\n_power_on_vm()\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 685, in 
_power_on_vm\nself._session._wait_for_task(power_on_task)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 966, in 
_wait_for_task\nret_val = done.wait()\n', u'  File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait\n
return hubs.get_hub().switch()\n', u'  File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch\n  
  return self.greenlet.switch()\n', u"AttributeError: TaskInfo instance has no 
attribute 'name'\n"]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462424/+subscriptions


[Yahoo-eng-team] [Bug 1461095] Re: Token is not revoked when removing a user from project in Horizon

2015-06-05 Thread Tristan Cacqueray
Then it's an OSSA class E type of bug.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461095

Title:
  Token is not revoked when removing a user from project in Horizon

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Steps:
  1. Login to the dashboard as admin
  2. Create a project (for example, `project_1`)
  3. Create a Member-user.
  4. Add the Member-user to `project_1`
  5. In another browser, login as the Member-user
  6. Go to `/project/instance` (the behavior is typical for other pages - 
`volumes`, `images`, `identity`)
  7. Refresh (or go to the page) 3-5 times. Stay on this page.
  8. Then, as admin, remove the Member-user from `project_1`
  9. As the Member-user, try to go to `/project/instance` -- you don't get an error

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461095/+subscriptions



[Yahoo-eng-team] [Bug 1461095] Re: Token is not revoked when removing a user from project in Horizon

2015-06-05 Thread Dolph Mathews
token_cache_time is part of keystonemiddleware.auth_token's
configuration. It defaults to 5 minutes if you haven't set it in your
deployment:

https://github.com/openstack/keystonemiddleware/blob/57d389da8aaef3f955d7f0b086803d98b6531a2e/keystonemiddleware/auth_token/__init__.py#L278-L283

It sounds like this is working as intended, then.
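
For deployments that want revocations (such as removing a user from a project) to take effect sooner, the cache window can be shortened in the consuming service's configuration. A hedged example, assuming the standard `[keystone_authtoken]` section:

```
[keystone_authtoken]
# Defaults to 300 seconds; a smaller window makes revocations visible
# sooner, at the cost of more token-validation traffic to Keystone.
token_cache_time = 60
```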

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461095

Title:
  Token is not revoked when removing a user from project in Horizon

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  Steps:
  1. Login to the dashboard as admin
  2. Create a project (for example, `project_1`)
  3. Create a Member-user.
  4. Add the Member-user to `project_1`
  5. In another browser, login as the Member-user
  6. Go to `/project/instance` (the behavior is typical for other pages - 
`volumes`, `images`, `identity`)
  7. Refresh (or go to the page) 3-5 times. Stay on this page.
  8. Then, as admin, remove the Member-user from `project_1`
  9. As the Member-user, try to go to `/project/instance` -- you don't get an error

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461095/+subscriptions



[Yahoo-eng-team] [Bug 1462393] [NEW] security group is Invalid when boot with port

2015-06-05 Thread wangxiyuan
Public bug reported:

When creating a VM with a specified security group and ports, the security
group is invalid, because ports have their own security groups.

Users may be confused by this situation: they pass a security group
parameter, but it doesn't work.

So, IMO, there are two situations here:

1. boot only with ports.

In this situation we shouldn't allow users to boot, and should return code
400 to show that the security group is invalid and unnecessary.

2. boot not only with ports but also with networks.

In this situation we should at least print a warning in the log files to
point out that the security group is invalid for the ports.
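
The two situations above can be sketched as a validation step. This is a hedged illustration of the proposed behavior, not actual Nova code; the function name and the `requested_networks` shape are assumptions:

```python
def check_boot_request(security_groups, requested_networks):
    """Sketch: each requested network is a dict with a 'port' or 'network' key."""
    if not security_groups:
        return 200, "ok"
    has_port = any("port" in n for n in requested_networks)
    has_network = any("network" in n for n in requested_networks)
    if has_port and not has_network:
        # Situation 1: ports only -> reject; the group can never apply.
        return 400, "security groups are ignored for pre-created ports"
    if has_port and has_network:
        # Situation 2: mixed -> accept, but warn that ports keep their own.
        return 200, "warning: security groups do not apply to the ports"
    return 200, "ok"


print(check_boot_request(["default"], [{"port": "p1"}]))  # status 400
print(check_boot_request(["default"],
                         [{"port": "p1"}, {"network": "n1"}]))  # status 200 + warning
```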

** Affects: nova
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462393

Title:
  security group is Invalid when boot with port

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating a VM with a specified security group and ports, the security
  group is invalid, because ports have their own security groups.

  Users may be confused by this situation: they pass a security group
  parameter, but it doesn't work.

  So, IMO, there are two situations here:

  1. boot only with ports.

  In this situation we shouldn't allow users to boot, and should return
  code 400 to show that the security group is invalid and unnecessary.

  2. boot not only with ports but also with networks.

  In this situation we should at least print a warning in the log files
  to point out that the security group is invalid for the ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462393/+subscriptions



[Yahoo-eng-team] [Bug 1462391] [NEW] Horizon omits several valid combinations of ipv6 ra mode and address mode

2015-06-05 Thread Doug Fish
Public bug reported:

When creating an IPv6 subnet, Horizon prompts for ipv6 ra mode and ipv6
address mode in a combined manner, and not all valid combinations are
supported. The UI should be updated so all valid combinations can be
supported.

This can be seen on Admin->System->Networks->[Detail]->Create Subnet:
fill in valid ipv6 values ... for example fdc2:f49e:fe9e::/64 for the network 
address, and click Next.

For IPv6 Address Configuration Mode, only 4 options are listed:
No options/Default
SLAAC
DHCPv6 stateful
DHCPv6 stateless

Valid combinations with descriptions are listed here:
http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-radvd-ra.html#rest-api-impact

Horizon should support all 10 valid combinations
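
Per the referenced spec, the two attributes may be set independently; the valid combinations are both unset, either one set alone, or both set to the same mode. A small sketch of that rule (mode names follow the Neutron API; the function itself is illustrative):

```python
MODES = ("slaac", "dhcpv6-stateful", "dhcpv6-stateless")


def is_valid_combo(ra_mode, address_mode):
    # Valid: both unset, either one set alone, or both set to the same mode.
    if ra_mode is not None and ra_mode not in MODES:
        return False
    if address_mode is not None and address_mode not in MODES:
        return False
    if ra_mode is None or address_mode is None:
        return True
    return ra_mode == address_mode


valid = [(r, a) for r in (None,) + MODES for a in (None,) + MODES
         if is_valid_combo(r, a)]
print(len(valid))  # 10 valid combinations, matching the report
```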

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462391

Title:
  Horizon omits several valid combinations of ipv6 ra mode and address
  mode

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating an IPv6 subnet, Horizon prompts for ipv6 ra mode and ipv6
  address mode in a combined manner, and not all valid combinations are
  supported. The UI should be updated so all valid combinations can be
  supported.

  This can be seen on Admin->System->Networks->[Detail]->Create Subnet:
  fill in valid ipv6 values ... for example fdc2:f49e:fe9e::/64 for the network 
address, and click Next.

  For IPv6 Address Configuration Mode, only 4 options are listed:
  No options/Default
  SLAAC
  DHCPv6 stateful
  DHCPv6 stateless

  Valid combinations with descriptions are listed here:
  
http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-radvd-ra.html#rest-api-impact

  Horizon should support all 10 valid combinations

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462391/+subscriptions



[Yahoo-eng-team] [Bug 1462374] [NEW] Ironic: Unavailable nodes may be scheduled to

2015-06-05 Thread Jim Rollenhagen
Public bug reported:

The Ironic driver reports all resources consumed for compute nodes in
certain unavailable states (e.g. deploying, cleaning, deleting).
However, if there is not an instance associated with the node, the
resource tracker will try to correct the driver and expose these
resources. This may result in being scheduled to a node that is still
cleaning up from a previous instance.
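
The intended behavior amounts to: a node in an unavailable provision state should report zero free resources even when no instance is associated, so the scheduler cannot pick it. A hedged illustration (the state names follow Ironic; the function and node dict are made up):

```python
UNAVAILABLE_STATES = {"deploying", "cleaning", "deleting"}


def available_vcpus(node):
    # A node mid-deploy/clean/delete must look fully consumed, even when
    # node["instance_uuid"] is None -- otherwise the resource tracker
    # "corrects" the driver and re-exposes the node's capacity.
    if node["provision_state"] in UNAVAILABLE_STATES:
        return 0
    if node["instance_uuid"] is not None:
        return 0
    return node["vcpus"]


cleaning = {"provision_state": "cleaning", "instance_uuid": None, "vcpus": 8}
ready = {"provision_state": "available", "instance_uuid": None, "vcpus": 8}
print(available_vcpus(cleaning))  # 0 -- not schedulable
print(available_vcpus(ready))     # 8
```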

** Affects: nova
 Importance: Undecided
 Assignee: Jim Rollenhagen (jim-rollenhagen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462374

Title:
  Ironic: Unavailable nodes may be scheduled to

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The Ironic driver reports all resources consumed for compute nodes in
  certain unavailable states (e.g. deploying, cleaning, deleting).
  However, if there is not an instance associated with the node, the
  resource tracker will try to correct the driver and expose these
  resources. This may result in being scheduled to a node that is still
  cleaning up from a previous instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462374/+subscriptions



[Yahoo-eng-team] [Bug 1462366] [NEW] nova compute info cache refresh should detach obsolete ports

2015-06-05 Thread Baodong (Robert) Li
Public bug reported:

Nova conducts a periodic task to heal/refresh its info cache. Obsolete
ports should be detached during that process.

commit 4a02d9415f64e8d579d1b674d6d2efda902b01fa
Merge: 9fc5c05 13cf0c2
Author: Jenkins 
Date:   Thu Jun 4 11:32:03 2015 +

Merge "Get rid of oslo-incubator copy of middleware"


To test it, create an instance with neutron ports, and then delete one of the 
neutron ports using the neutron CLI. The deleted port remains attached.
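
The proposed behavior reduces to a set difference during the heal task: ports still in the instance's info cache but gone from Neutron should be detached. A hedged sketch (the function and variable names are illustrative, not Nova's):

```python
def ports_to_detach(cached_port_ids, neutron_port_ids):
    # Ports present in Nova's info cache but no longer known to Neutron
    # are obsolete and should be detached during the periodic heal.
    return sorted(set(cached_port_ids) - set(neutron_port_ids))


cache = ["port-a", "port-b", "port-c"]
neutron = ["port-a", "port-c"]          # port-b was deleted via the neutron CLI
print(ports_to_detach(cache, neutron))  # ['port-b']
```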

** Affects: nova
 Importance: Undecided
 Assignee: Baodong (Robert) Li (baoli)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Baodong (Robert) Li (baoli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462366

Title:
  nova compute info cache refresh should detach obsolete ports

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova conducts a periodic task to heal/refresh its info cache. Obsolete
  ports should be detached during that process.

  commit 4a02d9415f64e8d579d1b674d6d2efda902b01fa
  Merge: 9fc5c05 13cf0c2
  Author: Jenkins 
  Date:   Thu Jun 4 11:32:03 2015 +

  Merge "Get rid of oslo-incubator copy of middleware"

  
  To test it, create an instance with neutron ports, and then delete one of the 
neutron ports using the neutron CLI. The deleted port remains attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462366/+subscriptions



[Yahoo-eng-team] [Bug 1459482] Re: Default policy too restrictive on getting user

2015-06-05 Thread Qiming Teng
** Changed in: keystone
   Status: In Progress => Opinion

** Changed in: keystone
 Assignee: Qiming Teng (tengqim) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459482

Title:
  Default policy too restrictive on getting user

Status in OpenStack Identity (Keystone):
  Opinion

Bug description:
  For services that need to talk to many other services, Keystone has
  provided the trust based authentication model. That is good.

  When a user (e.g. USER) raises a service request, the actual job is
  delegated to the service user (e.g. SERVICE). SERVICE user will use
  trust mechanism for authentication in calls that follow. When creating
  a trust between USER and SERVICE, we will need the user ID of the
  SERVICE user, however, it is not possible today as keystone is
  restricting the get_user call to be admin only.

  A 'service' user may need to find out his own user ID given the user
  name specified in the configuration file. The usage scenario is for a
  requester to create a trust relationship with the service user so that
  the service user can do jobs on the requester's behalf. Restricting
  user_list or user_get to only admin users is making this very
  cumbersome even impossible.
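
  A deployment-side workaround is to relax the relevant rule in Keystone's
  policy file so that a user can at least read their own record. A hedged
  example; the owner-check spelling varies between releases (e.g.
  `%(user_id)s` in the default policy.json vs `%(target.user.id)s` in the
  v3 cloud sample):

```
{
    "identity:get_user": "rule:admin_required or user_id:%(user_id)s"
}
```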

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459482/+subscriptions



[Yahoo-eng-team] [Bug 1462355] [NEW] wrong title in v3 OS-INHERIT Extension spec

2015-06-05 Thread Guojian Shao
Public bug reported:

The title of API [1] is wrong:

Revoke an inherited project role from group on domain
DELETE 
/OS-INHERIT/projects/{project_id)/groups/{group_id}/roles/{role_id}/inherited_to_projects

It should be "Revoke an inherited project role from group on project".

[1]: http://specs.openstack.org/openstack/keystone-specs/api/v3
/identity-api-v3-os-inherit-ext.html#revoke-an-inherited-project-role-
from-group-on-domain

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1462355

Title:
  wrong title in v3 OS-INHERIT Extension spec

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The title of API [1] is wrong:

  Revoke an inherited project role from group on domain
  DELETE 
/OS-INHERIT/projects/{project_id)/groups/{group_id}/roles/{role_id}/inherited_to_projects

  It should be "Revoke an inherited project role from group on project".

  [1]: http://specs.openstack.org/openstack/keystone-specs/api/v3
  /identity-api-v3-os-inherit-ext.html#revoke-an-inherited-project-role-
  from-group-on-domain

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1462355/+subscriptions



[Yahoo-eng-team] [Bug 1462315] [NEW] glance member-create failed with HTTP 500 error

2015-06-05 Thread Long Quan Sha
Public bug reported:

At first, I ran "glance member-create" and then "glance member-delete";
both were successful. But when I ran "glance member-create" again with
the same parameters, it failed with an HTTP 500 error.

[root@vm9 /]# glance member-create  2d2421ca-d4bd-4c69-91cf-5d167d20073a 
95e0dd9ed89544428f1daa7071c6990f
[root@vm9 /]# glance member-delete  2d2421ca-d4bd-4c69-91cf-5d167d20073a 
95e0dd9ed89544428f1daa7071c6990f
[root@vm9 /]#  glance member-create  2d2421ca-d4bd-4c69-91cf-5d167d20073a 
95e0dd9ed89544428f1daa7071c6990f
HTTPInternalServerError (HTTP 500)


glance registry.log:

 File "/usr/lib/python2.7/site-packages/glance/registry/api/v1/members.py", 
line 269, in update
self.db_api.image_member_create(req.context, values)
  File "/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/api.py", line 
1020, in image_member_create
_image_member_update(context, memb_ref, values, session=session)
  File "/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/api.py", line 
1051, in _image_member_update
memb_ref.save(session=session)

File "/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/models.py", line 
77, in save
super(GlanceBase, self).save(session or db_api.get_session())
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/models.py", line 
48, in save
session.flush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
1919, in flush
self._flush(objects)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
2037, in _flush
transaction.rollback(_capture_exception=True)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", 
line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
2001, in _flush
flush_context.execute()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 
372, in execute
rec.execute(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 
526, in execute
uow
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 
65, in save_obj
mapper, table, insert)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 
602, in _emit_insert_statements
execute(statement, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
729, in execute
return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 
322, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
826, in _execute_clauseelement
compiled_sql, distilled_params
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
958, in _execute_context
context)
  File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py", 
line 261, in _handle_dbapi_exception
e, statement, parameters, cursor, context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1155, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 
199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
951, in _execute_context
context)
  File "/usr/lib/python2.7/site-packages/ibm_db_sa/ibm_db.py", line 106, in 
do_execute
cursor.execute(statement, parameters)
  File "/usr/lib64/python2.7/site-packages/ibm_db_dbi.py", line 1335, in execute
self._execute_helper(parameters)
  File "/usr/lib64/python2.7/site-packages/ibm_db_dbi.py", line 1247, in 
_execute_helper
raise self.messages[len(self.messages) - 1]

DBDuplicateEntry: (IntegrityError) ibm_db_dbi::IntegrityError: Statement
Execute Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0803N  One or more
values in the INSERT statement, UPDATE statement, or foreign key update
caused by a DELETE statement are not valid because the primary key,
unique constraint or unique index identified by "2" constrains table
"GLANCE.IMAGE_MEMBERS" from having duplicate values for the index key.
SQLSTATE=23505 SQLCODE=-803 'INSERT INTO image_members (created_at,
updated_at, deleted_at, deleted, image_id, "member", can_share, status)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)' (datetime.datetime(2015, 6, 5, 10, 39,
9, 724699), datetime.datetime(2015, 6, 5, 10, 39, 9, 724711), None, '0',
'2d2421ca-d4bd-4c69-91cf-5d167d20073a',
'95e0dd9ed89544428f1daa7071c6990f', '0', 'pending')
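The DBDuplicateEntry above suggests the soft-deleted member row still occupies the unique index on (image_id, member), so the re-insert collides. A minimal sketch of that failure mode using sqlite3 (a hypothetical simplified schema, not the actual glance migration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE image_members (
        image_id TEXT,
        member   TEXT,
        deleted  INTEGER DEFAULT 0,
        UNIQUE (image_id, member)  -- the index ignores the 'deleted' flag
    )""")
conn.execute("INSERT INTO image_members (image_id, member) VALUES ('img', 'm')")
# member-delete is a soft delete: the row stays behind with deleted=1
conn.execute("UPDATE image_members SET deleted = 1 WHERE image_id = 'img'")
try:
    # the second member-create hits the unique index left by the dead row
    conn.execute("INSERT INTO image_members (image_id, member) "
                 "VALUES ('img', 'm')")
    collided = False
except sqlite3.IntegrityError as exc:
    collided = True
    print(exc)
print(collided)  # True
```

A fix along these lines would either exclude soft-deleted rows from the constraint or hard-delete/undelete the existing row.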

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462315

Title:
  glance member-create failed with HTTP 500 error

[Yahoo-eng-team] [Bug 1462305] [NEW] multi-node test causes nova-compute to lockup

2015-06-05 Thread John Garbutt
Public bug reported:

It's not very clear what's going on here, but here is the symptom.

One of the nova-compute nodes appears to lock up:
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_296
It was just completing the termination of an instance:
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_153

This is also seen in the scheduler reporting the node as down:
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-sch.txt.gz#_2015-05-29_23_31_02_711

On further inspection it seems like the other nova compute node had just 
started a migration:
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/subnode-2/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_079


We have had issues in the past where oslo.locks can lead to deadlocks; it's not 
totally clear if that's happening here. All the periodic tasks happen in the 
same greenlet, so you can stop them from running by holding a lock in an RPC 
call that's being processed, etc. No idea if that's happening here though.
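The greenlet-starvation theory can be illustrated with asyncio standing in for eventlet (a rough analogy, not nova code): a handler that does blocking work without yielding to the hub prevents an already-scheduled periodic task from running.

```python
import asyncio
import time

async def periodic_task(ran):
    await asyncio.sleep(0.05)   # scheduled to fire while the handler runs
    ran.append(True)

async def rpc_handler():
    # Blocking work with no await: the event loop (hub) never regains
    # control, just like a greenthread holding it during an RPC call.
    time.sleep(0.2)

async def main():
    ran = []
    asyncio.ensure_future(periodic_task(ran))
    await rpc_handler()
    starved = not ran           # timer already expired, but the task never ran
    await asyncio.sleep(0.1)    # yield: now the periodic task gets its turn
    return starved, bool(ran)

starved, eventually_ran = asyncio.run(main())
print(starved, eventually_ran)  # True True
```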

** Affects: nova
 Importance: Undecided
 Assignee: Joe Gordon (jogo)
 Status: Incomplete


** Tags: testing

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
 Assignee: (unassigned) => Joe Gordon (jogo)

** Tags added: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462305

Title:
  multi-node test causes nova-compute to lockup

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  It's not very clear what's going on here, but here is the symptom.

  One of the nova-compute nodes appears to lock up:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_296
  It was just completing the termination of an instance:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_153

  This is also seen in the scheduler reporting the node as down:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-sch.txt.gz#_2015-05-29_23_31_02_711

  On further inspection it seems like the other nova compute node had just 
started a migration:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/subnode-2/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_079

  
  We have had issues in the past where oslo.locks can lead to deadlocks; it's 
not totally clear if that's happening here. All the periodic tasks happen in 
the same greenlet, so you can stop them from running by holding a lock in an 
RPC call that's being processed, etc. No idea if that's happening here though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462305/+subscriptions



[Yahoo-eng-team] [Bug 1462295] [NEW] xenapi needs pre live-migration plugin to check/fake pv driver version in xenstore

2015-06-05 Thread Sulochan Acharya
Public bug reported:

XenServer relies on the guest (domU) to report the presence of PV
drivers in the guest image back to dom0 through xenstore, for various
actions like live-migration.

It is possible for users to disable the xen agent that reports this
info, thereby causing live-migration failures. In cases where PV
drivers are running, it is safe to fake the presence of this
information in xenstore; XAPI simply reads it to ascertain the presence
of PV drivers.

Since it is common for users to disable the agent, we need a way to
ensure that if PV tools are present (we can check this through the
presence of PV devices like VIFs) we can carry out a live-migration. We
can easily do this by faking the driver version in xenstore for the
instance prior to starting the live-migration.

If this info is not present in xenstore, XAPI will simply fail the
migration attempt with a VM_MISSING_PV_DRIVERS error.

2014-04-24 20:47:36.938 24870 TRACE nova.virt.xenapi.vmops Failure:
['VM_MISSING_PV_DRIVERS', 'OpaqueRef:ef49f129-691d-
4e18-7a09-8dae8928aa7a']

** Affects: nova
 Importance: Undecided
 Assignee: Sulochan Acharya (sulochan-acharya)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sulochan Acharya (sulochan-acharya)

** Summary changed:

- xenapi needs pre live-migration plugin to handle pvtools
+ xenapi needs pre live-migration plugin to check/fake pv driver version in 
xenstore

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462295

Title:
  xenapi needs pre live-migration plugin to check/fake pv driver version
  in xenstore

Status in OpenStack Compute (Nova):
  New

Bug description:
  XenServer relies on the guest (domU) to report the presence of PV
  drivers in the guest image back to dom0 through xenstore, for various
  actions like live-migration.

  It is possible for users to disable the xen agent that reports this
  info, thereby causing live-migration failures. In cases where PV
  drivers are running, it is safe to fake the presence of this
  information in xenstore; XAPI simply reads it to ascertain the
  presence of PV drivers.

  Since it is common for users to disable the agent, we need a way to
  ensure that if PV tools are present (we can check this through the
  presence of PV devices like VIFs) we can carry out a live-migration.
  We can easily do this by faking the driver version in xenstore for the
  instance prior to starting the live-migration.

  If this info is not present in xenstore, XAPI will simply fail the
  migration attempt with a VM_MISSING_PV_DRIVERS error.

  2014-04-24 20:47:36.938 24870 TRACE nova.virt.xenapi.vmops Failure:
  ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:ef49f129-691d-
  4e18-7a09-8dae8928aa7a']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462295/+subscriptions



[Yahoo-eng-team] [Bug 1461931] Re: __init__ method should not return any value explicitly

2015-06-05 Thread Rajesh Tailor
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Rajesh Tailor (rajesh-tailor)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461931

Title:
  __init__ method should not return any value explicitly

Status in Cinder:
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  As per the Python documentation [1], the __init__ method should not
  return any value explicitly; doing so will raise a TypeError.

  [1] https://docs.python.org/2/reference/datamodel.html#object.__init__
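A two-line reproduction of the documented behavior (the exact wording of the exception message varies between Python versions):

```python
class BadInit:
    def __init__(self):
        return self  # anything but None here is an error

try:
    BadInit()
    raised = False
except TypeError as exc:
    raised = True
    print(exc)  # e.g. "__init__() should return None, not 'BadInit'"
print(raised)  # True
```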

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1461931/+subscriptions



[Yahoo-eng-team] [Bug 1461776] Re: Error messages are encoded to HTML entity

2015-06-05 Thread Ankit Agrawal
Per the fail-early principle, we should not send these values to the
server in the first place; they should be validated in glance-client.
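The kind of fail-early validation meant here could look roughly like this argparse sketch (a hypothetical helper, not the actual glance-client code):

```python
import argparse

def non_negative_int(value):
    """Reject negative values client-side instead of round-tripping to the API."""
    ivalue = int(value)
    if ivalue < 0:
        raise argparse.ArgumentTypeError(
            "Cannot be a negative value ('%s' specified)" % value)
    return ivalue

parser = argparse.ArgumentParser(prog='glance')
parser.add_argument('--min-disk', type=non_negative_int, default=0)
parser.add_argument('--min-ram', type=non_negative_int, default=0)
print(parser.parse_args(['--min-disk', '5']).min_disk)  # 5
```

With this in place, `--min-disk -1` fails locally with a readable message before any HTTP request is made.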

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
 Assignee: (unassigned) => Ankit Agrawal (ankitagrawal)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1461776

Title:
  Error messages are encoded to HTML entity

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Python client library for Glance:
  New

Bug description:
  If you pass min_disk or min_ram as -1 to the image create command,
  then it shows following error message on the command prompt.

  $ glance image-create --name test --container-format bare --disk-format raw 
--file  --min-disk -1
  400 Bad Request: Invalid value '-1' for parameter 'min_disk': Image min_disk 
must be >= 0 ('-1' specified). (HTTP 400)

  The above error message will be rendered correctly in the browser but
  it is not readable on the command prompt.

  This issue belongs to the v1 API; the v2 API returns a proper error message:
  400 Bad Request: Invalid value '-1' for parameter 'min_disk': Cannot be a 
negative value (HTTP 400)

  So we can make the error message consistent for both APIs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1461776/+subscriptions



[Yahoo-eng-team] [Bug 1462266] [NEW] spell mistake in the annotation of class Servicenova/service.py

2015-06-05 Thread heha
Public bug reported:

The spelling mistake is in the docstring of class Service in nova/service.py.
"""Service object for binaries running on hosts.
A service takes a manager and enables rpc by listening to queues based
on topic. It also periodically runs tasks on the manager and reports
it state to the database services table.
"""
In the docstring, the last "it" should be "its". It's a very small spelling
mistake.

** Affects: nova
 Importance: Undecided
 Assignee: heha (zhanghanqun)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => heha (zhanghanqun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462266

Title:
  spell mistake in the annotation of class Servicenova/service.py

Status in OpenStack Compute (Nova):
  New

Bug description:
  The spelling mistake is in the docstring of class Service in nova/service.py.
  """Service object for binaries running on hosts.
  A service takes a manager and enables rpc by listening to queues based
  on topic. It also periodically runs tasks on the manager and reports
  it state to the database services table.
  """
  In the docstring, the last "it" should be "its". It's a very small spelling
mistake.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462266/+subscriptions



[Yahoo-eng-team] [Bug 1182883] Re: List servers matching a regex fails with Neutron

2015-06-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/188511
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=98a9aec4a9ad6bcdd4cd5c0e7ad3b7ecb449e4aa
Submitter: Jenkins
Branch:master

commit 98a9aec4a9ad6bcdd4cd5c0e7ad3b7ecb449e4aa
Author: Alexander Gubanov 
Date:   Thu Jun 4 19:10:00 2015 +0300

Unskip test skipped because of closed bugs

Removed skip for test "list_servers_filtered_by_ip_regex"
which skipped because of closed bugs.

Change-Id: I205fcc785595abd8c22100ee01074d859ec75827
Closes-bug: #1182883


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1182883

Title:
  List servers matching a regex fails with Neutron

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  The test
  
tempest.api.compute.servers.test_list_server_filters:ListServerFiltersTestXML.test_list_servers_filtered_by_ip_regex
  tries to search for a server with only a fragment of its IP (GET
  http://XX/v2/$Tenant/servers?ip=10.0.), which issues the following
  Quantum request:
  http://XX/v2.0/ports.json?fixed_ips=ip_address%3D10.0. But it seems
  this regex search is not supported by Quantum, so the tempest test
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1182883/+subscriptions



[Yahoo-eng-team] [Bug 1462239] [NEW] libvirt: thrown NotImplementedError when running "nova root-password"

2015-06-05 Thread javeme
Public bug reported:

libvirt: thrown NotImplementedError when running "nova root-password"

As we all know, the command “nova root-password” is used to change the root
password of a server, but the libvirt driver does not support this function
at present.
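The 501 comes from nova's virt-driver capability pattern, roughly like this (a simplified sketch, not the actual nova classes):

```python
class ComputeDriver:
    """Base class: unimplemented capabilities raise NotImplementedError."""
    def set_admin_password(self, instance, new_pass):
        raise NotImplementedError()

class LibvirtDriver(ComputeDriver):
    pass  # no set_admin_password override

driver = LibvirtDriver()
try:
    driver.set_admin_password('instance-uuid', 's3cret')
    handled = None
except NotImplementedError:
    # nova-api maps this to HTTP 501 "Unable to set password on instance"
    handled = 501
print(handled)  # 501
```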

The following is the description of the error:

1. run "nova root-password" on controler.

2. returning 501 error. nova-api log:

2015-06-05 14:34:11.588 3993 INFO nova.api.openstack.wsgi 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] HTTP exception thrown: Unable 
to set password on instance
2015-06-05 14:34:11.589 3993 DEBUG nova.api.openstack.wsgi 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] Returning 501 to user: Unable 
to set password on instance _call_ 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:1217
2015-06-05 14:34:11.591 3993 INFO nova.osapi_compute.wsgi.server 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] 172.40.0.2 "POST /v
2/8e3d0869585d486daf23865ebc85449b/servers/12d57f28-8d00-47d3-876c-ebdba7145ddf/action
 HTTP/1.1" status: 501 len: 282 time: 3.0169549

3. thrown NotImplementedError. nova-compute log:

2015-06-05 14:34:10.654 13446 WARNING nova.compute.manager 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] [instance: 12d57f28-8d00-
47d3-876c-ebdba7145ddf] set_admin_password is not implemented by this driver or 
guest instance.
2015-06-05 14:34:11.532 13446 ERROR oslo.messaging.rpc.dispatcher 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 ] Exception during messa
ge handling: set_admin_password is not implemented by this driver or guest 
instance.
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispat
cher.py", line 133, in _dispatch_and_reply
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispat
cher.py", line 176, in _dispatch
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispat
cher.py", line 122, in _do_dispatch
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 403, in decorated_function
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line
88, in wrapped
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher payload)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/exc
utils.py", line 68, in _exit_
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line
71, in wrapped
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 284, in decorated_function
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/exc
utils.py", line 68, in _exit_
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 270, in decorated_function
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 337, in decorated_function
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 313, in decorated_function
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/exc
utils.py", line 68, in _exit_
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 300, in decorated_function
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py",
line 2919, in set_admin_password
2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.d

[Yahoo-eng-team] [Bug 1462241] [NEW] hide attach floating ip if already one fip is attached

2015-06-05 Thread Masco Kaliyamoorthy
Public bug reported:

An instance can have only one floating IP at a time, so we have to
hide/disable the attach floating IP option if one is already attached.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462241

Title:
  hide attach floating ip if already one fip is attached

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  An instance can have only one floating IP at a time, so we have to
  hide/disable the attach floating IP option if one is already attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462241/+subscriptions



[Yahoo-eng-team] [Bug 1462242] [NEW] developer docs still have mention of bin/keystone-all

2015-06-05 Thread Rushi Agrawal
Public bug reported:

It looks like there is no keystone-all file any more, but the developer
docs (installing.rst and developing.rst) still talk about it. The docs
should be updated. Not sure if I should include the documentation team
as well, since these are developer docs.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1462242

Title:
  developer docs still have mention of bin/keystone-all

Status in OpenStack Identity (Keystone):
  New

Bug description:
  It looks like there is no keystone-all file any more, but the
  developer docs (installing.rst and developing.rst) still talk about
  it. The docs should be updated. Not sure if I should include the
  documentation team as well, since these are developer docs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1462242/+subscriptions



[Yahoo-eng-team] [Bug 1462235] [NEW] File descriptors are left open by the filesystem store

2015-06-05 Thread Flavio Percoco
Public bug reported:

From this m-l thread: http://lists.openstack.org/pipermail/openstack-
dev/2015-June/065889.html

I believe what's happening is that the ChunkedFile code opens the file and
creates the iterator.  Nova then starts iterating through the file.

If nova (or any other user of glance) iterates all the way through the file
then the ChunkedFile code will hit the "finally" clause in __iter__() and
close the file descriptor.

If nova starts iterating through the file and then stops (due to running out
of room, for example), the ChunkedFile.__iter__() routine is left with an open
file descriptor.  At this point deleting the image will not actually free up
any space.

I'm not a glance guy so I could be wrong about the code.  The
externally-visible data are:
1) glance-api is holding an open file descriptor to a deleted image file
2) If I kill glance-api the disk space is freed up.
3) If I modify nova to always finish iterating through the file the problem
doesn't occur in the first place.

Chris
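The described behavior is easy to reproduce with a stripped-down ChunkedFile (a hypothetical simplification of the glance code, not the real class):

```python
import os
import tempfile

class ChunkedFile:
    """Opens a file, yields fixed-size chunks, and only closes the
    descriptor when iteration finishes (or the generator is closed)."""
    CHUNKSIZE = 4

    def __init__(self, path):
        self.fp = open(path, 'rb')

    def __iter__(self):
        try:
            while True:
                chunk = self.fp.read(self.CHUNKSIZE)
                if not chunk:
                    break
                yield chunk
        finally:
            self.fp.close()

fd, path = tempfile.mkstemp()
os.write(fd, b'0123456789abcdef')
os.close(fd)

cf = ChunkedFile(path)
it = iter(cf)
next(it)                       # consumer stops after one chunk
leaked = not cf.fp.closed      # True: the finally block never ran
it.close()                     # GeneratorExit -> finally -> fd released
print(leaked, cf.fp.closed)    # True True
os.remove(path)
```

As long as the abandoned iterator is referenced, the descriptor stays open and the deleted image's disk space cannot be reclaimed; explicitly closing the generator (or fully consuming it) runs the finally clause.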

** Affects: glance-store
 Importance: High
 Assignee: Flavio Percoco (flaper87)
 Status: New

** Changed in: glance
   Importance: Undecided => High

** Changed in: glance
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1462235

Title:
  File descriptors are left open by the filesystem store

Status in OpenStack Glance backend store-drivers library (glance_store):
  New

Bug description:
  From this m-l thread: http://lists.openstack.org/pipermail/openstack-
  dev/2015-June/065889.html

  I believe what's happening is that the ChunkedFile code opens the file and
  creates the iterator.  Nova then starts iterating through the file.

  If nova (or any other user of glance) iterates all the way through the file
  then the ChunkedFile code will hit the "finally" clause in __iter__() and
  close the file descriptor.

  If nova starts iterating through the file and then stops (due to running out
  of room, for example), the ChunkedFile.__iter__() routine is left with an open
  file descriptor.  At this point deleting the image will not actually free up
  any space.

  I'm not a glance guy so I could be wrong about the code.  The
  externally-visible data are:
  1) glance-api is holding an open file descriptor to a deleted image file
  2) If I kill glance-api the disk space is freed up.
  3) If I modify nova to always finish iterating through the file the problem
  doesn't occur in the first place.

  Chris

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1462235/+subscriptions



[Yahoo-eng-team] [Bug 1462218] [NEW] Disassociate Floating ip button should be disabled when no floating ip associated

2015-06-05 Thread Liyingjun
Public bug reported:

The Disassociate Floating IP button should be disabled when no floating
IP is associated.

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462218

Title:
  Disassociate Floating ip button should be disabled when no floating ip
  associated

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The Disassociate Floating IP button should be disabled when no
  floating IP is associated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462218/+subscriptions
