[Yahoo-eng-team] [Bug 1434103] Re: SQL schema downgrades are no longer supported

2015-03-25 Thread Jay Lau
** Also affects: magnum
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434103

Title:
  SQL schema downgrades are no longer supported

Status in OpenStack Identity (Keystone):
  New
Status in Magnum - Containers for OpenStack:
  New
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Data Processing (Sahara):
  In Progress

Bug description:
  Approved cross-project spec: https://review.openstack.org/152337

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1434103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420688] [NEW] keystone notification context is empty

2015-02-11 Thread Jay Lau
Public bug reported:

When keystone sends a notification, the context is set to {}. This
causes problems when a related component such as nova tries to
deserialize the context:

2015-01-28 13:01:40.391 20150 ERROR oslo.messaging.notify.dispatcher [-] Exception during message handling
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher Traceback (most recent call last):
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 87, in _dispatch_and_handle_error
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return self._dispatch(incoming.ctxt, incoming.message)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 103, in _dispatch
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     ctxt = self.serializer.deserialize_context(ctxt)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 117, in deserialize_context
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return nova.context.RequestContext.from_dict(context)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/context.py", line 179, in from_dict
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return cls(**values)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher TypeError: __init__() takes at least 3 arguments (1 given)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher

We should at least set project_id and user_id in the context.

https://github.com/openstack/keystone/blob/master/keystone/notifications.py#L241
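The fix suggested above amounts to populating the notification context with at least the user and project instead of sending {}, so that consumers can rebuild a request context. A minimal sketch of that idea (the helper name is hypothetical, not keystone's actual API):

```python
def build_notification_context(user_id, project_id, extra=None):
    """Build a minimal context dict so that a consumer such as nova can
    call RequestContext.from_dict() without hitting a TypeError.

    Illustrative only: keystone's real code lives in
    keystone/notifications.py and uses oslo serializers.
    """
    ctxt = {'user_id': user_id, 'project_id': project_id}
    if extra:
        ctxt.update(extra)  # any additional serializable fields
    return ctxt
```

With this, the dispatcher receives a dict that satisfies the two required arguments the traceback above complains about.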

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420688

Title:
  keystone notification context is empty

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When keystone sends a notification, the context is set to {}. This
  causes problems when a related component such as nova tries to
  deserialize the context:

  2015-01-28 13:01:40.391 20150 ERROR oslo.messaging.notify.dispatcher [-] Exception during message handling
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher Traceback (most recent call last):
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 87, in _dispatch_and_handle_error
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return self._dispatch(incoming.ctxt, incoming.message)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 103, in _dispatch
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     ctxt = self.serializer.deserialize_context(ctxt)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 117, in deserialize_context
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return nova.context.RequestContext.from_dict(context)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/context.py", line 179, in from_dict
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return cls(**values)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher TypeError: __init__() takes at least 3 arguments (1 given)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher

  We should at least set project_id and user_id in the context.

  
https://github.com/openstack/keystone/blob/master/keystone/notifications.py#L241

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420688/+subscriptions



[Yahoo-eng-team] [Bug 1259535] Re: Disable reason become "AUTO" when host-update

2014-11-26 Thread Jay Lau
OK, I think that this bug is invalid now. ;-)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259535

Title:
  Disable reason become "AUTO" when host-update

Status in OpenStack Compute (Nova):
  Invalid

Bug description:

  When I disable a service without giving a reason via "nova host-update
  --status disable", the service's disabled reason is always AUTO:

  jay@jay1:~/devstack$ nova service-list
  +------------------+------+----------+----------+-------+------------------------+-----------------+
  | Binary           | Host | Zone     | Status   | State | Updated_at             | Disabled Reason |
  +------------------+------+----------+----------+-------+------------------------+-----------------+
  | nova-conductor   | jay1 | internal | enabled  | up    | 2013-12-04T13:41:43.00 | None            |
  | nova-cert        | jay1 | internal | enabled  | up    | 2013-12-04T13:41:45.00 | None            |
  | nova-scheduler   | jay1 | internal | enabled  | up    | 2013-12-04T13:41:48.00 | None            |
  | nova-compute     | jay1 | nova     | disabled | up    | 2013-12-04T13:41:48.00 | AUTO:           |
  | nova-consoleauth | jay1 | internal | enabled  | up    | 2013-12-04T13:41:43.00 | None            |
  +------------------+------+----------+----------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259535/+subscriptions



[Yahoo-eng-team] [Bug 1365565] [NEW] failed to start up docker container

2014-09-04 Thread Jay Lau
Public bug reported:


1) Install OpenStack with devstack on CentOS6.5, I was using nova-network
2) Enable Docker
3) Start up a docker container, spawn failed

te /opt/stack/nova/nova/openstack/common/processutils.py:194
2014-09-04 17:50:21.845 ERROR novadocker.virt.docker.vifs [req-fc51ec3f-cd26-46a3-b055-826d13ebd3e2 admin admin] Failed to attach vif
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Traceback (most recent call last):
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs   File "/opt/stack/src/novadocker/novadocker/virt/docker/vifs.py", line 201, in attach
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs     container_id, run_as_root=True)
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs   File "/opt/stack/nova/nova/utils.py", line 163, in execute
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs     return processutils.execute(*cmd, **kwargs)
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs   File "/opt/stack/nova/nova/openstack/common/processutils.py", line 200, in execute
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs     cmd=' '.join(cmd))
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs ProcessExecutionError: Unexpected error while running command.
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Command: ip link set ns076af6ce-d5 netns 18a61fea04164e5973f4cf1e2fa5859f772636d8d2bbf22295ae448b9dba176b
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Exit code: 255
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Stdout: ''
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Stderr: 'Error: argument "18a61fea04164e5973f4cf1e2fa5859f772636d8d2bbf22295ae448b9dba176b" is wrong: Invalid "netns" value\n\n'
2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs
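For background on the Stderr above: `ip link set <dev> netns <name>` resolves the name as a named network namespace under /var/run/netns, so passing a raw Docker container ID fails unless that name is first bound to the container's network namespace. A rough sketch of the usual workaround, with illustrative helper and argument names (it assumes the container's PID is known; the real driver runs these via rootwrap):

```python
def netns_setup_commands(container_pid, netns_name, ifname):
    """Return the command lines that expose a container's network
    namespace to iproute2 and then move an interface into it.

    Illustrative sketch: the symlink makes `ip netns`-style lookups of
    `netns_name` resolve to the container's namespace.
    """
    return [
        ['mkdir', '-p', '/var/run/netns'],
        ['ln', '-sf', '/proc/%d/ns/net' % container_pid,
         '/var/run/netns/%s' % netns_name],
        ['ip', 'link', 'set', ifname, 'netns', netns_name],
    ]
```

Each returned list would be executed with run_as_root, since both the symlink and the interface move require root privileges.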

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365565

Title:
  failed to start up docker container

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  1) Install OpenStack with devstack on CentOS6.5, I was using nova-network
  2) Enable Docker
  3) Start up a docker container, spawn failed

  te /opt/stack/nova/nova/openstack/common/processutils.py:194
  2014-09-04 17:50:21.845 ERROR novadocker.virt.docker.vifs [req-fc51ec3f-cd26-46a3-b055-826d13ebd3e2 admin admin] Failed to attach vif
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Traceback (most recent call last):
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs   File "/opt/stack/src/novadocker/novadocker/virt/docker/vifs.py", line 201, in attach
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs     container_id, run_as_root=True)
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs   File "/opt/stack/nova/nova/utils.py", line 163, in execute
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs     return processutils.execute(*cmd, **kwargs)
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs   File "/opt/stack/nova/nova/openstack/common/processutils.py", line 200, in execute
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs     cmd=' '.join(cmd))
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs ProcessExecutionError: Unexpected error while running command.
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Command: ip link set ns076af6ce-d5 netns 18a61fea04164e5973f4cf1e2fa5859f772636d8d2bbf22295ae448b9dba176b
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Exit code: 255
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Stdout: ''
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs Stderr: 'Error: argument "18a61fea04164e5973f4cf1e2fa5859f772636d8d2bbf22295ae448b9dba176b" is wrong: Invalid "netns" value\n\n'
  2014-09-04 17:50:21.845 TRACE novadocker.virt.docker.vifs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365565/+subscriptions



[Yahoo-eng-team] [Bug 1360720] [NEW] nova network always report error when boot VM

2014-08-23 Thread Jay Lau
Public bug reported:

When booting a VM with nova network, it always reports "NovaException:
Failed to add interface: device br100 is a bridge device itself; can't
enslave a bridge device to a bridge device.", and this prevents the VM
from starting.

Reproduce steps:
1) Install OpenStack with Devstack
jay@jay001:~/src/devstack$ cat localrc 
HOST_IP=192.168.0.103
ADMIN_PASSWORD=nova
MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_PASSWORD=nova
SERVICE_TOKEN=tokentoken
FLAT_INTERFACE=br100
#VIRT_DRIVER=docker
#RECLONE=yes
 
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
 
#disable_service horizon
 
#OFFLINE=False
#OFFLINE=True
#ENABLED_SERVICES+=,heat,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
#IMAGE_URLS+=",http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F16-x86_64-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F16-i386-cfntools.qcow2";
#ENABLED_SERVICES+=ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api,ceilometer-alarm-notify,ceilometer-alarm-eval
#CEILOMETER_BACKEND=mysql
2) After install finished, boot a VM
jay@jay001:~/src/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 vm1
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0002                                                  |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | scheduling                                                     |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | F5NXNAVJMXNi                                                   |
| config_drive                         |                                                                |
| created                              | 2014-08-23T23:54:50Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | 48eec530-4279-423c-a134-0bbb19287d72                           |
| image                                | cirros-0.3.2-x86_64-uec (b8e84ec2-a63c-4f24-b9bb-6532f507668e) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | vm1                                                            |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | BUILD                                                          |
| tenant_id                            | 0694df50d3c34d128160d9a4a90db5ff                               |
| updated                              | 2014-08-23T23:54:50Z                                           |
| user_id                              | 60cfc7aa7cc04b54a6bcb2d778146b86                               |
+--------------------------------------+----------------------------------------------------------------+
jay@jay001:~/src/devstack$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+---

[Yahoo-eng-team] [Bug 1348447] Re: Enable metadata when create server groups

2014-07-28 Thread Jay Lau
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Changed in: python-novaclient
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348447

Title:
  Enable metadata when create server groups

Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Nova:
  New

Bug description:
  The instance_group object already supports instance group metadata,
  but the API extension does not.

  We should enable this by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1348447/+subscriptions



[Yahoo-eng-team] [Bug 1348447] [NEW] Enable metadata when create server groups

2014-07-24 Thread Jay Lau
Public bug reported:

The instance_group object already supports instance group metadata, but
the API extension does not.

We should enable this by default.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348447

Title:
  Enable metadata when create server groups

Status in OpenStack Compute (Nova):
  New

Bug description:
  The instance_group object already supports instance group metadata,
  but the API extension does not.

  We should enable this by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348447/+subscriptions



[Yahoo-eng-team] [Bug 1346816] [NEW] live migration failed when using shared instance path with QCOW2

2014-07-22 Thread Jay Lau
Public bug reported:


Currently, live migration fails when using a shared instance path with
QCOW2 because 'instance_dir' is not defined.

File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
    result = getattr(endpoint, method)(ctxt, **new_args)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 410, in decorated_function
    return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
    payload)

  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
    return f(self, context, *args, **kw)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 323, in decorated_function
    kwargs['instance'], e, sys.exc_info())

  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 311, in decorated_function
    return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4661, in pre_live_migration
    migrate_data)

  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4739, in pre_live_migration
    instance_dir, disk_info)

UnboundLocalError: local variable 'instance_dir' referenced before assignment
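The UnboundLocalError is the classic pattern of a local variable assigned only on some branches but read unconditionally afterwards. A minimal illustration of the pattern and its fix, with hypothetical names rather than nova's real code:

```python
def resolve_instance_dir(is_shared_instance_path, shared_dir):
    """Illustrative sketch of the bug behind this report: 'instance_dir'
    must be given a value on every code path before it is used,
    otherwise Python raises UnboundLocalError on the untaken branch.
    """
    instance_dir = None  # the fix: initialize before branching
    if not is_shared_instance_path:
        # only the non-shared-path branch originally assigned the variable
        instance_dir = shared_dir
    # later code can now safely test `instance_dir is not None`
    return instance_dir
```

Without the up-front initialization, calling the function with a shared instance path reproduces exactly the error in the traceback above.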

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346816

Title:
  live migration failed when using shared instance path with QCOW2

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  Currently, live migration fails when using a shared instance path
  with QCOW2 because 'instance_dir' is not defined.

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
      result = getattr(endpoint, method)(ctxt, **new_args)

    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 410, in decorated_function
      return function(self, context, *args, **kwargs)

    File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
      payload)

    File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)

    File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
      return f(self, context, *args, **kw)

    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 323, in decorated_function
      kwargs['instance'], e, sys.exc_info())

    File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)

    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 311, in decorated_function
      return function(self, context, *args, **kwargs)

    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4661, in pre_live_migration
      migrate_data)

    File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4739, in pre_live_migration
      instance_dir, disk_info)

  UnboundLocalError: local variable 'instance_dir' referenced before assignment

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346816/+subscriptions



[Yahoo-eng-team] [Bug 1343946] [NEW] nova should not disable retry when there are multiple forced hosts/nodes

2014-07-18 Thread Jay Lau
Public bug reported:


Currently, when force_nodes/force_hosts are present in a scheduler
request, retry is disabled.

This is not right: there may be more than one force_node/force_host,
and we should not disable retry in that case.
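The proposed behavior can be stated as a single predicate: only skip retries when scheduling is pinned to exactly one host or node. A sketch of that check (hypothetical helper, not nova's actual scheduler code):

```python
def should_disable_retry(force_hosts, force_nodes):
    """Retries are pointless only when the request pins scheduling to a
    single host or node; with several forced candidates, a failed boot
    can still be retried on the remaining ones, so keep retry enabled.

    Illustrative sketch of the fix this bug asks for.
    """
    return len(force_hosts) == 1 or len(force_nodes) == 1
```

With more than one forced host the predicate is false, so the scheduler would keep populating retry information as usual.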

** Affects: nova
 Importance: Medium
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343946

Title:
  nova should not disable retry when there are multiple forced
  hosts/nodes

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  Currently, when force_nodes/force_hosts are present in a scheduler
  request, retry is disabled.

  This is not right: there may be more than one force_node/force_host,
  and we should not disable retry in that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343946/+subscriptions



[Yahoo-eng-team] [Bug 1343200] [NEW] Add notifications when operating server groups

2014-07-17 Thread Jay Lau
Public bug reported:

Currently, there are no notifications when operating on server groups
(create/delete/update, etc.). As a result, third parties cannot learn
the result of an operation in time.

We should add notifications for server group operations.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343200

Title:
  Add notifications when operating server groups

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, there are no notifications when operating on server groups
  (create/delete/update, etc.). As a result, third parties cannot learn
  the result of an operation in time.

  We should add notifications for server group operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343200/+subscriptions



[Yahoo-eng-team] [Bug 1269655] Re: Make prune compute_node_stats configurable

2014-06-10 Thread Jay Lau
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269655

Title:
  Make prune compute_node_stats configurable

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
  In compute/manager.py there is a periodic task named
  update_available_resource(); it periodically updates the resources of
  each compute node.

   @periodic_task.periodic_task
   def update_available_resource(self, context):
       """See driver.get_available_resource()

       Periodic process that keeps the compute host's understanding of
       resource availability and usage in sync with the underlying
       hypervisor.

       :param context: security context
       """
       new_resource_tracker_dict = {}
       nodenames = set(self.driver.get_available_nodes())
       for nodename in nodenames:
           rt = self._get_resource_tracker(nodename)
           rt.update_available_resource(context)  # << update here
           new_resource_tracker_dict[nodename] = rt

  In resource_tracker.py,
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384

  self._update(context, resources, prune_stats=True)

  It always sets prune_stats to True, which causes problems: if someone
  adds metrics to the compute_node_stats table and those metrics do not
  change frequently, the periodic task will prune them.

  It would be better to add a configuration parameter in nova.conf to
  make prune_stats configurable.
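The request boils down to making pruning opt-in when merging new stats into the stored ones. A rough sketch of that merge semantics with illustrative names (the real change would register an option in nova.conf via oslo.config):

```python
def update_stats(existing_stats, new_stats, prune_stats=False):
    """Merge new metrics into the existing ones.

    Illustrative sketch: keys absent from new_stats are only dropped
    when prune_stats is explicitly enabled, so infrequently-updated
    custom metrics survive the periodic task by default.
    """
    if prune_stats:
        return dict(new_stats)  # current behavior: prune everything else
    merged = dict(existing_stats)
    merged.update(new_stats)    # proposed default: keep stale keys
    return merged
```

The operator would then flip the flag only on deployments where stale stats rows are known to be a problem.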

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269655/+subscriptions



[Yahoo-eng-team] [Bug 1312087] [NEW] "nova delete" has no output if the VM was deleted

2014-04-24 Thread Jay Lau
Public bug reported:

[root@rhel8249 ~]# nova   aggregate-create  aa
+-+--+---+---+--+
| Id  | Name | Availability Zone | Hosts | Metadata |
+-+--+---+---+--+
| 109 | aa   | None  | []| {}   |
+-+--+---+---+--+
[root@rhel8249 ~]# nova aggregate-delete aa
Aggregate 109 has been successfully deleted. << output confirming the delete succeeded

From the review comments of
https://review.openstack.org/#/c/85577/1/novaclient/v1_1/shell.py , we
reached an agreement to file a bug so that "nova delete" prints a
message telling operators which instances were deleted successfully and
which failed.
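The requested behavior can be sketched as a per-instance report, mirroring the confirmation that aggregate-delete already prints; the helper name and message wording below are illustrative, not novaclient's actual output:

```python
def delete_servers(servers, delete_fn):
    """Delete each named server via delete_fn and collect a per-server
    outcome message, so the operator sees which deletes were accepted
    and which failed instead of getting no output at all.

    Illustrative sketch of the behavior this bug requests.
    """
    messages = []
    for name in servers:
        try:
            delete_fn(name)
            messages.append(
                "Request to delete server %s has been accepted." % name)
        except Exception as exc:
            messages.append("Delete for server %s failed: %s" % (name, exc))
    return messages
```

A shell wired this way keeps going after a failed delete and still reports every result at the end.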

** Affects: python-novaclient
 Importance: Low
 Status: New

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312087

Title:
  "nova delete" has no output if the VM was deleted

Status in Python client library for Nova:
  New

Bug description:
  [root@rhel8249 ~]# nova   aggregate-create  aa
  +-+--+---+---+--+
  | Id  | Name | Availability Zone | Hosts | Metadata |
  +-+--+---+---+--+
  | 109 | aa   | None  | []| {}   |
  +-+--+---+---+--+
  [root@rhel8249 ~]# nova aggregate-delete aa
  Aggregate 109 has been successfully deleted. << output confirming the delete succeeded

  From the review comments of
  https://review.openstack.org/#/c/85577/1/novaclient/v1_1/shell.py , we
  reached an agreement to file a bug so that "nova delete" prints a
  message telling operators which instances were deleted successfully
  and which failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1312087/+subscriptions



[Yahoo-eng-team] [Bug 1310874] [NEW] "nova host-update --status disabled host" was not implemented for kvm

2014-04-21 Thread Jay Lau
Public bug reported:

liugya@liugya-ubuntu:~$ nova host-update --status disabled liugya-ubuntu
ERROR (BadRequest): Invalid status: 'disabled' (HTTP 400) (Request-ID: req-e37c5c23-e6db-44fd-a814-cda41b967297)

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310874

Title:
  "nova host-update --status disabled host" was not implemented for kvm

Status in OpenStack Compute (Nova):
  New

Bug description:
  liugya@liugya-ubuntu:~$ nova host-update --status disabled liugya-ubuntu
  ERROR (BadRequest): Invalid status: 'disabled' (HTTP 400) (Request-ID: req-e37c5c23-e6db-44fd-a814-cda41b967297)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310874/+subscriptions



[Yahoo-eng-team] [Bug 1310529] [NEW] cold migration should support pause and suspend

2014-04-21 Thread Jay Lau
Public bug reported:

At the moment, cold migration/resize only supports VMs in the ACTIVE
and STOPPED states; we should also support VMs that are SUSPENDED or
PAUSED.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310529

Title:
  cold migration should support pause and suspend

Status in OpenStack Compute (Nova):
  New

Bug description:
  At the moment, cold migration/resize only supports VMs in the ACTIVE
  and STOPPED states; we should also support VMs that are SUSPENDED or
  PAUSED.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310529/+subscriptions



[Yahoo-eng-team] [Bug 1302334] [NEW] live migration failed

2014-04-03 Thread Jay Lau
Public bug reported:

When I try to live migrate a VM, I get the following exception on the
source host.

2014-04-04 12:50:20.330 8862 INFO nova.compute.manager [-] [instance: 
160fb719-7f84-466a-a19d-9284dd6d56fa] NV-FA2EA85 _post_live_migration() is 
started..
2014-04-04 12:50:20.371 8862 ERROR nova.openstack.common.loopingcall [-] in 
fixed duration looping call
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall Traceback 
(most recent call last):
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py", line 
78, in _inner
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4495, in 
wait_for_live_migration
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
migrate_data)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
payload)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall return 
f(self, context, *args, **kw)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 315, in 
decorated_function
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall e, 
sys.exc_info())
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 302, in 
decorated_function
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall return 
function(self, context, *args, **kwargs)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4600, in 
_post_live_migration
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall ctxt, 
instance.uuid)
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
AttributeError: 'dict' object has no attribute 'uuid'
2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall
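The trace ends in `AttributeError: 'dict' object has no attribute 'uuid'`: `_post_live_migration` received a plain dict where the code expected an Instance object with attribute access. A minimal illustration of the failure mode and a defensive access pattern (names here are illustrative, not nova's actual fix):

```python
# A plain dict reproduces the error in the trace: dicts expose keys,
# not attributes, so `instance.uuid` raises AttributeError.
instance = {'uuid': '160fb719-7f84-466a-a19d-9284dd6d56fa'}

def get_uuid(instance):
    # Accept either a dict-style or an object-style instance record.
    if isinstance(instance, dict):
        return instance['uuid']
    return instance.uuid

print(get_uuid(instance))  # -> 160fb719-7f84-466a-a19d-9284dd6d56fa
```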

** Affects: nova
 Importance: High
     Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302334

Title:
  live migration failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I try to live migrate a VM, I get the following exception on the
  source host.

  2014-04-04 12:50:20.330 8862 INFO nova.compute.manager [-] [instance: 
160fb719-7f84-466a-a19d-9284dd6d56fa] NV-FA2EA85 _post_live_migration() is 
started..
  2014-04-04 12:50:20.371 8862 ERROR nova.openstack.common.loopingcall [-] in 
fixed duration looping call
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
Traceback (most recent call last):
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py", line 
78, in _inner
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4495, in 
wait_for_live_migration
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall 
migrate_data)
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
  2014-04-04 12:50:20.371 8862 TRACE nova.openstack.common.loopingcall   

[Yahoo-eng-team] [Bug 1302238] [NEW] Enable ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter by default

2014-04-03 Thread Jay Lau
Public bug reported:

After https://blueprints.launchpad.net/nova/+spec/instance-group-api-
extension, nova supports creating instance groups with an affinity or
anti-affinity policy and booting VM instances in an affinity/anti-
affinity group.

If ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter are not
enabled, instance groups will not be able to leverage affinity/anti-
affinity.

Take the following case:
1) Create a group with the affinity policy
2) Create two VMs in this group
3) Result: the two VMs were not created on the same host.
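A sketch of the nova.conf change that enables the two filters (the filter list shown is illustrative; keep whatever other filters your deployment already uses):

```ini
[DEFAULT]
# Append the server-group filters to the scheduler's default filter list.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter
```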

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302238

Title:
  Enable ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter by
  default

Status in OpenStack Compute (Nova):
  New

Bug description:
  After https://blueprints.launchpad.net/nova/+spec/instance-group-api-
  extension, nova has the feature of creating instance groups with
  affinity or anti-affinity policy and creating vm instance with
  affinity/anti-affinity  group.

  If did not enable ServerGroupAffinityFilter and
  ServerGroupAntiAffinityFilter, then the instance group will not able
  to leverage affinity/anti-affinity.

  Take the following case:
  1) Create a group with affinity
  2) Create two vms with this group
  3) The result is that those two vms was not created on the same host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301382] [NEW] "nova server-group-list " show no members even if I create one VM in one group

2014-04-02 Thread Jay Lau
 
Metadata |
+--+--+---+-+--+
| 8a974011-c706-4317-8455-bd6b52ad8584 | rg1  | [u'affinity'] | []  | {}
   | << Still no member
+--+--+---+-+--+

** Affects: nova
 Importance: Medium
     Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301382

Title:
  "nova  server-group-list " show no members even if I create one VM in
  one group

Status in OpenStack Compute (Nova):
  New

Bug description:
  gyliu@devstack1:~$ nova server-group-create rg1 --policy affinity
  
+--+--+---+-+--+
  | Id   | Name | Policies  | Members | 
Metadata |
  
+--+--+---+-+--+
  | 8a974011-c706-4317-8455-bd6b52ad8584 | rg1  | [u'affinity'] | []  | {}  
 |
  
+--+--+---+-+--+

  gyliu@devstack1:~$ nova boot --image cirros-0.3.1-x86_64-uec --flavor 1 
--hint group=8a974011-c706-4317-8455-bd6b52ad8584 vm1
  
+--++
  | Property | Value
  |
  
+--++
  | OS-DCF:diskConfig| MANUAL   
  |
  | OS-EXT-AZ:availability_zone  | nova 
  |
  | OS-EXT-SRV-ATTR:host | -
  |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -
  |
  | OS-EXT-SRV-ATTR:instance_name| instance-0002
  |
  | OS-EXT-STS:power_state   | 0
  |
  | OS-EXT-STS:task_state| scheduling   
  |
  | OS-EXT-STS:vm_state  | building 
  |
  | OS-SRV-USG:launched_at   | -
  |
  | OS-SRV-USG:terminated_at | -
  |
  | accessIPv4   |  
  |
  | accessIPv6   |  
  |
  | adminPass| EoGB78c9YHZa 
  |
  | config_drive |  
  |
  | created  | 2014-04-02T12:10:30Z 
  |
  | flavor   | m1.tiny (1)  
  |
  | hostId   |  
  |
  | id   | 19851e3d-9314-4c28-8612-cadae0cbcbf1 
  |
  | image| cirros-0.3.1-x86_64-uec 
(61f1a44e-62a9-44ab-8843-3c050be82502) |
  | key_name | -
  |
  | metadata | {}   
  |
  | name | vm1  
  |
  | os-extended-volumes:volumes_attached | []   
  |
  | progress | 0
  |
  | security_groups  | default  
  |
  | status   | BUILD
  |
  | tenant_id| 2168c6190c7d430d8b9497c610de65f8 
  |
  | updated  | 2014-04-02T12:10:30Z 
  |
  | user_id

[Yahoo-eng-team] [Bug 1258767] Re: Enable VMWare ESXDriver support set_host_enabled

2014-03-19 Thread Jay Lau
The ESX driver is now deprecated.

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258767

Title:
  Enable VMWare ESXDriver support set_host_enabled

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The set_host_enabled API is not supported by the VMware ESXDriver; we
  should add support for this feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258767/+subscriptions



[Yahoo-eng-team] [Bug 1291730] Re: hyper-V: resize failed

2014-03-12 Thread Jay Lau
My bad; there is no need to resize the VHD if the size has not changed.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291730

Title:
  hyper-V: resize failed

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I resize a Hyper-V VM to a different host, the nova-compute
  service on the target host exits.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291730/+subscriptions



[Yahoo-eng-team] [Bug 1291730] [NEW] hyper-V: resize failed

2014-03-12 Thread Jay Lau
Public bug reported:

When I resize a Hyper-V VM to a different host, the nova-compute service
on the target host exits.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291730

Title:
  hyper-V: resize failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I resize a VM with hyperv-V to different host, the nova compute
  on target host will exit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291730/+subscriptions



[Yahoo-eng-team] [Bug 1285547] [NEW] Add "nova cell-list" to python-novaclient

2014-02-27 Thread Jay Lau
Public bug reported:


Currently there is only "nova-manage cell list" but no "nova cell-list"; it 
would be better to add "nova cell-list" as a nova subcommand, since most 
users use the "nova" command.

** Affects: python-novaclient
     Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
     Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285547

Title:
  Add "nova cell-list" to python-novaclient

Status in Python client library for Nova:
  New

Bug description:
  
  Currently there is only "nova-manage cell list" but no "nova cell-list"; it 
would be better to add "nova cell-list" as a nova subcommand, since most 
users use the "nova" command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1285547/+subscriptions



[Yahoo-eng-team] [Bug 1280705] [NEW] VCDriver: nova compute failed to start up

2014-02-15 Thread Jay Lau
Public bug reported:


 DC
|
|Cluster1
|  |
|  |9.111.249.56
|
|Cluster2
   |
   |9.111.249.49

Preconditions:
1) Using VCDriver
2) nova compute 1 manage Cluster1
3) nova compute 2 manage Cluster2

Test case:
1) Create one VM located on Cluster2
2) Stop nova compute for Cluster1 and then restart this nova compute.
3) The nova compute which manages Cluster1 failed to start up

2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
117, in wait
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
49, in wait
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 168, in wait
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 187, in switch
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 194, in main
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 480, 
in run_service
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
service.start()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/service.py", line 172, in start
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 811, in 
init_host
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
self._destroy_evacuated_instances(context)
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 570, in 
_destroy_evacuated_instances
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup bdi, 
destroy_disks)
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 635, in 
destroy
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
_vmops = self._get_vmops_for_compute_node(instance['node'])
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 522, in 
_get_vmops_for_compute_node
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup 
resource = self._get_resource_for_node(nodename)
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 514, in 
_get_resource_for_node
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup raise 
exception.NotFound(msg)
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup NotFound: 
NV-3AB798A The resource domain-c16(Cluster2) does not exist
2014-02-16 05:43:06.086 14496 TRACE nova.openstack.common.threadgroup

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280705

Title:
  VCDriver: nova compute failed to start up

Status in OpenStack Compute (Nova):
  New

Bug description:

   DC
  |
  |Cluster1
  |  |
  |  |9.111.249.56
  |
  |Cluster2
 |
 |9.111.249.49

  Preconditions:
  1) Using VCDriver
  2) nova compute 1 manage Cluster1
  3) nova compute 2 manage Cluster2

  Test case:
  1) Create one VM located on Cluster2
  2) Stop nova compute for Cluster1 and then restart this nova compute.
  3) The nova compute which 

[Yahoo-eng-team] [Bug 1280600] [NEW] VCDriver: cold migration/resize failed

2014-02-15 Thread Jay Lau
Public bug reported:

 DC
|
|Cluster1
|  |
|  |9.111.249.56
|
|Cluster2
   |
   |9.111.249.49

Configure resize to different host

Case 1) 
1) Create a VM on Cluster2
2) Resize the VM
3) After the resize finishes, the VM goes to verify_resize.
4) Confirm the resize; nova compute reports an error on the source host.

69699f] [instance: e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] NV-D132FDD Setting 
instance vm_state to ERROR
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] Traceback (most recent call last):
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 5131, in 
_error_out_instance_on_exception
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] yield
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2836, in 
_confirm_resize
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] network_info)
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 420, in 
confirm_migration
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] _vmops = 
self._get_vmops_for_compute_node(instance['node'])
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 523, in 
_get_vmops_for_compute_node
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] resource = 
self._get_resource_for_node(nodename)
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 515, in 
_get_resource_for_node
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] raise exception.NotFound(msg)
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] NotFound: NV-3AB798A The resource 
domain-c12(Cluster1) does not exist
2014-02-15 17:21:23.191 4707 TRACE nova.compute.manager [instance: 
e3614b2d-ff1c-4c6e-bee6-87e7c02a4932] 
2014-02-15 17:21:23.428 4707 ERROR nova.openstack.common.rpc.amqp 
[req-665c6711-5353-46d7-a7b7-4d3330f41787 7566bb5312984271b2612533c04a2015 
438f975992384bf5b7cf018d9569699f] Exception during message handling
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp **args)
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp payload)
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 299, in 
decorated_function
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 276, in 
decorated_function
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
2014-02-15 17:21:23.428 4707 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excuti

[Yahoo-eng-team] [Bug 1280593] [NEW] VCDriver: Live migration failed.

2014-02-15 Thread Jay Lau
Public bug reported:

 DC1
|
|Cluster1
|  |
|  |9.111.249.56
|
|Cluster2
   |
   |9.111.249.49

Case 1)
One nova compute manage two clusters.

nova.conf:
cluster_name=Cluster2
cluster_name=Cluster1

live migration fails because the target host and the source host are
considered to be the same host.

Case 2)
nova compute 1 manage Cluster1
nova.conf:
cluster_name=Cluster1

nova compute 2 manage Cluster2
nova.conf:
cluster_name=Cluster2

live migration fails with:
2014-02-12 11:47:38.557 32416 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 598, in 
check_can_live_migrate_destination
2014-02-12 11:47:38.557 32416 TRACE nova.openstack.common.rpc.amqp raise 
NotImplementedError()
2014-02-12 11:47:38.557 32416 TRACE nova.openstack.common.rpc.amqp 
NotImplementedError
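The `NotImplementedError` in case 2 comes from the base driver class: `check_can_live_migrate_destination` raises it unless a driver overrides the method, and the VCDriver of that era did not. A stripped-down sketch of that dispatch (illustrative, not nova's actual class hierarchy):

```python
class ComputeDriver:
    """Base class: live-migration checks must be overridden by drivers."""
    def check_can_live_migrate_destination(self, ctxt, instance):
        raise NotImplementedError()

class VCDriver(ComputeDriver):
    # No override here, so the base method runs and the RPC handler logs
    # the NotImplementedError traceback shown above.
    pass

try:
    VCDriver().check_can_live_migrate_destination(None, None)
except NotImplementedError:
    print('live migration check not implemented by this driver')
```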

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280593

Title:
  VCDriver: Live migration failed.

Status in OpenStack Compute (Nova):
  New

Bug description:
   DC1
  |
  |Cluster1
  |  |
  |  |9.111.249.56
  |
  |Cluster2
 |
 |9.111.249.49

  Case 1)
  One nova compute manage two clusters.

  nova.conf:
  cluster_name=Cluster2
  cluster_name=Cluster1

  live migration fails because the target host and the source host are
  considered to be the same host.

  Case 2)
  nova compute 1 manage Cluster1
  nova.conf:
  cluster_name=Cluster1

  nova compute 2 manage Cluster2
  nova.conf:
  cluster_name=Cluster2

  live migration fails with:
  2014-02-12 11:47:38.557 32416 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 598, in 
check_can_live_migrate_destination
  2014-02-12 11:47:38.557 32416 TRACE nova.openstack.common.rpc.amqp raise 
NotImplementedError()
  2014-02-12 11:47:38.557 32416 TRACE nova.openstack.common.rpc.amqp 
NotImplementedError

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1280593/+subscriptions



[Yahoo-eng-team] [Bug 1268622] Re: enable cold migration with target host

2014-01-28 Thread Jay Lau
A blueprint https://blueprints.launchpad.net/nova/+spec/code-migration-with-
target was filed.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268622

Title:
  enable cold migration with target host

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Cold migration currently does not support specifying a target host for
  a VM instance; we should enable this feature in nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268622/+subscriptions



[Yahoo-eng-team] [Bug 1269684] [NEW] payload is empty when remove metadata with event updatemetadata.end

2014-01-15 Thread Jay Lau
Public bug reported:

liugya@liugya-ubuntu:~/src/nova-ce$ nova  aggregate-set-metadata 1 a=a1
Aggregate 1 has been successfully updated.
++--+---+---+--+
| Id | Name | Availability Zone | Hosts | Metadata |
++--+---+---+--+
| 1  | agg1 | None  |   | 'a=a1'   |
++--+---+---+--+
liugya@liugya-ubuntu:~/src/nova-ce$ nova  aggregate-set-metadata 1 a

(Pdb) n
> /opt/stack/nova/nova/objects/aggregate.py(100)update_metadata()
-> compute_utils.notify_about_aggregate_update(context,
(Pdb) n
> /opt/stack/nova/nova/objects/aggregate.py(101)update_metadata()
-> "updatemetadata.start",
(Pdb) n
> /opt/stack/nova/nova/objects/aggregate.py(102)update_metadata()
-> payload)
(Pdb) payload
{'meta_data': {u'a': None}, 'aggregate_id': 1}
(Pdb) n

> /opt/stack/nova/nova/objects/aggregate.py(118)update_metadata()
-> payload['meta_data'] = to_add
(Pdb) 
> /opt/stack/nova/nova/objects/aggregate.py(119)update_metadata()
-> compute_utils.notify_about_aggregate_update(context,
(Pdb) 
> /opt/stack/nova/nova/objects/aggregate.py(120)update_metadata()
-> "updatemetadata.end",
(Pdb) 
> /opt/stack/nova/nova/objects/aggregate.py(121)update_metadata()
-> payload)
(Pdb) payload
{'meta_data': {}, 'aggregate_id': 1} <<< meta_data is empty, so a 3rd 
party cannot tell which metadata was removed when it receives the 
updatemetadata.end notification
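One way to keep removed keys visible in the end payload, so consumers of updatemetadata.end can see deletions, is sketched below (the `removed_keys` field and function name are hypothetical, not what nova emits):

```python
def build_end_payload(aggregate_id, updates):
    """Split an update dict into additions (value set) and removals
    (value None), and report both in the notification payload."""
    to_add = {k: v for k, v in updates.items() if v is not None}
    removed = sorted(k for k, v in updates.items() if v is None)
    return {'aggregate_id': aggregate_id,
            'meta_data': to_add,
            'removed_keys': removed}   # hypothetical extra field

print(build_end_payload(1, {'a': None}))
# -> {'aggregate_id': 1, 'meta_data': {}, 'removed_keys': ['a']}
```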

** Affects: nova
     Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269684

Title:
  payload is empty when remove metadata with event updatemetadata.end

Status in OpenStack Compute (Nova):
  New

Bug description:
  liugya@liugya-ubuntu:~/src/nova-ce$ nova  aggregate-set-metadata 1 a=a1
  Aggregate 1 has been successfully updated.
  ++--+---+---+--+
  | Id | Name | Availability Zone | Hosts | Metadata |
  ++--+---+---+--+
  | 1  | agg1 | None  |   | 'a=a1'   |
  ++--+---+---+--+
  liugya@liugya-ubuntu:~/src/nova-ce$ nova  aggregate-set-metadata 1 a

  (Pdb) n
  > /opt/stack/nova/nova/objects/aggregate.py(100)update_metadata()
  -> compute_utils.notify_about_aggregate_update(context,
  (Pdb) n
  > /opt/stack/nova/nova/objects/aggregate.py(101)update_metadata()
  -> "updatemetadata.start",
  (Pdb) n
  > /opt/stack/nova/nova/objects/aggregate.py(102)update_metadata()
  -> payload)
  (Pdb) payload
  {'meta_data': {u'a': None}, 'aggregate_id': 1}
  (Pdb) n

  > /opt/stack/nova/nova/objects/aggregate.py(118)update_metadata()
  -> payload['meta_data'] = to_add
  (Pdb) 
  > /opt/stack/nova/nova/objects/aggregate.py(119)update_metadata()
  -> compute_utils.notify_about_aggregate_update(context,
  (Pdb) 
  > /opt/stack/nova/nova/objects/aggregate.py(120)update_metadata()
  -> "updatemetadata.end",
  (Pdb) 
  > /opt/stack/nova/nova/objects/aggregate.py(121)update_metadata()
  -> payload)
  (Pdb) payload
  {'meta_data': {}, 'aggregate_id': 1} <<< meta_data is empty, so a 3rd 
party cannot tell which metadata was removed when it receives the 
updatemetadata.end notification

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269684/+subscriptions



[Yahoo-eng-team] [Bug 1269655] [NEW] Make prune compute_node_stats configurable

2014-01-15 Thread Jay Lau
Public bug reported:


In compute/manager.py, there is a periodic task named 
update_available_resource(); it periodically updates the resources for each 
compute node.

 @periodic_task.periodic_task
def update_available_resource(self, context):
"""See driver.get_available_resource()

Periodic process that keeps that the compute host's understanding of
resource availability and usage in sync with the underlying hypervisor.

:param context: security context
"""
new_resource_tracker_dict = {}
nodenames = set(self.driver.get_available_nodes())
for nodename in nodenames:
rt = self._get_resource_tracker(nodename)
rt.update_available_resource(context) << Update here
new_resource_tracker_dict[nodename] = rt

In resource_tracker.py,
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384

self._update(context, resources, prune_stats=True)

It always sets prune_stats to True, which causes problems: if someone adds
metrics to the compute_node_stats table and those metrics do not change
frequently, the periodic task will prune them.

It would be better to add a configuration parameter in nova.conf to make
prune_stats configurable.
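A minimal sketch of the merge-with-optional-prune behaviour described above (function and flag names are illustrative; nova's real code path goes through the resource tracker and the DB layer):

```python
def merge_stats(existing, reported, prune_stats=True):
    """Merge hypervisor-reported stats into the stored stats.

    With prune_stats=True (the hard-coded behaviour described above), any
    stored key the hypervisor did not report this cycle is dropped --
    including externally injected metrics. With False, such keys survive.
    """
    merged = dict(existing)
    merged.update(reported)
    if prune_stats:
        merged = {k: v for k, v in merged.items() if k in reported}
    return merged

stored = {'vcpus_used': 2, 'my_external_metric': 7}
print(merge_stats(stored, {'vcpus_used': 3}))
# -> {'vcpus_used': 3}: the external metric was pruned
```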

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269655

Title:
  Make prune compute_node_stats configurable

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  In compute/manager.py, there is a periodic task named 
update_available_resource(); it periodically updates the resources for each 
compute node.

   @periodic_task.periodic_task
  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context) << Update here
  new_resource_tracker_dict[nodename] = rt

  In resource_tracker.py,
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384

  self._update(context, resources, prune_stats=True)

  It always sets prune_stats to True, which causes problems: if someone
  adds metrics to the compute_node_stats table and those metrics do not
  change frequently, the periodic task will prune them.

  It would be better to add a configuration parameter in nova.conf to
  make prune_stats configurable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269655/+subscriptions



[Yahoo-eng-team] [Bug 1268622] [NEW] enable cold migration with target host

2014-01-13 Thread Jay Lau
Public bug reported:

Cold migration currently does not support specifying a target host for a
VM instance; we should enable this feature in nova.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268622

Title:
  enable cold migration with target host

Status in OpenStack Compute (Nova):
  New

Bug description:
  Cold migration currently does not support migrating a VM instance to
  a specified target host; we should enable this feature in nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267386] [NEW] Add a configuration parameter to enable auto confirm for cold migration

2014-01-09 Thread Jay Lau
Public bug reported:

http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13163.html
>>> For resize, we need to confirm, as we want to give end user an 
opportunity
>>> to rollback.
>>>
>>> The problem is cold migration, because cold migration and resize share
>>> same code path, so once I submit a cold migration request and after the 
cold
>>> migration finished, the VM will goes to verify_resize state, and I need 
to
>>> confirm resize. I felt a bit confused by this, why do I need to verify
>>> resize for a cold migration operation? Why not reset the VM to original
>>> state directly after cold migration?
>
> I think the idea was allow users/admins to check everything went OK,
> and only delete the original VM when the have confirmed the move went
> OK.
>
> I thought there was an auto_confirm setting. Maybe you want
> auto_confirm cold migrate, but not auto_confirm resize?

I suppose we could add an API parameter to auto-confirm these things.
That's probably a good compromise.

OK, will use auto-confirm to handle this.

** Affects: nova
 Importance: Undecided
     Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267386

Title:
  Add a configuration parameter to enable auto confirm for cold
  migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13163.html
  >>> For resize, we need to confirm, as we want to give end user an 
opportunity
  >>> to rollback.
  >>>
  >>> The problem is cold migration, because cold migration and resize share
  >>> same code path, so once I submit a cold migration request and after 
the cold
  >>> migration finished, the VM will goes to verify_resize state, and I 
need to
  >>> confirm resize. I felt a bit confused by this, why do I need to verify
  >>> resize for a cold migration operation? Why not reset the VM to 
original
  >>> state directly after cold migration?
  >
  > I think the idea was allow users/admins to check everything went OK,
  > and only delete the original VM when the have confirmed the move went
  > OK.
  >
  > I thought there was an auto_confirm setting. Maybe you want
  > auto_confirm cold migrate, but not auto_confirm resize?

  I suppose we could add an API parameter to auto-confirm these things.
  That's probably a good compromise.

  OK, will use auto-confirm to handle this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267294] [NEW] Change default value of resize_confirm_window to -1

2014-01-08 Thread Jay Lau
Public bug reported:


Currently the default value of resize_confirm_window is 0, which means
auto confirm is disabled.

In some cases an admin might want resizes confirmed immediately, but
the minimum usable value is 1, which means waiting at least one second.

Also, the auto-confirm logic runs in a periodic task: if the task
interval is 60s, then even with resize_confirm_window set to 1 we might
still wait up to 60s before auto confirm happens.

So we should set the default value of resize_confirm_window to -1,
meaning auto confirm is disabled, and let 0 mean auto confirm
immediately.
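
The proposed semantics could be sketched as follows (an assumption drawn from this report, not merged nova code):

```python
def should_auto_confirm(resize_age_seconds, resize_confirm_window):
    """Decide whether a pending resize should be auto-confirmed.

    Proposed meaning of resize_confirm_window:
      -1     -> auto confirm disabled (the proposed new default)
       0     -> confirm immediately
       N > 0 -> confirm once the resize is at least N seconds old
    """
    if resize_confirm_window < 0:
        return False
    return resize_age_seconds >= resize_confirm_window
```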

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267294

Title:
  Change default value of resize_confirm_window to -1

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  Currently the default value of resize_confirm_window is 0, which
  means auto confirm is disabled.

  In some cases an admin might want resizes confirmed immediately, but
  the minimum usable value is 1, which means waiting at least one
  second.

  Also, the auto-confirm logic runs in a periodic task: if the task
  interval is 60s, then even with resize_confirm_window set to 1 we
  might still wait up to 60s before auto confirm happens.

  So we should set the default value of resize_confirm_window to -1,
  meaning auto confirm is disabled, and let 0 mean auto confirm
  immediately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266740] [NEW] tempest test_aggregate_add_host_create_server_with_az fails with "server failed to build and is ERROR status"

2014-01-07 Thread Jay Lau
Public bug reported:

2014-01-07 04:25:45.882 | Traceback (most recent call last):
2014-01-07 04:25:45.883 |   File 
"tempest/api/compute/v3/admin/test_aggregates.py", line 212, in 
test_aggregate_add_host_create_server_with_az
2014-01-07 04:25:45.883 | wait_until='ACTIVE')
2014-01-07 04:25:45.883 |   File "tempest/api/compute/base.py", line 138, in 
create_test_server
2014-01-07 04:25:45.883 | server['id'], kwargs['wait_until'])
2014-01-07 04:25:45.884 |   File 
"tempest/services/compute/v3/xml/servers_client.py", line 418, in 
wait_for_server_status
2014-01-07 04:25:45.884 | extra_timeout=extra_timeout)
2014-01-07 04:25:45.884 |   File "tempest/common/waiters.py", line 76, in 
wait_for_server_status
2014-01-07 04:25:45.885 | raise 
exceptions.BuildErrorException(server_id=server_id)
2014-01-07 04:25:45.885 | BuildErrorException: Server 
3fb9c710-51b8-4772-9b43-5b284efa6f45 failed to build and is in ERROR status

http://logs.openstack.org/98/65198/2/check/check-tempest-dsvm-postgres-
full/5b803ac/console.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266740

Title:
  tempest test_aggregate_add_host_create_server_with_az fails with
  "server failed to build and is ERROR status"

Status in OpenStack Compute (Nova):
  New

Bug description:
  2014-01-07 04:25:45.882 | Traceback (most recent call last):
  2014-01-07 04:25:45.883 |   File 
"tempest/api/compute/v3/admin/test_aggregates.py", line 212, in 
test_aggregate_add_host_create_server_with_az
  2014-01-07 04:25:45.883 | wait_until='ACTIVE')
  2014-01-07 04:25:45.883 |   File "tempest/api/compute/base.py", line 138, in 
create_test_server
  2014-01-07 04:25:45.883 | server['id'], kwargs['wait_until'])
  2014-01-07 04:25:45.884 |   File 
"tempest/services/compute/v3/xml/servers_client.py", line 418, in 
wait_for_server_status
  2014-01-07 04:25:45.884 | extra_timeout=extra_timeout)
  2014-01-07 04:25:45.884 |   File "tempest/common/waiters.py", line 76, in 
wait_for_server_status
  2014-01-07 04:25:45.885 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2014-01-07 04:25:45.885 | BuildErrorException: Server 
3fb9c710-51b8-4772-9b43-5b284efa6f45 failed to build and is in ERROR status

  http://logs.openstack.org/98/65198/2/check/check-tempest-dsvm-
  postgres-full/5b803ac/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2013-12-31 Thread Jay Lau
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Orchestration API (Heat):
  In Progress
Status in Python client library for heat:
  In Progress

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
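
As a minimal illustration of the convention (a hypothetical test; compute_flavor is a stand-in, and the expected-first order matches how testtools labels the arguments in failure output):

```python
import unittest


def compute_flavor():
    # Stand-in for whatever value the real test observes.
    return 'm1.small'


class DemoTest(unittest.TestCase):
    def test_flavor_name(self):
        observed = compute_flavor()
        # Expected value first, observed value second; swapping them
        # makes the failure message label the values the wrong way
        # around.
        self.assertEqual('m1.small', observed)
```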

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2013-12-30 Thread Jay Lau
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for heat:
  In Progress

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices

2013-12-25 Thread Jay Lau
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262424

Title:
  Files without code should not contain copyright notices

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Due to a recent policy change in HACKING
  (http://docs.openstack.org/developer/hacking/#openstack-licensing),
  empty files should no longer contain copyright notices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1262424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263602] [NEW] Do not use contextlib.nested if only mock one function

2013-12-22 Thread Jay Lau
Public bug reported:

There are some test cases in test_compute_mgr.py that use
contextlib.nested to mock functions even when there is only one
function being mocked.

We should use mock.patch.object directly when mocking only one function.

def test_init_instance_sets_building_error(self):
with contextlib.nested(  <<<<< No need use nested here
mock.patch.object(self.compute, '_instance_update')
  ) as (
_instance_update,
  ):

instance = instance_obj.Instance(self.context)
instance.uuid = 'foo'
instance.vm_state = vm_states.BUILDING
instance.task_state = None
self.compute._init_instance(self.context, instance)
call = mock.call(self.context, 'foo',
 task_state=None,
 vm_state=vm_states.ERROR)
_instance_update.assert_has_calls([call])
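
For comparison, the same test shape without the nested wrapper might look like this (Compute is a stand-in class for illustration, not nova's real compute manager):

```python
from unittest import mock


class Compute:
    """Stand-in for the compute manager under test."""

    def _instance_update(self, context, uuid, **kwargs):
        raise RuntimeError('should be mocked out in tests')


def test_sets_building_error():
    compute = Compute()
    # Only one function is mocked, so patch it directly -- no
    # contextlib.nested needed.
    with mock.patch.object(compute, '_instance_update') as _instance_update:
        compute._instance_update('ctx', 'foo',
                                 task_state=None, vm_state='error')
    _instance_update.assert_has_calls(
        [mock.call('ctx', 'foo', task_state=None, vm_state='error')])
```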

** Affects: nova
     Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: In Progress

** Changed in: nova
     Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263602

Title:
  Do not use contextlib.nested if only mock one function

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There are some test cases in test_compute_mgr.py that use
  contextlib.nested to mock functions even when there is only one
  function being mocked.

  We should use mock.patch.object directly when mocking only one
  function.

  def test_init_instance_sets_building_error(self):
  with contextlib.nested(  <<<<< No need use nested here
  mock.patch.object(self.compute, '_instance_update')
) as (
  _instance_update,
):

  instance = instance_obj.Instance(self.context)
  instance.uuid = 'foo'
  instance.vm_state = vm_states.BUILDING
  instance.task_state = None
  self.compute._init_instance(self.context, instance)
  call = mock.call(self.context, 'foo',
   task_state=None,
   vm_state=vm_states.ERROR)
  _instance_update.assert_has_calls([call])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1263602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263569] [NEW] Remove update_service_capabilities from nova

2013-12-22 Thread Jay Lau
Public bug reported:

From the comments of update_service_capabilities, it is said that once
publish_service_capabilities was removed, we could begin the process of
removing update_service_capabilities as well.

Now publish_service_capabilities has been removed, so we can remove
update_service_capabilities now as no one is calling it

def update_service_capabilities(self, context, service_name,
host, capabilities):
"""Process a capability update from a service node."""
#NOTE(jogo) This is deprecated, but is used by the deprecated
# publish_service_capabilities call. So this can begin its removal
# process once publish_service_capabilities is removed.
if not isinstance(capabilities, list):
capabilities = [capabilities]
for capability in capabilities:
if capability is None:
capability = {}
self.driver.update_service_capabilities(service_name, host,
capability)

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
     Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263569

Title:
  Remove update_service_capabilities from nova

Status in OpenStack Compute (Nova):
  New

Bug description:
  From the comments of update_service_capabilities, it is said that
  once publish_service_capabilities was removed, we could begin the
  process of removing update_service_capabilities as well.

  Now publish_service_capabilities has been removed, so we can remove
  update_service_capabilities now as no one is calling it

  def update_service_capabilities(self, context, service_name,
  host, capabilities):
  """Process a capability update from a service node."""
  #NOTE(jogo) This is deprecated, but is used by the deprecated
  # publish_service_capabilities call. So this can begin its removal
  # process once publish_service_capabilities is removed.
  if not isinstance(capabilities, list):
  capabilities = [capabilities]
  for capability in capabilities:
  if capability is None:
  capability = {}
  self.driver.update_service_capabilities(service_name, host,
  capability)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1263569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261621] [NEW] nova api value error is not right

2013-12-16 Thread Jay Lau
Public bug reported:

I was trying to add a JSON field to the DB but forgot to dump the JSON
to a string, and the nova API reported the following error.

2013-12-17 12:37:51.615 TRACE object Traceback (most recent call last):
2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/base.py", line 70, in setter
2013-12-17 12:37:51.615 TRACE object field.coerce(self, name, value))
2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 166, in coerce
2013-12-17 12:37:51.615 TRACE object return self._type.coerce(obj, attr, 
value)
2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 218, in coerce
2013-12-17 12:37:51.615 TRACE object raise ValueError(_('A string is 
required here, not %s'),
2013-12-17 12:37:51.615 TRACE object ValueError: (u'A string is required here, 
not %s', 'dict') <<<

The error should be 
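
The trace suggests the format string and its argument were passed as two separate arguments to ValueError, so the '%s' placeholder is never interpolated. A minimal sketch of the difference (the message text is copied from the trace above):

```python
def bad(type_name):
    # Comma: ValueError receives a 2-tuple and '%s' stays literal,
    # producing the confusing message seen in the trace.
    raise ValueError('A string is required here, not %s', type_name)


def good(type_name):
    # '%'-interpolation produces the intended message.
    raise ValueError('A string is required here, not %s' % type_name)
```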

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261621

Title:
  nova api value error is not right

Status in OpenStack Compute (Nova):
  New

Bug description:
  I was trying to add a JSON field to the DB but forgot to dump the
  JSON to a string, and the nova API reported the following error.

  2013-12-17 12:37:51.615 TRACE object Traceback (most recent call last):
  2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/base.py", line 70, in setter
  2013-12-17 12:37:51.615 TRACE object field.coerce(self, name, value))
  2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 166, in coerce
  2013-12-17 12:37:51.615 TRACE object return self._type.coerce(obj, attr, 
value)
  2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 218, in coerce
  2013-12-17 12:37:51.615 TRACE object raise ValueError(_('A string is 
required here, not %s'),
  2013-12-17 12:37:51.615 TRACE object ValueError: (u'A string is required 
here, not %s', 'dict') <<<

  The error should be 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260588] Re: Change retry to attempt for retry filter logic

2013-12-13 Thread Jay Lau
Thanks Zhongyue for the comments.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260588

Title:
  Change retry to attempt for retry filter logic

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  After the patch Ia355810b106fee14a55f48081301a310979befac, the retry
  filter was renamed to IgnoreAttemptedHostsFilter and its variable
  retry was changed to attempt, so it is better to update the nova
  scheduler and compute logic by replacing retry with attempt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260588] [NEW] Change retry to attempt for retry filter logic

2013-12-12 Thread Jay Lau
Public bug reported:

After the patch Ia355810b106fee14a55f48081301a310979befac, the retry
filter was renamed to IgnoreAttemptedHostsFilter and its variable retry
was changed to attempt, so it is better to update the nova scheduler
and compute logic by replacing retry with attempt.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260588

Title:
  Change retry to attempt for retry filter logic

Status in OpenStack Compute (Nova):
  New

Bug description:
  After the patch Ia355810b106fee14a55f48081301a310979befac, the retry
  filter was renamed to IgnoreAttemptedHostsFilter and its variable
  retry was changed to attempt, so it is better to update the nova
  scheduler and compute logic by replacing retry with attempt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259535] [NEW] Disable reason becomes "AUTO" when host-update

2013-12-10 Thread Jay Lau
Public bug reported:


When I disable a service without giving a reason, using "nova
host-update --status disable", the service's disabled reason is always
"AUTO:":

jay@jay1:~/devstack$ nova service-list
+------------------+------+----------+----------+-------+------------------------+-----------------+
| Binary           | Host | Zone     | Status   | State | Updated_at             | Disabled Reason |
+------------------+------+----------+----------+-------+------------------------+-----------------+
| nova-conductor   | jay1 | internal | enabled  | up    | 2013-12-04T13:41:43.00 | None            |
| nova-cert        | jay1 | internal | enabled  | up    | 2013-12-04T13:41:45.00 | None            |
| nova-scheduler   | jay1 | internal | enabled  | up    | 2013-12-04T13:41:48.00 | None            |
| nova-compute     | jay1 | nova     | disabled | up    | 2013-12-04T13:41:48.00 | AUTO:           |
| nova-consoleauth | jay1 | internal | enabled  | up    | 2013-12-04T13:41:43.00 | None            |
+------------------+------+----------+----------+-------+------------------------+-----------------+

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259535

Title:
  Disable reason becomes "AUTO" when host-update

Status in OpenStack Compute (Nova):
  New

Bug description:

  When I disable a service without giving a reason, using "nova
  host-update --status disable", the service's disabled reason is
  always "AUTO:":

  jay@jay1:~/devstack$ nova service-list
  
  +------------------+------+----------+----------+-------+------------------------+-----------------+
  | Binary           | Host | Zone     | Status   | State | Updated_at             | Disabled Reason |
  +------------------+------+----------+----------+-------+------------------------+-----------------+
  | nova-conductor   | jay1 | internal | enabled  | up    | 2013-12-04T13:41:43.00 | None            |
  | nova-cert        | jay1 | internal | enabled  | up    | 2013-12-04T13:41:45.00 | None            |
  | nova-scheduler   | jay1 | internal | enabled  | up    | 2013-12-04T13:41:48.00 | None            |
  | nova-compute     | jay1 | nova     | disabled | up    | 2013-12-04T13:41:48.00 | AUTO:           |
  | nova-consoleauth | jay1 | internal | enabled  | up    | 2013-12-04T13:41:43.00 | None            |
  +------------------+------+----------+----------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258767] [NEW] Enable VMWare ESXDriver support set_host_enabled

2013-12-07 Thread Jay Lau
Public bug reported:

The set_host_enabled API is not supported in the VMware ESXDriver; we
should support this feature.

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New


** Tags: vmware

** Changed in: python-heatclient
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Tags added: vmware

** Project changed: python-heatclient => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258767

Title:
  Enable VMWare ESXDriver support set_host_enabled

Status in OpenStack Compute (Nova):
  New

Bug description:
  The set_host_enabled API is not supported in the VMware ESXDriver;
  we should support this feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241587] Re: Can not delete deleted tenant's default security group

2013-10-19 Thread Jay Lau
The following case works well

liugya@liugya-ubuntu:~$ nova --os-username foo --os-password foo --os-tenant-id 6111614f84b34c5fbd85e988f388a7a9 secgroup-list
+----+---------+-------------+
| Id | Name    | Description |
+----+---------+-------------+
| 15 | default | default     |
| 16 | test    | test        |
+----+---------+-------------+

liugya@liugya-ubuntu:~$ keystone user-delete foo
liugya@liugya-ubuntu:~$ keystone tenant-delete foo
liugya@liugya-ubuntu:~$ nova secgroup-delete 16
/usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: 
g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
  import gobject._gobject
+----+------+-------------+
| Id | Name | Description |
+----+------+-------------+
| 16 | test | test        |
+----+------+-------------+

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241587

Title:
  Can not delete deleted tenant's default security group

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  $ keystone tenant-create --name foo
  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 7149cdf591364e17a15e30229f2e023e |
  |     name    |               foo                |
  +-------------+----------------------------------+

  $ keystone user-create --name foo --pass foo --tenant foo
  +----------+----------------------------------+
  | Property |              Value               |
  +----------+----------------------------------+
  |  email   |                                  |
  | enabled  |               True               |
  |    id    | e5a5cd548ab446d5b787e6b37415707d |
  |   name   |               foo                |
  | tenantId | 7149cdf591364e17a15e30229f2e023e |
  +----------+----------------------------------+

  $ nova --os-username foo --os-password foo --os-tenant-id 7149cdf591364e17a15e30229f2e023e secgroup-list
  +-----+---------+-------------+
  | Id  | Name    | Description |
  +-----+---------+-------------+
  | 111 | default | default     |
  +-----+---------+-------------+

  
  ### AS ADMIN ###
  $ keystone user-delete foo
  $ keystone tenant-delete foo
  $ nova secgroup-delete 111
  ERROR: Unable to delete system group 'default' (HTTP 400) (Request-ID: 
req-9f62f3fe-1cd7-46dc-801c-335900b6f903)

  As admin when the tenant does not exists I should be able to delete
  the security group (may be with an additional force argument)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240050] Re: Instance task status should be changed to error without available scheduler service

2013-10-15 Thread Jay Lau
This is the designed behavior: we cannot mark the task status as ERROR,
because once the nova scheduler restarts it picks the boot request back
up from the message queue and continues provisioning the VM.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240050

Title:
  Instance task status should be changed to error without available
  scheduler service

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I found that when the nova-scheduler service is not started, or fails
  to start normally, booting an instance with nova boot leaves the task
  status stuck at 'scheduling' forever. There are no error messages in
  any log other than the scheduler's scheduler.log (if the scheduler
  crashed for some reason). This is very confusing; why not raise an
  exception when no scheduler service is available and change the task
  status to error?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237334] Re: nova-api & nova-metadata-api services are using the same port

2013-10-09 Thread Jay Lau
I think this is not a valid case.

Please check your nova.conf to see whether metadata is configured in
enabled_apis.

The default value of enabled_apis is "enabled_apis =
osapi_compute,metadata", which means the metadata API is enabled by
default and is started by nova-api itself.

If you remove metadata from enabled_apis, you will be able to start
your standalone metadata API.

Thanks.
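
For reference, the relevant nova.conf fragment might look like this (a sketch based on the comment above; verify option names against your release):

```ini
[DEFAULT]
# Default: the metadata API runs inside nova-api on port 8775.
# enabled_apis = osapi_compute,metadata
# To run nova-api-metadata as a separate service, drop 'metadata':
enabled_apis = osapi_compute
```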

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237334

Title:
  nova-api & nova-metadata-api services are using the same port

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Description of problem:
  The services nova-api and the nova-metadata-api are both using the same port, 
8775. 
  Thus, the services are 'competing' for the port and one of them will not 
work. 

  Version-Release number of selected component (if applicable):

  Red Hat Enterprise Linux Server release 6.5 Beta (Santiago)

  python-novaclient-2.15.0-1.el6ost.noarch
  python-nova-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-network-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-common-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-console-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-compute-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-conductor-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-novncproxy-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-scheduler-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-api-2013.2-0.24.rc1.el6ost.noarch
  openstack-nova-cert-2013.2-0.24.rc1.el6ost.noarch

  How reproducible:
  everytime

  Steps to Reproduce:
  1. Install Havana on RHEL 6.5 AIO installation. 
  2. 
  3.

  Actual results:
  Either the openstack-nova-api or the openstack-nova-metadata-api service is 
down.

  Expected results:
  Both services are up and running.

  Additional info:
  The error from /var/log/nova/metadata-api.log:

  2013-10-09 10:54:21.975 4776 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
  2013-10-09 10:54:22.036 4776 DEBUG nova.wsgi [-] Loading app metadata from /etc/nova/api-paste.ini load_app /usr/lib/python2.6/site-packages/nova/wsgi.py:484
  2013-10-09 10:54:22.076 4776 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
  2013-10-09 10:54:22.079 4776 CRITICAL nova [-] [Errno 98] Address already in use
  2013-10-09 10:54:22.079 4776 TRACE nova Traceback (most recent call last):
  2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/bin/nova-api-metadata", line 10, in <module>
  2013-10-09 10:54:22.079 4776 TRACE nova     sys.exit(main())
  2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/cmd/api_metadata.py", line 33, in main
  2013-10-09 10:54:22.079 4776 TRACE nova     server = service.WSGIService('metadata')
  2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 318, in __init__
  2013-10-09 10:54:22.079 4776 TRACE nova     max_url_len=max_url_len)
  2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 123, in __init__
  2013-10-09 10:54:22.079 4776 TRACE nova     self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
  2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 38, in listen
  2013-10-09 10:54:22.079 4776 TRACE nova     sock.bind(addr)
  2013-10-09 10:54:22.079 4776 TRACE nova   File "<string>", line 1, in bind
  2013-10-09 10:54:22.079 4776 TRACE nova error: [Errno 98] Address already in use
  2013-10-09 10:54:22.079 4776 TRACE nova

  The error from the nova-api log:
  2013-10-09 11:17:04.520 6048 CRITICAL nova [-] [Errno 98] Address already in use
  2013-10-09 11:17:04.520 6048 TRACE nova Traceback (most recent call last):
  2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/bin/nova-api", line 10, in <module>
  2013-10-09 11:17:04.520 6048 TRACE nova     sys.exit(main())
  2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/cmd/api.py", line 51, in main
  2013-10-09 11:17:04.520 6048 TRACE nova     server = service.WSGIService(api, use_ssl=should_use_ssl)
  2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 318, in __init__
  2013-10-09 11:17:04.520 6048 TRACE nova     max_url_len=max_url_len)
  2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 124, in __init__
  2013-10-09 11:17:04.520 6048 TRACE nova     self._socket = eventlet.listen(bind_addr, family, backlog=backlog)

[Yahoo-eng-team] [Bug 1225580] Re: Add notification for host operations

2013-09-19 Thread Jay Lau
Filed a blueprint to track this, as it is more of a feature enhancement:
https://blueprints.launchpad.net/nova/+spec/host-operation-notification

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1225580

Title:
  Add notification for host operations

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When performing host operations, such as power or maintenance actions,
  no notification is sent.

  It would be better to add notifications for those operations so that
  third-party callers can get the operation result as soon as possible.
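  As a toy illustration of the requested behaviour (none of these names
  exist in nova; a real implementation would go through nova's notifier
  and oslo messaging), a start/end event pair around a host operation
  could look like:

```python
class HostNotifier:
    """Toy event bus standing in for a real notification driver."""

    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def emit(self, event_type, payload):
        # fan the event out to every registered listener
        for callback in self._listeners:
            callback(event_type, payload)

def power_off_host(host, notifier):
    """Emit start/end events around a (stubbed) host power action so
    third-party listeners can observe the operation and its result."""
    notifier.emit("host.power_off.start", {"host": host})
    # ... the actual power action would run here ...
    notifier.emit("host.power_off.end", {"host": host, "result": "success"})
```

  A listener subscribed to this bus would see both events and could
  react to the result as soon as the operation finishes.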

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1225580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209288] Re: No need to construct instance type when resize VM

2013-09-09 Thread Jay Lau
*** This bug is a duplicate of bug 1219761 ***
https://bugs.launchpad.net/bugs/1219761

** This bug has been marked a duplicate of bug 1219761
   The instanceType of the requestSpec is set to be the current instanceType of 
the instance instead of the one that is going to resize to

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1209288

Title:
  No need to construct instance type when resize VM

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Currently, when resizing a VM, the instance_type in request_spec is
  still the old flavor; this can cause problems if the scheduler wants
  to get the new instance type from request_spec.

  def migrate_server(self, context, instance, scheduler_hint, live, rebuild,
          flavor, block_migration, disk_over_commit, reservations=None):
      if live and not rebuild and not flavor:
          destination = scheduler_hint.get("host")
          self.scheduler_rpcapi.live_migration(context, block_migration,
              disk_over_commit, instance, destination)
      elif not live and not rebuild and flavor:
          image_ref = instance.get('image_ref')
          if image_ref:
              image = self._get_image(context, image_ref)
          else:
              image = {}
          request_spec = scheduler_utils.build_request_spec(  <<< built here
              context, image, [instance])
          # NOTE(timello): originally, instance_type in request_spec
          # on compute.api.resize does not have 'extra_specs', so we
          # remove it for now to keep tests backward compatibility.
          request_spec['instance_type'].pop('extra_specs')


  def build_request_spec(ctxt, image, instances):
      """Build a request_spec for the scheduler.

      The request_spec assumes that all instances to be scheduled are the same
      type.
      """
      instance = instances[0]
      instance_type = flavors.extract_flavor(instance)  <<<< still the old flavor
      # NOTE(comstud): This is a bit ugly, but will get cleaned up when
      # we're passing an InstanceType internal object.
      extra_specs = db.flavor_extra_specs_get(ctxt,

  def extract_flavor(instance, prefix=''):
      """Create an InstanceType-like object from instance's system_metadata
      information.
      """
      instance_type = {}
      sys_meta = utils.instance_sys_meta(instance)
      for key, type_fn in system_metadata_flavor_props.items():
          type_key = '%sinstance_type_%s' % (prefix, key)
          instance_type[key] = type_fn(sys_meta[type_key])
      return instance_type  << It is still the old instance type
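  To make the quoted analysis concrete, here is a small self-contained
  sketch. It uses simplified stand-ins for nova's helpers, and the
  'new_' prefix for the resize target is shown purely for illustration:
  without an explicit prefix, the lookup returns the flavor the instance
  currently has, which is why the scheduler saw the old flavor.

```python
def extract_flavor_name(sys_meta, prefix=''):
    """Simplified stand-in for nova's extract_flavor(): read the flavor
    name back out of the instance's system_metadata keys."""
    return sys_meta['%sinstance_type_%s' % (prefix, 'name')]

# During a resize, system_metadata can describe both flavors at once.
sys_meta = {
    'instance_type_name': 'm1.tiny',       # current (old) flavor
    'new_instance_type_name': 'm1.small',  # illustrative resize target
}

current = extract_flavor_name(sys_meta)            # -> 'm1.tiny'
target = extract_flavor_name(sys_meta, 'new_')     # -> 'm1.small'
```

  The default-prefix call is what build_request_spec() effectively does,
  so the request_spec carries 'm1.tiny' even though the resize is
  heading to 'm1.small'.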

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1209288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218746] Re: test_create_instance_with_availability_zone failed while run test by nosetests

2013-08-31 Thread Jay Lau
Cannot reproduce this issue with the latest code in my environment.

root@liugya-ubuntu:~/src/nova-ce/test/nova# nosetests nova/tests/api/openstack/compute/plugins/v3/test_availability_zone.py
----------------------------------------------------------------------
Ran 8 tests in 3.487s

OK
root@liugya-ubuntu:~/src/nova-ce/test/nova# nosetests nova.tests.api.openstack.compute.plugins.v3.test_availability_zone:ServersControllerCreateTest
...
----------------------------------------------------------------------
Ran 3 tests in 1.685s

OK

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218746

Title:
  test_create_instance_with_availability_zone failed while run test by
  nosetests

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This may be because the source code in the v3 plugin servers.py
  doesn't use the availability_zone parameter and also doesn't pass it
  to the create method of the compute API.
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L823

  $ nosetests nova.tests.api.openstack.compute.plugins.v3.test_availability_zone:ServersControllerCreateTest
  Traceback (most recent call last):
    File "/home/hzwangpan/nova/nova/tests/api/openstack/compute/plugins/v3/test_availability_zone.py", line 470, in test_create_instance_with_availability_zone
      res = self.controller.create(req, body).obj
    File "/home/hzwangpan/nova/nova/api/openstack/compute/plugins/v3/servers.py", line 941, in create
      (instances, resv_id) = self.compute_api.create(context,
    File "/home/hzwangpan/nova/nova/tests/api/openstack/compute/plugins/v3/test_availability_zone.py", line 437, in create
      self.assertIn('availability_zone', kwargs)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 328, in assertIn
      self.assertThat(haystack, Contains(needle))
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in assertThat
      raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: 'availability_zone' not in {'display_name': 'config_drive_test', 'access_ip_v6': None, 'access_ip_v4': None, 'block_device_mapping': None, 'display_description': 'config_drive_test', 'max_count': 1, 'auto_disk_config': False, 'admin_password': '84pEtaJCiDJA', 'injected_files': [], 'min_count': 1, 'security_group': ['default'], 'requested_networks': None, 'metadata': {'open': 'stack', 'hello': 'world'}}
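  What the failing assertion expects can be sketched independently of
  nova: the request-body handler should forward availability_zone into
  the kwargs given to the compute API's create method. The function and
  field names below are illustrative, not nova's actual code:

```python
def build_create_kwargs(server_body):
    """Collect kwargs for a hypothetical compute_api.create() call from
    a request body, forwarding availability_zone when supplied."""
    kwargs = {
        'display_name': server_body.get('name'),
        'min_count': 1,
        'max_count': 1,
    }
    # the step the bug description says is missing in the v3 plugin:
    # actually pass the client-supplied availability_zone through
    if 'availability_zone' in server_body:
        kwargs['availability_zone'] = server_body['availability_zone']
    return kwargs
```

  With this forwarding in place, the test's assertIn('availability_zone',
  kwargs) check would pass; without it, the kwargs dict looks exactly
  like the one in the MismatchError above.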

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212653] Re: multi instance display name did not follow template

2013-08-15 Thread Jay Lau
My fault, multi_instance_display_name_template is configurable.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1212653

Title:
  multi instance display name did not follow template

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
  cfg.StrOpt('multi_instance_display_name_template',
 default='%(name)s-%(uuid)s',
 help='When creating multiple instances with a single request '
  'using the os-multiple-create API extension, this '
  'template will be used to build the display name for '
  'each instance. The benefit is that the instances '
  'end up with different hostnames. To restore legacy '
  'behavior of every instance having the same name, set '
  'this option to "%(name)s".  Valid keys for the '
  'template are: name, uuid, count.'),
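  The template above is ordinary Python %-style formatting over a dict;
  a quick sketch (illustrative values, not taken from a real instance)
  of how each display name is derived in a multi-create request:

```python
# default multi_instance_display_name_template
template = '%(name)s-%(uuid)s'

# per-instance values; valid template keys are name, uuid and count
params = {'name': 'vm',
          'uuid': 'b2ba79b5-9d3d-450c-8ba7-c0e33592ad6d',
          'count': 1}

display_name = template % params
# -> 'vm-b2ba79b5-9d3d-450c-8ba7-c0e33592ad6d'

# the legacy behaviour mentioned in the help text: every instance
# keeps the same name when the template is just '%(name)s'
legacy_name = '%(name)s' % params  # -> 'vm'
```

  This matches the 'vm-b2ba79b5-...' name visible in the nova boot
  output below, so the template was in fact applied.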

  When booting two VMs in one request, there is no index in the VM
  display name.

  root@liugya-ubuntu:~# nova boot --image cirros-0.3.1-x86_64-uec --flavor 1 --num-instances 2 vm
  /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: 
g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
import gobject._gobject
  
  +--------------------------------------+-----------------------------------------+
  | Property                             | Value                                   |
  +--------------------------------------+-----------------------------------------+
  | OS-EXT-STS:task_state                | scheduling                              |
  | image                                | cirros-0.3.1-x86_64-uec                 |
  | OS-EXT-STS:vm_state                  | building                                |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0001                           |
  | OS-SRV-USG:launched_at               | None                                    |
  | flavor                               | m1.tiny                                 |
  | id                                   | b2ba79b5-9d3d-450c-8ba7-c0e33592ad6d    |
  | security_groups                      | [{u'name': u'default'}]                 |
  | user_id                              | 7ed6ad28bc9044688307c45fee43659e        |
  | OS-DCF:diskConfig                    | MANUAL                                  |
  | accessIPv4                           |                                         |
  | accessIPv6                           |                                         |
  | progress                             | 0                                       |
  | OS-EXT-STS:power_state               | 0                                       |
  | OS-EXT-AZ:availability_zone          | nova                                    |
  | config_drive                         |                                         |
  | status                               | BUILD                                   |
  | updated                              | 2013-08-15T10:06:52Z                    |
  | hostId                               |                                         |
  | OS-EXT-SRV-ATTR:host                 | None                                    |
  | OS-SRV-USG:terminated_at             | None                                    |
  | key_name                             | None                                    |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                    |
  | name                                 | vm-b2ba79b5-9d3d-450c-8ba7-c0e33592ad6d |
  | adminPass                            | HQ5zL9iRkYBy                            |
  | tenant_id                            | 2b65d1252efd4cd78d5504d176c01924        |
  | created                              | 2013-08-15T10:06:52Z                    |
  | os-extended-volumes:volumes_attached | []                                      |
  | metadata                             | {}                                      |
  +--------------------------------------+-----------------------------------------+

  
  root@liugya-ubuntu:~# nova list
  /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: 
g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
import gobject._gobject
  
  +--------------------------------------+-----------------------------------------+--------+------------+-------------+----------+
  | ID                                   | Name                                    | Status | Task State | Power State | Networks |
  +--------------------------------------+-----------------------------------------+--------+------------+-------------+----------+
  | 98030793-06de-442d-9fb3-114e174d959d | vm-