[Yahoo-eng-team] [Bug 1444630] Re: nova-compute should stop handling virt lifecycle events when it's shutting down

2015-12-17 Thread Marian Horban
Libvirt event threads are not stopped when the nova-compute service is
stopped. That is why, when nova-compute is restarted with a SIGHUP signal,
we can see the following traceback:

2015-11-30 10:03:06.013 INFO nova.service [-] Starting compute node (version 13.0.0)
2015-11-30 10:03:06.013 DEBUG nova.virt.libvirt.host [-] Starting native event thread from (pid=17505) _init_events /opt/stack/nova/nova/virt/libvirt/host.py:452
2015-11-30 10:03:06.014 DEBUG nova.virt.libvirt.host [-] Starting green dispatch thread from (pid=17505) _init_events /opt/stack/nova/nova/virt/libvirt/host.py:458
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 115, in wait
    listener.cb(fileno)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/utils.py", line 1158, in context_wrapper
    return func(*args, **kwargs)
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 248, in _dispatch_thread
    self._dispatch_events()
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 353, in _dispatch_events
    assert _c
AssertionError
Removing descriptor: 9

The threads started here should be stopped when the nova-compute service
shuts down.
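
A minimal sketch of the idea, not nova's actual code: give the event
dispatcher a shutdown flag that the service's stop path sets, so the green
dispatch thread exits its loop cleanly instead of hitting the assert above
when the native thread is torn down. The names here (_EventDispatcher,
stop, dispatch_forever) are illustrative assumptions.

    import threading
    try:
        import queue           # Python 3
    except ImportError:
        import Queue as queue  # Python 2

    class _EventDispatcher(object):
        def __init__(self):
            self._queue = queue.Queue()
            self._shutdown = threading.Event()

        def stop(self):
            # Called from the service shutdown path so the dispatch thread
            # stops waiting for further libvirt events.
            self._shutdown.set()

        def dispatch_forever(self, handler):
            # Poll with a timeout so the shutdown flag is noticed promptly.
            while not self._shutdown.is_set():
                try:
                    event = self._queue.get(timeout=1)
                except queue.Empty:
                    continue
                handler(event)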

** Changed in: nova
   Status: Fix Released => In Progress

** Changed in: nova
     Assignee: Matt Riedemann (mriedem) => Marian Horban (mhorban)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444630

Title:
  nova-compute should stop handling virt lifecycle events when it's
  shutting down

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a follow on to bug 1293480 and related to bug 1408176 and bug
  1443186.

  There can be a race when rebooting a compute host: libvirt shuts down the
  guest VMs and sends STOPPED lifecycle events up to nova-compute, which then
  tries to stop them via the stop API. This sometimes works and sometimes
  doesn't - the compute service can go down with a vm_state of ACTIVE and a
  task_state of powering-off, which isn't resolved on host reboot.

  Sometimes the stop API completes and the instance is stopped with
  power_state=4 (shutdown) in the nova database. When the host comes back up
  and libvirt restarts, it starts up the guest VMs, which sends the STARTED
  lifecycle event, and nova handles that. But because the vm_state in the
  nova database is STOPPED and the power_state from the hypervisor is 1
  (running), nova thinks the instance started up unexpectedly and stops it:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  So nova shuts the running guest down.

  Actually the block in:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  conflicts with the statement in power_state.py:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/power_state.py?id=2015.1.0rc1#n19

  "The hypervisor is always considered the authority on the status
  of a particular VM, and the power_state in the DB should be viewed as a
  snapshot of the VMs's state in the (recent) past."

  Anyway, that's a different issue, but the point is that when nova-compute
  is shutting down it should stop accepting lifecycle events from the
  hypervisor (virt driver code), since it can't reliably act on them anyway -
  any sync-up that needs to happen can be left to init_host() in the compute
  manager.
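
  As a rough illustration of that last point (not the actual ComputeManager
  code), the lifecycle handler could be gated on a running flag that the
  shutdown path clears; the names below are assumptions:

      class ComputeManagerSketch(object):
          def __init__(self):
              self._running = True

          def cleanup_host(self):
              # Stop acting on hypervisor lifecycle events before tearing down.
              self._running = False

          def handle_lifecycle_event(self, event):
              if not self._running:
                  # Too late to act reliably; init_host() re-syncs power state
                  # on the next startup instead.
                  return
              self._sync_instance_power_state(event)

          def _sync_instance_power_state(self, event):
              pass  # placeholder for the real sync logic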

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526734] [NEW] Restart of nova-compute service fails

2015-12-16 Thread Marian Horban
Public bug reported:

After sending a HUP signal to the nova-compute process, the following
traceback can be observed in the logs:

2015-11-30 10:35:26.509 INFO oslo_service.service [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Caught SIGHUP, exiting
2015-11-30 10:35:31.894 DEBUG oslo_concurrency.lockutils [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Acquired semaphore "singleton_lock" from (pid=24742) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
2015-11-30 10:35:31.900 DEBUG oslo_concurrency.lockutils [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Releasing semaphore "singleton_lock" from (pid=24742) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2015-11-30 10:35:31.903 ERROR nova.service [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Service error occurred during cleanup_host
2015-11-30 10:35:31.903 TRACE nova.service Traceback (most recent call last):
2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/service.py", line 312, in stop
2015-11-30 10:35:31.903 TRACE nova.service     self.manager.cleanup_host()
2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/compute/manager.py", line 1323, in cleanup_host
2015-11-30 10:35:31.903 TRACE nova.service     self.instance_events.cancel_all_events()
2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/compute/manager.py", line 578, in cancel_all_events
2015-11-30 10:35:31.903 TRACE nova.service     for instance_uuid, events in our_events.items():
2015-11-30 10:35:31.903 TRACE nova.service AttributeError: 'NoneType' object has no attribute 'items'
2015-11-30 10:35:31.903 TRACE nova.service
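
The failure above suggests cancel_all_events() was invoked when the events
dictionary was already None (for example if the cleanup path runs more than
once during the SIGHUP restart). A defensive sketch of the guard, with names
taken from the traceback but not copied from nova:

    class InstanceEventsSketch(object):
        def __init__(self):
            self._events = {}

        def cancel_all_events(self):
            if self._events is None:
                # Cleanup already ran once (e.g. during a SIGHUP restart);
                # there is nothing left to cancel.
                return
            our_events = self._events
            self._events = None
            for instance_uuid, events in our_events.items():
                for name, event in events.items():
                    # In the real code each waiter is signalled here; the
                    # sketch only demonstrates the None guard.
                    print('cancelling %s/%s' % (instance_uuid, name))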

** Affects: nova
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

** Affects: oslo.service
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Marian Horban (mhorban)

** Also affects: oslo.service
   Importance: Undecided
   Status: New

** Changed in: oslo.service
 Assignee: (unassigned) => Marian Horban (mhorban)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526734

Title:
  Restart of nova-compute service fails

Status in OpenStack Compute (nova):
  New
Status in oslo.service:
  New

Bug description:
  After sending a HUP signal to the nova-compute process, the following
  traceback can be observed in the logs:

  2015-11-30 10:35:26.509 INFO oslo_service.service [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Caught SIGHUP, exiting
  2015-11-30 10:35:31.894 DEBUG oslo_concurrency.lockutils [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Acquired semaphore "singleton_lock" from (pid=24742) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
  2015-11-30 10:35:31.900 DEBUG oslo_concurrency.lockutils [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Releasing semaphore "singleton_lock" from (pid=24742) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
  2015-11-30 10:35:31.903 ERROR nova.service [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Service error occurred during cleanup_host
  2015-11-30 10:35:31.903 TRACE nova.service Traceback (most recent call last):
  2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/service.py", line 312, in stop
  2015-11-30 10:35:31.903 TRACE nova.service     self.manager.cleanup_host()
  2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/compute/manager.py", line 1323, in cleanup_host
  2015-11-30 10:35:31.903 TRACE nova.service     self.instance_events.cancel_all_events()
  2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/compute/manager.py", line 578, in cancel_all_events
  2015-11-30 10:35:31.903 TRACE nova.service     for instance_uuid, events in our_events.items():
  2015-11-30 10:35:31.903 TRACE nova.service AttributeError: 'NoneType' object has no attribute 'items'
  2015-11-30 10:35:31.903 TRACE nova.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517143] [NEW] nova-api is crashed during creating instance

2015-11-17 Thread Marian Horban
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in __exit__
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     six.reraise(self.type_, self.value, self.tb)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 3538, in quota_reserve
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     elevated = context.elevated()
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/opt/stack/nova/nova/context.py", line 200, in elevated
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 190, in deepcopy
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y = _reconstruct(x, rv, 1, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 334, in _reconstruct
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     state = deepcopy(state, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 163, in deepcopy
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y = copier(x, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 257, in _deepcopy_dict
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y[deepcopy(key, memo)] = deepcopy(value, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 190, in deepcopy
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y = _reconstruct(x, rv, 1, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 334, in _reconstruct
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     state = deepcopy(state, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 163, in deepcopy
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y = copier(x, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 257, in _deepcopy_dict
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y[deepcopy(key, memo)] = deepcopy(value, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 190, in deepcopy
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y = _reconstruct(x, rv, 1, memo)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy.py", line 329, in _reconstruct
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     y = callable(*args)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/copy_reg.py", line 93, in __newobj__
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions     return cls.__new__(cls, *args)
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions TypeError: object.__new__(thread.lock) is not safe, use thread.lock.__new__()
2015-10-16 11:54:13.487 779 ERROR nova.api.openstack.extensions
2015-10-16 11:54:13.488 INFO nova.api.openstack.wsgi [req-eb2e7b74-e2b2-4248-86ab-9b672f447a06 tempest-ServersAdminTestJSON-1309219026 tempest-ServersAdminTestJSON-633645645] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

2015-10-16 11:54:13.489 DEBUG nova.api.openstack.wsgi [req-eb2e7b74-e2b2-4248-86ab-9b672f447a06 tempest-ServersAdminTestJSON-1309219026 tempest-ServersAdminTestJSON-633645645] Returning 500 to user: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. __call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1175
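
A minimal reproduction of the failure mode in the trace (illustrative, not
nova code): deepcopying an object graph that reaches a thread lock, e.g. a
client session hanging off the request context, raises the TypeError shown
above on Python 2. The traceback shows context.elevated() going through
copy.deepcopy():

    import copy
    import threading

    class FakeContext(object):
        def __init__(self):
            self.user_id = 'demo'
            # stands in for an undeepcopyable member such as a session lock
            self._session_lock = threading.Lock()

        def elevated(self):
            # mirrors the deepcopy call visible in the traceback
            return copy.deepcopy(self)

    try:
        FakeContext().elevated()
    except TypeError as exc:
        print('deepcopy failed: %s' % exc)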

** Affects: nova
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

** Affects: python-keystoneclient
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Marian Horban (mhorban)

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
 Assignee: (unassigned) => Marian Horban (mhorban)

-- 
You received this bug notification because you are a member of 

[Yahoo-eng-team] [Bug 1515637] [NEW] Double detach volume causes server fault

2015-11-12 Thread Marian Horban
Public bug reported:

If a volume is in the 'detaching' state and the detach operation is called,
nova-api fails:

2015-11-10 05:18:19.253 ERROR nova.api.openstack.extensions [req-05889195-e70d-4761-a5c6-a69ddfe05d62 tempest-ServerActionsTestJSON-653602906 tempest-ServerActionsTestJSON-743378399] Unexpected exception in API method
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/volumes.py", line 395, in delete
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     self.compute_api.detach_volume(context, instance, volume)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 235, in wrapped
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return func(self, context, target, *args, **kwargs)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 224, in inner
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return function(self, context, instance, *args, **kwargs)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 205, in inner
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return f(self, context, instance, *args, **kw)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 3098, in detach_volume
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     self._detach_volume(context, instance, volume)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 3080, in _detach_volume
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     self.volume_api.begin_detaching(context, volume['id'])
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/volume/cinder.py", line 235, in wrapper
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     six.reraise(exc_value, None, exc_trace)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/volume/cinder.py", line 224, in wrapper
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     res = method(self, ctx, volume_id, *args, **kwargs)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/volume/cinder.py", line 335, in begin_detaching
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     cinderclient(context).volumes.begin_detaching(volume_id)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/cinderclient/v2/volumes.py", line 454, in begin_detaching
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return self._action('os-begin_detaching', volume)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/cinderclient/v2/volumes.py", line 402, in _action
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return self.api.client.post(url, body=body)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 104, in post
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return self._cs_request(url, 'POST', **kwargs)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 98, in _cs_request
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     return self.request(url, method, **kwargs)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 91, in request
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions     raise exceptions.from_response(resp, body)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions InvalidInput: Invalid input received: Invalid volume: Unable to detach volume. Volume status must be 'in-use' and attach_status must be 'attached' to detach. Currently: status: 'detaching', attach_status: 'attached.' (HTTP 400) (Request-ID: req-f91e3713-7538-4285-af29-bfa7dbdbb2ab)
2015-11-10 05:18:19.253 TRACE nova.api.openstack.extensions

It can easily be reproduced by issuing two consecutive volume detach
operations.
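
A sketch of the kind of guard that would avoid the 500 (names are
illustrative, not the exact nova fix): if the volume is already 'detaching',
or cinder rejects begin_detaching for that reason, surface a client error
instead of letting the exception bubble up as an unexpected API error.

    class VolumeAlreadyDetaching(Exception):
        """Raised when a second detach races with one already in progress."""

    def begin_detach(volume_api, context, volume):
        if volume.get('status') == 'detaching':
            # A second detach request raced with the first one.
            raise VolumeAlreadyDetaching(volume['id'])
        volume_api.begin_detaching(context, volume['id'])

    # In the API layer, VolumeAlreadyDetaching would then be mapped to an
    # HTTP 400 response rather than falling into the generic 500 handler.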

** Affects: nova
     Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

** Changed in: n

[Yahoo-eng-team] [Bug 1486447] [NEW] Removing version of API for bookmark URL is wrong

2015-08-19 Thread Marian Horban
Public bug reported:

Removing the API version from a bookmark URL is done incorrectly.

The function remove_version_from_href() receives SCRIPT_NAME.
SCRIPT_NAME is the concatenation of the Apache (or other web server) alias and the URL mapping from api-paste.conf. This concatenation is performed in
https://github.com/openstack/nova/blob/master/nova/api/openstack/urlmap.py#L179 .
As a result we can have different URL prefixes, but the last part of SCRIPT_NAME is always the API version.

Right now the function is written to accept a URL where the API version is the first part, not the last. I looked into the history and saw that several years ago remove_version_from_href() expected the API version to be the last part of the URL (git show 495137fb):

    def remove_version_from_href(base_url):
        return base_url.rsplit('/', 1).pop(0)

But this code has since been refactored many times...
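
A sketch of stripping the version when it is the last path component of
SCRIPT_NAME (e.g. '/compute/v2.1'), which is the case the bug describes; this
is only an illustration of the idea, not the current nova implementation:

    import re

    _VERSION_TAIL = re.compile(r'/v\d+(\.\d+)?/?$')

    def remove_version_from_script_name(script_name):
        """'/compute/v2.1' -> '/compute', '/v2' -> ''"""
        return _VERSION_TAIL.sub('', script_name)

    assert remove_version_from_script_name('/compute/v2.1') == '/compute'
    assert remove_version_from_script_name('/v2') == ''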

** Affects: nova
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486447

Title:
  Removing version of API for bookmark URL is wrong

Status in OpenStack Compute (nova):
  New

Bug description:
  Removing the API version from a bookmark URL is done incorrectly.

  The function remove_version_from_href() receives SCRIPT_NAME.
  SCRIPT_NAME is the concatenation of the Apache (or other web server) alias and the URL mapping from api-paste.conf. This concatenation is performed in
  https://github.com/openstack/nova/blob/master/nova/api/openstack/urlmap.py#L179 .
  As a result we can have different URL prefixes, but the last part of SCRIPT_NAME is always the API version.

  Right now the function is written to accept a URL where the API version is the first part, not the last. I looked into the history and saw that several years ago remove_version_from_href() expected the API version to be the last part of the URL (git show 495137fb):

      def remove_version_from_href(base_url):
          return base_url.rsplit('/', 1).pop(0)

  But this code has since been refactored many times...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376316] Re: nova absolute-limits floating ip count is incorrect in a neutron based deployment

2015-08-05 Thread Marian Horban
** Changed in: nova
   Status: Confirmed => Triaged

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376316

Title:
  nova absolute-limits floating ip count is incorrect in a neutron based
  deployment

Status in neutron:
  New
Status in OpenStack Compute (nova):
  Triaged
Status in nova package in Ubuntu:
  Triaged

Bug description:
  1.
  $ lsb_release -rd
  Description:  Ubuntu 14.04 LTS
  Release:  14.04

  2.
  $ apt-cache policy python-novaclient
  python-novaclient:
    Installed: 1:2.17.0-0ubuntu1
    Candidate: 1:2.17.0-0ubuntu1
    Version table:
   *** 1:2.17.0-0ubuntu1 0
          500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
          100 /var/lib/dpkg/status

  3. nova absolute-limits should report the correct count of allocated floating ips
  4. nova absolute-limits shows 0 floating ips when I have 5 allocated

  $ nova absolute-limits | grep Floating
  | totalFloatingIpsUsed | 0  |
  | maxTotalFloatingIps  | 10 |

  $ nova floating-ip-list
  +---------------+-----------+------------+---------+
  | Ip            | Server Id | Fixed Ip   | Pool    |
  +---------------+-----------+------------+---------+
  | 10.98.191.146 |           | -          | ext_net |
  | 10.98.191.100 |           | 10.5.0.242 | ext_net |
  | 10.98.191.138 |           | 10.5.0.2   | ext_net |
  | 10.98.191.147 |           | -          | ext_net |
  | 10.98.191.102 |           | -          | ext_net |
  +---------------+-----------+------------+---------+
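
  For illustration only (not nova's code): in a neutron-based deployment the
  used count has to come from the neutron API rather than nova-network's own
  tables, which is why totalFloatingIpsUsed stays at 0 above. Assumes
  python-neutronclient; the auth arguments are placeholders.

      from neutronclient.v2_0 import client as neutron_client

      def used_floating_ips(tenant_id, **auth_kwargs):
          neutron = neutron_client.Client(**auth_kwargs)
          fips = neutron.list_floatingips(tenant_id=tenant_id)['floatingips']
          return len(fips)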

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: python-novaclient 1:2.17.0-0ubuntu1
  ProcVersionSignature: User Name 3.13.0-24.47-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.2
  Architecture: amd64
  Date: Wed Oct  1 15:19:08 2014
  Ec2AMI: ami-0001
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=set
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: python-novaclient
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467843] [NEW] Experimental job for Nova API under Apache2 fails

2015-06-23 Thread Marian Horban
Public bug reported:

The experimental job that runs the Nova API services under Apache2 fails with the error message:
Syntax error on line 4 of /etc/apache2/sites-enabled/nova-api.conf: Invalid process count for WSGI daemon process.
The Apache config contains the line:
WSGIDaemonProcess nova-api processes=0 threads=1 user=stack display-name=%{GROUP}
The 'processes' option must not be 0.
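
An illustrative sketch of the fix's intent (not the actual devstack code):
the value templated into the 'processes=' option must be at least 1, so
derive it from the configured worker count with a floor of one.

    import multiprocessing

    def wsgi_process_count(configured_workers=0):
        # mod_wsgi rejects processes=0, so fall back to the CPU count and
        # never go below 1.
        return max(1, configured_workers or multiprocessing.cpu_count())

    print("WSGIDaemonProcess nova-api processes=%d threads=1 user=stack "
          "display-name=%%{GROUP}" % wsgi_process_count(0))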

** Affects: nova
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Marian Horban (mhorban)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467843

Title:
  Experimental job for Nova API under Apache2 fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  The experimental job that runs the Nova API services under Apache2 fails with the error message:
  Syntax error on line 4 of /etc/apache2/sites-enabled/nova-api.conf: Invalid process count for WSGI daemon process.
  The Apache config contains the line:
  WSGIDaemonProcess nova-api processes=0 threads=1 user=stack display-name=%{GROUP}
  The 'processes' option must not be 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428182] [NEW] Removing network bridge cause ERROR state of instance during deletion

2015-03-04 Thread Marian Horban

| status    | ERROR                            |
| tenant_id | dc669bf633ae4bd7b995c885e0428fec |
| updated   | 2015-03-04T15:14:48Z             |
| user_id   | bd2a2c6487484a24937c39740a76e501 |
+-----------+----------------------------------+

** Affects: nova
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Marian Horban (mhorban)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428182

Title:
  Removing network bridge cause ERROR state of instance during deletion

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  An instance moves to the ERROR state after it is deleted.
  Configuration:
    1. network_manager = nova.network.manager.VlanManager
    2. teardown_unused_network_gateway = true
  Steps to reproduce:
    1. launch an instance
    2. remove the instance
  Expected result:
  The instance is removed without error.
  Actual result:
  The instance is not removed and its state becomes ERROR:
  root@node-1:~# nova list
  +--------------------------------------+-----------------+--------+------------+-------------+----------------------+
  | ID                                   | Name            | Status | Task State | Power State | Networks             |
  +--------------------------------------+-----------------+--------+------------+-------------+----------------------+
  | fd581745-8ecd-4fa2-af80-82493d083b97 | test_del_bridge | ERROR  | deleting   | Running     | novanetwork=10.0.0.3 |
  +--------------------------------------+-----------------+--------+------------+-------------+----------------------+
  root@node-1:~# nova show fd581745-8ecd-4fa2-af80-82493d083b97
  +--------------------------------------+----------------------------------+
  | Property                             | Value                            |
  +--------------------------------------+----------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                           |
  | OS-EXT-AZ:availability_zone          | nova                             |
  | OS-EXT-SRV-ATTR:host                 | node-2                           |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | node-2                           |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0002                    |
  | OS-EXT-STS:power_state               | 1                                |
  | OS-EXT-STS:task_state                | deleting                         |
  | OS-EXT-STS:vm_state                  | error                            |
  | OS-SRV-USG:launched_at               | 2015-03-04T15:14:44.00           |
  | OS-SRV-USG:terminated_at             | -                                |
  | accessIPv4