[Yahoo-eng-team] [Bug 1649581] [NEW] IPv4 Link Local Addresses Not Supported in OVS firewall

2016-12-13 Thread Drew Thorstensen
Public bug reported:

There are certain workloads that require the ability to define IPv4
link-local addresses dynamically, as defined in RFC 3927.

The openvswitch_firewall service allows for IPv6 link-local addresses
(likely because they are deterministic), but does not account for IPv4
link-local addresses.  Without support for this, workloads that have
not yet made the transition to IPv6 won't be able to run with the
openvswitch_firewall.
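
For reference, RFC 3927 reserves 169.254.0.0/16 for IPv4 link-local
addressing, the v4 analogue of IPv6's fe80::/10.  A minimal sketch of
classifying such an address with Python's stdlib (illustrative only,
not the firewall driver's actual code):

  import ipaddress

  def is_ipv4_link_local(addr):
      # True for 169.254.0.0/16, per RFC 3927
      ip = ipaddress.ip_address(addr)
      return ip.version == 4 and ip.is_link_local

  print(is_ipv4_link_local('169.254.10.20'))  # True
  print(is_ipv4_link_local('10.0.0.5'))       # False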

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649581

Title:
  IPv4 Link Local Addresses Not Supported in OVS firewall

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649581/+subscriptions


[Yahoo-eng-team] [Bug 1631647] [NEW] Network downtime during live migration through routers

2016-10-08 Thread Drew Thorstensen
Public bug reported:

neutron/master (close to stable/newton)
VXLAN networks with simple network node (not DVR)

There is network downtime of several seconds during a live migration.
The amount of downtime depends on when the VM resumes on the target
host versus when the migration 'completes'.

When a live migration occurs, there is a point in its life cycle where
it pauses on the source and starts up (or resumes) on the target.  At
that point the migration isn't complete; the system has simply
determined it is now best to run on the target.  This of course varies
per hypervisor, but that is the general flow for most of them.

So during the migration the port goes through a few states (sketched
below).
1) Pre-migration, it's tied solely to the source host.
2) During migration, it's still tied to the source host, but the port
profile has a 'migrating_to' attribute that identifies the target host.
3) Post-migration, the port is tied solely to the target host.
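
A hedged sketch of those three states as seen on the port's binding
attributes (the field names come from the Neutron portbindings
extension; the payloads are illustrative, not verbatim API output):

  # 1) Pre-migration: bound to the source host only.
  port = {'binding:host_id': 'source-host',
          'binding:profile': {}}

  # 2) Mid-migration: still bound to the source host, but the profile
  #    carries the 'migrating_to' hint naming the target host.
  port = {'binding:host_id': 'source-host',
          'binding:profile': {'migrating_to': 'target-host'}}

  # 3) Post-migration: rebound to the target host.
  port = {'binding:host_id': 'target-host',
          'binding:profile': {}}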


The OVS agent handles the migration well.  It detects the port, sees the UUID, 
and treats the port properly.  But things like the router don’t seem to handle 
it properly, at least in my testing.

It seems only once the VM hits step 3 (post migration, where nova
updates the port to be on the target host solely) does the routing
information get updated in the router.

In fact, it's kind of interesting.  I've been running a constant ping
through the router during the live migration and watching it on both
sides with tcpdump.  When the VM has resumed on the target but the live
migration is not yet complete, the following happens:
 - Ping request goes out from the target server
 - Goes out through the router
 - Comes back into the router
 - Gets sent to the source server

I'm not sure if this is somehow specific to VXLAN.  I haven't had a
chance to try Geneve yet.

This could impact projects like Watcher, which will use live migration
to continuously optimize the system.  That optimization could become
undesirable if every move introduces downtime on the workloads being
shuffled around.

If the time between the VM resume and live migration completion is
minimal, the impact can be quite small (a couple of seconds).  If KVM
uses post-copy, it should be especially susceptible, since the VM
resumes on the target well before the migration completes.
http://wiki.qemu.org/Features/PostCopyLiveMigration

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631647

Title:
  Network downtime during live migration through routers

Status in neutron:
  New

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1626298] [NEW] Out of tree drivers fail to run hacking checks now

2016-09-21 Thread Drew Thorstensen
Public bug reported:

This change recently went into the neutron hacking checks:

https://github.com/openstack/neutron/commit/31e1aeb66b2d8abb0d8424e9550693fad6f37c1c

While this serves its purpose of preventing functional tests within
neutron from extending unit test functions, it also blocks out-of-tree
neutron drivers/ML2 agents from importing the neutron unit tests within
their own unit tests.

Out-of-tree drivers/ML2 agents are no longer able to run the hacking
checks that they extend from neutron against their own source tree - or
they have to explicitly exclude the check_no_imports_from_tests rule.


Neutron Code:
https://github.com/openstack/neutron/blame/master/neutron/hacking/checks.py#L390-L403

Example of broken check (PowerVM):
https://github.com/openstack/networking-powervm/blob/master/networking_powervm/hacking/checks.py#L45
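
A hedged sketch of the workaround an out-of-tree repo could apply:
re-register neutron's checks from its own hacking module while skipping
the problematic rule (the factory shape mirrors the checks.py files
linked above, but the filtering logic is illustrative, not the actual
networking-powervm code):

  # Illustrative hacking factory: register every check neutron exposes
  # except the new rule that rejects imports from neutron's test tree.
  from neutron.hacking import checks

  _SKIPPED = {'check_no_imports_from_tests'}

  def factory(register):
      for name, check in vars(checks).items():
          if (callable(check) and name.startswith('check')
                  and name not in _SKIPPED):
              register(check)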

** Affects: networking-powervm
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1626298

Title:
  Out of tree drivers fail to run hacking checks now

Status in networking-powervm:
  New
Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-powervm/+bug/1626298/+subscriptions


[Yahoo-eng-team] [Bug 1575335] Re: Out-of-tree compute drivers no longer loading

2016-08-17 Thread Drew Thorstensen
** Changed in: nova-powervm
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575335

Title:
  Out-of-tree compute drivers no longer loading

Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/8eb03de1eb83a6cd2d4d41804e1b8253f94e5400
  removed the mechanism by which nova-powervm was loading its Compute
  driver from out of tree, resulting in the following failure to start
  up n-cpu:

  2016-04-25 23:53:46.581 32459 INFO nova.virt.driver [-] Loading compute driver 'nova_powervm.virt.powervm.driver.PowerVMDriver'
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver [-] Unable to load the virtualization driver
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver Traceback (most recent call last):
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver   File "/opt/stack/nova/nova/virt/driver.py", line 1623, in load_compute_driver
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver     virtapi)
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in import_object
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver     return import_class(import_str)(*args, **kwargs)
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in import_class
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver     __import__(mod_str)
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver ImportError: No module named nova_powervm.virt.powervm.driver
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver
  n-cpu failed to start

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1575335/+subscriptions


[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-05-23 Thread Drew Thorstensen
** Changed in: nova-powervm
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive liberty series:
  New
Status in Ubuntu Cloud Archive mitaka series:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in nova-powervm:
  Fix Released
Status in tempest:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Wily:
  New
Status in nova source package in Xenial:
  New
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  An error occurred when resizing an instance after editing its flavor
  with horizon (and also after deleting the flavor used by the
  instance).

  Reproduce steps:

  1. Create flavor A
  2. Boot an instance using flavor A
  3. Edit the flavor with horizon (or delete flavor A)
     -> editing and deleting give the same result, because editing a
     flavor means deleting and recreating it
  4. Resize or migrate the instance
  5. Error occurs

  Log:
  nova-compute.log
     File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in _object_dispatch
       return getattr(target, method)(*args, **kwargs)

     File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
       result = fn(cls, context, *args, **kwargs)

     File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in get_by_id
       db_flavor = db.flavor_get(context, id)

     File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
       return IMPL.flavor_get(context, id)

     File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in wrapper
       return f(*args, **kwargs)

     File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in flavor_get
       raise exception.FlavorNotFound(flavor_id=id)

     FlavorNotFound: Flavor 7 could not be found.

  
  This error occurs because of the code below, from
  /opt/openstack/src/nova/nova/compute/manager.py:

      def resize_instance(self, context, instance, image,
                          reservations, migration, instance_type,
                          clean_shutdown=True):
          ...
          if (not instance_type or
                  not isinstance(instance_type, objects.Flavor)):
              instance_type = objects.Flavor.get_by_id(
                  context, migration['new_instance_type_id'])
          ...

  I think the deleted flavor should still be retrievable when resizing
  the instance.  I tested this on stable/kilo, but I think
  stable/liberty and stable/mitaka have the same bug because the source
  code is unchanged.
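
  A hedged sketch of one possible direction (illustrative, not
  necessarily the upstream fix): fall back to the flavor data nova
  keeps with the instance itself, which survives flavor deletion:

      # Illustrative fallback; Instance.get_flavor() reads the copy of
      # the flavor stored with the instance.  The integration point and
      # the helper name _load_resize_flavor are hypothetical.
      from nova import exception, objects

      def _load_resize_flavor(context, instance, migration,
                              instance_type):
          if instance_type and isinstance(instance_type, objects.Flavor):
              return instance_type
          try:
              return objects.Flavor.get_by_id(
                  context, migration['new_instance_type_id'])
          except exception.FlavorNotFound:
              # 'new' is the namespace nova uses for the target flavor
              # of an in-progress resize.
              return instance.get_flavor('new')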

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1570748/+subscriptions


[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-04-29 Thread Drew Thorstensen
** Also affects: nova-powervm
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in nova-powervm:
  New
Status in tempest:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570748/+subscriptions


[Yahoo-eng-team] [Bug 1539029] Re: rollback_live_migration_at_destination fails in libvirt - expects migrate_data object, gets dictionary

2016-02-01 Thread Drew Thorstensen
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1539029

Title:
  rollback_live_migration_at_destination fails in libvirt - expects
  migrate_data object, gets dictionary

Status in OpenStack Compute (nova):
  Invalid

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1539029/+subscriptions


[Yahoo-eng-team] [Bug 1535918] Re: instance.host not updated on evacuation

2016-01-28 Thread Drew Thorstensen
The issue with the PowerVM driver is actually in neutron.  I set up a
libvirt environment, and the difference is that the PowerVM VIF is for
some reason in a BUILD state, whereas it is ACTIVE in libvirt.

If the PowerVM VIF were in an ACTIVE state, this wouldn't occur, and no
neutron events would need to be waited for.

I'll investigate what's going on with the port state for
networking-powervm.  The state up is being sent...so this requires some
verification.


It is true that nova's instance.host isn't updated until after the
spawn.  That could be investigated...but the VIF state is the root
reason why PowerVM is seeing different behavior than libvirt.

** Project changed: nova => networking-powervm

** Changed in: networking-powervm
 Assignee: Wen Zhi Yu (yuywz) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1535918

Title:
  instance.host not updated on evacuation

Status in networking-powervm:
  In Progress

Bug description:
  I'm working on the nova-powervm driver for Mitaka and trying to add
  support for evacuation.

  The problem I'm hitting is that instance.host is not updated when the
  compute driver is called to spawn the instance on the destination
  host.  It is still set to the source host.  It's not until after the
  spawn completes that the compute manager updates instance.host to
  reflect the destination host.

  The nova-powervm driver uses the instance events callback mechanism
  during VIF plugging to determine when Neutron has finished
  provisioning the network.  The instance events code sends the event
  to instance.host and hence sends it to the source host (which is
  down).  This causes the spawn to fail and also causes weirdness when
  the source host gets the events after it is powered back up.
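
  A hedged sketch of the wait the driver performs (the virtapi helper
  and the event name exist in nova; the surrounding code is
  illustrative):

    # Register for Neutron's completion events before plugging VIFs.
    # Nova delivers these events to instance.host, which during an
    # evacuation still names the downed source host.
    events = [('network-vif-plugged', vif['id'])
              for vif in network_info]
    with self.virtapi.wait_for_instance_event(instance, events,
                                              deadline=300):
        self._plug_vifs(instance, network_info)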

  To temporarily work around the problem, I hacked in setting
  instance.host = CONF.host; instance.save() in the compute driver but
  that's not a good solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-powervm/+bug/1535918/+subscriptions


[Yahoo-eng-team] [Bug 1539029] [NEW] rollback_live_migration_at_destination fails in libvirt - expects migrate_data object, gets dictionary

2016-01-28 Thread Drew Thorstensen
Public bug reported:

The rollback_live_migration_at_destination method in the libvirt nova
driver currently expects migrate_data as an object.  The object model
for the migrate data was introduced here:

https://github.com/openstack/nova/commit/69e01758076d0e89eddfe6945c8c7e423c862a49

Subsequently, a change set was added to provide transitional support for
the migrate_data object.  This currently forces all of the migrate_data
objects that are sent to the manager to be converted to dictionaries:

https://github.com/openstack/nova/commit/038dfd672f5b2be5ebe30d85bd00d09bae2993fc


It looks like the rollback_live_migration_at_destination method still expects 
the migrate_data in object form.  However the manager passes it down as a 
dictionary.  Which results in this error message upon a rollback:

  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 373, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 5554, in rollback_live_migration_at_destination
    destroy_disks=destroy_disks, migrate_data=migrate_data)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6391, in rollback_live_migration_at_destination
    is_shared_instance_path = migrate_data.is_shared_instance_path
AttributeError: 'dict' object has no attribute 'is_shared_instance_path'
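
A hedged sketch of a transitional guard (LibvirtLiveMigrateData and its
from_legacy_dict() helper come from the commits above; the placement
shown is illustrative, not the actual fix):

  # Tolerate both forms while the manager still downgrades the object
  # to a dict: rehydrate it before touching object attributes.
  from nova import objects

  if isinstance(migrate_data, dict):
      md_obj = objects.LibvirtLiveMigrateData()
      md_obj.from_legacy_dict(migrate_data)
      migrate_data = md_obj
  is_shared_instance_path = migrate_data.is_shared_instance_path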

** Affects: nova
     Importance: Undecided
 Assignee: Drew Thorstensen (thorst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1539029

Title:
  rollback_live_migration_at_destination fails in libvirt - expects
  migrate_data object, gets dictionary

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1539029/+subscriptions


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-01-04 Thread Drew Thorstensen
** Changed in: nova-powervm
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in Aodh:
  In Progress
Status in Glance:
  In Progress
Status in Ironic:
  Fix Released
Status in Ironic Inspector:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in ironic-python-agent:
  Fix Committed
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in nova-powervm:
  Fix Released
Status in python-magnumclient:
  In Progress
Status in tempest:
  In Progress
Status in tripleo:
  New
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated, but it is still used in a few places; the
  non-deprecated LOG.warning should be used instead.
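
  An illustrative one-line fix (oslo.log-style module logging shown for
  completeness):

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    LOG.warn('low on disk space')     # deprecated alias
    LOG.warning('low on disk space')  # preferred replacement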

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1508442/+subscriptions


[Yahoo-eng-team] [Bug 1366982] [NEW] Exception NoMoreFixedIps doesn't show which network is out of IPs

2014-09-08 Thread Drew Thorstensen
Public bug reported:

The exception NoMoreFixedIps in nova/exception.py has a very generic
error message:

  Zero fixed ips available.

When performing a deploy with multiple networks, it can become
difficult to determine which network has been exhausted.  A slight
modification to this error message would help simplify the debugging
process for operators.
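
A hedged sketch of the suggested change (standard nova exception style;
the exact message format is the proposal, not merged code):

  # nova/exception.py -- name the exhausted network in the message so
  # operators can tell which one ran out of addresses.
  class NoMoreFixedIps(NovaException):
      msg_fmt = _("No fixed IP addresses available for network: %(net)s")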

** Affects: nova
 Importance: Undecided
 Assignee: Drew Thorstensen (thorst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366982

Title:
  Exception NoMoreFixedIps doesn't show which network is out of IPs

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366982/+subscriptions


[Yahoo-eng-team] [Bug 1339098] [NEW] detach_interface may hide issues due to async call

2014-07-08 Thread Drew Thorstensen
Public bug reported:

The detach_interface operation runs on the compute host via a cast RPC
invocation (async).  As such, any validation done on the compute
manager (for example, an incorrect port id being passed in) is lost,
and the HTTP response code returned to the user is always 202.  Users
would need to look in the logs to determine the error, even though the
API indicated nothing was wrong.

The attach_interface operation is a synchronous (call) RPC invocation.
This enables validation to be done and the error codes to be returned
to the user.

This behavior should be consistent between the two calls.  Propose that
detach_interface switch to a 'call' instead of a 'cast' to get similar
behavior.

It appears that detach_volume also has a similar issue.
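
A hedged sketch of the proposed switch in nova's compute RPC API (cast
and call are oslo.messaging primitives; the method body shown is
illustrative and trimmed):

  # nova/compute/rpcapi.py (illustrative): blocking on the result lets
  # compute-manager validation errors propagate back to the API layer.
  def detach_interface(self, ctxt, instance, port_id):
      cctxt = self.client.prepare(server=_compute_host(None, instance))
      # current behavior: fire-and-forget; errors only reach the logs
      # cctxt.cast(ctxt, 'detach_interface',
      #            instance=instance, port_id=port_id)
      return cctxt.call(ctxt, 'detach_interface',
                        instance=instance, port_id=port_id)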

** Affects: nova
 Importance: Undecided
 Assignee: Drew Thorstensen (thorst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339098

Title:
  detach_interface may hide issues due to async call

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339098/+subscriptions


[Yahoo-eng-team] [Bug 1338551] [NEW] Failure in interface-attach may leave port around

2014-07-07 Thread Drew Thorstensen
Public bug reported:

When the interface-attach action is run, it may be passed in a network
(but no port identifier).  Therefore, the action allocates a port on
that network.  However, if the attach method fails for some reason, the
port is not cleaned up.

This behavior would be appropriate if the invoker had passed in a port
identifier.  However if nova created the port for the action and that
action failed, the port should be cleaned up as part of the failure.

The allocation of the port occurs in nova/compute/manager.py in the
attach_interface method.  Recommend that we de-allocate the port for the
instance had no port_id been passed in.
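
A hedged sketch of the recommended cleanup (the network API methods
exist in nova; the structure shown is illustrative):

  # nova/compute/manager.py, attach_interface() (illustrative): only
  # deallocate the port when nova allocated it on the caller's behalf.
  nova_created_port = port_id is None
  network_info = self.network_api.allocate_port_for_instance(
      context, instance, port_id, network_id, requested_ip)
  try:
      self.driver.attach_interface(instance, image_meta,
                                   network_info[0])
  except Exception:
      with excutils.save_and_reraise_exception():
          if nova_created_port:
              self.network_api.deallocate_port_for_instance(
                  context, instance, network_info[0]['id'])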

** Affects: nova
 Importance: Undecided
 Assignee: Drew Thorstensen (thorst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338551

Title:
  Failure in interface-attach may leave port around

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338551/+subscriptions


[Yahoo-eng-team] [Bug 1336883] [NEW] Should not be able to attach/detach interface during building state

2014-07-02 Thread Drew Thorstensen
Public bug reported:

During the VM building state, the nova API allows interface attach and
detach.  The operation generally does not succeed because the VM may
not yet exist on the hypervisor (e.g., the image may still be
transferring when using local disks and glance).  The VM is then
brought up as defined by the original spawn request.  However, the
attach_interface request is reported as successful, and the VM is
reported as having the interface (even though it does not).

Recommend that this operation be blocked unless the VM is in a specific
state where the operation can actually run.
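
A hedged sketch of such a guard using nova's existing state-check
decorator (check_instance_state exists in nova/compute/api.py; the
allowed-state list below is the suggestion, not current upstream
behavior):

  # nova/compute/api.py (illustrative): reject interface attach/detach
  # while the instance is still building or mid-task.
  @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED,
                                  vm_states.STOPPED],
                        task_state=[None])
  def attach_interface(self, context, instance, network_id, port_id,
                       requested_ip):
      ...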

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336883

Title:
  Should not be able to attach/detach interface during building state

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp