[Yahoo-eng-team] [Bug 1895976] Re: Fail to get http openstack metadata if the Linux instance runs on Hyper-V

2022-04-28 Thread Lucian Petrut
** Changed in: compute-hyperv
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1895976

Title:
  Fail to get http openstack metadata if the Linux instance runs on
  Hyper-V

Status in cloud-init:
  Fix Released
Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in os-win:
  Fix Released

Bug description:
  Because of the commit that introduced platform checks for enabling /
  using http openstack metadata (https://github.com/canonical/cloud-
  init/commit/1efa8a0a030794cec68197100f31a856d0d264ab), cloud-init on
  Linux machines will stop loading http metadata when running on
  "unsupported" platforms / hypervisors such as Hyper-V, Xen, Oracle
  Cloud, VMware and Open Telekom Cloud - leading to a whole suite of
  bug reports and fixes for a non-issue.

  Let's try to solve this problem once and for all for upcoming
  platforms / hypervisors by adding a configuration option at the
  metadata level: perform_platform_check or
  check_if_platform_is_supported (suggestions are welcome for the
  naming). The option should default to true in order to maintain
  backwards compatibility; when set to true, cloud-init will check
  whether the platform is supported.
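  The gating logic being proposed can be sketched as follows. This is
  an illustrative sketch only, not cloud-init's actual code: the option
  name, the config layout and the SUPPORTED_PLATFORMS set are all
  assumptions made for the sake of the example.

```python
# Hypothetical sketch of the proposed option; names are illustrative.
SUPPORTED_PLATFORMS = {"openstack", "rackspace"}  # assumed list


def should_enable_http_metadata(ds_cfg, detected_platform):
    """Return True if the OpenStack http metadata source may be used.

    perform_platform_check defaults to True so that existing
    deployments keep today's behavior; setting it to False skips the
    platform check entirely, covering Hyper-V, Xen, etc. in one go.
    """
    if not ds_cfg.get("perform_platform_check", True):
        return True
    return detected_platform in SUPPORTED_PLATFORMS
```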

  No one wants to patch well-working OpenStack environments for this
  kind of issue, and it is always easier to control / build the images
  you use on a private OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1895976/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761748] Re: hyperv: Unable to get ports details for devices: AttributeError: 'NoneType' object has no attribute 'startswith'

2022-04-28 Thread Lucian Petrut
** Changed in: networking-hyperv
   Status: New => Fix Released

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1761748

Title:
  hyperv: Unable to get ports details for devices: AttributeError:
  'NoneType' object has no attribute 'startswith'

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Fix Released
Status in os-win:
  Fix Released

Bug description:
  In a failed hyperv CI run I'm seeing this in the hyperv agent logs:

  http://cloudbase-ci.com/nova/324720/5/Hyper-
  V_logs/192.168.3.143-compute01/neutron-hyperv-agent.log.gz

  2018-04-06 02:43:29.230 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: 5d31e08c-957c-45a5-a13d-fa114ea68b56
  2018-04-06 02:43:29.230 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:29.246 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:29.246 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:29.262 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:30.496 588 35292864 MainThread DEBUG networking_hyperv.neutron.agent.layer2 [req-1c72ef49-f85f-4776-8219-ac410b3a00e6 - - - - -] Agent loop has new devices! _work c:\openstack\build\networking-hyperv\networking_hyperv\neutron\agent\layer2.py:427
  2018-04-06 02:43:30.526 588 35292864 MainThread DEBUG networking_hyperv.neutron.agent.layer2 [req-1c72ef49-f85f-4776-8219-ac410b3a00e6 - - - - -] Unable to get ports details for devices set([u'5d31e08c-957c-45a5-a13d-fa114ea68b56', None]): 'NoneType' object has no attribute 'startswith'
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
      res = self.dispatcher.dispatch(message)
    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
      return self._do_dispatch(endpoint, method, ctxt, args)
    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
      result = func(ctxt, **new_args)
    File "/opt/stack/neutron/neutron/plugins/ml2/rpc.py", line 157, in get_devices_details_list
      for device in kwargs.pop('devices', [])
    File "/opt/stack/neutron/neutron/plugins/ml2/rpc.py", line 80, in get_device_details
      port_id = plugin._device_to_port_id(rpc_context, device)
    File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1864, in _device_to_port_id
      if device.startswith(prefix):
  AttributeError: 'NoneType' object has no attribute 'startswith'
  _treat_devices_added c:\openstack\build\networking-hyperv\networking_hyperv\neutron\agent\layer2.py:360

  In this test run, the nova-compute service is also being reported as
  down, so the nova-scheduler is filtering it out and all server build
  requests fail. I don't know whether the two are related, but that's
  how I stumbled onto this error in the hyperv agent logs.
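  A defensive fix on the neutron side could skip device ids that are
  None before trying to resolve them to port ids. The sketch below is
  an illustrative variant of ml2's _device_to_port_id, not the actual
  patch; the prefix list is an assumption.

```python
def device_to_port_id(device, prefixes=("tap", "qvo", "qr-")):
    """Resolve an agent device name to a port id, tolerating None.

    A None device (as seen in the agent logs above) is skipped instead
    of raising AttributeError. The prefix list here is illustrative.
    """
    if not device:
        return None
    for prefix in prefixes:
        if device.startswith(prefix):
            return device[len(prefix):]
    return device
```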

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1761748/+subscriptions




[Yahoo-eng-team] [Bug 1907216] Re: Wrong image ref after unshelve

2022-04-28 Thread Lucian Petrut
** Changed in: compute-hyperv
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1907216

Title:
  Wrong image ref after unshelve

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  After an instance is unshelved, the instance image ref will point to
  the original image instead of the snapshot created during the shelving
  [1][2].

  Subsequent instance operations will use the wrong image id. For
  example, in case of cold migrations, Hyper-V instances will be unable
  to boot since the differencing images will have the wrong base [3].
  Other image related operations might be affected as well.

  As pointed out by Matt Riedemann on the patch [1], Nova shouldn't set
  back the original image id, instead it should use the snapshot id.

  [1] I3bba0a230044613e07122a6d122597e5b8d43438
  [2] 
https://github.com/openstack/nova/blob/22.0.1/nova/compute/manager.py#L6625
  [3] http://paste.openstack.org/raw/800822/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1907216/+subscriptions




[Yahoo-eng-team] [Bug 1895976] Re: Fail to get http openstack metadata if the Linux instance runs on Hyper-V

2022-04-28 Thread Lucian Petrut
** Changed in: os-win
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1895976

Title:
  Fail to get http openstack metadata if the Linux instance runs on
  Hyper-V

Status in cloud-init:
  Fix Released
Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in os-win:
  Fix Released

Bug description:
  Because of the commit that introduced platform checks for enabling /
  using http openstack metadata (https://github.com/canonical/cloud-
  init/commit/1efa8a0a030794cec68197100f31a856d0d264ab), cloud-init on
  Linux machines will stop loading http metadata when running on
  "unsupported" platforms / hypervisors such as Hyper-V, Xen, Oracle
  Cloud, VMware and Open Telekom Cloud - leading to a whole suite of
  bug reports and fixes for a non-issue.

  Let's try to solve this problem once and for all for upcoming
  platforms / hypervisors by adding a configuration option at the
  metadata level: perform_platform_check or
  check_if_platform_is_supported (suggestions are welcome for the
  naming). The option should default to true in order to maintain
  backwards compatibility; when set to true, cloud-init will check
  whether the platform is supported.

  No one wants to patch well-working OpenStack environments for this
  kind of issue, and it is always easier to control / build the images
  you use on a private OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1895976/+subscriptions




[Yahoo-eng-team] [Bug 1922251] Re: nova instance snapshots fail when using rbd

2021-04-07 Thread Lucian Petrut
*** This bug is a duplicate of bug 1802587 ***
https://bugs.launchpad.net/bugs/1802587

Ok, so I think this is a duplicate of
https://bugs.launchpad.net/glance/+bug/1802587, which was fixed here:
https://review.opendev.org/c/openstack/glance/+/617229.

Dan, thanks again for mentioning that function.

** This bug has been marked a duplicate of bug 1802587
   With multiple backends enabled, adding a location does not default to 
default store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1922251

Title:
  nova instance snapshots fail when using rbd

Status in Glance:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Snapshotting instances that use the RBD Glance image backend fails if
  Glance is configured to use multiple stores.

  Trace: http://paste.openstack.org/raw/804113/

  The reason is that the Nova Libvirt driver creates the RBD snapshot
  directly and then updates the Glance image location. However, Nova
  isn't aware of the Glance store, so this information won't be
  included.

  Glance will error out when trying to add a location that doesn't
  include the store name when multiple stores are enabled, or even if
  there's a single one passed through the "enabled_backends" glance
  option.

  
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/virt/libvirt/imagebackend.py#L1144-L1178
  
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/image/glance.py#L701-L702
  
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/image/glance.py#L555-L561
  
https://github.com/openstack/python-glanceclient/blob/3.3.0/glanceclient/v2/images.py#L472
  
https://github.com/openstack/glance/blob/b5437773b20db3d6ef20d449a8a43171c8fc7f69/glance/location.py#L122-L129
  
https://github.com/openstack/glance_store/blob/ae9022cd3639bf3d0f482921d03b2b751f757399/glance_store/location.py#L83-L113

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1922251/+subscriptions



[Yahoo-eng-team] [Bug 1922251] [NEW] nova instance snapshots fail when using rbd

2021-04-01 Thread Lucian Petrut
Public bug reported:

Snapshotting instances that use the RBD Glance image backend fails if
Glance is configured to use multiple stores.

Trace: http://paste.openstack.org/raw/804113/

The reason is that the Nova Libvirt driver creates the RBD snapshot
directly and then updates the Glance image location. However, Nova isn't
aware of the Glance store, so this information won't be included.

Glance will error out when trying to add a location that doesn't include
the store name when multiple stores are enabled, or even if there's a
single one passed through the "enabled_backends" glance option.

https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/virt/libvirt/imagebackend.py#L1144-L1178
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/image/glance.py#L701-L702
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/image/glance.py#L555-L561
https://github.com/openstack/python-glanceclient/blob/3.3.0/glanceclient/v2/images.py#L472
https://github.com/openstack/glance/blob/b5437773b20db3d6ef20d449a8a43171c8fc7f69/glance/location.py#L122-L129
https://github.com/openstack/glance_store/blob/ae9022cd3639bf3d0f482921d03b2b751f757399/glance_store/location.py#L83-L113

The image store should probably be fetched by either Nova or Glance.
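One way this could look, as a hedged sketch: whichever side learns the
store id (from configuration or from the Glance image) would attach it
to the location metadata before registering the location. The function
name, store id and RBD url below are illustrative assumptions, not the
actual fix.

```python
def build_location_metadata(store_id=None):
    """Build the metadata dict for an image location.

    With enabled_backends set, Glance rejects locations whose metadata
    lacks the backing store id; attaching it here (when known) would
    let the direct-RBD-snapshot flow keep working.
    """
    metadata = {}
    if store_id:
        metadata["store"] = store_id
    return metadata


# Illustrative usage; the url format and store id are assumptions.
location_url = "rbd://fsid/pool/image-id/snap"
location_metadata = build_location_metadata("rbd_store")
# Passing this metadata along with the location would then satisfy
# Glance's multi-store validation.
```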

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ceph libvirt

** Tags added: ceph

** Also affects: glance
   Importance: Undecided
   Status: New

** Summary changed:

- instance snapshots fail when using rbd
+ nova instance snapshots fail when using rbd

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1922251

Title:
  nova instance snapshots fail when using rbd

Status in Glance:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Snapshotting instances that use the RBD Glance image backend fails if
  Glance is configured to use multiple stores.

  Trace: http://paste.openstack.org/raw/804113/

  The reason is that the Nova Libvirt driver creates the RBD snapshot
  directly and then updates the Glance image location. However, Nova
  isn't aware of the Glance store, so this information won't be
  included.

  Glance will error out when trying to add a location that doesn't
  include the store name when multiple stores are enabled, or even if
  there's a single one passed through the "enabled_backends" glance
  option.

  
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/virt/libvirt/imagebackend.py#L1144-L1178
  
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/image/glance.py#L701-L702
  
https://github.com/openstack/nova/blob/68af588d5c7b5c9472cbc2731fee2956c86206ea/nova/image/glance.py#L555-L561
  
https://github.com/openstack/python-glanceclient/blob/3.3.0/glanceclient/v2/images.py#L472
  
https://github.com/openstack/glance/blob/b5437773b20db3d6ef20d449a8a43171c8fc7f69/glance/location.py#L122-L129
  
https://github.com/openstack/glance_store/blob/ae9022cd3639bf3d0f482921d03b2b751f757399/glance_store/location.py#L83-L113

  The image store should probably be fetched by either Nova or Glance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1922251/+subscriptions



[Yahoo-eng-team] [Bug 1907216] Re: Wrong image ref after unshelve

2020-12-14 Thread Lucian Petrut
** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

** Changed in: compute-hyperv
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1907216

Title:
  Wrong image ref after unshelve

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  After an instance is unshelved, the instance image ref will point to
  the original image instead of the snapshot created during the shelving
  [1][2].

  Subsequent instance operations will use the wrong image id. For
  example, in case of cold migrations, Hyper-V instances will be unable
  to boot since the differencing images will have the wrong base [3].
  Other image related operations might be affected as well.

  As pointed out by Matt Riedemann on the patch [1], Nova shouldn't set
  back the original image id, instead it should use the snapshot id.

  [1] I3bba0a230044613e07122a6d122597e5b8d43438
  [2] 
https://github.com/openstack/nova/blob/22.0.1/nova/compute/manager.py#L6625
  [3] http://paste.openstack.org/raw/800822/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1907216/+subscriptions



[Yahoo-eng-team] [Bug 1907216] [NEW] Wrong image ref after unshelve

2020-12-08 Thread Lucian Petrut
Public bug reported:

After an instance is unshelved, the instance image ref will point to the
original image instead of the snapshot created during the shelving
[1][2].

Subsequent instance operations will use the wrong image id. For example,
in case of cold migrations, Hyper-V instances will be unable to boot
since the differencing images will have the wrong base [3]. Other image
related operations might be affected as well.

As pointed out by Matt Riedemann on the patch [1], Nova shouldn't set
back the original image id, instead it should use the snapshot id.

[1] I3bba0a230044613e07122a6d122597e5b8d43438
[2] https://github.com/openstack/nova/blob/22.0.1/nova/compute/manager.py#L6625
[3] http://paste.openstack.org/raw/800822/

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1907216

Title:
  Wrong image ref after unshelve

Status in OpenStack Compute (nova):
  New

Bug description:
  After an instance is unshelved, the instance image ref will point to
  the original image instead of the snapshot created during the shelving
  [1][2].

  Subsequent instance operations will use the wrong image id. For
  example, in case of cold migrations, Hyper-V instances will be unable
  to boot since the differencing images will have the wrong base [3].
  Other image related operations might be affected as well.

  As pointed out by Matt Riedemann on the patch [1], Nova shouldn't set
  back the original image id, instead it should use the snapshot id.

  [1] I3bba0a230044613e07122a6d122597e5b8d43438
  [2] 
https://github.com/openstack/nova/blob/22.0.1/nova/compute/manager.py#L6625
  [3] http://paste.openstack.org/raw/800822/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1907216/+subscriptions



[Yahoo-eng-team] [Bug 1899775] [NEW] Ovs agent import error on Windows

2020-10-14 Thread Lucian Petrut
Public bug reported:

A recent change[1] added an "iptables_firewall" import to the ovs agent,
which isn't importable on Windows[2].

This prevents the neutron ovs agent from starting on Windows.

[1] Iecf9cffaf02616342f1727ad7db85545d8adbec2
[2] http://paste.openstack.org/raw/799030/
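A common way to handle such platform-specific imports is to guard them
instead of importing unconditionally. A minimal sketch of the guard
pattern follows; the helper is illustrative, while the module path in
the comment is the real neutron one.

```python
import os


def firewall_module_available(os_name=os.name):
    """iptables-based firewall drivers only exist on POSIX systems;
    the ovs agent can use this to guard the import instead of failing
    at module load time on Windows."""
    return os_name == "posix"


# Module-level guard pattern (illustrative):
# if firewall_module_available():
#     from neutron.agent.linux import iptables_firewall
```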

** Affects: neutron
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1899775

Title:
  Ovs agent import error on Windows

Status in neutron:
  In Progress

Bug description:
  A recent change[1] added an "iptables_firewall" import to the ovs
  agent, which isn't importable on Windows[2].

  This prevents the neutron ovs agent from starting on Windows.

  [1] Iecf9cffaf02616342f1727ad7db85545d8adbec2
  [2] http://paste.openstack.org/raw/799030/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1899775/+subscriptions



[Yahoo-eng-team] [Bug 1899139] [NEW] Live migrations don't properly handle disk overcommitment

2020-10-09 Thread Lucian Petrut
Public bug reported:

When live migrating libvirt instances, the destination host doesn't
properly check the available disk space when using image files and doing
overcommit, leading to migration failures.

Trace: http://paste.openstack.org/raw/798895/

It seems to be using resource tracker information that is not aware of
disk overcommitment, so we end up with negative values. The
"local_gb_used" value reflects the total allocated space, not the
actually used disk space.

https://github.com/openstack/nova/blob/20.4.0/nova/compute/resource_tracker.py#L1254

The same incorrect values will be reported by "openstack hypervisor show":
http://paste.openstack.org/raw/798898/

Additionally, the "disk_over_commit" boolean flag is incorrectly
checked. The driver checks if the field exists as part of the
"dest_check_data" dict but doesn't actually check its value.

https://github.com/openstack/nova/blob/20.4.0/nova/virt/libvirt/driver.py#L8224
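The difference between checking for the field's presence and checking
its value can be sketched with a plain dict standing in for the
"dest_check_data" object (the real one is an oslo versioned object, so
this is only an illustration):

```python
def disk_over_commit_enabled(dest_check_data):
    """Return the actual value of the disk_over_commit flag.

    The buggy pattern ("is the field set?") treats an explicit False
    the same as True; the flag's value has to be inspected too.
    """
    return bool(dest_check_data.get("disk_over_commit", False))


# The buggy presence-only check, for contrast:
def disk_over_commit_present(dest_check_data):
    return "disk_over_commit" in dest_check_data
```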

The "disk_over_commit" parameter is deprecated. Recent Nova API versions
do not use it, which bypasses the disk allocation check on the libvirt
driver side. This might be used as a workaround (e.g. using nova client
instead of the openstack client or horizon), but this is not ideal.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt live-migration resource-tracker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1899139

Title:
  Live migrations don't properly handle disk overcommitment

Status in OpenStack Compute (nova):
  New

Bug description:
  When live migrating libvirt instances, the destination host doesn't
  properly check the available disk space when using image files and
  doing overcommit, leading to migration failures.

  Trace: http://paste.openstack.org/raw/798895/

  It seems to be using resource tracker information that is not aware of
  disk overcommitment, so we end up with negative values. The
  "local_gb_used" value reflects the total allocated space, not the
  actually used disk space.

  
https://github.com/openstack/nova/blob/20.4.0/nova/compute/resource_tracker.py#L1254

  The same incorrect values will be reported by "openstack hypervisor show":
  http://paste.openstack.org/raw/798898/

  Additionally, the "disk_over_commit" boolean flag is incorrectly
  checked. The driver checks if the field exists as part of the
  "dest_check_data" dict but doesn't actually check its value.

  
https://github.com/openstack/nova/blob/20.4.0/nova/virt/libvirt/driver.py#L8224

  The "disk_over_commit" parameter is deprecated. Recent Nova API
  versions do not use it, which bypasses the disk allocation check on
  the libvirt driver side. This might be used as a workaround (e.g.
  using nova client instead of the openstack client or horizon), but
  this is not ideal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1899139/+subscriptions



[Yahoo-eng-team] [Bug 1872663] [NEW] Failing to terminate Windows processes

2020-04-14 Thread Lucian Petrut
Public bug reported:

The neutron Windows exec call helper doesn't properly handle the
situation in which the process that it's trying to kill was already
terminated, in which case using WMI to fetch the process can raise an
exception.

Trace: http://paste.openstack.org/raw/790116/
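The race can be handled by treating "process not found" as success,
since the goal was for the process to be gone anyway. The sketch below
abstracts the WMI calls behind plain callables, so the names are
illustrative rather than the actual helper's API:

```python
def kill_process(pid, list_procs, terminate):
    """Terminate pid, tolerating the race where it already exited.

    list_procs(pid) stands in for the WMI process query and returns
    the matching process objects; terminate(proc) may raise if the
    process vanished between the query and the call.
    """
    procs = list_procs(pid)
    if not procs:
        return False  # already gone, nothing to do
    for proc in procs:
        try:
            terminate(proc)
        except Exception:
            if list_procs(pid):  # still alive: a real failure
                raise
    return True


# Illustrative usage with fakes standing in for WMI:
killed = []
already_gone = kill_process(101, lambda pid: [], killed.append)
terminated = kill_process(102, lambda pid: ["proc-102"], killed.append)
```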

** Affects: neutron
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1872663

Title:
  Failing to terminate Windows processes

Status in neutron:
  In Progress

Bug description:
  The neutron Windows exec call helper doesn't properly handle the
  situation in which the process that it's trying to kill was already
  terminated, in which case using WMI to fetch the process can raise an
  exception.

  Trace: http://paste.openstack.org/raw/790116/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1872663/+subscriptions



[Yahoo-eng-team] [Bug 1864186] [NEW] "ping" prepended to ip netns commands

2020-02-21 Thread Lucian Petrut
Public bug reported:

A recent patch[1] updated rootwrap filters so that ping may be used
within a network namespace. The issue is that now all ip netns commands
get altered, the "ip" command being replaced with "ping":
http://paste.openstack.org/raw/789845/

In particular, this seems to affect the IpNetnsExecFilter filter.

[1] Ie5cbc0dcc76672b26cd2605f08cfd17a30b4c905
[2] 
https://github.com/openstack/oslo.rootwrap/blob/6.0.0/oslo_rootwrap/filters.py#L71-L75

This seems to be caused by the fact that the executable from the
original command gets replaced with the one from the filter [2]. I
can't tell what the purpose of that is.
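The substitution described above mirrors what oslo.rootwrap's
CommandFilter.get_command does: it returns its own configured
executable followed by the rest of the user's arguments. A simplified
model of that behavior (not the library's exact code):

```python
def get_command(usercmd, filter_exec_path):
    """Simplified model of CommandFilter.get_command: the filter's
    executable replaces the first element of the user command."""
    return [filter_exec_path] + usercmd[1:]


# A "ping" filter that matches an "ip netns exec" invocation thus
# rewrites "ip" into "ping", as seen in the paste above:
rewritten = get_command(
    ["ip", "netns", "exec", "ns1", "ip", "addr"], "/bin/ping")
```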

** Affects: neutron
 Importance: Undecided
     Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864186

Title:
  "ping" prepended to ip netns commands

Status in neutron:
  In Progress

Bug description:
  A recent patch[1] updated rootwrap filters so that ping may be used
  within a network namespace. The issue is that now all ip netns
  commands get altered, the "ip" command being replaced with "ping":
  http://paste.openstack.org/raw/789845/

  In particular, this seems to affect the IpNetnsExecFilter filter.

  [1] Ie5cbc0dcc76672b26cd2605f08cfd17a30b4c905
  [2] 
https://github.com/openstack/oslo.rootwrap/blob/6.0.0/oslo_rootwrap/filters.py#L71-L75

  This seems to be caused by the fact that the executable from the
  original command gets replaced with the one from the filter [2]. I
  can't tell what the purpose of that is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864186/+subscriptions



[Yahoo-eng-team] [Bug 1852619] [NEW] RPC get_device_details inconsistency

2019-11-14 Thread Lucian Petrut
Public bug reported:

The "get_device_details" RPC method accepts an optional "host"
argument, yet [1] expects it to be set. This inconsistency is breaking
agents that do not pass the optional host argument.

We should probably either make this argument mandatory or avoid
matching bindings against it when it's missing.

[1] https://github.com/openstack/neutron/commit/5c3bf124#diff-
f37e2160fa93889ab1ae5eee9fef8244R294
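The second option (skip host matching when no host was supplied) can be
sketched as follows; the function name and binding representation are
illustrative, not neutron's actual code:

```python
def binding_matches_host(binding_host, requested_host=None):
    """Only filter a port binding by host when the agent actually
    supplied one; a missing host matches any binding."""
    if requested_host is None:
        return True
    return binding_host == requested_host
```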

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852619

Title:
  RPC get_device_details inconsistency

Status in neutron:
  New

Bug description:
  The "get_device_details" RPC method accepts an optional "host"
  argument, yet [1] expects it to be set. This inconsistency is
  breaking agents that do not pass the optional host argument.

  We should probably either make this argument mandatory or avoid
  matching bindings against it when it's missing.

  [1] https://github.com/openstack/neutron/commit/5c3bf124#diff-
  f37e2160fa93889ab1ae5eee9fef8244R294

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852619/+subscriptions



[Yahoo-eng-team] [Bug 1843889] [NEW] Windows: IPv6 tunnel endpoints

2019-09-13 Thread Lucian Petrut
Public bug reported:

ip_lib.IPDevice.device_has_ip doesn't include ipv6 addresses when
checking if a local adapter is configured to use a certain address. This
prevents IPv6 tunnel endpoints from being used on Windows.
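The fix amounts to comparing against both address families. A
self-contained sketch using the standard library (the helper name
matches the one mentioned above, but the implementation here is
illustrative):

```python
import ipaddress


def device_has_ip(device_addresses, wanted):
    """Check whether any of a device's addresses (IPv4 *and* IPv6)
    equals the wanted address, normalizing textual forms first."""
    wanted_ip = ipaddress.ip_address(wanted)
    return any(
        ipaddress.ip_address(addr) == wanted_ip
        for addr in device_addresses)
```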

** Affects: neutron
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843889

Title:
  Windows: IPv6 tunnel endpoints

Status in neutron:
  In Progress

Bug description:
  ip_lib.IPDevice.device_has_ip doesn't include ipv6 addresses when
  checking if a local adapter is configured to use a certain address.
  This prevents IPv6 tunnel endpoints from being used on Windows.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843889/+subscriptions



[Yahoo-eng-team] [Bug 1843870] [NEW] ovsdb monitor ignores modified ports

2019-09-13 Thread Lucian Petrut
Public bug reported:

The ovsdb monitor used by Neutron ignores modified ports. For this
reason, some port changes will not be handled.

One particular case comes from ofport changes. On Windows, depending on
the OVS version, the ofport can change after VM reboots. The neutron ovs
agent will have to update the flows in this situation, which it does,
but only when "minimize_polling" is disabled.

The neutron ovsdb monitor should propagate the events received for
modified ports. Those can either be part of the "added" device list or
have a separate one for modified ports.
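Folding modified ports into the "added" set (the first of the two
options above) could look like this; the event dict layout is an
illustrative stand-in for the monitor's actual data structures:

```python
def classify_ovsdb_events(events):
    """Propagate 'modified' port events by merging them into the
    'added' set, so the agent re-processes their flows."""
    added = set(events.get("added", [])) | set(events.get("modified", []))
    removed = set(events.get("removed", []))
    return {"added": added, "removed": removed}


result = classify_ovsdb_events(
    {"added": ["port1"], "modified": ["port2"], "removed": []})
```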

** Affects: neutron
 Importance: Undecided
     Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

** Description changed:

  The ovsdb monitor used by Neutron ignores modified ports. For this
- reason, port changes will not be handled.
+ reason, some port changes will not be handled.
  
  One particular case comes from ofport changes. On Windows, depending on
  the OVS version, the ofport can change after VM reboots. The neutron ovs
  agent will have to update the flows in this situation, which it does,
  but only when "minimize_polling" is disabled.
  
  The neutron ovsdb monitor should propagate the events received for
  modified ports. Those can either be part of the "added" device list or
  have a separate one for modified ports.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843870

Title:
  ovsdb monitor ignores modified ports

Status in neutron:
  In Progress

Bug description:
  The ovsdb monitor used by Neutron ignores modified ports. For this
  reason, some port changes will not be handled.

  One particular case comes from ofport changes. On Windows, depending
  on the OVS version, the ofport can change after VM reboots. The
  neutron ovs agent will have to update the flows in this situation,
  which it does, but only when "minimize_polling" is disabled.

  The neutron ovsdb monitor should propagate the events received for
  modified ports. Those can either be part of the "added" device list or
  have a separate one for modified ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841411] [NEW] Instances recovered after failed migrations enter error state

2019-08-26 Thread Lucian Petrut
Public bug reported:

Most users expect that if a live migration fails but the instance is
fully recovered, it shouldn't enter 'error' state. Setting the migration
status to 'error' should be enough. This simplifies debugging, making it
clear that the instance doesn't have to be manually recovered.

This patch changed this behavior, indirectly affecting the Hyper-V
driver, which propagates migration errors:
Idfdce9e7dd8106af01db0358ada15737cb846395

When using the Hyper-V driver, instances enter error state even after
successful recoveries. We may copy the Libvirt driver behavior and avoid
propagating exceptions in this case.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: hyper-v

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1841411

Title:
  Instances recovered after failed migrations enter error state

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Most users expect that if a live migration fails but the instance is
  fully recovered, it shouldn't enter 'error' state. Setting the
  migration status to 'error' should be enough. This simplifies
  debugging, making it clear that the instance doesn't have to be
  manually recovered.

  This patch changed this behavior, indirectly affecting the Hyper-V
  driver, which propagates migration errors:
  Idfdce9e7dd8106af01db0358ada15737cb846395

  When using the Hyper-V driver, instances enter error state even after
  successful recoveries. We may copy the Libvirt driver behavior and
  avoid propagating exceptions in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1841411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816041] [NEW] Nova keypair type cannot be selected

2019-02-15 Thread Lucian Petrut
Public bug reported:

Nova supports x509 keypairs, most commonly used with Windows instances.
Horizon doesn't allow picking the keypair type at the moment.

** Affects: horizon
 Importance: Undecided
 Assignee: Daniel Vincze (dvincze)
 Status: In Progress


** Tags: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1816041

Title:
  Nova keypair type cannot be selected

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Nova supports x509 keypairs, most commonly used with Windows
  instances. Horizon doesn't allow picking the keypair type at the
  moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1816041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1804180] [NEW] Neutron ovs cleanup cannot run on Windows

2018-11-20 Thread Lucian Petrut
Public bug reported:

The ovs cleanup script fails to run on Windows due to an import error.
The Linux utils module, which uses platform-specific modules, always
gets imported.

Trace: http://paste.openstack.org/raw/735711/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1804180

Title:
  Neutron ovs cleanup cannot run on Windows

Status in neutron:
  New

Bug description:
  The ovs cleanup script fails to run on Windows due to an import error.
  The Linux utils module, which uses platform-specific modules, always
  gets imported.

  Trace: http://paste.openstack.org/raw/735711/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1804180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1801713] Re: Improper error message handling

2018-11-05 Thread Lucian Petrut
** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1801713

Title:
  Improper error message handling

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Exception messages are expected to be strings, but it's not always the
  case. Some exceptions are wrapped and passed as new exception
  messages, which affects some exception handlers.

  Trace: http://paste.openstack.org/show/733939/

  Finding all the occurrences may be difficult. For now, it may be
  easier to just convert the exception messages to strings in the
  NovaException class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1801713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1796689] [NEW] Incorrect results for tenant usage

2018-10-08 Thread Lucian Petrut
Public bug reported:

nova usage-list can return incorrect results, with resources counted
twice. This only occurs when using the 2.40 microversion or later.

http://paste.openstack.org/raw/731560/

This microversion introduced pagination, which doesn't work properly.
Nova API will sort the instances using the tenant id and instance uuid,
but 'os-simple-tenant-usage' will not preserve the order when returning
the results.

For this reason, subsequent API calls made by the client will use the
wrong marker (which is supposed to be the last instance id), ending up
counting the same instances twice.
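
The marker mechanism, and the way an unstable ordering breaks it, can be
illustrated with a small sketch; this is a toy model, not Nova's actual
pagination code:

```python
def paginate(instances, marker=None, limit=2):
    """Return the page that follows `marker`.

    Correctness depends entirely on `instances` keeping the same, stable
    order between calls: the marker is just "the last item of the
    previous page", so if the server re-orders results (as
    'os-simple-tenant-usage' does here), the marker points at the wrong
    position and rows get returned twice.
    """
    if marker is not None:
        idx = next(i for i, inst in enumerate(instances) if inst == marker)
        instances = instances[idx + 1:]
    return instances[:limit]


# With a stable (tenant id, instance uuid) order, pages never overlap:
ordered = ["a", "b", "c", "d"]
page1 = paginate(ordered)
page2 = paginate(ordered, marker=page1[-1])
```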

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1796689

Title:
  Incorrect results for tenant usage

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  nova usage-list can return incorrect results, with resources counted
  twice. This only occurs when using the 2.40 microversion or later.

  http://paste.openstack.org/raw/731560/

  This microversion introduced pagination, which doesn't work properly.
  Nova API will sort the instances using the tenant id and instance
  uuid, but 'os-simple-tenant-usage' will not preserve the order when
  returning the results.

  For this reason, subsequent API calls made by the client will use the
  wrong marker (which is supposed to be the last instance id), ending up
  counting the same instances twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1796689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761748] Re: hyperv: Unable to get ports details for devices: AttributeError: 'NoneType' object has no attribute 'startswith'

2018-09-28 Thread Lucian Petrut
** Also affects: os-win
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1761748

Title:
  hyperv: Unable to get ports details for devices: AttributeError:
  'NoneType' object has no attribute 'startswith'

Status in networking-hyperv:
  New
Status in neutron:
  Confirmed
Status in os-win:
  New

Bug description:
  In a failed hyperv CI run I'm seeing this in the hyperv agent logs:

  http://cloudbase-ci.com/nova/324720/5/Hyper-
  V_logs/192.168.3.143-compute01/neutron-hyperv-agent.log.gz

  2018-04-06 02:43:29.230 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: 5d31e08c-957c-45a5-a13d-fa114ea68b56
  2018-04-06 02:43:29.230 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:29.246 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:29.246 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:29.262 588 91983184 MainThread INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: None
  2018-04-06 02:43:30.496 588 35292864 MainThread DEBUG networking_hyperv.neutron.agent.layer2 [req-1c72ef49-f85f-4776-8219-ac410b3a00e6 - - - - -] Agent loop has new devices! _work c:\openstack\build\networking-hyperv\networking_hyperv\neutron\agent\layer2.py:427
  2018-04-06 02:43:30.526 588 35292864 MainThread DEBUG networking_hyperv.neutron.agent.layer2 [req-1c72ef49-f85f-4776-8219-ac410b3a00e6 - - - - -] Unable to get ports details for devices set([u'5d31e08c-957c-45a5-a13d-fa114ea68b56', None]): 'NoneType' object has no attribute 'startswith'
  Traceback (most recent call last):

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
      res = self.dispatcher.dispatch(message)

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
      return self._do_dispatch(endpoint, method, ctxt, args)

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
      result = func(ctxt, **new_args)

    File "/opt/stack/neutron/neutron/plugins/ml2/rpc.py", line 157, in get_devices_details_list
      for device in kwargs.pop('devices', [])

    File "/opt/stack/neutron/neutron/plugins/ml2/rpc.py", line 80, in get_device_details
      port_id = plugin._device_to_port_id(rpc_context, device)

    File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1864, in _device_to_port_id
      if device.startswith(prefix):

  AttributeError: 'NoneType' object has no attribute 'startswith'
  _treat_devices_added c:\openstack\build\networking-hyperv\networking_hyperv\neutron\agent\layer2.py:360

  In this test run, the nova-compute service is also being reported as
  down, so the nova-scheduler is filtering it out and all server build
  requests fail. I don't know if the two are related, but that's how I
  stumbled onto this error in the hyperv agent logs.
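
A minimal sketch of a defensive fix on the server side: skip None device
ids before calling startswith on them. The prefixes and function names
here are illustrative, not Neutron's actual values; the real fix should
also stop the agent from reporting vNICs without a name in the first
place.

```python
# Illustrative port prefixes, standing in for the ones ML2 strips:
PORT_PREFIXES = ("tap", "qvo", "qr-")


def device_to_port_id(device):
    """Strip a known interface prefix to recover the port id."""
    if not isinstance(device, str):
        # Raising a clear error beats the AttributeError from the trace.
        raise ValueError("device id missing (got %r)" % (device,))
    for prefix in PORT_PREFIXES:
        if device.startswith(prefix):
            return device[len(prefix):]
    return device


def get_devices_details(devices):
    # Filtering out None entries avoids the AttributeError seen above.
    return [device_to_port_id(d) for d in devices if d is not None]
```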

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1761748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1792869] Re: FS driver: chunk size is hard-coded

2018-09-18 Thread Lucian Petrut
Addressed by https://review.openstack.org/#/c/603023/.

** Project changed: glance => glance-store

** Changed in: glance-store
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

** Changed in: glance-store
   Status: New => Incomplete

** Changed in: glance-store
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1792869

Title:
  FS driver: chunk size is hard-coded

Status in glance_store:
  In Progress

Bug description:
  At the moment, the filesystem driver uses a hardcoded 64 KB chunk
  size when reading/writing images.

  This can be extremely inefficient, especially when file shares are
  used. For this reason, the chunk size should be configurable, similar
  to the other glance store drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1792869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1792869] [NEW] FS driver: chunk size is hard-coded

2018-09-17 Thread Lucian Petrut
Public bug reported:

At the moment, the filesystem driver uses a hardcoded 64 KB chunk size
when reading/writing images.

This can be extremely inefficient, especially when file shares are used.
For this reason, the chunk size should be configurable, similar to the
other glance store drivers.
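
The proposed change can be sketched as replacing the hardcoded constant
with a parameter; the option plumbing through glance_store's config
machinery is omitted here and the names are illustrative:

```python
import io

# The hardcoded value the report complains about:
DEFAULT_CHUNK_SIZE = 64 * 1024


def copy_stream(src, dst, chunk_size=DEFAULT_CHUNK_SIZE):
    """Copy src to dst in chunk_size pieces; return the bytes copied.

    On file shares, a larger chunk size means fewer round trips, which
    is why making it configurable matters.
    """
    copied = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        copied += len(chunk)
    return copied


# Usage: the caller picks the chunk size instead of being stuck with 64 KB.
total = copy_stream(io.BytesIO(b"x" * 100), io.BytesIO(), chunk_size=1024 * 1024)
```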

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1792869

Title:
  FS driver: chunk size is hard-coded

Status in Glance:
  New

Bug description:
  At the moment, the filesystem driver uses a hardcoded 64 KB chunk
  size when reading/writing images.

  This can be extremely inefficient, especially when file shares are
  used. For this reason, the chunk size should be configurable, similar
  to the other glance store drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1792869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1783556] [NEW] Neutron ovs agent logs flooded with KeyErrors

2018-07-25 Thread Lucian Petrut
Public bug reported:

The Neutron OVS agent logs can get flooded with KeyErrors: the
'_get_port_info' method skips the 'added'/'removed' dict items when no
ports have been added/removed, even though consumers expect those keys
to be present, if only as empty sets.

Trace: http://paste.openstack.org/raw/726614/
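
A minimal sketch of the fix the description implies: always emit the
'added'/'removed' keys, if only as empty sets. The function mirrors the
role of '_get_port_info' but is otherwise illustrative:

```python
def get_port_info(registered_ports, current_ports):
    """Compute port deltas, always including every expected key."""
    return {
        "current": current_ports,
        # Emitting empty sets instead of omitting the keys prevents the
        # KeyError flood in consumers that index port_info['added']
        # unconditionally.
        "added": current_ports - registered_ports,
        "removed": registered_ports - current_ports,
    }
```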

** Affects: neutron
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: ovs

** Tags added: ovs

** Description changed:

  The Neutron OVS agent logs can get flooded with KeyErrors as the
- '_get_port_info' method skips including the added/removed dict items if
- no ports have been added/removed, which are expected to be present, even
- if those are just empty sets.
+ '_get_port_info' method skips the added/removed dict items if no ports
+ have been added/removed, which are expected to be present, even if those
+ are just empty sets.
  
  Trace: http://paste.openstack.org/raw/726614/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1783556

Title:
  Neutron ovs agent logs flooded with KeyErrors

Status in neutron:
  In Progress

Bug description:
  The Neutron OVS agent logs can get flooded with KeyErrors: the
  '_get_port_info' method skips the 'added'/'removed' dict items when
  no ports have been added/removed, even though consumers expect those
  keys to be present, if only as empty sets.

  Trace: http://paste.openstack.org/raw/726614/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1783556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684250] Re: Timeout waiting for vif plugging callback for instance

2018-07-25 Thread Lucian Petrut
Nova fix: https://review.openstack.org/#/c/585661/

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

** Description changed:

  At the moment, the Hyper-V driver will bind OVS ports only after
  starting the VMs. This is because of an old limitation of the OVS
  Windows port, which has been addressed in OVS 2.5.
  
  This also means that the Nova driver fails when waiting for neutron vif
- plug events[1], since this step is performed later. So users are forced
- to set "vif_plugging_is_fatal", which means that the ports will be down,
- at least for a short time after the instance is reported as ACTIVE. This
- also breaks some Tempest tests.
+ plug events[1], since this step is performed later. So deployers are
+ forced to set "vif_plugging_is_fatal", which means that the ports will
+ be down, at least for a short time after the instance is reported as
+ ACTIVE. This also breaks some Tempest tests.
  
  [1] Trace: http://paste.openstack.org/show/DNKXXMcfqP9KcLY3Da3v/

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1684250

Title:
  Timeout waiting for vif plugging callback for instance

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  At the moment, the Hyper-V driver will bind OVS ports only after
  starting the VMs. This is because of an old limitation of the OVS
  Windows port, which has been addressed in OVS 2.5.

  This also means that the Nova driver fails when waiting for neutron
  vif plug events[1], since this step is performed later. So deployers
  are forced to set "vif_plugging_is_fatal", which means that the ports
  will be down, at least for a short time after the instance is reported
  as ACTIVE. This also breaks some Tempest tests.

  [1] Trace: http://paste.openstack.org/show/DNKXXMcfqP9KcLY3Da3v/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1684250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1773342] [NEW] hyper-v: Unused images are always deleted

2018-05-25 Thread Lucian Petrut
Public bug reported:

The Hyper-V driver will always delete unused images, ignoring the
"remove_unused_base_images" config option.

One workaround would be to set
"remove_unused_original_minimum_age_seconds" to a really large value
(e.g. 2^30). Setting it to -1 won't help either.
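
The expected behavior can be sketched as follows; the option names match
the Nova config options mentioned above, but the function itself is
illustrative rather than the driver's actual image cache code:

```python
def age_and_verify_cached_images(remove_unused_base_images, unused_images,
                                 max_age_seconds, now, remove_fn):
    """Remove stale cached images, honoring the operator's config.

    unused_images: iterable of (image, last_used_timestamp) pairs.
    remove_fn: callback that actually deletes an image.
    """
    if not remove_unused_base_images:
        # Honoring the option: when disabled, nothing is ever deleted,
        # which is what the Hyper-V driver fails to do per this report.
        return []
    removed = [img for img, last_used in unused_images
               if now - last_used > max_age_seconds]
    for img in removed:
        remove_fn(img)
    return removed
```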

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: hyper-v

** Summary changed:

- Unused images are always deleted
+ hyper-v: Unused images are always deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773342

Title:
  hyper-v: Unused images are always deleted

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The Hyper-V driver will always delete unused images, ignoring the
  "remove_unused_base_images" config option.

  One workaround would be to set
  "remove_unused_original_minimum_age_seconds" to a really large value
  (e.g. 2^30). Setting it to -1 won't help either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1773342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1756077] [NEW] Hyper-V: leaked volume connections after live migration

2018-03-15 Thread Lucian Petrut
Public bug reported:

The Hyper-V driver may leak volume connections on the source node side
after performing live migrations if the volume connection info is not
the same among hosts (e.g. different LUN IDs).

One of the affected backends seems to be 3PAR iSCSI, possibly some Dell
iSCSI backends as well.

Fibre Channel backends are not affected as the volume disconnect
operation is a noop in our case.

The Libvirt driver was affected by this issue as well but the initial
fix only targeted the Libvirt driver. A recent commit should fix this
issue for the Hyper-V driver as well, passing the right BDMs when
cleaning up the volume connections on the source node side.

Related bugs:
* https://bugs.launchpad.net/nova/+bug/1475411
* https://bugs.launchpad.net/nova/+bug/1288039

** Affects: nova
 Importance: Undecided
 Status: Fix Committed


** Tags: hyper-v volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1756077

Title:
  Hyper-V: leaked volume connections after live migration

Status in OpenStack Compute (nova):
  Fix Committed

Bug description:
  The Hyper-V driver may leak volume connections on the source node side
  after performing live migrations if the volume connection info is not
  the same among hosts (e.g. different LUN IDs).

  One of the affected backends seems to be 3PAR iSCSI, possibly some
  Dell iSCSI backends as well.

  Fibre Channel backends are not affected as the volume disconnect
  operation is a noop in our case.

  The Libvirt driver was affected by this issue as well but the initial
  fix only targeted the Libvirt driver. A recent commit should fix this
  issue for the Hyper-V driver as well, passing the right BDMs when
  cleaning up the volume connections on the source node side.

  Related bugs:
  * https://bugs.launchpad.net/nova/+bug/1475411
  * https://bugs.launchpad.net/nova/+bug/1288039

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1756077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741476] [NEW] Attaching read-only volumes fails

2018-01-05 Thread Lucian Petrut
Public bug reported:

The introduction of "new style volume attachments" seems to have caused
a regression, breaking read-only volume attachments.

Trace: http://paste.openstack.org/raw/639120/

The reason seems to be the fact that Cinder expects the connector
provided through the "attachment_update" call to include the requested
attach mode [1], otherwise assuming it to be 'rw'. As Nova won't provide
it, Cinder will then error because of an access mode mismatch.

[1]
https://github.com/openstack/cinder/blob/d96b6dfba03424baf8b3ddc7539347554892e941/cinder/volume/manager.py#L4374-L4393
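
The mismatch can be modeled with a small sketch; the field names
('mode', 'attach_mode') are assumptions based on the description above,
not the exact Cinder/Nova payloads:

```python
def attachment_update(volume_access_mode, connector):
    """Validate the requested attach mode against the volume's mode."""
    # A connector without the key is assumed 'rw', per the linked code:
    requested = connector.get("mode", "rw")
    if requested != volume_access_mode:
        raise ValueError(
            "access mode mismatch: volume is %s, connector requested %s"
            % (volume_access_mode, requested))
    return {"attach_mode": requested}
```

Since Nova omits the mode from the connector, a read-only volume always
hits the mismatch branch; including the requested mode in the connector
(or teaching Cinder to fall back to the volume's own mode) avoids it.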

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  The introduction of "new style volume attachments" seems to have caused
  a regression, breaking read-only volume attachments.
  
- Trace: http://paste.openstack.org/raw/639097/
+ Trace: http://paste.openstack.org/raw/639120/
  
  The reason seems to be the fact that Cinder expects the connector
  provided through the "attachment_update" call to include the requested
  attach mode [1], otherwise assuming it to be 'rw'. As Nova won't provide
  it, Cinder will then error because of an access mode mismatch.
  
  [1]
  
https://github.com/openstack/cinder/blob/d96b6dfba03424baf8b3ddc7539347554892e941/cinder/volume/manager.py#L4374-L4393

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741476

Title:
  Attaching read-only volumes fails

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The introduction of "new style volume attachments" seems to have
  caused a regression, breaking read-only volume attachments.

  Trace: http://paste.openstack.org/raw/639120/

  The reason seems to be the fact that Cinder expects the connector
  provided through the "attachment_update" call to include the requested
  attach mode [1], otherwise assuming it to be 'rw'. As Nova won't
  provide it, Cinder will then error because of an access mode mismatch.

  [1]
  
https://github.com/openstack/cinder/blob/d96b6dfba03424baf8b3ddc7539347554892e941/cinder/volume/manager.py#L4374-L4393

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1741476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723431] [NEW] Hyper-V: device metadata not updated

2017-10-13 Thread Lucian Petrut
Public bug reported:

The Hyper-V driver does not update the instance device metadata when
attaching volumes or network interfaces to, or detaching them from,
already existing instances.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags removed: drivers

** Description changed:

  The Hyper-V driver does not update the instance device metadata when
- adding/detaching volumes or network interfaces.
+ adding/detaching volumes or network interfaces to already existing
+ instances.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723431

Title:
  Hyper-V: device metadata not updated

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The Hyper-V driver does not update the instance device metadata when
  attaching volumes or network interfaces to, or detaching them from,
  already existing instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1723431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1208301] Re: The VM will be destroyed on source host during resizing for Hyper-V

2017-10-02 Thread Lucian Petrut
** Changed in: compute-hyperv
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1208301

Title:
  The VM will be destroyed on source host during resizing for Hyper-V

Status in compute-hyperv:
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  This defect was originally found in the following scenario:

  1. Deploy one VM A with a 100 GB disk and 1 CPU.
  2. Resize it to 2 CPUs and a 200 GB disk.
  3. During resizing, the host of the VM goes down (power off).
  4. Restart the host.

  After investigation, I found that the migrate_disk_and_power_off
  method of migrationops, which is called by the Hyper-V driver,
  removes the VM as its last step.

  https://github.com/openstack/nova/blob/master/nova/virt/hyperv/driver.py
  https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py

  In the same scenario on KVM, this case works: the VM being resized is
  not removed, and after the host is up again the resize can resume.

  Since I am not familiar with the original design, I don't know why
  Hyper-V handles resizing like this, so I am opening this defect for
  tracking and discussion.

  One question I can propose here is: is there a standard behavior
  among the hypervisors for resizing? If yes, what is it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1208301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604078] Re: Hyper-V: planned vms are not cleaned up

2017-10-02 Thread Lucian Petrut
** Changed in: os-win
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604078

Title:
  Hyper-V: planned vms are not cleaned up

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in os-win:
  Fix Released

Bug description:
  A planned VM is created during live migration when passthrough disks
  are attached, in order to properly configure the resources of the
  'new' instance.

  The issue is that if the migration fails, this planned vm is not
  cleaned up.

  Although planned vms are destroyed at a second attempt to migrate the
  instance, this issue had an impact on the Hyper-V CI as planned vms
  persisted among CI runs and vms having the same name failed to spawn,
  as there were file handles kept open by the VMMS service, preventing
  the instance path from being cleaned up.

  Trace:
  http://paste.openstack.org/show/536149/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1604078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717892] Re: hyperv: Driver does not report disk_available_least

2017-09-19 Thread Lucian Petrut
** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

** Changed in: compute-hyperv
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717892

Title:
  hyperv: Driver does not report disk_available_least

Status in compute-hyperv:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===========

  The Hyper-V driver does not report the disk_available_least field.
  Reporting this field can help mitigate the scheduling issue affecting
  hosts that use the same shared storage.

  Steps to reproduce
  ==================

  Have a compute node with X GB of total storage (reported as
  local_gb), of which only 1 GB is actually free (the free amount is
  not reported). The compute node also reports local_gb_used, which is
  the sum of the allocated nova instances' flavor disk sizes
  (local_gb > local_gb_used).

  Try to spawn an instance on the host with a flavor disk size larger
  than the 1 GB of free space, e.g. a disk size of 2 GB.

  Expected result
  ===============

  Instance should be in ERROR state, it shouldn't be able to schedule to
  the compute node.

  Actual result
  =============

  The instance is active.

  Environment
  ===========

  OpenStack Pike.
  Hyper-V 2012 R2 compute node.

  Logs & Configs
  ==============

  [1] compute node's resource view VS. actual reported resource view: 
http://paste.openstack.org/show/621318/
  [2] compute node's resources (nova hypervisor-show), spawning an instance: 
http://paste.openstack.org/show/621319/
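For illustration, the relationship between these fields can be sketched as
below; the function name and formula are a simplified approximation, not
nova's actual resource tracker code:

```python
def disk_available_least(free_gb, allocated_gb, instances_used_gb):
    """Disk guaranteed to stay available even if every instance grows
    to its full flavor-allocated size; a negative value means the host
    is overcommitted and the scheduler should avoid it."""
    # Portion of the flavor-allocated disk not yet consumed on disk:
    overcommit = allocated_gb - instances_used_gb
    return free_gb - overcommit
```

In the scenario above (1 GB free, far more allocated), the value goes
negative, which is exactly what lets the scheduler reject the host.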

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1717892/+subscriptions



[Yahoo-eng-team] [Bug 1653862] Re: hyper-v: Exception dispatching event Stopped>:

2017-09-01 Thread Lucian Petrut
This has been fixed at the os-win level, which is now raising a
HyperVVMNotFoundException exception that can properly be handled by the
Nova Hyper-V driver.

Note that a debug message will still be logged in this situation [1],
as seen on a recent Hyper-V CI run [2].

[1] 2017-08-31 12:26:16.625 2336 90871568 MainThread DEBUG os_win._utils 
[req-69dcf4f7-c61f-4fd0-b7fc-c8adb1026821 - - - - -] x_wmi: Not Found exception 
raised while running get_vm_summary_info inner 
C:\Python27\lib\site-packages\os_win\_utils.py:244
[2] 
http://cloudbase-ci.com/nova/499690/2/Hyper-V_logs/192.168.5.111-compute01/nova-compute.log.gz

FWIW, the CI hardware is being moved, which is why old CI result URLs
may be unavailable.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653862

Title:
  hyper-v: Exception dispatching event  Stopped>: 

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I see these errors quite a bit in the n-cpu logs for hyperv CI runs:

  2017-01-03 23:15:08.141 10848 80822320 MainThread ERROR
  nova.virt.driver [req-050d06ec-3636-4a5d-8b73-0531ef62055a - - - - -]
  Exception dispatching event  Stopped>: 

  http://64.119.130.115/nova/140045/51/Hyper-V_logs/c2-r22-u04-n03/nova-
  compute.log.gz

  I assume this is a lifecycle event for an instance that has been
  deleted. We're logging an error because of a problem dispatching the
  event.

  I'm not totally sure how to handle this, but if the virt driver could
  raise a NotFound or something we could handle that and not log it as
  an ERROR in the n-cpu logs.
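A minimal sketch of the suggested approach, with a dict standing in for the
WMI namespace and illustrative names (os-win later implemented this idea as
a HyperVVMNotFoundException):

```python
class HyperVVMNotFoundException(Exception):
    """Raised instead of the raw WMI 'Not Found' error so callers,
    such as the lifecycle event dispatcher, can handle it quietly."""

def get_vm_summary_info(vms, vm_name):
    # 'vms' stands in for the WMI namespace; the real code catches
    # the x_wmi 'Not Found' error rather than KeyError.
    try:
        return vms[vm_name]
    except KeyError:
        raise HyperVVMNotFoundException('VM %s was not found' % vm_name)
```

The dispatcher can then catch this specific exception and log at debug
level instead of ERROR.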

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653862/+subscriptions



[Yahoo-eng-team] [Bug 1714285] [NEW] Hyper-V: leaked resources after failed spawn

2017-08-31 Thread Lucian Petrut
Public bug reported:

Volume connections as well as vif ports are not cleaned up after a
failed instance spawn.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714285

Title:
  Hyper-V: leaked resources after failed spawn

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Volume connections as well as vif ports are not cleaned up after a
  failed instance spawn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1714285/+subscriptions



[Yahoo-eng-team] [Bug 1714247] [NEW] Cleaning up deleted instances leaks resources

2017-08-31 Thread Lucian Petrut
Public bug reported:

When the nova-compute service cleans up an instance that still exists on
the host even though it has been deleted from the DB, the corresponding
network info is not properly retrieved.

For this reason, vif ports will not be cleaned up.

In this situation there may also be stale volume connections. Those will
be leaked as well, since os-brick attempts to flush the now inaccessible
devices, which fails. As per a recent os-brick change, a 'force' flag
must be set in order to ignore flush errors.

Log: http://paste.openstack.org/raw/620048/
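The intent of the 'force' flag can be sketched as follows; the function
shape is illustrative and does not mirror os-brick's actual connector API:

```python
def disconnect_volume(flush, detach, force=False):
    """Flush then detach; with force=True, a flush failure on an
    inaccessible device no longer aborts the cleanup (this only
    mirrors the flag's intent, not the real os-brick signature)."""
    try:
        flush()
    except OSError:
        if not force:
            raise  # without force, a failed flush leaks the connection
    detach()
```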

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714247

Title:
  Cleaning up deleted instances leaks resources

Status in OpenStack Compute (nova):
  New

Bug description:
  When the nova-compute service cleans up an instance that still exists
  on the host even though it has been deleted from the DB, the
  corresponding network info is not properly retrieved.

  For this reason, vif ports will not be cleaned up.

  In this situation there may also be stale volume connections. Those
  will be leaked as well, since os-brick attempts to flush the now
  inaccessible devices, which fails. As per a recent os-brick change,
  a 'force' flag must be set in order to ignore flush errors.

  Log: http://paste.openstack.org/raw/620048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714247/+subscriptions



[Yahoo-eng-team] [Bug 1713735] [NEW] Nova assisted volume snapshots issue

2017-08-29 Thread Lucian Petrut
Public bug reported:

When using separate databases for each Nova cell, nova assisted volume 
snapshots always fail with the following error:
BadRequest: No volume Block Device Mapping with id 
a10bd120-9b88-4710-bf6e-f1d34de87da2. (HTTP 400)

The reason is that the corresponding API call does not include an
instance id; it is instead fetched from the BDM. At the same time, the
BDM cannot be properly retrieved, since Nova doesn't know which cell to
use and looks for the BDM in the wrong DB.

Cinder trace: http://paste.openstack.org/raw/619767/

Among others, Cinder NFS backends are affected by this, as per the following 
Cinder NFS CI logs:
http://logs.openstack.org/21/498321/5/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/b8bca96/logs/screen-c-vol.txt.gz?level=ERROR
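A hedged sketch of the lookup problem, with plain dicts standing in for the
per-cell databases (names are illustrative, not nova's):

```python
def find_bdm(cell_dbs, volume_id):
    """With no instance id in the request, every cell database must
    be searched for the block device mapping; looking in just one
    cell yields the 'No volume Block Device Mapping' error."""
    for db in cell_dbs:
        bdm = db.get(volume_id)
        if bdm is not None:
            return bdm
    raise LookupError(
        'No volume Block Device Mapping with id %s' % volume_id)
```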

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api cells volumes

** Tags added: api cells volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713735

Title:
  Nova assisted volume snapshots issue

Status in OpenStack Compute (nova):
  New

Bug description:
  When using separate databases for each Nova cell, nova assisted volume 
snapshots always fail with the following error:
  BadRequest: No volume Block Device Mapping with id 
a10bd120-9b88-4710-bf6e-f1d34de87da2. (HTTP 400)

  The reason is that the corresponding API call does not include an
  instance id; it is instead fetched from the BDM. At the same time,
  the BDM cannot be properly retrieved, since Nova doesn't know which
  cell to use and looks for the BDM in the wrong DB.

  Cinder trace: http://paste.openstack.org/raw/619767/

  Among others, Cinder NFS backends are affected by this, as per the following 
Cinder NFS CI logs:
  
http://logs.openstack.org/21/498321/5/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/b8bca96/logs/screen-c-vol.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1713735/+subscriptions



[Yahoo-eng-team] [Bug 1709931] [NEW] Windows: exec calls stdout trimmed

2017-08-10 Thread Lucian Petrut
Public bug reported:

At some point, we've switched to an alternative process launcher that
uses named pipes to communicate with the child processes. This
implementation has some issues, truncating the process output in some
situations.

Trace:
http://paste.openstack.org/show/616053/
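Avoiding this class of truncation generally means draining the pipe until
EOF rather than issuing a single read; a minimal sketch (not the actual
launcher's implementation):

```python
import os

def read_all(fd, bufsize=4096):
    """Drain a pipe until EOF; a single read() call can silently
    truncate any output larger than one buffer."""
    chunks = []
    while True:
        chunk = os.read(fd, bufsize)
        if not chunk:  # empty bytes means the writer closed the pipe
            break
        chunks.append(chunk)
    return b''.join(chunks)
```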

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709931

Title:
  Windows: exec calls stdout trimmed

Status in neutron:
  New

Bug description:
  At some point, we've switched to an alternative process launcher that
  uses named pipes to communicate with the child processes. This
  implementation has some issues, truncating the process output in some
  situations.

  Trace:
  http://paste.openstack.org/show/616053/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709931/+subscriptions



[Yahoo-eng-team] [Bug 1564829] Re: Hyper-V SMBFS volume driver cannot handle missing mount options

2017-06-28 Thread Lucian Petrut
Fixed in os-brick, no longer affects Nova.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564829

Title:
  Hyper-V SMBFS volume driver cannot handle missing mount options

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We use a regex to fetch the credentials from the connection info
  options when mounting SMB shares.

  The issue is that this field may be empty, in which case we may get
  a TypeError.
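A minimal sketch of the needed guard; the regex pattern and function name
are illustrative, not the driver's actual code:

```python
import re

def parse_credentials(options):
    """Extract credentials from SMB mount options; the options string
    may be missing entirely, which must not raise a TypeError."""
    if not options:
        return (None, None)
    m = re.search(r'username=([^,]+),password=([^,]+)', options)
    return (m.group(1), m.group(2)) if m else (None, None)
```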

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1564829/+subscriptions



[Yahoo-eng-team] [Bug 1580122] Re: Hyper-V: cannot attach volumes from local HA shares

2017-06-28 Thread Lucian Petrut
Fixed in os-brick, no longer affects Nova.

** Changed in: nova
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580122

Title:
  Hyper-V: cannot attach volumes from local HA shares

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-win:
  Fix Released

Bug description:
  At the moment, the Hyper-V driver uses the UNC path of images stored
  on SMB shares, regardless if the share is remote or not. Citing from
  the MS documentation, this is not supported:

  “Accessing a continuously available file share as a loopback share is
  not supported. For example, if Microsoft SQL Server or Hyper-V store
  data files on SMB file shares, they must run on computers that are not
  a member of the file server cluster for the SMB file shares.”

  This is troublesome for the Hyper-C scenario, as Hyper-V will attempt
  to modify the image ACLs, making them unusable. The easy fix is to
  simply check if the share is local, and use the local path in that
  case.
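The 'easy fix' can be sketched as a path rewrite; the helper below is
illustrative, not the driver's actual implementation:

```python
def to_local_path(unc_path, this_host, local_shares):
    r"""Rewrite \\host\share\... to the share's local directory when
    the share is exported by this very host; otherwise keep the UNC
    path (loopback access to CA shares is unsupported)."""
    parts = unc_path.lstrip('\\').split('\\')
    server, share, rest = parts[0], parts[1], parts[2:]
    if server.lower() == this_host.lower() and share in local_shares:
        return '\\'.join([local_shares[share]] + rest)
    return unc_path
```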

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1580122/+subscriptions



[Yahoo-eng-team] [Bug 1663238] Re: Hyper-V driver destroys and recreates the VM on cold migration / resize

2017-06-22 Thread Lucian Petrut
** Changed in: os-win
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663238

Title:
  Hyper-V driver destroys and recreates the VM on cold migration /
  resize

Status in OpenStack Compute (nova):
  Triaged
Status in os-win:
  Invalid

Bug description:
  Currently, the nova Hyper-V driver destroys the instance on the source
  node and recreates it on the destination node, losing some of the
  instance settings.

  For this reason, guest PCI device IDs change, causing undesired
  effects in some cases. For example, this affects guests relying on
  static network configuration, which will be lost after a cold
  migration.

  This issue can be solved by importing the desired VM on the
  destination node, and updating the VM resources according to the new
  flavor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663238/+subscriptions



[Yahoo-eng-team] [Bug 1694636] [NEW] Instance enters error state in case of unavailable live migration destinations

2017-05-31 Thread Lucian Petrut
Public bug reported:

We check whether shared storage is used before live migrating instances.
If the VM disk storage location is unavailable, we'll propagate an
OSError instead of a MigrationPreCheckError exception. This prevents
Nova from trying a different compute node.

Trace: http://paste.openstack.org/show/611003/
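A sketch of the suggested translation; MigrationPreCheckError is nova's
exception name, while the check itself is simplified here:

```python
class MigrationPreCheckError(Exception):
    """Nova retries another destination on this exception; a raw
    OSError would instead leave the instance in ERROR state."""

def check_shared_storage(stat_path, path):
    # 'stat_path' stands in for the real filesystem probe
    try:
        return stat_path(path)
    except OSError as exc:
        raise MigrationPreCheckError(
            'Cannot access instance storage %s: %s' % (path, exc))
```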

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694636

Title:
  Instance enters error state in case of unavailable live migration
  destinations

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We check whether shared storage is used before live migrating
  instances. If the VM disk storage location is unavailable, we'll
  propagate an OSError instead of a MigrationPreCheckError exception.
  This prevents Nova from trying a different compute node.

  Trace: http://paste.openstack.org/show/611003/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1694636/+subscriptions



[Yahoo-eng-team] [Bug 1693470] [NEW] Hyper-V: vhdx images are rejected

2017-05-25 Thread Lucian Petrut
Public bug reported:

Although the Hyper-V driver is used with VHDX images most of the time,
it rejects Glance images marked as VHDX.

Note that until recently, the default supported formats list from Glance
did not include vhdx, so users would usually just mark them as 'vhd'.
Not only can this be confusing, but it may also lead to having those
images rejected when the specified format is actually validated.
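A minimal sketch of the relaxed validation, assuming an illustrative
constant and helper (not the driver's actual names):

```python
SUPPORTED_DISK_FORMATS = ('vhd', 'vhdx')

def validate_disk_format(disk_format):
    """Accept both formats Hyper-V can actually boot; rejecting
    'vhdx' forces users to mislabel images as 'vhd'."""
    if (disk_format or '').lower() not in SUPPORTED_DISK_FORMATS:
        raise ValueError('Unsupported disk format: %s' % disk_format)
    return disk_format.lower()
```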

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: hyper-v

** Tags added: hyper-v

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1693470

Title:
  Hyper-V: vhdx images are rejected

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Although the Hyper-V driver is used with VHDX images most of the time,
  it rejects Glance images marked as VHDX.

  Note that until recently, the default supported formats list from
  Glance did not include vhdx, so users would usually just mark them as
  'vhd'. Not only can this be confusing, but it may also lead to having
  those images rejected when the specified format is actually validated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1693470/+subscriptions



[Yahoo-eng-team] [Bug 1671434] [NEW] fdatasync() usage breaks Windows compatibility

2017-03-09 Thread Lucian Petrut
Public bug reported:

The following change uses fdatasync when fetching Glance images, which
is not supported on Windows: Id9905a87f16f66530623800e33e2581c555ae81d

For this reason, this operation is now failing on Windows.
Trace: http://paste.openstack.org/raw/602054/
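A portable sketch of the needed fallback (os.fdatasync is simply absent
from the os module on Windows):

```python
import os

def datasync(fd):
    """Flush written data to disk; os.fdatasync does not exist on
    Windows, so fall back to os.fsync, which is available on every
    platform and flushes both data and metadata."""
    if hasattr(os, 'fdatasync'):
        os.fdatasync(fd)
    else:
        os.fsync(fd)
```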

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671434

Title:
  fdatasync() usage breaks Windows compatibility

Status in OpenStack Compute (nova):
  New

Bug description:
  The following change uses fdatasync when fetching Glance images, which
  is not supported on Windows: Id9905a87f16f66530623800e33e2581c555ae81d

  For this reason, this operation is now failing on Windows.
  Trace: http://paste.openstack.org/raw/602054/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671434/+subscriptions



[Yahoo-eng-team] [Bug 1671435] [NEW] fdatasync() usage breaks Windows compatibility

2017-03-09 Thread Lucian Petrut
Public bug reported:

The following change uses fdatasync when fetching Glance images, which
is not supported on Windows: Id9905a87f16f66530623800e33e2581c555ae81d

For this reason, this operation is now failing on Windows.
Trace: http://paste.openstack.org/raw/602054/

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671435

Title:
  fdatasync() usage breaks Windows compatibility

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The following change uses fdatasync when fetching Glance images, which
  is not supported on Windows: Id9905a87f16f66530623800e33e2581c555ae81d

  For this reason, this operation is now failing on Windows.
  Trace: http://paste.openstack.org/raw/602054/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671435/+subscriptions



[Yahoo-eng-team] [Bug 1657424] [NEW] Hyper-V: Instance ports not bound after resize

2017-01-18 Thread Lucian Petrut
Public bug reported:

When using OVS, the Hyper-V driver creates the OVS ports only after the
instance is powered on (due to a Hyper-V limitation).

The issue is that in case of cold migrations/resize, this step is currently 
skipped, as the driver doesn't pass the network info object when powering on 
the instance:
https://github.com/openstack/nova/blob/07b6580a1648a860eefb5a949cb443c2a335a89a/nova/virt/hyperv/migrationops.py#L300-L301

Simply passing that object will fix the issue.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657424

Title:
  Hyper-V: Instance ports not bound after resize

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When using OVS, the Hyper-V driver creates the OVS ports only after
  the instance is powered on (due to a Hyper-V limitation).

  The issue is that in case of cold migrations/resize, this step is currently 
skipped, as the driver doesn't pass the network info object when powering on 
the instance:
  
https://github.com/openstack/nova/blob/07b6580a1648a860eefb5a949cb443c2a335a89a/nova/virt/hyperv/migrationops.py#L300-L301

  Simply passing that object will fix the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1657424/+subscriptions



[Yahoo-eng-team] [Bug 1629040] [NEW] Incorrect hyper-v driver capability

2016-09-29 Thread Lucian Petrut
Public bug reported:

The Hyper-V driver incorrectly enables the
'supports_migrate_to_same_host' capability.

This capability seems to have been introduced having the VMWare cluster
architecture in mind, but it leads to unintended behavior in case of the
HyperV driver.

For this reason, the Hyper-V CI is failing on the test_cold_migration
tempest test, which asserts that the host has changed.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: drivers hyper-v

** Description changed:

  The Hyper-V driver incorrectly enables the
- 'supports_migrate_to_same_host' capability. This capability seems to
- have been introduced having the VMWare cluster architecture in mind, but
- it leads to unintended behavior in case of the HyperV driver.
+ 'supports_migrate_to_same_host' capability.
+ 
+ This capability seems to have been introduced having the VMWare cluster
+ architecture in mind, but it leads to unintended behavior in case of the
+ HyperV driver.
  
  For this reason, the Hyper-V CI is failing on the test_cold_migration
  tempest test, which asserts that the host has changed.

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629040

Title:
  Incorrect hyper-v driver capability

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The Hyper-V driver incorrectly enables the
  'supports_migrate_to_same_host' capability.

  This capability seems to have been introduced having the VMWare
  cluster architecture in mind, but it leads to unintended behavior in
  case of the HyperV driver.

  For this reason, the Hyper-V CI is failing on the test_cold_migration
  tempest test, which asserts that the host has changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1629040/+subscriptions



[Yahoo-eng-team] [Bug 1619602] [NEW] Hyper-V: vhd config drive images are not migrated

2016-09-02 Thread Lucian Petrut
Public bug reported:

During cold migration, vhd config drive images are not copied over, on
the wrong assumption that the instance is already configured and does
not need the config drive.

There is an explicit check at the following location:
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

For this reason, migrating instances using vhd config drivers will fail, as 
there is a check ensuring that the config drive is present, if required:
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

The Hyper-V driver should not skip moving the config drive image.
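The intended fix can be sketched as follows; helper names are illustrative,
not the driver's actual migration code:

```python
def copy_instance_files(copy, disk_paths, configdrive_path=None):
    """Copy all instance disks during cold migration, including the
    config drive; previously the config drive was skipped on the
    wrong assumption that it is no longer needed."""
    for path in disk_paths:
        copy(path)
    if configdrive_path:
        copy(configdrive_path)  # must not be skipped
```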

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: drivers hyper-v

** Project changed: os-win => nova

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619602

Title:
  Hyper-V: vhd config drive images are not migrated

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  During cold migration, vhd config drive images are not copied over, on
  the wrong assumption that the instance is already configured and does
  not need the config drive.

  There is an explicit check at the following location:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

  For this reason, migrating instances using vhd config drivers will fail, as 
there is a check ensuring that the config drive is present, if required:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

  The Hyper-V driver should not skip moving the config drive image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1619602/+subscriptions



[Yahoo-eng-team] [Bug 1611321] [NEW] HyperV: shelve vm deadlock

2016-08-09 Thread Lucian Petrut
Public bug reported:

At the moment, the instance snapshot operation is synchronized using
the instance uuid. This was added some time ago, as the instance
destroy operation was failing when an instance snapshot was in
progress.

This is now causing a deadlock, as a similar lock was recently
introduced in the manager for the shelve operation by this change:
Id36b3b9516d72d28519c18c38d98b646b47d288d

We can safely remove the lock from the HyperV driver as we now stop
pending jobs when destroying instances.
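A simplified illustration of the deadlock, using a dict of non-reentrant
locks as a stand-in for the uuid-keyed synchronization utilities:

```python
import threading

_locks = {}

def synchronized(key):
    # Both the manager and the driver synchronize on the same key,
    # so they receive the same non-reentrant lock object.
    return _locks.setdefault(key, threading.Lock())

outer = synchronized('instance-uuid')
outer.acquire()                            # manager's shelve path
inner = synchronized('instance-uuid')
acquired = inner.acquire(blocking=False)   # driver's snapshot path
outer.release()
```

Since the second acquire can never succeed while the first is held by the
same thread of execution, dropping the driver-level lock resolves it.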

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611321

Title:
  HyperV: shelve vm deadlock

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  At the moment, the instance snapshot operation is synchronized using
  the instance uuid. This was added some time ago, as the instance
  destroy operation was failing when an instance snapshot was in
  progress.

  This is now causing a deadlock, as a similar lock was recently
  introduced in the manager for the shelve operation by this change:
  Id36b3b9516d72d28519c18c38d98b646b47d288d

  We can safely remove the lock from the HyperV driver as we now stop
  pending jobs when destroying instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1611321/+subscriptions



[Yahoo-eng-team] [Bug 1604078] [NEW] Hyper-V: planned vms are not cleaned up

2016-07-18 Thread Lucian Petrut
Public bug reported:

We create a planned vm during live migration when having passthrough
disks attached in order to properly configure the resources of the 'new'
instance.

The issue is that if the migration fails, this planned vm is not cleaned
up.

Although planned vms are destroyed at a second attempt to migrate the
instance, this issue had an impact on the Hyper-V CI as planned vms
persisted among CI runs and vms having the same name failed to spawn, as
there were file handles kept open by the VMMS service, preventing the
instance path from being cleaned up.

Trace:
http://paste.openstack.org/show/536149/
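The intended cleanup shape can be sketched as follows; helper names are
illustrative, not the driver's actual live migration code:

```python
def live_migrate_with_cleanup(create_planned_vm, migrate,
                              destroy_planned_vm):
    """Destroy the planned VM if the migration fails, instead of
    leaving it behind to block later spawns of the same name."""
    planned_vm = create_planned_vm()
    try:
        migrate(planned_vm)
    except Exception:
        destroy_planned_vm(planned_vm)
        raise
```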

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604078

Title:
  Hyper-V: planned vms are not cleaned up

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We create a planned vm during live migration when having passthrough
  disks attached in order to properly configure the resources of the
  'new' instance.

  The issue is that if the migration fails, this planned vm is not
  cleaned up.

  Although planned vms are destroyed at a second attempt to migrate the
  instance, this issue had an impact on the Hyper-V CI as planned vms
  persisted among CI runs and vms having the same name failed to spawn,
  as there were file handles kept open by the VMMS service, preventing
  the instance path from being cleaned up.

  Trace:
  http://paste.openstack.org/show/536149/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1604078/+subscriptions



[Yahoo-eng-team] [Bug 1403836] Re: Nova volume attach fails for a iscsi disk with CHAP enabled.

2016-06-28 Thread Lucian Petrut
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403836

Title:
  Nova volume attach fails for a iscsi disk with CHAP enabled.

Status in OpenStack Compute (nova):
  Fix Released
Status in os-win:
  Fix Released

Bug description:
  While trying to attach a volume backed by a CHAP-enabled iSCSI disk on
  Windows (Hyper-V driver), I noticed that the volume attach fails when
  CHAP authentication is enforced, while the same operation works without
  CHAP authentication set.

  My current setup is Juno based:

  I saw a similar bug reported as
  https://bugs.launchpad.net/nova/+bug/1397549. The fix for it is in:

  https://review.openstack.org/#/c/137623/ and
  https://review.openstack.org/#/c/134592/.

  Even after incorporating these changes, things still do not work; an
  additional fix is needed.

  Issue: even with the code from the commits mentioned above, when we do
  a nova volume-attach, the Hyper-V host first logs in to the portal and
  then attaches the volume to the target.

  Now, if we log in to the portal without CHAP authentication, it will
  fail (authentication failure), and hence the code needs to be changed
  here:
  https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeutilsv2.py#L64-65

  
  Resolution: while creating/adding the new portal, we need to add it
  with the CHAP credentials (the same way it is done on target.connect).

  Sample snippet of the fix would be:

  if portal:
      portal[0].Update()
  else:
      # Adding target portal to the iSCSI initiator, sending targets
      LOG.debug("Create a new portal")
      auth = {}
      if auth_username and auth_password:
          auth['AuthenticationType'] = self._CHAP_AUTH_TYPE
          auth['ChapUsername'] = auth_username
          auth['ChapSecret'] = auth_password
      LOG.debug(auth)
      portal = self._conn_storage.MSFT_iSCSITargetPortal
      portal.New(TargetPortalAddress=target_address,
                 TargetPortalPortNumber=target_port, **auth)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591996] [NEW] Serial console output is not properly handled

2016-06-13 Thread Lucian Petrut
Public bug reported:

The compute API expects the serial console output to be a string, attempting to 
use a regex to remove some characters.
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/api/openstack/compute/console_output.py#L70

This will fail if the compute node is using Python 3, as we are passing a byte 
array.
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/compute/manager.py#L4283-L4297
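A minimal sketch of the kind of fix this implies (hypothetical helper names,
not the actual nova code): decode byte output before applying the sanitizing
regex, so the API works whether the driver returns str or bytes. The exact
character class nova strips may differ.

```python
import re


def sanitize_console_output(output):
    # Hypothetical helper: the compute API applies a regex that expects
    # text, so decode byte output first (Python 3 drivers may hand back
    # a byte string).
    if isinstance(output, bytes):
        output = output.decode('utf-8', errors='replace')
    # Strip control characters, keeping tabs and newlines (assumed
    # character class for illustration).
    return re.sub(r'[\x00-\x08\x0b-\x1f]', '', output)
```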

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591996

Title:
  Serial console output is not properly handled

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The compute API expects the serial console output to be a string, attempting 
to use a regex to remove some characters.
  
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/api/openstack/compute/console_output.py#L70

  This will fail if the compute node is using Python 3, as we are passing a 
byte array.
  
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/compute/manager.py#L4283-L4297

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580122] Re: Hyper-V: cannot attach volumes from local HA shares

2016-05-10 Thread Lucian Petrut
** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580122

Title:
  Hyper-V: cannot attach volumes from local HA shares

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  At the moment, the Hyper-V driver uses the UNC path of images stored
  on SMB shares, regardless of whether the share is remote or not.
  Quoting the MS documentation, this is not supported:

  “Accessing a continuously available file share as a loopback share is
  not supported. For example, if Microsoft SQL Server or Hyper-V store
  data files on SMB file shares, they must run on computers that are not
  a member of the file server cluster for the SMB file shares.”

  This is troublesome for the Hyper-C scenario, as Hyper-V will attempt
  to modify the image ACLs, making them unusable. The easy fix is to
  simply check if the share is local, and use the local path in that
  case.
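As a rough illustration of the proposed fix (assumed helper and parameter
names; the real driver would query the SMB WMI namespace for the share's
local mapping), a loopback UNC path can be swapped for the local path:

```python
import socket


def resolve_image_path(unc_path, local_mappings):
    # Hypothetical sketch: if the UNC path points at this very host,
    # return the share's local path instead, avoiding the unsupported
    # loopback SMB access. local_mappings maps share names to local
    # root directories.
    parts = unc_path.strip('\\').split('\\')
    host, share = parts[0], parts[1]
    if host.lower() == socket.gethostname().lower():
        local_root = local_mappings.get(share.lower())
        if local_root:
            return '\\'.join([local_root.rstrip('\\')] + parts[2:])
    return unc_path
```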

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1580122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580122] [NEW] Hyper-V: cannot attach volumes from local HA shares

2016-05-10 Thread Lucian Petrut
Public bug reported:

At the moment, the Hyper-V driver uses the UNC path of images stored on
SMB shares, regardless of whether the share is remote or not. Quoting
the MS documentation, this is not supported:

“Accessing a continuously available file share as a loopback share is
not supported. For example, if Microsoft SQL Server or Hyper-V store
data files on SMB file shares, they must run on computers that are not a
member of the file server cluster for the SMB file shares.”

This is troublesome for the Hyper-C scenario, as Hyper-V will attempt to
modify the image ACLs, making them unusable. The easy fix is to simply
check if the share is local, and use the local path in that case.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580122

Title:
  Hyper-V: cannot attach volumes from local HA shares

Status in OpenStack Compute (nova):
  New

Bug description:
  At the moment, the Hyper-V driver uses the UNC path of images stored
  on SMB shares, regardless of whether the share is remote or not.
  Quoting the MS documentation, this is not supported:

  “Accessing a continuously available file share as a loopback share is
  not supported. For example, if Microsoft SQL Server or Hyper-V store
  data files on SMB file shares, they must run on computers that are not
  a member of the file server cluster for the SMB file shares.”

  This is troublesome for the Hyper-C scenario, as Hyper-V will attempt
  to modify the image ACLs, making them unusable. The easy fix is to
  simply check if the share is local, and use the local path in that
  case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1580122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565895] [NEW] Hyper-V: cold migrations cannot handle shared storage

2016-04-04 Thread Lucian Petrut
Public bug reported:

At the moment, if the destination host is other than the source host, we
attempt to move the instance files without checking if shared storage is
being used.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: driver hyper-v

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: driver hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565895

Title:
  Hyper-V: cold migrations cannot handle shared storage

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  At the moment, if the destination host is other than the source host,
  we attempt to move the instance files without checking if shared
  storage is being used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1565895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564829] Re: SMBFS volume driver cannot handle missing mount options

2016-04-01 Thread Lucian Petrut
** Also affects: nova
   Importance: Undecided
   Status: New

** Summary changed:

- SMBFS volume driver cannot handle missing mount options
+ Hyper-V SMBFS volume driver cannot handle missing mount options

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564829

Title:
  Hyper-V SMBFS volume driver cannot handle missing mount options

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We use a regex to fetch the credentials from the connection info
  options when mounting SMB shares.

  The issue is that this field may be empty, in which case we may get
  a TypeError.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1564829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372823] Re: iSCSI LUN list not refreshed in Hyper-V 2012 R2 compute nodes

2016-03-11 Thread Lucian Petrut
Reviewed: https://review.openstack.org/249291
Committed: 
https://git.openstack.org/cgit/openstack/os-win/commit/?id=b72790bacfd356021b2dd870ade6c9c216fd14a0
Submitter: Jenkins
Branch: master

commit b72790bacfd356021b2dd870ade6c9c216fd14a0
Author: Lucian Petrut <lpet...@cloudbasesolutions.com>
Date: Fri Nov 20 16:20:40 2015 +0200

iSCSI initiator refactoring using iscsidsc.dll

This patch adds a new iscsi initiator utils class,
leveraging iscsidsc.dll functions.

The advantages are:
* Same error output as iscsicli, without the process spawn
  overhead
* Improved overall performance, having finer control over
  the iSCSI initiator and avoiding unnecessary operations
* Fixed bugs related to LUN discovery
* Static targets are used instead of having portal discovery
  sessions. This will let us use backends that require
  discovery credentials (which may be different than the
  credentials used when logging in targets)
* improved MPIO support (the caller must request logging in the
  target for each of the available portals. Logging in multiple
  targets exporting the same LUN is also supported). Also, a
  specific initiator can be requested when creating sessions.

Closes-Bug: #1403836
Closes-Bug: #1372823
Closes-Bug: #1372827

Co-Authored-By: Alin Balutoiu <abalut...@cloudbasesolutions.com>
Change-Id: Ie037cf1712a28e85e5eca445eea3df883c6b6831

** Also affects: os-win
   Importance: Undecided
   Status: New

** Changed in: os-win
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372823

Title:
  iSCSI LUN list not refreshed in Hyper-V 2012 R2 compute nodes

Status in os-win:
  Fix Released

Bug description:
  When an iSCSI volume is attached to Hyper-V, the OS has to refresh the
  list of LUNs on the iSCSI target to discover the new one.

  The current mechanism implemented only works for the first LUN because
  the connection to the target is done after the LUN is exposed to the
  hypervisor. The rest of the LUNs exposed to the hypervisor from the
  same iSCSI target won't be refreshed in time to be discovered by the
  machine.

  This looks related to the wrong assumption of having one LUN per
  iSCSI target, while it's also possible to have several LUNs per iSCSI
  target.

  The patch for this bug should refresh the list of LUNs when a new
  attachment request is received. In our test environment (Hyper-V 2012
  R2), a WMI call like the following one helped to solve this issue:

  self._conn_storage.query("SELECT * FROM MSFT_iSCSISessionToDisk")

To manage notifications about this bug go to:
https://bugs.launchpad.net/os-win/+bug/1372823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555699] [NEW] Hyper-V: failed cold migrations cannot be reverted

2016-03-10 Thread Lucian Petrut
Public bug reported:

When performing a cold migration, the Hyper-V driver moves the instance
files to a temporary folder, and from there, it copies them to the
destination node.

The instance folder is not moved entirely, as it holds some Hyper-V
specific files that cannot be deleted or moved while the instance
exists; basically, we just take the files that we care about, leaving
the folder there.

If the migration fails, the driver tries to move the temporary directory
to the actual instance path, but fails as there already is a folder.

Trace: http://paste.openstack.org/show/490025/

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555699

Title:
  Hyper-V: failed cold migrations cannot be reverted

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When performing a cold migration, the Hyper-V driver moves the
  instance files to a temporary folder, and from there, it copies them
  to the destination node.

  The instance folder is not moved entirely, as it holds some Hyper-V
  specific files that cannot be deleted or moved while the instance
  exists; basically, we just take the files that we care about, leaving
  the folder there.

  If the migration fails, the driver tries to move the temporary
  directory to the actual instance path, but fails as there already is a
  folder.

  Trace: http://paste.openstack.org/show/490025/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1555699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550391] [NEW] Unregistered versioned object breaks Hyper-V live migration

2016-02-26 Thread Lucian Petrut
Public bug reported:

The LiveMigrateData class seems to have been intended as a base class, to be
inherited by driver-specific implementations, and for this reason it is not
registered within the versioned object registry.

Hyper-V does not use this object, so it does not have a driver specific 
implementation. During live migration, this object
will be passed via RPC, which will fail as it can't be deserialized.

As a result, this issue breaks Hyper-V live migration.

This can easily be fixed by simply registering the LiveMigrateData
class.
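For context, this is the pattern such a registration follows. Below is a
simplified, self-contained illustration of a versioned object registry (a toy
stand-in, not the actual oslo.versionedobjects code): an unregistered class
simply cannot be looked up by name during RPC deserialization.

```python
class Registry:
    """Toy stand-in for a versioned object registry (illustration only)."""
    _classes = {}

    @classmethod
    def register(cls, obj_cls):
        # Registration makes the class discoverable by name, which is
        # what the RPC deserializer relies on.
        cls._classes[obj_cls.__name__] = obj_cls
        return obj_cls

    @classmethod
    def deserialize(cls, name):
        try:
            return cls._classes[name]()
        except KeyError:
            raise ValueError('Unsupported object type: %s' % name)


@Registry.register
class LiveMigrateData:
    pass
```

Without the register decorator, deserialize('LiveMigrateData') would raise,
which mirrors the live migration failure described above.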

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550391

Title:
  Unregistered versioned object breaks Hyper-V live migration

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The LiveMigrateData class seems to have been intended as a base class, to be
inherited by driver-specific implementations, and for this reason it is not
registered within the versioned object registry.
  
  Hyper-V does not use this object, so it does not have a driver specific 
implementation. During live migration, this object
  will be passed via RPC, which will fail as it can't be deserialized.

  As a result, this issue breaks Hyper-V live migration.

  This can easily be fixed by simply registering the LiveMigrateData
  class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1550391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372827] Re: Improve efficiency of Hyper-V attaching iSCSI volumes

2016-01-07 Thread Lucian Petrut
** Also affects: os-win
   Importance: Undecided
   Status: New

** Changed in: os-win
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372827

Title:
  Improve efficiency of Hyper-V attaching iSCSI volumes

Status in OpenStack Compute (nova):
  Triaged
Status in os-win:
  In Progress

Bug description:
  The Hyper-V driver in Nova is not very efficient attaching Cinder
  volumes to the VMs.

  It always tries to refresh the entire connection to the iSCSI target:

  
https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeutilsv2.py#L87

  This is a time consuming task that also blocks additional calls during
  this time.

  The class should be refactored to work in a more efficient way.
  Calling the 'Update' method every time a volume is attached should be
  replaced by a more intelligent mechanism. As reported in
  https://bugs.launchpad.net/nova/+bug/1372823 a call to
  'self._conn_storage.query("SELECT * FROM MSFT_iSCSISessionToDisk")'
  could help.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467916] [NEW] Hyper-V: get free SCSI controller slot issue on V1

2015-06-23 Thread Lucian Petrut
Public bug reported:

The method retrieving a free SCSI controller slot gets all the related
disk resources, checking their address using the AddressOnParent
attribute.

The issue is that this WMI object attribute is not available in the V1
virtualization namespace, so this method will raise an AttributeError
when there are disks connected to the SCSI controller. As a result,
attaching a second volume will fail.

This bug affects Windows Server 2008 R2 and Windows Server 2012 when
using the V1 namespace.
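The slot lookup itself amounts to finding the first address not taken by an
attached disk. A rough sketch with assumed inputs (the addresses would come
from the WMI disk resources, e.g. AddressOnParent on the V2 namespace):

```python
def get_free_controller_slot(attached_addresses, max_slots=64):
    # Hypothetical sketch: attached_addresses holds the slot numbers
    # already used on the SCSI controller (as strings or ints, as WMI
    # returns them); return the first free slot.
    taken = {int(addr) for addr in attached_addresses if addr is not None}
    for slot in range(max_slots):
        if slot not in taken:
            return slot
    raise RuntimeError('No free slots on the SCSI controller')
```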

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: hyper-v

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467916

Title:
  Hyper-V: get free SCSI controller slot issue on V1

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The method retrieving a free SCSI controller slot gets all the related
  disk resources, checking their address using the AddressOnParent
  attribute.

  The issue is that this WMI object attribute is not available in the V1
  virtualization namespace, so this method will raise an AttributeError
  when there are disks connected to the SCSI controller. As a result,
  attaching a second volume will fail.

  This bug affects Windows Server 2008 R2 and Windows Server 2012 when
  using the V1 namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467451] [NEW] Hyper-V: fail to detach virtual hard disks

2015-06-22 Thread Lucian Petrut
Public bug reported:

The Nova Hyper-V driver fails to detach virtual hard disks when using
the virtualization V1 WMI namespace.

The reason is that it cannot find the attached resource, as it uses the
wrong resource object connection attribute.

This affects Windows Server 2008 as well as Windows Server 2012 when the
old namespace is used.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467451

Title:
  Hyper-V: fail to detach virtual hard disks

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Nova Hyper-V driver fails to detach virtual hard disks when using
  the virtualization V1 WMI namespace.

  The reason is that it cannot find the attached resource, as it uses
  the wrong resource object connection attribute.

  This affects Windows Server 2008 as well as Windows Server 2012 when
  the old namespace is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466056] [NEW] Hyper-V: serial ports issue on Windows Threshold

2015-06-17 Thread Lucian Petrut
Public bug reported:

On Windows Threshold, a new WMI class was introduced, targeting VM
serial ports.

For this reason, attempting to retrieve serial port connections fails
on Windows Threshold.

This can easily be fixed by using the right WMI class when attempting
to retrieve VM serial ports.

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: hyper-v

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466056

Title:
  Hyper-V: serial ports issue on Windows Threshold

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  On Windows Threshold, a new WMI class was introduced, targeting VM
  serial ports.

  For this reason, attempting to retrieve serial port connections fails
  on Windows Threshold.

  This can easily be fixed by using the right WMI class when attempting
  to retrieve VM serial ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463044] [NEW] Hyper-V: the driver fails to initialize on Windows Server 2008 R2

2015-06-08 Thread Lucian Petrut
Public bug reported:

The Hyper-V driver uses the Microsoft\Windows\SMB WMI namespace in order
to handle SMB shares. The issue is that this namespace is not available
on Windows versions prior to Windows Server 2012.

For this reason, the Hyper-V driver fails to initialize on Windows
Server 2008 R2.

Trace: http://paste.openstack.org/show/271422/

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463044

Title:
  Hyper-V: the driver fails to initialize on Windows Server 2008 R2

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The Hyper-V driver uses the Microsoft\Windows\SMB WMI namespace in
  order to handle SMB shares. The issue is that this namespace is not
  available on Windows versions prior to Windows Server 2012.

  For this reason, the Hyper-V driver fails to initialize on Windows
  Server 2008 R2.

  Trace: http://paste.openstack.org/show/271422/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461970] [NEW] Hyper-V: failed to destroy instance

2015-06-04 Thread Lucian Petrut
Public bug reported:

In some cases, Hyper-V fails to destroy instances, returning a 32775
error code, meaning "Invalid state for this operation". Right before
this, the instance is reported as successfully shut off.

This is quite a serious bug, as it can lead to leaked instances.

Trace:  http://paste.openstack.org/show/262589/

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: hyper-v

** Changed in: nova
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461970

Title:
  Hyper-V: failed to destroy instance

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  In some cases, Hyper-V fails to destroy instances, returning a 32775
  error code, meaning "Invalid state for this operation". Right before
  this, the instance is reported as successfully shut off.

  This is quite a serious bug, as it can lead to leaked instances.

  Trace:  http://paste.openstack.org/show/262589/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461081] [NEW] SMBFS volume attach race condition

2015-06-02 Thread Lucian Petrut
Public bug reported:

When the SMBFS volume backend is used and a volume is detached, the
corresponding SMB share is detached if it is no longer used.

This can cause issues if, at the same time, a different volume stored on
the same share is being attached, as the corresponding disk image will
not be available.

This affects the Libvirt driver as well as the Hyper-V one. The issue
can easily be fixed by using the share path as a lock when performing
attach/detach volume operations.

Trace: http://paste.openstack.org/show/256096/
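A stdlib-only sketch of the idea (the real fix would likely use
oslo.concurrency's synchronized decorator keyed on the share path; the
function names here are hypothetical):

```python
import threading
from collections import defaultdict

# One lock per share path: attach and detach operations on volumes from
# the same share are serialized, so a detach cannot unmount the share
# while an attach on it is in flight.
_share_locks = defaultdict(threading.Lock)


def attach_volume(share_path, do_attach):
    with _share_locks[share_path]:
        # A concurrent detach cannot unmount this share while we hold
        # its lock, so the disk image stays available.
        return do_attach()


def detach_volume(share_path, do_detach, share_still_used):
    with _share_locks[share_path]:
        do_detach()
        if not share_still_used():
            pass  # only now is it safe to unmount the share
```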

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt smbfs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461081

Title:
  SMBFS volume attach race condition

Status in OpenStack Compute (Nova):
  New

Bug description:
  When the SMBFS volume backend is used and a volume is detached, the
  corresponding SMB share is detached if it is no longer used.

  This can cause issues if, at the same time, a different volume stored
  on the same share is being attached, as the corresponding disk image
  will not be available.

  This affects the Libvirt driver as well as the Hyper-V one. The issue
  can easily be fixed by using the share path as a lock when performing
  attach/detach volume operations.

  Trace: http://paste.openstack.org/show/256096/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp