[Yahoo-eng-team] [Bug 1419592] Re: libvirt crashed when removing instances

2015-02-08 Thread Seyeong Kim
** Attachment added: "libvirt log"
   
https://bugs.launchpad.net/glance/+bug/1419592/+attachment/4315092/+files/libvirtd.log.1

** Project changed: glance => libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419592

Title:
  libvirt crashed when removing instances

Status in libvirt virtualization API:
  New

Bug description:
  libvirt crashed when removing instances

  This happened just twice; detailed info is in the attached files.

  nova-compute error
  ==================

  2015-01-13 14:05:41.275 42910 INFO nova.compute.manager [-] [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] VM Paused (Lifecycle Event)
  2015-01-13 14:05:41.426 42910 INFO nova.compute.manager [-] [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] During sync_power_state the instance has 
a pending task (suspending). Skip.
  2015-01-13 14:05:41.679 42910 INFO nova.compute.manager [-] [instance: 
31e68eab-2fbd-4956-a52f-2193eb9e10a7] VM Paused (Lifecycle Event)
  2015-01-13 14:05:41.812 42910 INFO nova.compute.manager [-] [instance: 
31e68eab-2fbd-4956-a52f-2193eb9e10a7] During sync_power_state the instance has 
a pending task (suspending). Skip.
  2015-01-13 14:05:41.813 42910 INFO nova.compute.manager [-] [instance: 
125e5bbf-21bb-42e9-baf3-d7225cdb030a] VM Paused (Lifecycle Event)
  2015-01-13 14:05:41.944 42910 INFO nova.compute.manager [-] [instance: 
125e5bbf-21bb-42e9-baf3-d7225cdb030a] During sync_power_state the instance has 
a pending task (suspending). Skip.
  2015-01-13 14:05:41.950 42910 INFO nova.compute.manager [-] [instance: 
92ed8175-af9e-4a89-8498-75d3d00caea3] VM Paused (Lifecycle Event)
  2015-01-13 14:05:42.077 42910 INFO nova.compute.manager [-] [instance: 
92ed8175-af9e-4a89-8498-75d3d00caea3] During sync_power_state the instance has 
a pending task (suspending). Skip.
  2015-01-13 14:05:42.080 42910 INFO nova.compute.manager [-] [instance: 
0ed7c2b6-3d67-4f99-8ec8-4ce392619d01] VM Paused (Lifecycle Event)
  2015-01-13 14:05:42.218 42910 INFO nova.compute.manager [-] [instance: 
0ed7c2b6-3d67-4f99-8ec8-4ce392619d01] During sync_power_state the instance has 
a pending task (suspending). Skip.
  2015-01-13 14:06:11.776 42910 ERROR nova.compute.manager 
[req-6eb9636c-c1d7-4bea-9363-086b1398d300 01879f8cac8144c9890a6798c1612f08 
78e33420c8e64bd79caf7572386868a7] [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] Setting instance vm_state to ERROR
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] Traceback (most recent call last):
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5586, in 
_error_out_instance_on_exception
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] yield
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3742, in 
suspend_instance
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] self.driver.suspend(instance)
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2177, in 
suspend
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] dom.managedSave(0)
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] rv = execute(f,*args,**kwargs)
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] rv = meth(*args,**kwargs)
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 1163, in managedSave
  2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1419592] [NEW] libvirt crashed when removing instances

2015-02-08 Thread Seyeong Kim
Public bug reported:

libvirt crashed when removing instances

This happened just twice; detailed info is in the attached files.

nova-compute error
==================

2015-01-13 14:05:41.275 42910 INFO nova.compute.manager [-] [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] VM Paused (Lifecycle Event)
2015-01-13 14:05:41.426 42910 INFO nova.compute.manager [-] [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] During sync_power_state the instance has 
a pending task (suspending). Skip.
2015-01-13 14:05:41.679 42910 INFO nova.compute.manager [-] [instance: 
31e68eab-2fbd-4956-a52f-2193eb9e10a7] VM Paused (Lifecycle Event)
2015-01-13 14:05:41.812 42910 INFO nova.compute.manager [-] [instance: 
31e68eab-2fbd-4956-a52f-2193eb9e10a7] During sync_power_state the instance has 
a pending task (suspending). Skip.
2015-01-13 14:05:41.813 42910 INFO nova.compute.manager [-] [instance: 
125e5bbf-21bb-42e9-baf3-d7225cdb030a] VM Paused (Lifecycle Event)
2015-01-13 14:05:41.944 42910 INFO nova.compute.manager [-] [instance: 
125e5bbf-21bb-42e9-baf3-d7225cdb030a] During sync_power_state the instance has 
a pending task (suspending). Skip.
2015-01-13 14:05:41.950 42910 INFO nova.compute.manager [-] [instance: 
92ed8175-af9e-4a89-8498-75d3d00caea3] VM Paused (Lifecycle Event)
2015-01-13 14:05:42.077 42910 INFO nova.compute.manager [-] [instance: 
92ed8175-af9e-4a89-8498-75d3d00caea3] During sync_power_state the instance has 
a pending task (suspending). Skip.
2015-01-13 14:05:42.080 42910 INFO nova.compute.manager [-] [instance: 
0ed7c2b6-3d67-4f99-8ec8-4ce392619d01] VM Paused (Lifecycle Event)
2015-01-13 14:05:42.218 42910 INFO nova.compute.manager [-] [instance: 
0ed7c2b6-3d67-4f99-8ec8-4ce392619d01] During sync_power_state the instance has 
a pending task (suspending). Skip.
2015-01-13 14:06:11.776 42910 ERROR nova.compute.manager 
[req-6eb9636c-c1d7-4bea-9363-086b1398d300 01879f8cac8144c9890a6798c1612f08 
78e33420c8e64bd79caf7572386868a7] [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] Setting instance vm_state to ERROR
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] Traceback (most recent call last):
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5586, in 
_error_out_instance_on_exception
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] yield
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3742, in 
suspend_instance
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] self.driver.suspend(instance)
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2177, in 
suspend
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] dom.managedSave(0)
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] rv = execute(f,*args,**kwargs)
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] rv = meth(*args,**kwargs)
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 1163, in managedSave
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] if ret == -1: raise libvirtError 
('virDomainManagedSave() failed', dom=self)
2015-01-13 14:06:11.776 42910 TRACE nova.compute.manager [instance: 
060d3619-52dd-47b4-8eb7-7fdf165583f3] libvirtError: Cannot recv data: 
Connection reset by peer

.
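
For orientation, a minimal sketch (not nova's code) of the call that fails
in the traceback above; 'dom' is assumed to be a libvirt.virDomain handle
obtained from an open connection:

    import libvirt

    def suspend(dom):
        # managedSave(0) is the exact call from the traceback above.
        try:
            dom.managedSave(0)
        except libvirt.libvirtError as exc:
            # A libvirtd crash mid-RPC surfaces just like this report:
            # "Cannot recv data: Connection reset by peer".
            print('managedSave failed; is libvirtd still running? %s' % exc)
            raise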



** Affects: libvirt
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member 

[Yahoo-eng-team] [Bug 1535918] Re: instance.host not updated on evacuation

2017-08-17 Thread Seyeong Kim
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

  [Impact]
  
- Affected to Xenial, UCA Mitaka
+ Affected to Xenial Mitaka, UCA Mitaka
  
  just after creating vm and state ACTIVE,
  
  When evacuating it, it is failed with ERROR state.
  
  [Test case]
  
- In below env, 
+ In below env,
  http://pastebin.ubuntu.com/25337153/
  
  Network configuration is important in this case, because I tested
  different configuration. but couldn't reproduce it.
  
  ##in progress##
  
  making detail script
  
  [Regression Potential]
  
  this is about evacuation, Could be issue on evacuation.
  especially recreating vm
  
  [Others]
  
  Related Patches.
  
https://github.com/openstack/nova/commit/a5b920a197c70d2ae08a1e1335d979857f923b4f
  
https://github.com/openstack/nova/commit/a2b0824aca5cb4a2ae579f625327c51ed0414d35
- 
  
  Original description
  
  I'm working on the nova-powervm driver for Mitaka and trying to add
  support for evacuation.
  
  The problem I'm hitting is that instance.host is not updated when the
  compute driver is called to spawn the instance on the destination host.
  It is still set to the source host.  It's not until after the spawn
  completes that the compute manager updates instance.host to reflect the
  destination host.
  
  The nova-powervm driver uses instance events callback mechanism during
  plug VIF to determine when Neutron has finished provisioning the
  network.  The instance events code sends the event to instance.host and
  hence is sending the event to the source host (which is down).  This
  causes the spawn to fail and also causes weirdness when the source host
  gets the events when it's powered back up.
  
  To temporarily work around the problem, I hacked in setting
  instance.host = CONF.host; instance.save() in the compute driver but
  that's not a good solution.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1535918

Title:
  instance.host not updated on evacuation

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  [Impact]

  Affects Xenial Mitaka and UCA Mitaka.

  Just after a VM is created and reaches the ACTIVE state,

  evacuating it fails and the VM ends up in the ERROR state.

  [Test case]

  In the environment below:
  http://pastebin.ubuntu.com/25337153/

  Network configuration is important in this case: I tested a
  different configuration and couldn't reproduce the bug.

  ## In progress ##

  Writing a detailed reproduction script.

  [Regression Potential]

  This change is about evacuation, so any regression would surface there,
  especially when recreating the VM.

  [Others]

  Related Patches.
  
https://github.com/openstack/nova/commit/a5b920a197c70d2ae08a1e1335d979857f923b4f
  
https://github.com/openstack/nova/commit/a2b0824aca5cb4a2ae579f625327c51ed0414d35

  Original description

  I'm working on the nova-powervm driver for Mitaka and trying to add
  support for evacuation.

  The problem I'm hitting is that instance.host is not updated when the
  compute driver is called to spawn the instance on the destination
  host.  It is still set to the source host.  It's not until after the
  spawn completes that the compute manager updates instance.host to
  reflect the destination host.

  The nova-powervm driver uses instance events callback mechanism during
  plug VIF to determine when Neutron has finished provisioning the
  network.  The instance events code sends the event to instance.host
  and hence is sending the event to the source host (which is down).
  This causes the spawn to fail and also causes weirdness when the
  source host gets the events when it's powered back up.

  To temporarily work around the problem, I hacked in setting
  instance.host = CONF.host; instance.save() in the compute driver but
  that's not a good solution.
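
  A minimal sketch of the temporary hack described above (illustrative
  only, not a recommended fix); CONF is assumed to be the oslo.config
  global carrying nova's 'host' option, and 'instance' a nova Instance
  object:

    from oslo_config import cfg

    CONF = cfg.CONF  # nova registers its 'host' option on this global

    def point_instance_at_this_host(instance):
        # Route Neutron's network-vif-plugged events to the destination
        # host before spawn waits on them, instead of to the down source.
        instance.host = CONF.host
        instance.save()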

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1535918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573766] Re: Enable the paste filter HTTPProxyToWSGI by default

2017-11-20 Thread Seyeong Kim
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573766

Title:
  Enable the paste filter HTTPProxyToWSGI by default

Status in OpenStack nova-cloud-controller charm:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  oslo middleware provides a paste filter that sets the correct proxy
  scheme and host. This is needed for the TLS proxy case.

  Without this, enabling the TLS proxy in devstack fails while
  configuring tempest, because 'nova flavor-list' returns an http scheme
  in the Location header of a redirect.

  I've proposed a temporary workaround in devstack using:

  +iniset $NOVA_API_PASTE_INI filter:ssl_header_handler paste.filter_factory oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
  +iniset $NOVA_API_PASTE_INI composite:openstack_compute_api_v21 keystone "ssl_header_handler cors compute_req_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v21"

  But this isn't a long-term solution because two copies of the default
  paste filters will need to be maintained.

  See https://review.openstack.org/#/c/301172

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1573766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573766] Re: Enable the paste filter HTTPProxyToWSGI by default

2017-11-19 Thread Seyeong Kim
** Also affects: charm-nova-cloud-controller
   Importance: Undecided
   Status: New

** Changed in: charm-nova-cloud-controller
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573766

Title:
  Enable the paste filter HTTPProxyToWSGI by default

Status in OpenStack nova-cloud-controller charm:
  New
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  oslo middleware provides a paste filter that sets the correct proxy
  scheme and host. This is needed for the TLS proxy case.

  Without this, enabling the TLS proxy in devstack fails while
  configuring tempest, because 'nova flavor-list' returns an http scheme
  in the Location header of a redirect.

  I've proposed a temporary workaround in devstack using:

  +iniset $NOVA_API_PASTE_INI filter:ssl_header_handler paste.filter_factory oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
  +iniset $NOVA_API_PASTE_INI composite:openstack_compute_api_v21 keystone "ssl_header_handler cors compute_req_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v21"

  But this isn't a long-term solution because two copies of the default
  paste filters will need to be maintained.

  See https://review.openstack.org/#/c/301172

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1573766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558683] Re: Versions endpoint does not support X-Forwarded-Proto

2017-11-19 Thread Seyeong Kim
** Also affects: charm-cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1558683

Title:
  Versions endpoint does not support X-Forwarded-Proto

Status in OpenStack cinder charm:
  New
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released

Bug description:
  When a project is deployed behind an SSL-terminating proxy, the version
  endpoint returns the wrong URLs.  The returned protocol in the response
  URLs is http:// instead of the expected https://.

  This is because the response built by versions.py gets the host
  information only from the incoming request.  If SSL has been terminated
  by a proxy, then the information in the request indicates http://.
  Other projects have addressed this by adding the config parameter
  secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO.  This tells the
  project to use the value in X-Forwarded-Proto (https or http) when
  building the URLs in the response.  Nova and Keystone support this
  configuration option.

  One workaround is to set the public_endpoint parameter. However, the
  value set for public_endpoint is also returned when the internal and
  admin version endpoints are queried, which breaks other things.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-cinder/+bug/1558683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681073] Re: Create Consistency Group form has an exception

2017-12-06 Thread Seyeong Kim
** Also affects: openstack-dashboard (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: openstack-dashboard (Ubuntu)
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: openstack-dashboard (Ubuntu)

** Changed in: horizon (Ubuntu)
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681073

Title:
  Create Consistency Group form has an exception

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New

Bug description:
  Env: devstack master branch

  Steps to reproduce:
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  It throws an exception.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 89, in dispatch
  return handler(request, *args, **kwargs)
File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
  exceptions.handle(request)
File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
  success = workflow.finalize()
File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
  if not self.handle(self.request, self.context):
File 
"/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py",
 line 323, in handle
  vol_type.extra_specs['volume_backend_name']
  KeyError: 'volume_backend_name'
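
  The failing line assumes every volume type carries a
  'volume_backend_name' extra spec, which a type created with only a name
  does not. A defensive sketch (illustrative, not necessarily the merged
  fix):

    def backend_of(vol_type):
        # extra_specs is empty for a bare volume type; treat the backend
        # name as optional instead of raising KeyError.
        return vol_type.extra_specs.get('volume_backend_name')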

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1681073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681073] Re: Create Consistency Group form has an exception

2017-12-06 Thread Seyeong Kim
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Description changed:

  Env: devstack master branch
  
  Steps to reproduce:
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form
  
  it will throws an exception.
  
  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
-   File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
- response = wrapped_callback(request, *callback_args, **callback_kwargs)
-   File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
- return view_func(request, *args, **kwargs)
-   File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
- return view_func(request, *args, **kwargs)
-   File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
- return view_func(request, *args, **kwargs)
-   File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
- return view_func(request, *args, **kwargs)
-   File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
- return self.dispatch(request, *args, **kwargs)
-   File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 89, in dispatch
- return handler(request, *args, **kwargs)
-   File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
- exceptions.handle(request)
-   File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
- six.reraise(exc_type, exc_value, exc_traceback)
-   File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
- success = workflow.finalize()
-   File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
- if not self.handle(self.request, self.context):
-   File 
"/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py",
 line 323, in handle
- vol_type.extra_specs['volume_backend_name']
+   File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
+ response = wrapped_callback(request, *callback_args, **callback_kwargs)
+   File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
+ return view_func(request, *args, **kwargs)
+   File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
+ return view_func(request, *args, **kwargs)
+   File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
+ return view_func(request, *args, **kwargs)
+   File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
+ return view_func(request, *args, **kwargs)
+   File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
+ return self.dispatch(request, *args, **kwargs)
+   File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 89, in dispatch
+ return handler(request, *args, **kwargs)
+   File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
+ exceptions.handle(request)
+   File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
+ six.reraise(exc_type, exc_value, exc_traceback)
+   File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
+ success = workflow.finalize()
+   File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
+ if not self.handle(self.request, self.context):
+   File 
"/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py",
 line 323, in handle
+ vol_type.extra_specs['volume_backend_name']
  KeyError: 'volume_backend_name'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681073

Title:
  Create Consistency Group form has an exception

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New

Bug description:
  [Impact]

  Affected
  - UCA Mitaka, Ocata
  - Xenial, Zesty

  After enabling consistency groups by changing api-paste.ini,

  When trying to create a consistency group, an error like the one below
  occurs.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, 

[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2018-01-12 Thread Seyeong Kim
** Also affects: charm-barbican
   Importance: Undecided
   Status: New

** Also affects: charm-heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in OpenStack Barbican Charm:
  New
Status in OpenStack heat charm:
  New
Status in Cinder:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Triaged
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Released
Status in Glance:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in neutron:
  Fix Released
Status in Panko:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  It's a common problem when putting a service behind a load balancer:
  the protocol and host of the original request need to be forwarded so
  that the receiving service can construct URLs pointing at the load
  balancer and not the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling; however, exactly how this is done
  depends on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.
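
  A minimal sketch of adopting the middleware, using only oslo.middleware's
  public HTTPProxyToWSGI class (module path as used in paste configs); on
  newer releases header parsing may additionally need the
  [oslo_middleware] enable_proxy_headers_parsing option turned on:

    from oslo_config import cfg
    from oslo_middleware import http_proxy_to_wsgi

    def app(environ, start_response):
        # With the middleware in front, wsgi.url_scheme reflects
        # X-Forwarded-Proto (or RFC 7239 Forwarded), so generated URLs
        # point at the load balancer rather than the worker node.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [environ['wsgi.url_scheme'].encode()]

    app = http_proxy_to_wsgi.HTTPProxyToWSGI(app, cfg.CONF)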

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582725] Re: cinder_policy.json action does not match the Cinder policy.json file

2018-02-06 Thread Seyeong Kim
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Description changed:

+ [Impact]
+ cinder policies are not in horizon's policy.json
+ so unset tab "consistency groups" is enabled by default.
+ 
+ [Test Case]
+ 1. deploy simple openstack deployments via juju
+ 2. horizon -> volume -> check if there is consistency groups tab
+ 
+ [Regression]
+ after this patch, horizon needs to be restarted. so it is down shortly. this 
patch is actually config file changed ( and little source code ). so limited 
affection to behavior it self.
+ 
+ [Other]
+ 
+ related commit
+ 
+ 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=388708b251b0487bb22fb3ebb8fcb36ee4ffdc4f
+ 
+ [Original Description]
+ 
  The horizon/openstack_dashboard/conf/cinder_policy.json actions do not match 
the policy action that are used by the Cinder component.
  Cinder uses "volume_extension:volume_actions:upload_public"
  and Horizon policy.json and code uses "volume:upload_to_image"
  
  This is the only miss match of policy action between the 2 components.
  This also does not allow a user of Cinder and Horizon to update the
  Cinder policy.json and copy it to the Horizon directly and have the
  button function according to Cinder policy.json rules.
  
  This can be missed as the Cinder policy.json file is update and the
  Horizon file is updated.
  
  I think that the action that the Horizon code is using should match it
  component that it is supporting.

** Tags added: sts

** Tags added: sts-sru-needed

** Patch added: "lp1582725_xenial.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1582725/+attachment/5050163/+files/lp1582725_xenial.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582725

Title:
  cinder_policy.json action does not match the Cinder policy.json file

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New

Bug description:
  [Impact]
  Cinder policies are not in Horizon's policy.json, so the
  "consistency groups" tab, whose policy is unset, is enabled by default.

  Affects Xenial and UCA Mitaka.

  
  [Test Case]
  1. deploy simple openstack deployments via juju
  2. horizon -> volume -> check if there is consistency groups tab

  [Regression]
  After this patch, Horizon needs to be restarted, so it is briefly
  unavailable. The patch mostly changes a config file (plus a little
  source code), so its effect on behavior is limited.

  [Other]

  related commit

  
https://git.openstack.org/cgit/openstack/horizon/commit/?id=388708b251b0487bb22fb3ebb8fcb36ee4ffdc4f

  [Original Description]

  The horizon/openstack_dashboard/conf/cinder_policy.json actions do not
  match the policy actions used by the Cinder component.
  Cinder uses "volume_extension:volume_actions:upload_public"
  while Horizon's policy.json and code use "volume:upload_to_image".

  This is the only mismatch of policy actions between the two components.
  It also prevents a user of Cinder and Horizon from updating the Cinder
  policy.json, copying it to Horizon directly, and having the button
  behave according to the Cinder policy.json rules.

  This can be missed whenever the Cinder policy.json file is updated
  independently of the Horizon file.

  I think the action that the Horizon code uses should match the
  component it supports.
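
  For illustration, this is roughly how Horizon consults the action name
  (policy.check is openstack_dashboard's policy helper; the call site
  shown is assumed, not quoted from Horizon):

    from openstack_dashboard import policy

    def can_upload_to_image(request):
        # Horizon evaluates "volume:upload_to_image", while Cinder's file
        # defines "volume_extension:volume_actions:upload_public", so
        # copying Cinder's policy.json over Horizon's does not govern
        # this button.
        return policy.check((("volume", "volume:upload_to_image"),),
                            request)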

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1582725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1778771] Re: Backups panel is visible even if enable_backup is False

2018-08-21 Thread Seyeong Kim
** Also affects: charm-openstack-dashboard
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1778771

Title:
  Backups panel is visible even if enable_backup is False

Status in OpenStack openstack-dashboard charm:
  New
Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Hi,

  Volumes - Backup panel is visible even if OPENSTACK_CINDER_FEATURES =
  {'enable_backup': False} in local_settings.py

  Setting enable_backup to False does remove the option to create a
  backup of a volume from the volume drop-down menu, but the Backups
  panel itself stays visible for both admins and users.

  As a work-around I use the following customization script:
  import horizon
  from django.conf import settings
  if not getattr(settings, 'OPENSTACK_CINDER_FEATURES', 
{}).get('enable_backup', False):
  project = horizon.get_dashboard("project")
  backup = project.get_panel("backups")
  project.unregister(backup.__class__)

  As a permanent fix, I suggest the following change in
  openstack_dashboard/dashboards/project/backups/panel.py:
  ...
  +L16: from django.conf import settings
  ...
  +L21: if not getattr(settings, 'OPENSTACK_CINDER_FEATURES', 
{}).get('enable_backup', False):
  +L22: return False
  ...
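
  Spelled out, the proposed change is roughly the sketch below (class and
  method names follow Horizon's Panel convention; treat it as illustrative
  rather than the exact patch):

    from django.conf import settings

    import horizon

    class Backups(horizon.Panel):
        name = "Backups"
        slug = "backups"

        def allowed(self, context):
            # Hide the panel entirely when cinder-backup is not enabled.
            features = getattr(settings, 'OPENSTACK_CINDER_FEATURES', {})
            if not features.get('enable_backup', False):
                return False
            return super(Backups, self).allowed(context)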

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1778771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558683] Re: Versions endpoint does not support X-Forwarded-Proto

2018-03-20 Thread Seyeong Kim
Hello Ryan

This also seems to have been merged into 18.02,

as with bug 1573766.

** Changed in: charm-cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1558683

Title:
  Versions endpoint does not support X-Forwarded-Proto

Status in OpenStack cinder charm:
  Fix Released
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released

Bug description:
  When a project is deployed behind an SSL-terminating proxy, the version
  endpoint returns the wrong URLs.  The returned protocol in the response
  URLs is http:// instead of the expected https://.

  This is because the response built by versions.py gets the host
  information only from the incoming request.  If SSL has been terminated
  by a proxy, then the information in the request indicates http://.
  Other projects have addressed this by adding the config parameter
  secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO.  This tells the
  project to use the value in X-Forwarded-Proto (https or http) when
  building the URLs in the response.  Nova and Keystone support this
  configuration option.

  One workaround is to set the public_endpoint parameter. However, the
  value set for public_endpoint is also returned when the internal and
  admin version endpoints are queried, which breaks other things.
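
  A minimal sketch of the secure_proxy_ssl_header behaviour described
  above (WSGI exposes X-Forwarded-Proto as HTTP_X_FORWARDED_PROTO; the
  helper and its wiring are assumptions, not Cinder/Glance code):

    def effective_scheme(environ,
                         secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO'):
        # Behind a TLS-terminating proxy the incoming request is plain
        # http, so version documents must trust the forwarded scheme.
        forwarded = environ.get(secure_proxy_ssl_header)
        if forwarded in ('http', 'https'):
            return forwarded
        return environ['wsgi.url_scheme']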

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-cinder/+bug/1558683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573766] Re: Enable the paste filter HTTPProxyToWSGI by default

2018-03-20 Thread Seyeong Kim
Hello Ryan

This seems to have been merged into 18.02.
Could you please check once more?

Thanks.

** Changed in: charm-nova-cloud-controller
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573766

Title:
  Enable the paste filter HTTPProxyToWSGI by default

Status in OpenStack nova-cloud-controller charm:
  Fix Released
Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  Getting http links instead of https even when the https settings are set.

  [Test case]

  1. deploy openstack ( with keystone charm option use-https, 
https-service-endpoints)
  2. create instance
  3. nova --debug list
 - check the result if https links are there.

  [Regression Potential]

  The nova package is affected by this patch; however, it only modifies
  api-paste.ini by adding http_proxy_to_wsgi. To apply the patch, the
  nova services need to be restarted. In testing no VMs were affected,
  though APIs and daemons are interrupted briefly.

  
  [Others]

  related commits ( which are already in comments )

  
https://git.openstack.org/cgit/openstack/nova/commit/?id=b609a3b32ee8e68cef7e66fabff07ca8ad6d4649
  
https://git.openstack.org/cgit/openstack/nova/commit/?id=6051f30a7e61c32833667d3079744b2d4fd1ce7c

  
  [Original Description]

  oslo middleware provides a paste filter that sets the correct proxy
  scheme and host. This is needed for the TLS proxy case.

  Without this, enabling the TLS proxy in devstack fails while
  configuring tempest, because 'nova flavor-list' returns an http scheme
  in the Location header of a redirect.

  I've proposed a temporary workaround in devstack using:

  +iniset $NOVA_API_PASTE_INI filter:ssl_header_handler paste.filter_factory oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
  +iniset $NOVA_API_PASTE_INI composite:openstack_compute_api_v21 keystone "ssl_header_handler cors compute_req_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v21"

  But this isn't a long-term solution because two copies of the default
  paste filters will need to be maintained.

  See https://review.openstack.org/#/c/301172

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1573766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713499] Re: Cannot delete a neutron network, if the currently configured MTU is lower than the network's MTU

2018-09-30 Thread Seyeong Kim
** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

** Patch added: "lp1713499_bionic.debdiff"
   
https://bugs.launchpad.net/neutron/+bug/1713499/+attachment/5195115/+files/lp1713499_bionic.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1713499

Title:
  Cannot delete a neutron network, if the currently configured MTU is
  lower than the network's MTU

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  In Progress

Bug description:
  Currently, the neutron API returns an error [1] when trying to delete
  a neutron network which has a higher MTU than the configured
  MTU[2][3].

  This issue has been noticed in Pike.

  [1] Error: http://paste.openstack.org/show/619627/
  [2] neutron.conf: http://paste.openstack.org/show/619629/
  [3] ml2_conf.ini: http://paste.openstack.org/show/619630/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1713499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-05-22 Thread Seyeong Kim
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662324

Title:
  linux bridge agent disables ipv6 before adding an ipv6 address

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  New

Bug description:
  Summary:
  
  I have a dual-stack NIC with only an IPv6 SLAAC and link local address 
plumbed. This is the designated provider network nic. When I create a network 
and then a subnet, the linux bridge agent first disables IPv6 on the bridge and 
then tries to add the IPv6 address from the NIC to the bridge. Since IPv6 was 
disabled on the bridge, this fails with 'RTNETLINK answers: Permission denied'. 
My intent was to create an IPv4 subnet over this interface with floating IPv4 
addresses for assignment to VMs via this command:
openstack subnet create --network provider \
  --allocation-pool start=10.54.204.200,end=10.54.204.217 \
  --dns-nameserver 69.252.80.80 --dns-nameserver 69.252.81.81 \
  --gateway 10.54.204.129 --subnet-range 10.54.204.128/25 provider

  I don't know why the agent is disabling IPv6 (I wish it wouldn't),
  that's probably the problem. However, if the agent knows to disable
  IPv6 it should also know not to try to add an IPv6 address.

  Details:
  
  Version: Newton on CentOS 7.3 minimal (CentOS-7-x86_64-Minimal-1611.iso) as 
per these instructions: http://docs.openstack.org/newton/install-guide-rdo/

  Seemingly relevant section of /var/log/neutron/linuxbridge-agent.log:
  2017-02-06 15:09:20.863 1551 INFO 
neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Skipping ARP spoofing 
rules for port 'tap3679987e-ce' because it has port security disabled
  2017-02-06 15:09:20.863 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'-o', 'link', 'show', 'tap3679987e-ce'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.870 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.871 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'addr', 'show', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.878 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.879 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'route', 'list', 'dev', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.885 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.886 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'brqe1623c94-1f', 'up'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:105
  2017-02-06 15:09:20.895 1551 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Starting bridge 
brqe1623c94-1f for subinterface eno1 ensure_bridge 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:367
  2017-02-06 15:09:20.895 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['brctl', 'addbr', 'brqe1623c94-1f'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:105
  2017-02-06 15:09:20.905 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.905 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['brctl', 'setfd', 'brqe1623c94-1f', '0'] execute_rootwrap_daemon 

[Yahoo-eng-team] [Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-05-26 Thread Seyeong Kim
** Changed in: neutron (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Xenial)
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662324

Title:
  linux bridge agent disables ipv6 before adding an ipv6 address

Status in Ubuntu Cloud Archive:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  In Progress

Bug description:
  [Impact]
  When using linuxbridge, after creating a network and an interface on
  ext-net, disable_ipv6 is 1 and the linuxbridge-agent fails to add an
  IPv6 address to the newly created bridge.

  [Test Case]

  1. deploy basic mitaka env
  2. create external network(ext-net)
  3. create ipv6 network and interface to ext-net
  4. check if related bridge has ipv6 ip
  - no ipv6 originally
  or
  - cat /proc/sys/net/ipv6/conf/[BRIDGE]/disable_ipv6

  After this commit, I was able to see the IPv6 address properly.
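
  The disable_ipv6 check from step 4 can be scripted; a small sketch
  (sysctl path taken from the test case above, helper name assumed):

    def ipv6_enabled(bridge):
        # 0 means IPv6 is enabled on the bridge; 1 is the bug state, in
        # which the agent's address add fails with "Permission denied".
        path = '/proc/sys/net/ipv6/conf/%s/disable_ipv6' % bridge
        with open(path) as f:
            return f.read().strip() == '0'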

  [Regression]
  You need to restart neutron-linuxbridge-agent, so a short downtime may
  be needed.

  [Others]

  -- original description --

  Summary:
  
  I have a dual-stack NIC with only an IPv6 SLAAC and link local address 
plumbed. This is the designated provider network nic. When I create a network 
and then a subnet, the linux bridge agent first disables IPv6 on the bridge and 
then tries to add the IPv6 address from the NIC to the bridge. Since IPv6 was 
disabled on the bridge, this fails with 'RTNETLINK answers: Permission denied'. 
My intent was to create an IPv4 subnet over this interface with floating IPv4 
addresses for assignment to VMs via this command:
    openstack subnet create --network provider \
  --allocation-pool start=10.54.204.200,end=10.54.204.217 \
  --dns-nameserver 69.252.80.80 --dns-nameserver 69.252.81.81 \
  --gateway 10.54.204.129 --subnet-range 10.54.204.128/25 provider

  I don't know why the agent is disabling IPv6 (I wish it wouldn't),
  that's probably the problem. However, if the agent knows to disable
  IPv6 it should also know not to try to add an IPv6 address.

  Details:
  
  Version: Newton on CentOS 7.3 minimal (CentOS-7-x86_64-Minimal-1611.iso) as 
per these instructions: http://docs.openstack.org/newton/install-guide-rdo/

  Seemingly relevant section of /var/log/neutron/linuxbridge-agent.log:
  2017-02-06 15:09:20.863 1551 INFO 
neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Skipping ARP spoofing 
rules for port 'tap3679987e-ce' because it has port security disabled
  2017-02-06 15:09:20.863 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'-o', 'link', 'show', 'tap3679987e-ce'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.870 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.871 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'addr', 'show', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.878 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.879 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'route', 'list', 'dev', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.885 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.886 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'brqe1623c94-1f', 'up'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:105
  2017-02-06 15:09:20.895 1551 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Starting bridge 
brqe1623c94-1f for subinterface eno1 ensure_bridge 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:367
  2017-02-06 15:09:20.895 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381

[Yahoo-eng-team] [Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-03 Thread Seyeong Kim
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Changed in: neutron (Ubuntu Bionic)
   Status: New => In Progress

** Patch added: "lp1849098_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1849098/+attachment/5494377/+files/lp1849098_bionic.debdiff

** Description changed:

+ [Impact]
+ 
+ Somehow a port becomes unbound; neutron-openvswitch-agent then raises
+ OVSFWTagNotFound, and creating new instances fails.
+ 
+ [Test Plan]
+ 1. deploy bionic openstack env
+ 2. launch one instance
+ 3. modify neutron-openvswitch-agent code inside nova-compute
+ - https://pastebin.ubuntu.com/p/nBRKkXmjx8/
+ 4. restart neutron-openvswitch-agent
+ 5. check if there are a lot of "cannot get tag for port ..." messages
+ 6. launch another instance.
+ 7. It fails after vif_plugging_timeout, with "virtual interface creation 
failed"
+ 
+ [Where problems could occur]
+ You need to restart the service. As a patch it should basically be safe,
+ since it only adds exception handling, but the get-or-create VIF port
+ path could have issues.
+ 
+ [Others]
+ 
+ Original description.
+ 
  neutron-openvswitch-agent meets unbound port:
  
  2019-10-17 11:32:21.868 135 WARNING
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-
  aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Device
  ef34215f-e099-4fd0-935f-c9a42951d166 not defined on plugin or binding
  failed
  
  Later when applying firewall rules:
  
  2019-10-17 11:32:21.901 135 INFO neutron.agent.securitygroups_rpc 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Preparing filters for 
devices {'ef34215f-e099-4fd0-935f-c9a42951d166', 
'e9c97cf0-1a5e-4d77-b57b-0ba474d12e29', 'fff1bb24-6423-4486-87c4-1fe17c552cca', 
'2e20f9ee-bcb5-445c-b31f-d70d276d45c9', '03a60047-cb07-42a4-8b49-619d5982a9bd', 
'a452cea2-deaf-4411-bbae-ce83870cbad4', '79b03e5c-9be0-4808-9784-cb4878c3dbd5', 
'9b971e75-3c1b-463d-88cf-3f298105fa6e'}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Error while processing VIF 
ports: neutron.agent.linux.openvswitch_firewall.exceptions.OVSFWTagNotFound: 
Cannot get tag for port o-hm0 from its other_config: {}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 530, in get_or_create_ofport
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent of_port = 
self.sg_port_map.ports[port_id]
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 
'ef34215f-e099-4fd0-935f-c9a42951d166'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 81, in get_tag_from_other_config
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return 
int(other_config['tag'])
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 'tag'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",

[Yahoo-eng-team] [Bug 1950186] Re: Nova doesn't account for hugepages when scheduling VMs

2022-05-18 Thread Seyeong Kim
** Package changed: nova (Ubuntu) => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1950186

Title:
  Nova doesn't account for hugepages when scheduling VMs

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Description
  ===

  When hugepages are enabled on the host, it's possible to schedule VMs
  using more RAM than is available.

  On the node with the memory usage presented below, it was possible to
  schedule 6 instances using a total of 140G of memory and a non-
  hugepages-enabled flavor. The same machine has 188G of memory in
  total, of which 64G were reserved for hugepages. An additional ~4G was
  used for housekeeping, the OpenStack control plane, etc. This resulted
  in an overcommitment of roughly 20G.

  After running memory-intensive operations on the VMs, some of them got
  OOM-killed.

  $ cat /proc/meminfo | egrep "^(Mem|Huge)" # on the compute node
  MemTotal:       197784792 kB
  MemFree:        115005288 kB
  MemAvailable:   116745612 kB
  HugePages_Total:      64
  HugePages_Free:       64
  HugePages_Rsvd:        0
  HugePages_Surp:        0
  Hugepagesize:    1048576 kB
  Hugetlb:        67108864 kB
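
  For reference, the ~20G overcommitment can be derived directly from these
  numbers; a small sketch of the arithmetic (values taken from the output
  above, the ~4G housekeeping figure from the description):

  mem_total_kb = 197784792        # MemTotal, ~188.6 GiB
  hugetlb_kb = 67108864           # Hugetlb: 64 GiB reserved as hugepages
  housekeeping_kb = 4 * 1024**2   # ~4 GiB for host/control plane (approx.)

  usable_kb = mem_total_kb - hugetlb_kb - housekeeping_kb
  scheduled_kb = 140 * 1024**2    # ~140 GiB of non-hugepage-backed flavors

  print(round(usable_kb / 1024**2, 1))                   # ~120.6 GiB usable
  print(round((scheduled_kb - usable_kb) / 1024**2, 1))  # ~19.4 GiB over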

  $ os hypervisor show compute1 -c memory_mb -c memory_mb_used -c free_ram_mb
  +----------------+--------+
  | Field          | Value  |
  +----------------+--------+
  | free_ram_mb    | 29309  |
  | memory_mb      | 193149 |
  | memory_mb_used | 163840 |
  +----------------+--------+

  $ os host show compute1
  +----------+----------------------------+-----+-----------+---------+
  | Host     | Project                    | CPU | Memory MB | Disk GB |
  +----------+----------------------------+-----+-----------+---------+
  | compute1 | (total)                    |   0 |    193149 |     893 |
  | compute1 | (used_now)                 |  72 |    163840 |     460 |
  | compute1 | (used_max)                 |  72 |    147456 |     460 |
  | compute1 | some_project_id_was_here   |   2 |      4096 |      40 |
  | compute1 | another_anonymized_id_here |  70 |    143360 |     420 |
  +----------+----------------------------+-----+-----------+---------+

  $ os resource provider inventory list uuid_of_compute1_node
  +----------------+------------------+----------+----------+----------+-----------+--------+
  | resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size | total  |
  +----------------+------------------+----------+----------+----------+-----------+--------+
  | MEMORY_MB      |              1.0 |        1 |   193149 |    16384 |         1 | 193149 |
  | DISK_GB        |              1.0 |        1 |      893 |        0 |         1 |    893 |
  | PCPU           |              1.0 |        1 |       72 |        0 |         1 |     72 |
  +----------------+------------------+----------+----------+----------+-----------+--------+
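
  Note that placement still advertises a MEMORY_MB total of 193149 with only
  16384 MB reserved, i.e. the 64G hugepage pool is not subtracted from what
  the scheduler treats as available. A common operator workaround, offered
  here as an assumption rather than something proposed in this report, is to
  raise nova's reserved memory on the compute node to cover the hugepage
  pool, e.g. in nova.conf:

  [DEFAULT]
  # Hypothetical value: 64 GiB hugepages + ~4 GiB housekeeping = 69632 MB
  reserved_host_memory_mb = 69632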

  Steps to reproduce
  ==

  1. Reserve a large part of memory for hugepages on the hypervisor.
  2. Create VMs using a flavor that uses a lot of memory that isn't backed by
     hugepages.
  3. Start memory-intensive operations on the VMs, e.g.:
     stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d", $2 * 0.98;}' < /proc/meminfo)k --vm-keep -m 1

  Expected result
  ===

  Nova should not allow overcommitment and should be able to
  differentiate between hugepages and "normal" memory.

  Actual result
  =
  Overcommitment resulting in OOM kills.

  Environment
  ===
  nova-api-metadata 2:21.2.1-0ubuntu1~cloud0
  nova-common 2:21.2.1-0ubuntu1~cloud0
  nova-compute 2:21.2.1-0ubuntu1~cloud0
  nova-compute-kvm 2:21.2.1-0ubuntu1~cloud0
  nova-compute-libvirt 2:21.2.1-0ubuntu1~cloud0
  python3-nova 2:21.2.1-0ubuntu1~cloud0
  python3-novaclient 2:17.0.0-0ubuntu1~cloud0

  OS: Ubuntu 18.04.5 LTS
  Hypervisor: libvirt + KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1950186/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017748] Re: OVN: ovnmeta namespaces missing during scalability test causing DHCP issues

2024-02-15 Thread Seyeong Kim
A customer is hitting a similar issue, although I can't reproduce it in my
local environment. I have prepared a debdiff for yoga.
Our support engineer pointed out patch 2, and it makes sense to backport it.
As the description explains, the problem happens intermittently under high
load; the customer has also only hit it a few times and cannot reproduce it
on demand.

There are two commits inside the debdiff file

[PATCH 1/2] ovn-metadata: Refactor events
[PATCH 2/2] Handle creation of Port_Binding with chassis set

Patch 1 is needed because backporting patch 2 alone results in massive
conflicts.

Releases from 2023.1 onward already include both patches.

** Tags added: sts

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Patch added: "lp2017748_focal_yoga.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/2017748/+attachment/5746530/+files/lp2017748_focal_yoga.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017748

Title:
  OVN:  ovnmeta namespaces missing during scalability test causing DHCP
  issues

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  New

Bug description:
  Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2187650

  During a scalability test it was noted that a few VMs were having
  issues being pinged (2 out of ~5000 VMs in the test conducted). After
  some investigation it was found that the VMs in question did not
  receive a DHCP lease:

  udhcpc: no lease, failing
  FAIL
  checking http://169.254.169.254/2009-04-04/instance-id
  failed 1/20: up 181.90. request failed

  And the ovnmeta- namespaces for the networks that the VMs were booting
  from were missing. Looking into the ovn-metadata-agent.log:

  2023-04-18 06:56:09.864 353474 DEBUG neutron.agent.ovn.metadata.agent
  [-] There is no metadata port for network
  9029c393-5c40-4bf2-beec-27413417eafa or it has no MAC or IP addresses
  configured, tearing the namespace down if needed _get_provision_params
  /usr/lib/python3.9/site-
  packages/neutron/agent/ovn/metadata/agent.py:495

  Apparently, when the system is under stress (scalability tests) there
  are some edge cases where the metadata port information has not yet
  been propagated by OVN to the Southbound database; when the
  PortBindingChassisEvent event is handled and tries to find either the
  metadata port or the IP information on it (which is updated by ML2/OVN
  during subnet creation), it cannot be found and the handler fails
  silently with the error shown above.

  Note that running the same tests with less concurrency did not
  trigger this issue, so it only happens when the system is overloaded.
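
  A rough sketch of the idea behind patch 2 ("Handle creation of
  Port_Binding with chassis set"), assuming the ovsdbapp RowEvent API that
  the OVN metadata agent builds on; class and method names here are
  illustrative, not the actual patch:

  from ovsdbapp.backend.ovs_idl import event as row_event

  class PortBindingCreatedWithChassis(row_event.RowEvent):
      """React to Port_Binding rows that are *created* already bound to
      our chassis, instead of only watching chassis-column updates."""

      def __init__(self, agent, chassis_name):
          self.agent = agent
          self.chassis_name = chassis_name
          super().__init__((self.ROW_CREATE,), 'Port_Binding', None)

      def match_fn(self, event, row, old=None):
          # Under heavy load the row can land in the Southbound DB with
          # the chassis column already set; such creations were missed.
          return bool(row.chassis) and row.chassis[0].name == self.chassis_name

      def run(self, event, row, old):
          # Hypothetical hook: (re)provision the ovnmeta- namespace for
          # this datapath, retrying if the metadata port isn't there yet.
          self.agent.provision_datapath(row.datapath)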

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017748/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp