[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-05-17 Thread Hemanth Nakkina
Verified on UCA xena, wallaby, and victoria; resize works fine on the
dashboard with -proposed.

Tested packages:
xena openstack-dashboard  4:20.1.1-0ubuntu2~cloud0
wallaby  openstack-dashboard  4:19.2.0-0ubuntu1~cloud1
victoria openstack-dashboard  4:18.6.3-0ubuntu1~cloud1

** Tags removed: verification-needed verification-victoria-needed 
verification-wallaby-needed verification-xena-needed
** Tags added: verification-done verification-victoria-done 
verification-wallaby-done verification-xena-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940834

Title:
  Horizon not show flavor details in instance and resize is not possible
  - Flavor ID is not supported by nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1940834/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-04-12 Thread Hemanth Nakkina
Verified on impish; resize works fine on the dashboard with -proposed.

Tested packages:
impish openstack-dashboard 4:20.1.1-0ubuntu2


** Tags removed: verification-needed-impish
** Tags added: verification-done-impish

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-30 Thread Hemanth Nakkina
Verified on focal and UCA ussuri; resize works fine on the dashboard
with -proposed.

Tested packages:
focal  openstack-dashboard 3:18.3.5-0ubuntu2
UCA ussuri openstack-dashboard 3:18.3.5-0ubuntu2~cloud0

** Tags removed: verification-needed-focal verification-ussuri-needed
** Tags added: verification-done-focal verification-ussuri-done

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
Hi SRU team,

Debdiffs for Impish/Focal and UCA xena/wallaby/victoria/ussuri are
attached.

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA ussuri"
   
https://bugs.launchpad.net/horizon/+bug/1940834/+attachment/5572403/+files/lp1940834_ussuri.debdiff

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA victoria"
   
https://bugs.launchpad.net/horizon/+bug/1940834/+attachment/5572402/+files/lp1940834_victoria.debdiff

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA wallaby"
   
https://bugs.launchpad.net/horizon/+bug/1940834/+attachment/5572401/+files/lp1940834_wallaby.debdiff

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA xena"
   
https://bugs.launchpad.net/horizon/+bug/1940834/+attachment/5572399/+files/lp1940834_xena.debdiff

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
** Patch added: "Debdiff for focal"
   
https://bugs.launchpad.net/cloud-archive/+bug/1940834/+attachment/5572398/+files/lp1940834_focal.debdiff

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-24 Thread Hemanth Nakkina
** Patch added: "Debdiff for impish"
   
https://bugs.launchpad.net/cloud-archive/+bug/1940834/+attachment/5572397/+files/lp1940834_impish.debdiff

[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-03-23 Thread Hemanth Nakkina
** Description changed:

  In Horizon on the Wallaby and Victoria releases, some views and
  functions use the ID value from the Flavor part of the instance's JSON.
  The main issue is that when you resize an instance, you receive the
  output below. The Instance detail page is also affected: under Specs,
  the Flavor is "Not available". The all-instances view works fine,
  however, and judging by the instance object and its details, that view
  appears to use different methods based on an older API.
  
  We are running the Wallaby dashboard with the openstack-helm project with nova-api 2.88.
  Nova version:
  {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "http://nova.openstack.svc.cluster.local/v2/"}]}, {"id": "v2.1", "status": "CURRENT", "version": "2.88", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": "self", "href": "http://nova.openstack.svc.cluster.local/v2.1/"}]}]}
  
  For example for resize initialization the log output is:
  
  2021-08-23 12:20:30.308473 Internal Server Error: /project/instances/a872bcc6-0a56-413a-9bea-b27dc006c707/resize
  2021-08-23 12:20:30.308500 Traceback (most recent call last):
  2021-08-23 12:20:30.308503   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/utils/memoized.py", line 107, in wrapped
  2021-08-23 12:20:30.308505 value = cache[key] = cache.pop(key)
  2021-08-23 12:20:30.308507 KeyError: ((,), ())
  2021-08-23 12:20:30.308509
  2021-08-23 12:20:30.308512 During handling of the above exception, another exception occurred:
  2021-08-23 12:20:30.308513
  2021-08-23 12:20:30.308515 Traceback (most recent call last):
  2021-08-23 12:20:30.308517   File "/var/lib/openstack/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
  2021-08-23 12:20:30.308519 response = get_response(request)
  2021-08-23 12:20:30.308521   File "/var/lib/openstack/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
  2021-08-23 12:20:30.308523 response = self.process_exception_by_middleware(e, request)
  2021-08-23 12:20:30.308525   File "/var/lib/openstack/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
  2021-08-23 12:20:30.308527 response = wrapped_callback(request, *callback_args, **callback_kwargs)
  2021-08-23 12:20:30.308529   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/decorators.py", line 52, in dec
  2021-08-23 12:20:30.308531 return view_func(request, *args, **kwargs)
  2021-08-23 12:20:30.308533   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/decorators.py", line 36, in dec
  2021-08-23 12:20:30.308534 return view_func(request, *args, **kwargs)
  2021-08-23 12:20:30.308536   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/decorators.py", line 36, in dec
  2021-08-23 12:20:30.308538 return view_func(request, *args, **kwargs)
  2021-08-23 12:20:30.308540   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/decorators.py", line 112, in dec
  2021-08-23 12:20:30.308542 return view_func(request, *args, **kwargs)
  2021-08-23 12:20:30.308543   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/decorators.py", line 84, in dec
  2021-08-23 12:20:30.308545 return view_func(request, *args, **kwargs)
  2021-08-23 12:20:30.308547   File "/var/lib/openstack/lib/python3.6/site-packages/django/views/generic/base.py", line 71, in view
  2021-08-23 12:20:30.308549 return self.dispatch(request, *args, **kwargs)
  2021-08-23 12:20:30.308551   File "/var/lib/openstack/lib/python3.6/site-packages/django/views/generic/base.py", line 97, in dispatch
  2021-08-23 12:20:30.308553 return handler(request, *args, **kwargs)
  2021-08-23 12:20:30.308554   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/workflows/views.py", line 153, in get
  2021-08-23 12:20:30.308556 context = self.get_context_data(**kwargs)
  2021-08-23 12:20:30.308559   File "/var/lib/openstack/lib/python3.6/site-packages/openstack_dashboard/dashboards/project/instances/views.py", line 597, in get_context_data
  2021-08-23 12:20:30.308561 context = super().get_context_data(**kwargs)
  2021-08-23 12:20:30.308563   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/workflows/views.py", line 91, in get_context_data
  2021-08-23 12:20:30.308565 workflow = self.get_workflow()
  2021-08-23 12:20:30.308567   File "/var/lib/openstack/lib/python3.6/site-packages/horizon/workflows/views.py", line 77, in get_workflow
  2021-08-23 12:20:30.308570 extra_context = self.get_initial()
  2021-08-23 12:20:30.308571   File "/var/lib/openstack/lib/python3.6/site-packages/openstack_dashboard/dashboards/project/instances/views.py", line 638, in get_initial
  2021-08-23 12:20:30.308573 _object = self.get_object()
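The traceback above dies in get_object(), but the underlying incompatibility is that newer compute APIs (microversion 2.47 and later) embed the full flavor details in the server body instead of a flavor ID, so code that assumes flavor["id"] exists breaks. The sketch below illustrates tolerating both shapes; the function name and the flavor_catalog lookup are hypothetical stand-ins, not Horizon's actual code:

```python
def describe_flavor(server, flavor_catalog):
    """Return the flavor name for a server dict, tolerating both nova
    flavor representations.

    Before compute API microversion 2.47 the server body carries only
    {"id": ...} and the name must be looked up separately; from 2.47
    onward nova embeds the full flavor (original_name, vcpus, ram, ...)
    and no longer includes an "id" key.
    """
    flavor = server.get("flavor") or {}
    if "id" in flavor:
        # Old style: resolve the ID against a flavor catalog
        # (stands in for a separate nova flavor-list call).
        match = flavor_catalog.get(flavor["id"])
        return match["name"] if match else "Not available"
    # New style: the name is embedded directly in the server body.
    return flavor.get("original_name", "Not available")
```

Horizon's fix for this bug follows the same idea: fall back to the embedded flavor details when no flavor ID is present instead of failing.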
  

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2022-03-23 Thread Hemanth Nakkina
** Changed in: neutron (Ubuntu Hirsute)
   Status: New => Won't Fix

** Changed in: openvswitch (Ubuntu Hirsute)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928031

Title:
  neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler'
  object has no attribute 'sb_idl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1928031/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883038] Re: Excessive number of ConnectionForced: Too many heartbeats missed in logs

2022-01-12 Thread Hemanth Nakkina
** No longer affects: oslo.messaging (Ubuntu)

** No longer affects: oslo.messaging (Ubuntu Bionic)

** No longer affects: cloud-archive/queens

** No longer affects: cloud-archive/rocky

** Patch added: "Debdiff for UCA train"
   
https://bugs.launchpad.net/cloud-archive/+bug/1883038/+attachment/5553547/+files/lp1883038_train.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883038

Title:
  Excessive number of ConnectionForced: Too many heartbeats missed in
  logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1883038/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883038] Re: Excessive number of ConnectionForced: Too many heartbeats missed in logs

2022-01-12 Thread Hemanth Nakkina
** Description changed:

  We are using Openstack Rocky as well as rabbitmq 3.7.4 in our
  production.
  
  Occasionally I saw many of the following lines in the log:
  
  2020-06-11 02:03:06.753 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:03:21.754 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:03:36.755 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:03:51.756 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:04:06.757 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:04:21.757 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:04:36.758 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  2020-06-11 02:04:51.759 3877409 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: ConnectionForced: Too many heartbeats missed
  
  The heartbeat interval is 60s and the rate is 2. Although it keeps
  complaining about missed heartbeats, the rabbitmq server seems to be
  running fine and messages are received and processed successfully.
+ 
+ ***
+ 
+ SRU Details
+ ---
+ 
+ [Impact]
+ AMQP messages are sometimes dropped, resulting in resource creation errors (this happened twice in a week on one environment).
+ Catching the ConnectionForced AMQP exception and reestablishing the connection immediately remediates the issue.
+ 
+ [Test Case]
+ Reproducing the issue is tricky. The following steps may help reproduce it.
+ 
+ 1. Deploy OpenStack
+ (If the stsstack-bundles project is used, run: ./generate-bundle.sh -s bionic -r stein -n ddmi:stsstack --run)
+ 2. Change heartbeat_timeout_threshold to 20s in nova.conf and restart nova-api
+ On nova-cloud-controller,
+ 
+ [oslo_messaging_rabbit]
+ heartbeat_timeout_threshold = 20
+ 
+ systemctl restart apache2.service
+ 
+ 3. Create and delete instances continuously
+ 
+ ./tools/instance_launch.sh 10 cirros  # command on stsstack-bundles
+ openstack server list -c ID -f value | xargs openstack server delete
+ 
+ 4. On the rabbitmq server, drop packets from nova-api -> rabbitmq and allow them again randomly
+ sudo iptables -A INPUT -p tcp --dport 5672 -s 10.5.1.55 -j DROP
+ sudo iptables -D INPUT 1
+ 
+ 5. Perform steps 3 and 4 until you see the following message in the nova-api log:
+ WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: amqp.exceptions.ConnectionForced: Too many heartbeats missed
+ 
+ 6. Install the fixed python-oslo.messaging package on nova-cloud-controller
+    and restart the apache service.
+ 
+ 7. Perform steps 3 and 4 and check the nova-api log for the following INFO message:
+ INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Too many heartbeats missed
+ 
+ As the above test case is random in nature, as an additional measure
+ continuous integration tests for nova-cloud-controller will be run
+ against the packages in -proposed.
+ 
+ [Regression Potential]
+ I do not foresee any regression potential, as the patch just adds a new exception handler and reconnects to the AMQP server immediately.
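The fix described under [Impact] amounts to treating ConnectionForced as a recoverable error in the heartbeat loop and reconnecting immediately, rather than retrying forever against a dead connection. Here is a self-contained sketch of that pattern; the ConnectionForced and Connection classes are toy stand-ins, not the actual amqp/oslo.messaging code:

```python
class ConnectionForced(Exception):
    """Stands in for amqp.exceptions.ConnectionForced."""

class Connection:
    """Toy AMQP connection used to illustrate the reconnect pattern."""
    def __init__(self):
        self.alive = True
        self.reconnects = 0

    def heartbeat_check(self):
        # Raises when the broker has force-closed the connection
        # after too many missed heartbeats.
        if not self.alive:
            raise ConnectionForced("Too many heartbeats missed")

    def reconnect(self):
        self.alive = True
        self.reconnects += 1

def heartbeat_tick(conn, log):
    """One iteration of the heartbeat thread loop.

    Before the fix, ConnectionForced was only logged and retried, so the
    dead connection was kept and messages were silently dropped. The
    fixed pattern treats it as recoverable and reestablishes the
    connection at once.
    """
    try:
        conn.heartbeat_check()
    except ConnectionForced as exc:
        log.append("A recoverable connection/channel error occurred, "
                   "trying to reconnect: %s" % exc)
        conn.reconnect()
```

With this shape, the next heartbeat tick runs against a fresh connection, which matches the INFO message the test case above looks for.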

[Bug 1883038] Re: Excessive number of ConnectionForced: Too many heartbeats missed in logs

2022-01-12 Thread Hemanth Nakkina
** Also affects: oslo.messaging (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: oslo.messaging (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2022-01-09 Thread Hemanth Nakkina
** Also affects: openvswitch (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2022-01-04 Thread Hemanth Nakkina
Added openvswitch (Ubuntu) to the affected projects.

The fix on the openvswitch side mentioned in #11 is available in 2.13.5
and 2.15.2 upstream.

Ubuntu Focal/UCA Ussuri is on 2.13.3-0ubuntu0.20.04.2 and UCA Victoria is on 2.15.0-0ubuntu3.1.
An SRU is required for Focal and UCA Ussuri/Victoria.

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2022-01-04 Thread Hemanth Nakkina
** Also affects: openvswitch (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: openvswitch (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-11-29 Thread Hemanth Nakkina
** Changed in: charm-openstack-dashboard
 Assignee: (unassigned) => Hemanth Nakkina (hemanth-n)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934129

Title:
  disable neutron-fwaas  for >= victoria

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-guide/+bug/1934129/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2021-11-24 Thread Hemanth Nakkina
** Changed in: neutron (Ubuntu)
   Status: Triaged => Fix Released

[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-10-29 Thread Hemanth Nakkina
** Changed in: charm-helpers
   Status: Fix Committed => Fix Released

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-29 Thread Hemanth Nakkina
Verified the test case on bionic-ussuri and it is successful.

$ sudo dpkg -l | grep glance-store
ii  python3-glance-store  2.0.0-0ubuntu3~cloud0  all  OpenStack Image Service store library - Python 3.x

$ openstack image create --container-format bare --disk-format qcow2 --file /home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| container_format | bare                                                 |
| created_at       | 2021-10-29T10:10:47Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/36aaf632-2a38-4998-b1cf-41915c69a4a4/file |
| id               | 36aaf632-2a38-4998-b1cf-41915c69a4a4                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | test                                                 |
| owner            | 4d2649fe53014a57ba6188c0fea0d48a                     |
| properties       | os_hidden='False',                                   |
|                  | owner_specified.openstack.md5='',                    |
|                  | owner_specified.openstack.object='images/test',      |
|                  | owner_specified.openstack.sha256=''                  |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| status           | queued                                               |
| tags             |                                                      |
| updated_at       | 2021-10-29T10:10:47Z                                 |
| visibility       | shared                                               |
+------------------+------------------------------------------------------+

$ openstack image list
+--------------------------------------+------+--------+
| ID                                   | Name | Status |
+--------------------------------------+------+--------+
| 36aaf632-2a38-4998-b1cf-41915c69a4a4 | test | active |
+--------------------------------------+------+--------+

Created 4 more images in parallel, also successfully:
$ openstack image list
+--------------------------------------+-------+--------+
| ID                                   | Name  | Status |
+--------------------------------------+-------+--------+
| 36aaf632-2a38-4998-b1cf-41915c69a4a4 | test  | active |
| ccd818a6-fb6b-4d87-b072-8f81ea6c78fe | test1 | active |
| 42dc33e0-a693-451d-87da-b8ebad315d0d | test2 | active |
| 0ff81dd0-80f4-4ec9-8248-5128d695c5c3 | test3 | active |
| 25406707-248d-46e0-9044-2b53306854ea | test4 | active |
+--------------------------------------+-------+--------+

** Tags removed: verification-needed verification-ussuri-needed
** Tags added: verification-done verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-29 Thread Hemanth Nakkina
Verified victoria-proposed and the test case is successful.

# dpkg -l | grep octavia
ii  octavia-api             7.1.1-0ubuntu1~cloud1  all  OpenStack Load Balancer as a Service - API frontend
ii  octavia-common          7.1.1-0ubuntu1~cloud1  all  OpenStack Load Balancer as a Service - Common files
ii  octavia-health-manager  7.1.1-0ubuntu1~cloud1  all  OpenStack Load Balancer Service - Health manager
ii  octavia-housekeeping    7.1.1-0ubuntu1~cloud1  all  OpenStack Load Balancer Service - Housekeeping manager
ii  octavia-worker          7.1.1-0ubuntu1~cloud1  all  OpenStack Load Balancer Service - Worker
ii  python3-octavia         7.1.1-0ubuntu1~cloud1  all  OpenStack Load Balancer as a Service - Python libraries
ii  python3-octavia-lib     2.2.0-0ubuntu1~cloud0  all  Library to support Octavia provider drivers

$ openstack loadbalancer listener show lb1-listener -c provisioning_status
+---------------------+--------+
| Field               | Value  |
+---------------------+--------+
| provisioning_status | ACTIVE |
+---------------------+--------+

** Tags removed: verification-needed verification-victoria-needed
** Tags added: verification-done verification-victoria-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-29 Thread Hemanth Nakkina
** Attachment added: "lp1944666_verification"
   
https://bugs.launchpad.net/ubuntu/focal/+source/octavia/+bug/1944666/+attachment/5536918/+files/lp1944666_verification

** Tags removed: verification-needed-focal verification-needed-hirsute 
verification-ussuri-needed verification-wallaby-needed
** Tags added: verification-done-focal verification-done-hirsute 
verification-ussuri-done verification-wallaby-done

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-29 Thread Hemanth Nakkina
Verified focal-proposed, hirsute-proposed, wallaby-proposed, and
ussuri-proposed; the test case works as expected. Attached
lp1944666_verification.

@Corey,
Octavia package is not yet available in victoria-proposed.

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-28 Thread Hemanth Nakkina
** Tags added: verification-ussuri-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-27 Thread Hemanth Nakkina
Verified the test case on focal and it is successful.

Package installed:
$ sudo dpkg -l | grep glance-store
ii  python3-glance-store  2.0.0-0ubuntu3  all  OpenStack Image Service store library - Python 3.x
$ sudo systemctl restart glance-api.service

Create a single image
$ openstack image create --container-format bare --disk-format qcow2 --file 
/home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| container_format | bare                                                 |
| created_at       | 2021-10-28T03:27:03Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/00207cb6-a16d-4aad-80ad-a78fbe37a454/file |
| id               | 00207cb6-a16d-4aad-80ad-a78fbe37a454                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | test                                                 |
| owner            | 5f8dd88e2e2c436cb44098f4f63d0fe8                     |
| properties       | os_hidden='False', owner_specified.openstack.md5='', |
|                  | owner_specified.openstack.object='images/test',      |
|                  | owner_specified.openstack.sha256=''                  |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| status           | queued                                               |
| tags             |                                                      |
| updated_at       | 2021-10-28T03:27:03Z                                 |
| visibility       | shared                                               |
+------------------+------------------------------------------------------+

$ openstack image list
+--------------------------------------+------+--------+
| ID                                   | Name | Status |
+--------------------------------------+------+--------+
| 00207cb6-a16d-4aad-80ad-a78fbe37a454 | test | active |
+--------------------------------------+------+--------+


Created 4 more images in parallel and all were successful
$ openstack image list
+--------------------------------------+-------+--------+
| ID                                   | Name  | Status |
+--------------------------------------+-------+--------+
| 00207cb6-a16d-4aad-80ad-a78fbe37a454 | test  | active |
| 09843e9d-2527-4e38-9c62-e217257cc0c5 | test1 | active |
| 5a577248-30b5-45af-958d-a9a52af46ac6 | test2 | active |
| 8e9e4097-557e-4afc-8e3d-5590d0c90189 | test3 | active |
| 710d3697-b2d3-4259-9028-28bd7609f4c8 | test4 | active |
+--------------------------------------+-------+--------+
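The parallel run above can also be scripted rather than started from five separate consoles. A minimal Python sketch of the pattern, where upload_image is a hypothetical stand-in for the real 'openstack image create' call (not part of any OpenStack SDK):

```python
# Sketch of the concurrent-creation check: five uploads in parallel.
# upload_image is a placeholder, not a real client call.
from concurrent.futures import ThreadPoolExecutor

def upload_image(name):
    # stand-in for: openstack image create ... <name>
    return (name, "active")

names = [f"test{i}" for i in range(1, 6)]
with ThreadPoolExecutor(max_workers=5) as pool:
    results = dict(pool.map(upload_image, names))

# all five images should reach 'active', as in the listing above
print(results)
```

In a real verification the placeholder would shell out to the openstack client; the point is only that the five requests run concurrently, which is the scenario the locking fix addresses.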

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-26 Thread Hemanth Nakkina
@brian-murray I have updated the Regression Potential and modified the
test case to verify some more scenarios. Please take a look.

** Description changed:

  On Ussuri, creation of image with cinder as glance storage backend
  fails.
  
  Reproduction Steps:
  
  1. Deploy cloud environment - focal ussuri
  2. Change the following configuration in /etc/glance/glance-api.conf to setup 
Cinder as Storage backend to Glance.
  
  [DEFAULT]
  enabled_backends = local:file, cinder:cinder
  [glance_store]
  default_backend = cinder
  
  Restart glance-api service
  systemctl restart glance-api.service
  
  3. Upload a cirros image to glance
  openstack image create --container-format bare --disk-format qcow2 --file 
/home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test
  
  The above command throws an exception.
  
  Exception in /var/log/glance/glance-api.log
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder 
[req-282b09b7-3db4-44f7-937c-b7c2fb71453b da9d04a5652c41b98126e1a2b1ce9601 
1c133b846e4e4948873aa3af847c23df - 4d430c7d67e0416a95e38b10aad4fc5f 
4d430c7d67e0416a95e38b10aad4fc5f] Exception while accessing to cinder volume 
7553d4cd-d315-4f97-8abe-f375864fa84a.: TypeError: temporary_chown() got an 
unexpected keyword argument 'backend'
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder Traceback 
(most recent call last):
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3/dist-packages/glance_store/_drivers/cinder.py", line 575, in 
_open_cinder_volume
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder with 
self.temporary_chown(
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3.8/contextlib.py", line 240, in helper
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder return 
_GeneratorContextManager(func, args, kwds)
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3.8/contextlib.py", line 83, in __init__
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder 
self.gen = func(*args, **kwds)
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder TypeError: 
temporary_chown() got an unexpected keyword argument 'backend'
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder
  
  Package used:
  python3-glance-store/focal-updates,now 2.0.0-0ubuntu2 all 
[installed,automatic]
  
  Analysis:
  
  The method signature and the calling method args seem different for function 
temporary_chown
  
https://opendev.org/openstack/glance_store/src/tag/2.0.0/glance_store/_drivers/cinder.py#L458
  
https://opendev.org/openstack/glance_store/src/tag/2.0.0/glance_store/_drivers/cinder.py#L573
  
  This is fixed in upstream as part of 
https://bugs.launchpad.net/glance-store/+bug/1870289
  The bug fix for LP#1870289 needs to be SRU'ed to Ubuntu python3-glance-store 
packages.
  
  ++
  
  [Impact]
  Not able to upload an image to glance
  
  [Test Case]
  1. Deploy cloud with focal ussuri
  2. Configure cinder as glance storage backend
  
  Ensure the following configurations in glance-api.conf
  [DEFAULT]
  enabled_backends = local:file, cinder:cinder
  [glance_store]
  default_backend = cinder
  
  Restart glance-api service
  systemctl restart glance-api.service
  
  3. Upload an image to glance
  openstack image create --container-format bare --disk-format qcow2 --file 
/home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test
  
+ Image creation should be successful.
+ 
+ 4. Repeat step 3 with creation of 5 images concurrently.
+Run the command in step3 in 5 different consoles.
+ 
+ Verify if image creation is successful for all of them.
+ 'openstack image list' should list all the 5 images 
+ 
  [Regression Potential]
- The fix is available in upstream and CI unit and functional tempest test 
cases are successful.
+ The fix enhanced the locking mechanism to support concurrent image creation 
requests in addition to correcting the function signature that caused the 
initial problem. The test case is enhanced to verify concurrent
+ creation of images to avoid any regressions.
+ Also the patch is verified upstream via CI unit test cases and functional 
test cases.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
** Patch added: "lp1948439_focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1948439/+attachment/5535216/+files/lp1948439_focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
** Patch added: "lp1948439_ussuri.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1948439/+attachment/5535217/+files/lp1948439_ussuri.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
Hi SRU team,

Uploaded debdiffs for focal and UCA ussuri.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
The fix for LP#1870289 is available in UCA Victoria and later releases.

** Description changed:

  On Ussuri, creation of image with cinder as glance storage backend
  fails.
  
  Reproduction Steps:
  
  1. Deploy cloud environment - focal ussuri
  2. Change the following configuration in /etc/glance/glance-api.conf to setup 
Cinder as Storage backend to Glance.
  
  [DEFAULT]
  enabled_backends = local:file, cinder:cinder
  [glance_store]
  default_backend = cinder
  
  Restart glance-api service
  systemctl restart glance-api.service
  
  3. Upload a cirros image to glance
  openstack image create --container-format bare --disk-format qcow2 --file 
/home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test
  
  The above command throws an exception.
  
  Exception in /var/log/glance/glance-api.log
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder 
[req-282b09b7-3db4-44f7-937c-b7c2fb71453b da9d04a5652c41b98126e1a2b1ce9601 
1c133b846e4e4948873aa3af847c23df - 4d430c7d67e0416a95e38b10aad4fc5f 
4d430c7d67e0416a95e38b10aad4fc5f] Exception while accessing to cinder volume 
7553d4cd-d315-4f97-8abe-f375864fa84a.: TypeError: temporary_chown() got an 
unexpected keyword argument 'backend'
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder Traceback 
(most recent call last):
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3/dist-packages/glance_store/_drivers/cinder.py", line 575, in 
_open_cinder_volume
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder with 
self.temporary_chown(
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3.8/contextlib.py", line 240, in helper
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder return 
_GeneratorContextManager(func, args, kwds)
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3.8/contextlib.py", line 83, in __init__
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder 
self.gen = func(*args, **kwds)
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder TypeError: 
temporary_chown() got an unexpected keyword argument 'backend'
  2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder
  
- 
  Package used:
  python3-glance-store/focal-updates,now 2.0.0-0ubuntu2 all 
[installed,automatic]
  
  Analysis:
  
  The method signature and the calling method args seem different for function 
temporary_chown
  
https://opendev.org/openstack/glance_store/src/tag/2.0.0/glance_store/_drivers/cinder.py#L458
  
https://opendev.org/openstack/glance_store/src/tag/2.0.0/glance_store/_drivers/cinder.py#L573
  
  This is fixed in upstream as part of 
https://bugs.launchpad.net/glance-store/+bug/1870289
  The bug fix for LP#1870289 needs to be SRU'ed to Ubuntu python3-glance-store 
packages.
+ 
+ ++
+ 
+ [Impact]
+ Not able to upload an image to glance
+ 
+ [Test Case]
+ 1. Deploy cloud with focal ussuri
+ 2. Configure cinder as glance storage backend
+ 
+ Ensure the following configurations in glance-api.conf
+ [DEFAULT]
+ enabled_backends = local:file, cinder:cinder
+ [glance_store]
+ default_backend = cinder
+ 
+ Restart glance-api service
+ systemctl restart glance-api.service
+ 
+ 3. Upload an image to glance
+ openstack image create --container-format bare --disk-format qcow2 --file 
/home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test
+ 
+ [Regression Potential]
+ The fix is available in upstream and CI unit and functional tempest test 
cases are successful.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] Re: Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
** Also affects: python-glance-store (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Changed in: python-glance-store (Ubuntu)
 Assignee: (unassigned) => Hemanth Nakkina (hemanth-n)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948439] [NEW] Not able to create image with cinder as storage backend

2021-10-22 Thread Hemanth Nakkina
Public bug reported:

On Ussuri, creation of image with cinder as glance storage backend
fails.

Reproduction Steps:

1. Deploy cloud environment - focal ussuri
2. Change the following configuration in /etc/glance/glance-api.conf to setup 
Cinder as Storage backend to Glance.

[DEFAULT]
enabled_backends = local:file, cinder:cinder
[glance_store]
default_backend = cinder

Restart glance-api service
systemctl restart glance-api.service

3. Upload a cirros image to glance
openstack image create --container-format bare --disk-format qcow2 --file 
/home/ubuntu/images/cirros-0.4.0-x86_64-disk.img test

The above command throws an exception.

Exception in /var/log/glance/glance-api.log
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder 
[req-282b09b7-3db4-44f7-937c-b7c2fb71453b da9d04a5652c41b98126e1a2b1ce9601 
1c133b846e4e4948873aa3af847c23df - 4d430c7d67e0416a95e38b10aad4fc5f 
4d430c7d67e0416a95e38b10aad4fc5f] Exception while accessing to cinder volume 
7553d4cd-d315-4f97-8abe-f375864fa84a.: TypeError: temporary_chown() got an 
unexpected keyword argument 'backend'
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder Traceback 
(most recent call last):
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3/dist-packages/glance_store/_drivers/cinder.py", line 575, in 
_open_cinder_volume
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder with 
self.temporary_chown(
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3.8/contextlib.py", line 240, in helper
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder return 
_GeneratorContextManager(func, args, kwds)
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder   File 
"/usr/lib/python3.8/contextlib.py", line 83, in __init__
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder self.gen 
= func(*args, **kwds)
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder TypeError: 
temporary_chown() got an unexpected keyword argument 'backend'
2021-10-20 06:34:34.894 279293 ERROR glance_store._drivers.cinder


Package used:
python3-glance-store/focal-updates,now 2.0.0-0ubuntu2 all [installed,automatic]

Analysis:

The method signature and the calling method args seem different for function 
temporary_chown
https://opendev.org/openstack/glance_store/src/tag/2.0.0/glance_store/_drivers/cinder.py#L458
https://opendev.org/openstack/glance_store/src/tag/2.0.0/glance_store/_drivers/cinder.py#L573

This is fixed in upstream as part of 
https://bugs.launchpad.net/glance-store/+bug/1870289
The bug fix for LP#1870289 needs to be SRU'ed to Ubuntu python3-glance-store 
packages.
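The mismatch can be reproduced in isolation. A minimal sketch with simplified signatures (not the actual glance_store code): a contextmanager-decorated method that does not accept a 'backend' keyword is called with one, producing the same TypeError path through contextlib seen in the traceback above.

```python
# Minimal reproduction of the failure mode (simplified, hypothetical class):
# the context manager is declared without a 'backend' parameter, but the
# caller passes one, so contextlib raises TypeError when building the
# generator, as in the glance-api traceback.
from contextlib import contextmanager

class CinderStore:
    @contextmanager
    def temporary_chown(self, path):  # no 'backend' keyword accepted
        yield

store = CinderStore()
try:
    with store.temporary_chown("/dev/sdb", backend="cinder"):
        pass
    message = ""
except TypeError as exc:
    message = str(exc)

print(message)  # ... got an unexpected keyword argument 'backend'
```

The upstream fix aligns the caller with the function signature so the keyword is no longer passed.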

** Affects: python-glance-store (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts

** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948439

Title:
  Not able to create image with cinder as storage backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1948439/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-20 Thread Hemanth Nakkina
** Changed in: octavia (Ubuntu)
 Assignee: (unassigned) => Hemanth Nakkina (hemanth-n)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934912] Re: Router update fails for ports with allowed_address_pairs containg IP range in CIDR notation

2021-10-14 Thread Hemanth Nakkina
** Attachment added: "lp1934912_verification"
   
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1934912/+attachment/5532744/+files/lp1934912_verification

** Tags removed: verification-needed verification-needed-focal 
verification-needed-hirsute verification-ussuri-needed 
verification-victoria-needed verification-wallaby-needed
** Tags added: verification-done verification-done-focal 
verification-done-hirsute verification-ussuri-done verification-victoria-done 
verification-wallaby-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934912

Title:
  Router update fails for ports with allowed_address_pairs containg IP
  range in CIDR  notation

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1934912/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934912] Re: Router update fails for ports with allowed_address_pairs containg IP range in CIDR notation

2021-10-14 Thread Hemanth Nakkina
Verified the test case successfully for Hirsute/Focal and UCA
Wallaby/Victoria/Ussuri. (see attached lp1934912_verification)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934912

Title:
  Router update fails for ports with allowed_address_pairs containg IP
  range in CIDR  notation

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1934912/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-10-13 Thread Hemanth Nakkina
** Changed in: cloud-archive/victoria
   Status: New => Fix Released

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

** Changed in: cloud-archive/xena
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Groovy)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Hirsute)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
Hi SRU team,

Debdiffs for hirsute/focal and UCA wallaby/victoria/ussuri have been uploaded.

** Tags added: sts sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA victoria"
   
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1944666/+attachment/5532420/+files/lp1944666_victoria.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA ussuri"
   
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1944666/+attachment/5532421/+files/lp1944666_ussuri.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA wallaby"
   
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1944666/+attachment/5532419/+files/lp1944666_wallaby.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
** Patch added: "Debdiff for focal"
   
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1944666/+attachment/5532418/+files/lp1944666_focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
** Description changed:

- Corresponding upstream story link:
- https://storyboard.openstack.org/#!/story/2009117
+ 
+ Corresponding upstream story link: 
https://storyboard.openstack.org/#!/story/2009117
  
  Created a loadbalancer and a listener with protocol tcp protocol_port
  1025 and allowed_cidr 0.0.0.0/0, the listener ends up in provisioning
  status as ERROR.
  
  Error message in Octavia worker log
  neutronclient.common.exceptions.Conflict: Security group rule already exists
  
  This is a very edge case only when protocol port is 1025 (same as peer
  port which is hardcoded to constants.HAPROXY_BASE_PEER_PORT i.e, 1025)
  and allowed_cidr is explicitly set to 0.0.0.0/0.
  
  Reproducer:
  openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
  openstack loadbalancer listener create --name lb1-listener --protocol tcp 
--protocol-port 1025 --allowed-cidr 0.0.0.0/0 lb1
  openstack loadbalancer listener show lb1-listener lb1
  
  The culprit is [1] where the allowed_cidr for peer port should handle
  both None and 0.0.0.0/0 as 0.0.0.0/0 is the default value.
  
  Tested on: Ubuntu Focal Ussuri Octavia packages
  
  Fix available in Upstream until stable/train (not part of any point release)
  https://review.opendev.org/c/openstack/octavia/+/804485
  
  [1]
  
https://opendev.org/openstack/octavia/src/commit/b89c929c12fb262f59ba320a37f2a5bf4109df98/octavia/network/drivers/neutron/allowed_address_pairs.py#L150-L178
+ 
+ 
+ 
+ 
+ SRU:
+ 
+ [Impact]
+ Not able to create a Loadbalancer listener
+ 
+ [Test Case]
+ 1. Create a Loadbalancer
+ openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
+ 2. Create a listener
+ openstack loadbalancer listener create --name lb1-listener --protocol tcp 
--protocol-port 1025 --allowed-cidr 0.0.0.0/0 lb1
+ 3. Check listener status
+ openstack loadbalancer listener show lb1-listener lb1
+ Listener is not in active status.
+ 
+ [Regression Potential]
+ This is a simple change and all the CI unit/functional/tempest test cases are 
successful in upstream.
+ The fix can lead to some edge cases where the updated_ports end up in 
duplicate entries. However the updated_ports list is converted to set while 
determining new ports to be added which will discard the duplicates.
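The intent of the fix described in the Regression Potential above can be sketched as follows; the function and variable names are illustrative only, not the actual Octavia code:

```python
# Illustrative sketch: treat an explicitly supplied 0.0.0.0/0 the same as
# the unset default, so the peer-port rule (hardcoded port 1025) is seen as
# a duplicate of the listener rule instead of being requested twice.
DEFAULT_CIDR = "0.0.0.0/0"

def normalize_cidrs(allowed_cidrs):
    # None/empty means "allow all", equivalent to an explicit 0.0.0.0/0
    if not allowed_cidrs:
        return [DEFAULT_CIDR]
    return list(allowed_cidrs)

def rules_to_create(port, allowed_cidrs, existing_rules):
    wanted = {(port, cidr) for cidr in normalize_cidrs(allowed_cidrs)}
    return wanted - existing_rules

# the listener on port 1025 already created a rule for 0.0.0.0/0; the
# peer-port pass (also 1025, cidr unset) must not request the same rule
existing = {(1025, DEFAULT_CIDR)}
print(rules_to_create(1025, None, existing))  # set()
```

Without the normalization step, the peer-port pass computes a "new" rule identical to the existing one, and Neutron rejects it with "Security group rule already exists", which is the error quoted above.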

** Patch added: "Debdiff for hirsute"
   
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1944666/+attachment/5532417/+files/lp1944666_hisute.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-10-13 Thread Hemanth Nakkina
** Changed in: octavia (Ubuntu Impish)
   Status: New => Fix Released

** Changed in: cloud-archive/xena
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946211] Re: [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)

2021-10-07 Thread Hemanth Nakkina
** Tags added: sts sts-sru-needed

** Tags removed: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] Re: listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-09-24 Thread Hemanth Nakkina
** Also affects: octavia (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: octavia (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: octavia (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1943765] Re: ipmitool "timing" flags are not working as expected causing failure to manage power of baremetal nodes

2021-09-22 Thread Hemanth Nakkina
I have verified the ironic-conductor ipmitool command behaviour with the
above PPA from comment #2 (on focal-ussuri).


With use_ipmitool_retries = False configured, ironic-conductor re-runs the command below until the 60-second timeout expires.
Command: ipmitool -I lanplus -H 10.5.0.5: -L ADMINISTRATOR -U test -R 1 -N 5 -f /tmp/tmpmt5292he power status
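The loop-until-deadline behaviour described above (re-running a short ipmitool invocation until an overall timeout expires, rather than relying on ipmitool's own -R/-N retries) can be sketched as follows. This is an illustrative sketch, not ironic's actual implementation; the function name and the stand-in command are assumptions:

```python
import time

def run_until_deadline(attempt, deadline_s, sleep_s=0.0):
    """Re-run `attempt` until it succeeds or `deadline_s` elapses.

    `attempt` returns True on success. This mirrors the behaviour
    described above: the caller loops over short-lived ipmitool
    invocations instead of letting ipmitool retry internally.
    """
    start = time.monotonic()
    while True:
        if attempt():
            return True
        if time.monotonic() - start >= deadline_s:
            return False
        time.sleep(sleep_s)

# Example: an "ipmitool" stand-in that fails twice, then succeeds.
calls = {"n": 0}

def fake_ipmitool():
    calls["n"] += 1
    return calls["n"] >= 3

print(run_until_deadline(fake_ipmitool, deadline_s=5))  # prints True
```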

OpenStack commands used for testing:
/snap/bin/openstack baremetal node create --driver ipmi --driver-info ipmi_address=10.5.0.5: --driver-info ipmi_username=test --driver-info ipmi_password=test
/snap/bin/openstack baremetal node list
/snap/bin/openstack baremetal node power on 

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1943765

Title:
  ipmitool "timing" flags are not working as expected causing failure to
  manage power of baremetal nodes

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ironic-conductor/+bug/1943765/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944666] [NEW] listener provisioning status in ERROR when port is 1025 and allowed_cidr is explicitly set to 0.0.0.0/0

2021-09-22 Thread Hemanth Nakkina
Public bug reported:

Corresponding upstream story link:
https://storyboard.openstack.org/#!/story/2009117

After creating a loadbalancer and a listener with protocol tcp, protocol_port
1025 and allowed_cidr 0.0.0.0/0, the listener ends up with provisioning
status ERROR.

Error message in Octavia worker log
neutronclient.common.exceptions.Conflict: Security group rule already exists

This is an edge case that occurs only when the protocol port is 1025 (the
same as the peer port, which is hardcoded to constants.HAPROXY_BASE_PEER_PORT,
i.e. 1025) and allowed_cidr is explicitly set to 0.0.0.0/0.

Reproducer:
openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet
openstack loadbalancer listener create --name lb1-listener --protocol tcp --protocol-port 1025 --allowed-cidr 0.0.0.0/0 lb1
openstack loadbalancer listener show lb1-listener lb1

The culprit is [1]: the peer-port logic should treat an allowed_cidr of
None and an explicit 0.0.0.0/0 the same way, since 0.0.0.0/0 is the default value.

Tested on: Ubuntu Focal Ussuri Octavia packages

Fix available upstream down to stable/train (not yet part of any point release):
https://review.opendev.org/c/openstack/octavia/+/804485

[1]
https://opendev.org/openstack/octavia/src/commit/b89c929c12fb262f59ba320a37f2a5bf4109df98/octavia/network/drivers/neutron/allowed_address_pairs.py#L150-L178
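A minimal sketch of the kind of normalization the fix needs (hypothetical helper names; the real change lives in allowed_address_pairs.py, linked at [1]): treat an unset allowed_cidrs and an explicit 0.0.0.0/0 identically, so the duplicate peer-port security-group rule is never attempted.

```python
DEFAULT_CIDR = "0.0.0.0/0"

def normalize_allowed_cidrs(allowed_cidrs):
    """Hypothetical helper: None/empty and an explicit 0.0.0.0/0 both
    mean "allow everything", so normalize them to the same set."""
    if not allowed_cidrs:
        return {DEFAULT_CIDR}
    return set(allowed_cidrs)

def needs_peer_port_rule(listener_port, peer_port, allowed_cidrs):
    # If the listener already opens peer_port for the same CIDR set
    # (e.g. both ports are 1025 with 0.0.0.0/0), adding another rule
    # would raise "Security group rule already exists" in neutron.
    return not (listener_port == peer_port
                and normalize_allowed_cidrs(allowed_cidrs) == {DEFAULT_CIDR})

print(needs_peer_port_rule(1025, 1025, ["0.0.0.0/0"]))  # prints False
print(needs_peer_port_rule(8080, 1025, None))           # prints True
```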

** Affects: octavia (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944666

Title:
  listener provisioning status in ERROR when port is 1025 and
  allowed_cidr is explicitly set to 0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1944666/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-09-13 Thread Hemanth Nakkina
** Changed in: charm-helpers
   Status: In Progress => Fix Committed

** Changed in: charm-neutron-gateway
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934129

Title:
  disable neutron-fwaas  for >= victoria

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-guide/+bug/1934129/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1942745] [NEW] htcacheclean service started by default on hirsute

2021-09-06 Thread Hemanth Nakkina
Public bug reported:

Installing apache2 starts only the apache2 service on focal. On hirsute,
however, it starts both the apache2 and apache-htcacheclean
services.

Is this intentional?

This commit seems to trigger the change in behaviour. 
https://git.launchpad.net/ubuntu/+source/apache2/commit/?id=b422d000d4fec1b5f8278c5fc3fe640c0a6e8c39

Adding --no-start to override_dh_installsystemd would restore the old behaviour.
https://git.launchpad.net/ubuntu/+source/apache2/tree/debian/rules#n175
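For illustration, the suggestion above might look roughly like this in debian/rules. This is a sketch, not the actual packaging diff; as written, --no-start would apply to every unit in the package (including apache2.service), so the real fix would likely need to scope it to the htcacheclean unit:

```make
# debian/rules (sketch): enable units on install but do not start them.
# Scoping may be needed, e.g.:
#   dh_installsystemd --no-start apache-htcacheclean.service
override_dh_installsystemd:
	dh_installsystemd --no-start
```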

** Affects: apache2 (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1942745

Title:
  htcacheclean service started by default on hirsute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1942745/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-30 Thread Hemanth Nakkina
** Changed in: neutron (Ubuntu Focal)
   Status: New => Fix Released

** Changed in: cloud-archive/ussuri
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2021-07-07 Thread Hemanth Nakkina
Just for completeness: the patch on the openvswitch side has been merged
down to branch 2.13. Thanks to Bodo Petermann for the ovs patch.

https://patchwork.ozlabs.org/project/openvswitch/patch/20210616103214.35669-1-b.peterm...@syseleven.de/

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928031

Title:
  neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler'
  object has no attribute 'sb_idl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1928031/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
SRU team,
All debdiffs for Ubuntu I/H/G/F and UCA X/W/V/U are uploaded.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA xena"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509624/+files/lp1933092_xena.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA ussuri"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509601/+files/lp1933092_ussuri.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA victoria"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509600/+files/lp1933092_victoria.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for UCA wallaby"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509599/+files/lp1933092_wallaby.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for focal"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509598/+files/lp1933092_focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for groovy"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509597/+files/lp1933092_groovy.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for hirsute"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509596/+files/lp1933092_hirsute.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Patch added: "Debdiff for impish"
   
https://bugs.launchpad.net/cloud-archive/wallaby/+bug/1933092/+attachment/5509595/+files/lp1933092_impish.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-07 Thread Hemanth Nakkina
** Tags added: sts sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-06 Thread Hemanth Nakkina
** Description changed:

- In one of the cloud environment, the FIP attached to the Octavia
- Loadbalancer VIP is not reachable. After analysis, we found the ARP
- entry for SNAT IP is missing in the qrouter namespace where Amphora VM
- is running. And so the return packets are not forwarded from qrouter to
- snat on active l3-agent node.
+ [Impact]
+ Load Balancers deployed on the cloud are unreachable
+ 
+ [Test Case]
+ 1. Deploy openstack with at least 4 compute nodes with networking features DVR SNAT+L3HA
+ 2. Execute the script test_snat_arp_entry.sh
+ 3. The script loops 20 times, creating a network and a router, connecting the router to the external and internal networks, and checking whether ARP entries are populated properly in the qrouter namespaces
+ 4. The script stops if ARP entries are missing.
+ 5. If the script completes all 20 loops, there are no issues.
+ 
+ [Regression Potential]
+ The issue only happens occasionally, when a router is created, its external gateway set, and an internal subnet attached to the router in quick succession. In other cases, the ARP entry for the SNAT IP is already added.
+ The fix just adds extra logic to add the ARP entry by retrieving SNAT information from the router. In the working cases, this extra logic executes the commands to add the ARP entry twice, which should not cause further issues.
+ 
+ [Original Bug Report]
+ In one of the cloud environment, the FIP attached to the Octavia Loadbalancer 
VIP is not reachable. After analysis, we found the ARP entry for SNAT IP is 
missing in the qrouter namespace where Amphora VM is running. And so the return 
packets are not forwarded from qrouter to snat on active l3-agent node.
  
  Version:
  Ubuntu Ussuri packages (16.3.2 point release)
  DVR+SNAT+L3HA enabled
  
  Expectation is to have PERMANENT arp entry for snat ip on qrouter namespace 
on all compute nodes
  192.168.33.238 dev qr-4ee692e0-7a lladdr fa:16:3e:25:6a:73 used 38/38/38 
probes 0 PERMANENT
  
  How to reproduce:
  
  Attaching a script to simulate the problem (without octavia) with following 
steps
  1. network/subnet/router is created, network attached to router
  2. verify if qrouter on all compute nodes has arp entries related to snat ip
  3. if arp entries exists, delete network/subnet/router
  4. Repeat steps 1,2,3 until missing arp entry is observed.
  
  I am able to reproduce missing arp entry sometimes in 3rd loop and
  sometimes in 6th loop.
  
  Observed arp entries for snat ip is updated at the following places [1]
  [2] but get_snat_interfaces() and get_ports_by_subnet() are not updated
  with snat ip in non-working cases.
  
  [1] 
https://opendev.org/openstack/neutron/src/commit/dfd04115b059c2263cdd8ac44ccc2ec47614bcc3/neutron/agent/l3/dvr_local_router.py#L570
  [2] 
https://opendev.org/openstack/neutron/src/commit/dfd04115b059c2263cdd8ac44ccc2ec47614bcc3/neutron/agent/l3/dvr_local_router.py#L317
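The per-node check in step 2 of the reproducer above (is there a PERMANENT ARP entry for the SNAT IP inside the qrouter namespace?) can be sketched as a small parser over `ip neigh` output. The function name and sample output below are illustrative, not part of the attached script:

```python
def has_permanent_arp(ip_neigh_output, snat_ip):
    """Return True if `ip neigh` output (captured from inside the
    qrouter namespace) contains a PERMANENT entry for snat_ip."""
    for line in ip_neigh_output.splitlines():
        fields = line.split()
        if fields and fields[0] == snat_ip and "PERMANENT" in fields:
            return True
    return False

# Sample output, modelled on the expected entry shown above.
sample = (
    "192.168.33.238 dev qr-4ee692e0-7a lladdr fa:16:3e:25:6a:73 PERMANENT\n"
    "192.168.33.1 dev qr-4ee692e0-7a lladdr fa:16:3e:11:22:33 REACHABLE\n"
)
print(has_permanent_arp(sample, "192.168.33.238"))  # prints True
print(has_permanent_arp(sample, "192.168.33.1"))    # prints False
```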

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933092] Re: snat arp entry missing in qrouter namespace

2021-07-06 Thread Hemanth Nakkina
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933092

Title:
  snat arp entry missing in qrouter namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1933092/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2021-06-30 Thread Hemanth Nakkina
** Tags added: ovn

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928031

Title:
  neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler'
  object has no attribute 'sb_idl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1928031/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-21 Thread Hemanth Nakkina
Verified the test case on comment #3 on queens-proposed and is
successful.

$ juju run -a nova-compute-a -- sudo apt-cache policy nova-common
nova-common:
  Installed: 2:17.0.13-0ubuntu2~cloud0
  Candidate: 2:17.0.13-0ubuntu2~cloud0
  Version table:
 *** 2:17.0.13-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-proposed/queens/main amd64 Packages
100 /var/lib/dpkg/status
 2:17.0.13-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/queens/main amd64 Packages
 2:13.1.4-0ubuntu4.5 500
500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 2:13.0.0-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages


$ openstack server list
+--+++-++---+
| ID   | Name   | Status | Networks 
   | Image  | Flavor|
+--+++-++---+
| d1c4f05c-be09-41f6-be3e-370a1a32cf83 | sriov-test | ACTIVE | 
sriov_net=10.230.58.120 | bionic | m1.medium |
+--+++-++---+

** Tags removed: verification-needed verification-queens-needed
** Tags added: verification-done verification-queens-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-06-20 Thread Hemanth Nakkina
** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-06-20 Thread Hemanth Nakkina
UCA Ussuri is released to ussuri-updates in package
2:16.3.2-0ubuntu3~cloud0, so marking the status as Fix released for UCA
Ussuri

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-17 Thread Hemanth Nakkina
** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-17 Thread Hemanth Nakkina
Verified the test case on comment #3 on bionic-proposed and is
successful.

$ juju run -a nova-compute-a -- sudo apt-cache policy nova-common
nova-common:
  Installed: 2:17.0.13-0ubuntu2
  Candidate: 2:17.0.13-0ubuntu2
  Version table:
 *** 2:17.0.13-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 2:17.0.13-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
 2:17.0.10-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu bionic-security/main amd64 Packages
 2:17.0.1-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

$ openstack server list
+--+---++-++---+
| ID   | Name  | Status | Networks  
  | Image  | Flavor|
+--+---++-++---+
| e95848e4-a80d-4c01-ae77-b9f3cd30fb01 | bionic-113943 | ACTIVE | 
sriov_net=10.230.58.105 | bionic | m1.medium |
+--+---++-++---+

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-17 Thread Hemanth Nakkina
Verified the test case on comment #3 on bionic-rocky and is successful.

$ juju run -a nova-compute-a -- sudo apt-cache policy nova-common
nova-common:
  Installed: 2:18.3.0-0ubuntu1~cloud2
  Candidate: 2:18.3.0-0ubuntu1~cloud2
  Version table:
 *** 2:18.3.0-0ubuntu1~cloud2 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/rocky/main amd64 Packages
100 /var/lib/dpkg/status
 2:18.3.0-0ubuntu1~cloud1 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/rocky/main amd64 Packages
 2:17.0.13-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
 2:17.0.10-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu bionic-security/main amd64 Packages
 2:17.0.1-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

$ openstack server list
+--+++-++---+
| ID   | Name   | Status | Networks 
   | Image  | Flavor|
+--+++-++---+
| 176d355b-fb07-4329-8ecc-284e62f72cf6 | sriov-test | ACTIVE | 
sriov_net=10.230.58.101 | bionic | m1.medium |
+--+++-++---+

** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-16 Thread Hemanth Nakkina
Hi Robie

It is intentional: backporting the functional test would require backporting
several other patches as well, and the number of patches needed for a clean
backport keeps growing. This was discussed during the upstream patch review
as well, here:
https://review.opendev.org/c/openstack/nova/+/761824/2//COMMIT_MSG

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2021-06-16 Thread Hemanth Nakkina
Bodo Petermann, thanks for pointing the scenario.

Could you please share the link if you have an issue/bug raised against
OVS.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928031

Title:
  neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler'
  object has no attribute 'sb_idl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1928031/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2021-06-16 Thread Hemanth Nakkina
After looking into the errors in the case description and the logs in
comment #4, both seem to fail while waiting for the sb_idl object.

AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'
AttributeError: 'MetadataAgent' object has no attribute 'sb_idl'
 
One failure is in the MetadataAgent process and the other in the forked
metadata proxy process.

There is no Timeout exception the second time because patch [1] is applied,
which retries the connection without a timeout. However, in both cases we can
see log lines like the one below, indicating the SSL connection to the OVSDB is successful.
INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:10.216.241.118:6642: connected

The agent seems to be stuck at wait_for_change [2].

On the neutron-server side, these scenarios are handled by retry logic when
getting IDL objects and by waiting for post-fork events; see [3], [4].
Similar logic is required for neutron-ovn-metadata-agent as well. I will
submit a patch shortly for review.

[1] https://review.opendev.org/c/openstack/neutron/+/788596
[2] 
https://opendev.org/openstack/neutron/src/commit/87f7abb86cad13c8bc04b4e6165600ee6fd9ef7c/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L53-L56
[3] 
https://opendev.org/openstack/neutron/src/commit/87f7abb86cad13c8bc04b4e6165600ee6fd9ef7c/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L222-L226
[4] https://review.opendev.org/c/openstack/neutron/+/781555
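The retry pattern referenced in [3]/[4] — keep attempting the OVSDB connection until an IDL object is actually obtained, instead of raising AttributeError later because sb_idl was never assigned — can be sketched generically. The names below are illustrative, not neutron's API:

```python
import time

def get_idl_with_retries(connect, attempts=3, delay_s=0.0):
    """Call `connect` until it returns an IDL object, retrying on
    failure; re-raise the last error if all attempts fail."""
    last_exc = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # neutron catches narrower exceptions
            last_exc = exc
            time.sleep(delay_s)
    raise last_exc

# Example: a connection that fails once, then succeeds.
state = {"tries": 0}

def flaky_connect():
    state["tries"] += 1
    if state["tries"] < 2:
        raise RuntimeError("ovsdb not ready")
    return "sb_idl"

print(get_idl_with_retries(flaky_connect))  # prints sb_idl
```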

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hemanth Nakkina (hemanth-n)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928031

Title:
  neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler'
  object has no attribute 'sb_idl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1928031/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-14 Thread Hemanth Nakkina
Verified on wallaby-proposed and the test case is successful.

$ juju run -a neutron-api -- sudo apt-cache policy neutron-common
neutron-common:
  Installed: 2:18.0.0-0ubuntu2.1~cloud0
  Candidate: 2:18.0.0-0ubuntu2.1~cloud0
  Version table:
 *** 2:18.0.0-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/wallaby/main amd64 Packages
100 /var/lib/dpkg/status
 2:18.0.0-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-updates/wallaby/main amd64 Packages
 2:16.3.2-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
 2:16.0.0~b3~git2020041516.5f42488a9a-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages


Launching a VM with 2 SR-IOV ports is successful.

$ openstack server list
+--+++++---+
| ID   | Name   | Status | Networks 
  | Image  | Flavor|
+--+++++---+
| 53d397bf-8233-4852-a5c0-c27835cede67 | sriov-test | ACTIVE | 
sriov_net=10.230.58.173, 10.230.58.121 | bionic | m1.medium |
+--+++++---+


** Tags removed: verification-needed
** Tags added: verification-done verification-wallaby-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-14 Thread Hemanth Nakkina
SRU team,
As per comment #14, the fix should be available in 
cloud-archive:wallaby-proposed

But I don't see a new neutron package to upgrade to when wallaby-proposed is
enabled.

# cat /etc/apt/sources.list.d/cloudarchive-wallaby-proposed.list 
deb http://ubuntu-cloud.archive.canonical.com/ubuntu focal-proposed/wallaby main
  
# apt list --installed | grep neutron

WARNING: apt does not have a stable CLI interface. Use with caution in
scripts.

neutron-common/focal-updates,focal-proposed,now 2:18.0.0-0ubuntu2~cloud0 all [installed]
neutron-fwaas-common/focal-updates,now 1:16.0.0-0ubuntu0.20.04.1 all [installed,automatic]
neutron-plugin-ml2/focal-updates,focal-proposed,now 2:18.0.0-0ubuntu2~cloud0 all [installed,automatic]
neutron-server/focal-updates,focal-proposed,now 2:18.0.0-0ubuntu2~cloud0 all [installed]
python3-neutron-dynamic-routing/focal-updates,focal-proposed,now 2:18.0.0-0ubuntu1~cloud0 all [installed]
python3-neutron-fwaas/focal-updates,now 1:16.0.0-0ubuntu0.20.04.1 all [installed]
python3-neutron-lib/focal-updates,focal-proposed,now 2.10.1-0ubuntu1~cloud0 all [installed,automatic]
python3-neutron/focal-updates,focal-proposed,now 2:18.0.0-0ubuntu2~cloud0 all [installed]
python3-neutronclient/focal-updates,focal-proposed,now 1:7.2.1-0ubuntu1~cloud0 all [installed,automatic]


Also, neutron-common in
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/focal-proposed/wallaby/main/binary-arm64/Packages
refers to 2:18.0.0-0ubuntu2~cloud0 (which is the same version as
focal-updates/wallaby).

Could you please cross-check whether a new package has been released for
SRU verification.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-14 Thread Hemanth Nakkina
Verified the test case from comment #3 on bionic-stein and it is successful.

$ juju run -a nova-compute-a -- sudo apt-cache policy nova-common
nova-common:
  Installed: 2:19.3.2-0ubuntu1~cloud1
  Candidate: 2:19.3.2-0ubuntu1~cloud1
  Version table:
 *** 2:19.3.2-0ubuntu1~cloud1 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein/main amd64 Packages
100 /var/lib/dpkg/status
 2:19.3.2-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein/main amd64 Packages
 2:17.0.13-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
 2:17.0.10-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu bionic-security/main amd64 Packages
 2:17.0.1-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

$ openstack server list
+--------------------------------------+---------------+--------+-------------------------+--------+-----------+
| ID                                   | Name          | Status | Networks                | Image  | Flavor    |
+--------------------------------------+---------------+--------+-------------------------+--------+-----------+
| 896b6519-e796-4f4f-9a7c-fffcccb0fce6 | bionic-061754 | ACTIVE | sriov_net=10.230.58.185 | bionic | m1.medium |
+--------------------------------------+---------------+--------+-------------------------+--------+-----------+

** Tags removed: verification-stein-needed
** Tags added: verification-stein-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-13 Thread Hemanth Nakkina
Verified on hirsute-proposed and the test case is successful

$ juju run -a neutron-api -- sudo apt-cache policy neutron-common
neutron-common:
  Installed: 2:18.0.0-0ubuntu2.1
  Candidate: 2:18.0.0-0ubuntu2.1
  Version table:
 *** 2:18.0.0-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu hirsute-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 2:18.0.0-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu hirsute/main amd64 Packages

Created VM with 2 SRIOV ports
$ openstack server list
+--------------------------------------+-------------+--------+----------------------------------------+--------+-----------+
| ID                                   | Name        | Status | Networks                               | Image  | Flavor    |
+--------------------------------------+-------------+--------+----------------------------------------+--------+-----------+
| 590b45a3-3d93-44cf-b8a5-0ece109c608e | sriov-test2 | ACTIVE | sriov_net=10.230.58.156, 10.230.58.170 | bionic | m1.medium |
+--------------------------------------+-------------+--------+----------------------------------------+--------+-----------+

** Tags removed: verification-needed-hirsute
** Tags added: verification-done-hirsute

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-11 Thread Hemanth Nakkina
Verified on focal-proposed and the test case is successful

$ juju run -a neutron-api -- sudo apt-cache policy neutron-common
neutron-common:
  Installed: 2:16.3.2-0ubuntu3
  Candidate: 2:16.3.2-0ubuntu3
  Version table:
 *** 2:16.3.2-0ubuntu3 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 2:16.3.2-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
 2:16.0.0~b3~git2020041516.5f42488a9a-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

Created VM with 2 SRIOV ports

$ openstack server list --long
+--------------------------------------+-------------+--------+------------+-------------+----------------------------------------+------------+--------------------------------------+-------------+-----------+-------------------+----------------+------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                               | Image Name | Image ID                             | Flavor Name | Flavor ID | Availability Zone | Host           | Properties |
+--------------------------------------+-------------+--------+------------+-------------+----------------------------------------+------------+--------------------------------------+-------------+-----------+-------------------+----------------+------------+
| 0f0c5104-cda8-4b84-95b0-8a713e8a1db6 | sriov-test1 | ACTIVE | None       | Running     | sriov_net=10.230.58.157, 10.230.58.133 | bionic     | 17cca127-b912-444d-bc9a-5e4cf48156b3 | m1.medium   | 3         | nova              | test.test.test |            |
+--------------------------------------+-------------+--------+------------+-------------+----------------------------------------+------------+--------------------------------------+-------------+-----------+-------------------+----------------+------------+

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-07 Thread Hemanth Nakkina
** Changed in: nova/rocky
   Status: In Progress => Fix Committed

** Changed in: nova/queens
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1928031] Re: neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

2021-06-07 Thread Hemanth Nakkina
The errors mentioned in the bug description are from the MetadataAgentProxy
process, but the one mentioned in comment #4 is from the parent MetadataAgent
process itself.

ovsdb.MetadataAgentOvnSbIdl().start() seems to have never returned, so sb_idl
was not initialised [1].
However, from the logs we can see that the MetadataAgent is connected to the
OVS DB server, and the function in which the error happened is triggered by
MetadataAgentOvnSbIdl handling OVSDB SB update events.

This needs to be looked into further in the IDL code.

[1]
https://opendev.org/openstack/neutron/src/commit/1e8197fee5031ee7ba384eb537b13f381a837685/neutron/agent/ovn/metadata/agent.py#L241-L248
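The failure mode can be sketched in a few lines. This is a hypothetical minimal reproduction (class and function names are illustrative, not the actual neutron code): the attribute is assigned only on the success path of connection setup, so any handler that runs after a failed or hung start() hits AttributeError rather than a clear "not connected" error.

```python
# Hypothetical sketch of the pattern described above: `sb_idl` is
# assigned only after the OVSDB connection is established, so if the
# connection start times out, later event handlers raise AttributeError.

class ProxyHandlerSketch:
    def post_fork_initialize(self, connect):
        idl = connect()      # may raise (e.g. connection timeout)
        self.sb_idl = idl    # only reached on the success path

    def handle_sb_update(self):
        # Raises AttributeError if initialisation never completed.
        return self.sb_idl


def timed_out_connect():
    raise TimeoutError("OVSDB connection timed out")


handler = ProxyHandlerSketch()
try:
    handler.post_fork_initialize(timed_out_connect)
except TimeoutError:
    pass  # initialisation failed; sb_idl was never set

try:
    handler.handle_sb_update()
except AttributeError as exc:
    print(exc)  # ... object has no attribute 'sb_idl'
```

A guard such as initialising `self.sb_idl = None` in `__init__`, or checking connection state before handling events, would turn this into an explicit "not connected" error instead.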

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928031

Title:
  neutron-ovn-metadata-agent AttributeError: 'MetadataProxyHandler'
  object has no attribute 'sb_idl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1928031/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-05-25 Thread Hemanth Nakkina
Verified the test case on bionic-ussuri; it works with the package in
cloud-archive:ussuri-proposed.

Deleted the floating IP agent gateway on one of the compute nodes, then
launched a new VM on that compute node and assigned a FIP. Able to ping
the floating IP.

$ ping -c 1 10.5.153.114
PING 10.5.153.114 (10.5.153.114) 56(84) bytes of data.
64 bytes from 10.5.153.114: icmp_seq=1 ttl=62 time=3.66 ms

--- 10.5.153.114 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.662/3.662/3.662/0.000 ms

** Tags removed: verification-needed verification-ussuri-needed
** Tags added: verification-done verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-20 Thread Hemanth Nakkina
** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1849098

Title:
  ovs agent is stuck with OVSFWTagNotFound when dealing with unbound
  port

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1849098/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-05-17 Thread Hemanth Nakkina
Tested on focal with neutron package 2:16.3.1-0ubuntu1.1 and the test
case is successful

* Deployed environment with dvr l3ha and centralised neutron snat
gateways.

* The floating IP agent gateway exists on all nova-compute nodes (4) and
neutron-gateway nodes (3) after launching VMs on all compute nodes
$ openstack port list --network ext_net -c id -c device_id -c binding_host_id -c device_owner -c fixed_ips | grep floatingip_agent_gateway | wc -l
7

* Deleted one of the floating IP agent gateway ports
$ openstack port list --network ext_net -c id -c device_id -c binding_host_id -c device_owner -c fixed_ips | grep floatingip_agent_gateway | wc -l
6

* Launched a VM on the node where the gateway port was deleted. The floating
IP agent gateway came back on that node
$ openstack port list --network ext_net -c id -c device_id -c binding_host_id -c device_owner -c fixed_ips | grep floatingip_agent_gateway | wc -l
7

* Ping to the floating IP is successful
$ ping -c 1 10.5.151.84
PING 10.5.151.84 (10.5.151.84) 56(84) bytes of data.
64 bytes from 10.5.151.84: icmp_seq=1 ttl=62 time=293 ms

--- 10.5.151.84 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 292.825/292.825/292.825/0.000 ms

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1887405] Re: Race condition while processing security_groups_member_updated events (ipset)

2021-05-16 Thread Hemanth Nakkina
The fix is available in the Ubuntu neutron packages in Hirsute, Groovy,
Focal and UCA Victoria, Ussuri. Marking them as Fix Released.

** Changed in: neutron (Ubuntu Hirsute)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Groovy)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Focal)
   Status: New => Fix Released

** Changed in: cloud-archive/victoria
   Status: New => Fix Released

** Changed in: cloud-archive/ussuri
   Status: New => Fix Released

** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1887405

Title:
  Race condition while processing security_groups_member_updated events
  (ipset)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1887405/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1907686] Re: ovn: instance unable to retrieve metadata

2021-05-11 Thread Hemanth Nakkina
Another customer hit this bug; waiting for the release in UCA ussuri.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907686

Title:
  ovn: instance unable to retrieve metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1895727] Re: OpenSSL.SSL.SysCallError: (111, 'ECONNREFUSED') and Connection thread stops

2021-05-07 Thread Hemanth Nakkina
Verified the test case on 4 environments and installed the new packages
and restarted neutron-server service. The connections towards OVSDB are
reconnected and VMs are launched without any issues

* bionic ussuri  (ussuri-proposed)
* focal  (focal-proposed)
* focal victoria (victoria-proposed)
* groovy (groovy-proposed)

** Tags removed: verification-needed verification-needed-focal 
verification-needed-groovy verification-ussuri-needed 
verification-victoria-needed
** Tags added: verification-done verification-done-focal 
verification-done-groovy verification-ussuri-done verification-victoria-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1895727

Title:
  OpenSSL.SSL.SysCallError: (111, 'ECONNREFUSED') and Connection thread
  stops

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1895727/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host

2021-05-06 Thread Hemanth Nakkina
The test case is verified on xenial-queens with queens-proposed packages
and the external access to VM is restored once the router is re-enabled.

** Tags removed: verification-needed verification-queens-needed
** Tags added: verification-done verification-queens-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894843

Title:
  [dvr_snat] Router update deletes rfp interface from qrouter even when
  VM port is present on this host

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1894843/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host

2021-05-06 Thread Hemanth Nakkina
The test case was verified on bionic-proposed, and external access to the
VM is restored once the router is re-enabled.

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894843

Title:
  [dvr_snat] Router update deletes rfp interface from qrouter even when
  VM port is present on this host

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1894843/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1850779] Re: [L3] snat-ns will be initialized twice for DVR+HA routers during agent restart

2021-05-06 Thread Hemanth Nakkina
The test case was verified successfully on bionic-proposed; the sysctl
executions are done only once during neutron-l3-agent restart.

** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1850779

Title:
  [L3] snat-ns will be initialized twice for DVR+HA routers during agent
  restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1850779/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1907686] Re: ovn: instance unable to retrieve metadata

2021-05-05 Thread Hemanth Nakkina
Typo in #26, tested in ussuri-proposed (not bionic-proposed)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907686

Title:
  ovn: instance unable to retrieve metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1907686] Re: ovn: instance unable to retrieve metadata

2021-05-05 Thread Hemanth Nakkina
After applying the fix in bionic-proposed, only one compute node exhibited
the logs mentioned in #5:

2021-01-12 06:48:49.848 52569 ERROR neutron.agent.ovn.metadata.server [-] Unexpected error.: AttributeError: 'MetadataProxyHandler' object has no attribute 'sb_idl'

sb_idl was never initialised because the connection timed out during the
initialisation phase.

See error log:

2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn [-] OVS database connection to OVN_Southbound failed with error: 'Timeout'. Verify that the OVS and OVN services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are correct.: Exception: Timeout
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn Traceback (most recent call last):
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 67, in start_connection
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn     self.ovsdb_connection.start()
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 79, in start
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn     idlutils.wait_for_change(self.idl, self.timeout)
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 173, in wait_for_change
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn     raise Exception("Timeout")  # TODO(twilson) use TimeoutException?
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn Exception: Timeout
2021-04-15 22:30:08.542 69188 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager [-] Error during notification for neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-476074 process, after_init: neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn.OvsdbConnectionUnavailable: OVS database connection to OVN_Southbound failed with error: 'Timeout'. Verify that the OVS and OVN services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are correct.
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager Traceback (most recent call last):
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 67, in start_connection
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager     self.ovsdb_connection.start()
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 79, in start
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager     idlutils.wait_for_change(self.idl, self.timeout)
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 173, in wait_for_change
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager     raise Exception("Timeout")  # TODO(twilson) use TimeoutException?
2021-04-15 22:30:08.544 69188 ERROR neutron_lib.callbacks.manager Exception: Timeout

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907686

Title:
  ovn: instance unable to retrieve metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-05 Thread Hemanth Nakkina
Verified the test case with bionic-proposed (neutron 2:12.1.1-0ubuntu7)
and the test case works fine

$ openstack server list
+--------------------------------------+-------------------------+--------+-----------------------+--------+-----------+
| ID                                   | Name                    | Status | Networks              | Image  | Flavor    |
+--------------------------------------+-------------------------+--------+-----------------------+--------+-----------+
| c6ead240-8952-49ab-8f4f-7c3ed2007af9 | testvm-after-fix        | ACTIVE | private=192.168.21.4  | cirros | m1.cirros |
| 52688c3f-00b9-4ae8-bc08-0f87265e8bb3 | testvm-after-tagcomment | ERROR  |                       | cirros | m1.cirros |
| 5f23c575-94a5-48f3-b6ef-d0d9f6f2f7d4 | cirros-110548           | ACTIVE | private=192.168.21.10 | cirros | m1.cirros |
+--------------------------------------+-------------------------+--------+-----------------------+--------+-----------+

cirros-110548: VM launched after deployment
testvm-after-tagcomment: VM launched after changing the code as mentioned in the
test case; the VM is in ERROR state and the logs show tag errors
testvm-after-fix: VM launched after the neutron package was upgraded to the
one in bionic-proposed.


** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1849098

Title:
  ovs agent is stuck with OVSFWTagNotFound when dealing with unbound
  port

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1849098/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1850779] Re: [L3] snat-ns will be initialized twice for DVR+HA routers during agent restart

2021-05-04 Thread Hemanth Nakkina
@abaindur

The description in comment #18 sounds to me more like
https://bugs.launchpad.net/neutron/+bug/1894843, which is already fixed in
stable/queens upstream.

Here is the commit:
https://opendev.org/openstack/neutron/commit/8f3daf3f9892cd691dd52965f0fa4eaa07ac3788

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1850779

Title:
  [L3] snat-ns will be initialized twice for DVR+HA routers during agent
  restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1850779/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-04-26 Thread Hemanth Nakkina
SRU Team,
 
The fix has 2 commits (stable/ussuri referenced below):
https://review.opendev.org/c/openstack/neutron/+/779614
https://review.opendev.org/c/openstack/neutron/+/779613

779614 is already part of focal (latest ussuri stable point release on Apr 12).
Uploaded a debdiff with the changes from 779613.

** Changed in: neutron (Ubuntu Focal)
 Assignee: (unassigned) => Hemanth Nakkina (hemanth-n)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-04-26 Thread Hemanth Nakkina
** Description changed:

  In patch [1] it introduced a binding of DB uniq constraint for L3
  agent gateway. In some extreme case the DvrFipGatewayPortAgentBinding
  is in DB while the gateway port not. The current code path only checks
  the binding existence which will pass a "None" port to the following
  code path that results an AttributeError.
  
  [1] https://review.opendev.org/#/c/702547/
- 
  
  Exception log:
  
  2020-06-11 15:39:28.361 1285214 INFO neutron.db.l3_dvr_db [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway 
port for network 3fcb7702-ae0b-46b4-807f-8ae94d656dd3 does not exist on host 
host-compute-1. Creating one.
  2020-06-11 15:39:28.370 1285214 DEBUG neutron.db.l3_dvr_db [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway 
port for network 3fcb7702-ae0b-46b4-807f-8ae94d656dd3 already exists on host 
host-compute-1. Probably it was just created by other worker. 
create_fip_agent_gw_port_if_not_exists 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:927
  2020-06-11 15:39:28.390 1285214 DEBUG neutron.db.l3_dvr_db [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway 
port None found for the destination host: host-compute-1 
create_fip_agent_gw_port_if_not_exists 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:933
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Exception during message 
handling: AttributeError: 'NoneType' object has no attribute 'get'
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 170, in 
_process_incoming
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, 
in dispatch
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, 
in _do_dispatch
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 91, in wrapped
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
setattr(e, '_RETRY_EXCEEDED', True)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 87, in wrapped
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
ectxt.value = e.inner_exc
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 135, in wrapper
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 126, in wrapped
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
LOG.debug("Retry wrapper got retriable exception: %s", e)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, 

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-04-26 Thread Hemanth Nakkina
** Also affects: neutron (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Impish)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Hirsute)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Groovy)
   Status: New => Fix Released

** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-04-26 Thread Hemanth Nakkina
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1883089/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
