[Bug 2064717] Re: ceph-volume needs "packaging" module

2024-05-16 Thread Felipe Reyes
** Changed in: charm-ceph-osd
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064717

Title:
  ceph-volume needs "packaging" module

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-osd/+bug/2064717/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2064717] Re: ceph-volume needs "packaging" module

2024-05-16 Thread Felipe Reyes
** Changed in: charm-ceph-osd/reef
 Assignee: (unassigned) => Felipe Reyes (freyes)


[Bug 2064717] Re: ceph-volume needs "packaging" module

2024-05-16 Thread Felipe Reyes
** Also affects: charm-ceph-osd
   Importance: Undecided
   Status: New

** Also affects: charm-ceph-osd/reef
   Importance: Undecided
   Status: New

** Changed in: charm-ceph-osd
   Status: New => Fix Committed

** Changed in: charm-ceph-osd
 Assignee: (unassigned) => Peter Sabaini (peter-sabaini)


[Bug 2064190] Re: Failed to pre-delete resources for cluster XXXX

2024-05-02 Thread Felipe Reyes
** Changed in: cloud-archive/victoria
   Status: New => Won't Fix

** Changed in: cloud-archive/wallaby
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064190

Title:
  Failed to pre-delete resources for cluster 

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2064190/+subscriptions



[Bug 2064190] [NEW] Failed to pre-delete resources for cluster XXXX

2024-04-29 Thread Felipe Reyes
Public bug reported:

[Impact]

When deleting a (Kubernetes) cluster whose associated load balancer has
already been deleted, Heat returns a 404 error, which raises a
heatclient.exc.HTTPNotFound exception.

Full stack trace:

[req-7a8d257d-9cca-4d2d-b108-de8f33f60ae1 - - - - -] Exception during message 
handling: magnum.common.exception.PreDeletionFailed: Failed to pre-delete 
resources for cluster 6d553e2f-74bb-4dbe-9fd1-1a123b76530b, error: ERROR: The 
Stack (37032d90-66f1-41dd-b584-67d10f438bd9) could not be found..
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/magnum/common/octavia.py", line 109, in 
delete_loadbalancers
lb_resources = heat_client.resources.list(
  File "/usr/lib/python3/dist-packages/heatclient/v1/resources.py", line 70, in 
list
return self._list(url, "resources")
  File "/usr/lib/python3/dist-packages/heatclient/common/base.py", line 114, in 
_list
body = self.client.get(url).json()
  File "/usr/lib/python3/dist-packages/heatclient/common/http.py", line 289, in 
get
return self.client_request("GET", url, **kwargs)
  File "/usr/lib/python3/dist-packages/heatclient/common/http.py", line 282, in 
client_request
resp, body = self.json_request(method, url, **kwargs)
  File "/usr/lib/python3/dist-packages/heatclient/common/http.py", line 271, in 
json_request
resp = self._http_request(url, method, **kwargs)
  File "/usr/lib/python3/dist-packages/heatclient/common/http.py", line 234, in 
_http_request
raise exc.from_response(resp)
heatclient.exc.HTTPNotFound: ERROR: The Stack 
(37032d90-66f1-41dd-b584-67d10f438bd9) could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, 
in _process_incoming
res = self.dispatcher.dispatch(message)
  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
276, in dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
196, in _do_dispatch
result = func(ctxt, **new_args)
  File 
"/usr/lib/python3/dist-packages/magnum/conductor/handlers/cluster_conductor.py",
 line 191, in cluster_delete
cluster_driver.delete_cluster(context, cluster)
  File "/usr/lib/python3/dist-packages/magnum/drivers/heat/driver.py", line 
162, in delete_cluster
self.pre_delete_cluster(context, cluster)
  File "/usr/lib/python3/dist-packages/magnum/drivers/heat/driver.py", line 
307, in pre_delete_cluster
octavia.delete_loadbalancers(context, cluster)
  File "/usr/lib/python3/dist-packages/magnum/common/octavia.py", line 130, in 
delete_loadbalancers
raise exception.PreDeletionFailed(cluster_uuid=cluster.uuid,
magnum.common.exception.PreDeletionFailed: Failed to pre-delete resources for 
cluster 6d553e2f-74bb-4dbe-9fd1-1a123b76530b, error: ERROR: The Stack 
(37032d90-66f1-41dd-b584-67d10f438bd9) could not be found..
2024-03-12 14:38:35.878 3553570 ERROR oslo_messaging.rpc.server

[ Test Plan ]

TBD

[ Where problems could occur ]

TBD

[Other Info]

This issue has been fixed in Magnum by commit
https://opendev.org/openstack/magnum/commit/4888f706c8a0280971df398cbc1ff06ad5d63e7f
( https://review.opendev.org/c/openstack/magnum/+/818563 ). The fix was
released in Magnum 14.0.0 (Yoga) and backported to 13.1.0 (Xena);
although it was also backported to the stable/wallaby branch (
https://review.opendev.org/c/openstack/magnum/+/820334 ), no release
was cut after it merged.
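The behavior described above can be sketched as follows. This is a
hypothetical reconstruction with stand-in names, not Magnum's actual
code: the idea of the upstream fix is to treat Heat's 404 as "nothing
left to delete" instead of letting it escalate into PreDeletionFailed.

```python
# Hypothetical sketch of the fix described above, not Magnum's actual code:
# a 404 from Heat means the stack (and its load balancers) is already gone,
# so pre-deletion should succeed rather than raise PreDeletionFailed.

class HTTPNotFound(Exception):
    """Stand-in for heatclient.exc.HTTPNotFound."""

def delete_loadbalancers(heat_client, stack_id):
    try:
        resources = heat_client.resources_list(stack_id)
    except HTTPNotFound:
        # The stack no longer exists, so there is nothing to pre-delete.
        return []
    return [r for r in resources if r["type"].startswith("OS::Octavia")]

class GoneHeat:
    """Fake client simulating a Heat stack that was already deleted."""
    def resources_list(self, stack_id):
        raise HTTPNotFound(f"The Stack ({stack_id}) could not be found.")

print(delete_loadbalancers(GoneHeat(), "37032d90"))  # prints []
```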

** Affects: cloud-archive
 Importance: Undecided
 Status: Invalid

** Affects: cloud-archive/ussuri
 Importance: Undecided
 Status: New

** Affects: cloud-archive/victoria
 Importance: Undecided
 Status: New

** Affects: cloud-archive/wallaby
 Importance: Undecided
 Status: New

** Affects: magnum (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: magnum (Ubuntu Focal)
 Importance: Undecided
 Assignee: Felipe Reyes (freyes)
 Status: New

** Also affects: magnum (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: magnum (Ubuntu)
   Status: New => Invalid

** Changed in: magnum (Ubuntu Focal)
 Assignee: (unassigned) => Felipe Reyes (freyes)

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Invalid


[Bug 2039161] Re: When attaching multiattach volumes apparmor nova-compute profile blocks some operations

2024-03-19 Thread Felipe Reyes
** Also affects: charm-nova-compute/zed
   Importance: Undecided
   Status: New

** Also affects: charm-nova-compute/2023.2
   Importance: Undecided
   Status: New

** Also affects: charm-nova-compute/yoga
   Importance: Undecided
   Status: New

** Also affects: charm-nova-compute/2023.1
   Importance: Undecided
   Status: New

** Changed in: charm-nova-compute/2023.1
   Status: New => Fix Released

** Changed in: charm-nova-compute/2023.2
   Status: New => Fix Released

** Changed in: charm-nova-compute/zed
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2039161

Title:
  When attaching multiattach volumes apparmor nova-compute profile
  blocks some operations

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/2039161/+subscriptions



[Bug 2054444] Re: Cluster Resize is failing in UI

2024-03-01 Thread Felipe Reyes
Marking magnum-ui (upstream) as Invalid since this issue was found in
the Ubuntu Focal archive.

** Changed in: magnum-ui
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/205

Title:
  Cluster Resize is failing in UI

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum-ui/+bug/205/+subscriptions



[Bug 2054444] Re: Cluster Resize is failing in UI

2024-03-01 Thread Felipe Reyes
** Also affects: magnum-ui (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1972665] Re: [SRU] Xena stable releases

2022-05-26 Thread Felipe Reyes
On Horizon I was able to launch an instance with a floating IP and
navigate across the different panels with no problem.

ubuntu@freyes-bastion:~$ juju ssh openstack-dashboard/0 apt policy 
openstack-dashboard
openstack-dashboard:
  Installed: 4:20.1.2-0ubuntu1~cloud0
  Candidate: 4:20.1.2-0ubuntu1~cloud0
  Version table:
 *** 4:20.1.2-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/xena/main amd64 Packages
100 /var/lib/dpkg/status
 3:18.3.5-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
 3:18.3.2-0ubuntu0.20.04.4 500
500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
 3:18.2.1~git2020041013.754804667-0ubuntu3 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages
Connection to 10.5.2.169 closed.


** Tags removed: verification-xena-needed
** Tags added: verification-xena-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1972665

Title:
  [SRU] Xena stable releases

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1972665/+subscriptions



[Bug 1972665] Re: [SRU] Xena stable releases

2022-05-26 Thread Felipe Reyes
I was able to successfully run charmed-openstack-tester (tox -e
func-target -- focal-xena) for the Xena UCA; here is the list of
packages used by the testing bed.

ubuntu@freyes-bastion:~$ juju ssh neutron-api/0 apt policy neutron-common
neutron-common:
  Installed: 2:19.2.0-0ubuntu1~cloud0
  Candidate: 2:19.2.0-0ubuntu1~cloud0
  Version table:
 *** 2:19.2.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/xena/main amd64 Packages
100 /var/lib/dpkg/status
 2:16.4.2-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
 2:16.0.0~b3~git2020041516.5f42488a9a-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages
Connection to 10.5.2.110 closed.
ubuntu@freyes-bastion:~$ juju ssh nova-compute/0 apt policy nova-common
nova-common:
  Installed: 3:24.1.0-0ubuntu2~cloud0
  Candidate: 3:24.1.0-0ubuntu2~cloud0
  Version table:
 *** 3:24.1.0-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/xena/main amd64 Packages
100 /var/lib/dpkg/status
 2:21.2.4-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
 2:21.0.0~b3~git2020041013.57ff308d6d-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages
Connection to 10.5.2.218 closed.
ubuntu@freyes-bastion:~$ juju ssh cinder/0 apt policy cinder-common
cinder-common:
  Installed: 2:19.0.0-0ubuntu4~cloud0
  Candidate: 2:19.0.0-0ubuntu4~cloud0
  Version table:
 *** 2:19.0.0-0ubuntu4~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/xena/main amd64 Packages
100 /var/lib/dpkg/status
 2:16.4.2-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
 2:16.1.0-0ubuntu1 500
500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
 2:16.0.0~b3~git2020041012.eb915e2db-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages
Connection to 10.5.2.13 closed.


Re: [Bug 1972730] Re: WARNING: crmadmin -S unexpected output

2022-05-11 Thread Felipe Reyes
On Wed, 2022-05-11 at 19:13 +, Lucas Kanashiro wrote:
> Thanks for working with upstream to fix this. Would you be willing to
> backport this fix to Jammy? Or do you prefer me to do it?

I can propose the SRU (via debdiff) if you can sponsor it :-)

> 
> For Kinetic we will be moving to the newer version (4.4.0 as upstream
> suggested) so this will not be an issue. I am going to talk to the
> Debian maintainer to see if we can get version 4.4.0 in unstable so we
> can merge it.

That's great, thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1972730

Title:
  WARNING: crmadmin -S  unexpected output

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1972730/+subscriptions



[Bug 1972730] Re: WARNING: crmadmin -S unexpected output

2022-05-11 Thread Felipe Reyes
A PPA with the fix proposed at
https://github.com/ClusterLabs/crmsh/pull/972 is available for testing
purposes at https://launchpad.net/~freyes/+archive/ubuntu/lp1972730

** Changed in: charm-hacluster
   Status: New => Invalid


[Bug 1972730] Re: WARNING: crmadmin -S unexpected output

2022-05-09 Thread Felipe Reyes
The way this error surfaces in the hacluster charm is when running the
`stop` hook:

unit-hacluster-1: 16:53:41 DEBUG juju.worker.uniter.runner starting jujuc 
server  {unix @/var/lib/juju/agents/unit-hacluster-1/agent.socket }
unit-hacluster-1: 16:53:41 INFO unit.hacluster/1.juju-log Setting node 
juju-0c8f53-zaza-723eab24403d-4 to maintenance
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop Traceback (most recent 
call last):
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/var/lib/juju/agents/unit-hacluster-1/charm/hooks/stop", line 767, in 
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop 
hooks.execute(sys.argv)
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/var/lib/juju/agents/unit-hacluster-1/charm/charmhelpers/core/hookenv.py", 
line 962, in execute
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop 
self._hooks[hook_name]()
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/var/lib/juju/agents/unit-hacluster-1/charm/hooks/stop", line 617, in stop
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop 
pcmk.set_node_status_to_maintenance(node)
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/var/lib/juju/agents/unit-hacluster-1/charm/hooks/pcmk.py", line 201, in 
set_node_status_to_maintenance
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop commit('crm -w -F 
node maintenance {}'.format(node_name),
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/var/lib/juju/agents/unit-hacluster-1/charm/hooks/pcmk.py", line 90, in commit
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop return 
subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/usr/lib/python3.10/subprocess.py", line 420, in check_output
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop return 
run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop   File 
"/usr/lib/python3.10/subprocess.py", line 524, in run
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop raise 
CalledProcessError(retcode, process.args,
unit-hacluster-1: 16:53:43 WARNING unit.hacluster/1.stop 
subprocess.CalledProcessError: Command '['crm', '-w', '-F', 'node', 
'maintenance', 'juju-0c8f53-zaza-723eab24403d-4']' returned non-zero exit 
status 1.
unit-hacluster-1: 16:53:43 ERROR juju.worker.uniter.operation hook "stop" (via 
explicit, bespoke hook script) failed: exit status 1


[Bug 1972730] [NEW] WARNING: crmadmin -S unexpected output

2022-05-09 Thread Felipe Reyes
Public bug reported:

Pacemaker changed the output string of "crmadmin -S " in 2.1.0 with the
commit
https://github.com/ClusterLabs/pacemaker/commit/c26c9951d863e83126f811ee5b91a174fe0cc991 .
This is different from the output that wait4dc() expects:
https://github.com/ClusterLabs/crmsh/blob/master/crmsh/utils.py#L898

Example output of `crm -w -F node maintenance ` of a cluster
running on Ubuntu 22.04

```
root@juju-0c8f53-zaza-723eab24403d-4:~# crm -w -F node maintenance 
juju-0c8f53-zaza-723eab24403d-4
WARNING: crmadmin -S juju-0c8f53-zaza-723eab24403d-5 unexpected output: 
Controller on juju-0c8f53-zaza-723eab24403d-5 in state S_IDLE: ok (exit code: 0)
root@juju-0c8f53-zaza-723eab24403d-4:~# echo $?
1
root@juju-0c8f53-zaza-723eab24403d-4:~# crmadmin -S 
juju-0c8f53-zaza-723eab24403d-5
Controller on juju-0c8f53-zaza-723eab24403d-5 in state S_IDLE: ok
root@juju-0c8f53-zaza-723eab24403d-4:~# dpkg -l pacemaker crmsh | cat
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name   VersionArchitecture Description
+++-==-==--===
ii  crmsh  4.3.1-1ubuntu2 all  CRM shell for the pacemaker 
cluster manager
ii  pacemaker  2.1.2-1ubuntu3 amd64cluster resource manager
```

Upstream bug filed: https://github.com/ClusterLabs/crmsh/issues/970
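
One way a tolerant parser could accept both output formats is sketched
below. This is illustrative only, not crmsh's actual wait4dc() code, and
the pre-2.1.0 "Status of crmd@..." wording is an assumption:

```python
import re

# Patterns for the controller-state line printed by "crmadmin -S <node>".
# NEW is the pacemaker >= 2.1.0 wording; OLD is assumed from earlier releases.
NEW = re.compile(r"Controller on (?P<host>\S+) in state (?P<state>S_\w+)")
OLD = re.compile(r"Status of .*@(?P<host>\S+): (?P<state>S_\w+)")

def dc_state(output):
    """Return the S_* state from crmadmin output, or None if unrecognised."""
    for pattern in (NEW, OLD):
        match = pattern.search(output)
        if match:
            return match.group("state")
    return None

print(dc_state("Controller on juju-0c8f53-zaza-723eab24403d-5 "
               "in state S_IDLE: ok"))  # prints S_IDLE
```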

** Affects: charm-hacluster
 Importance: Undecided
 Status: New

** Affects: crmsh (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: crmsh (Ubuntu Jammy)
 Importance: Undecided
 Status: New

** Also affects: crmsh (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: charm-hacluster
   Importance: Undecided
   Status: New


[Bug 1972665] Re: [SRU] Xena stable releases

2022-05-09 Thread Felipe Reyes
** Description changed:

  [Impact]
  
  This release comes with bug fixes that we would like to make available
  to our users.
  
  The following packages come in this set of point releases:
  
  * neutron 19.2.0
  * nova 24.1.0
+ * cinder 19.1.0
  
  [Test Case]
  The following SRU process was followed:
  https://wiki.ubuntu.com/OpenStack/StableReleaseUpdates
  
  In order to avoid regression of existing consumers, the OpenStack team will
  run their continuous integration test against the packages that are in
  -proposed. A successful run of all available tests will be required before the
  proposed packages can be let into -updates.
  
  The OpenStack team will be in charge of attaching the output summary of the
  executed tests. The OpenStack team members will not mark ‘verification-done’ 
until
  this has happened.
  
  [Regression Potential]
  In order to mitigate the regression potential, the results of the
  aforementioned tests are attached to this bug.
  
  [Discussion]

** Description changed:

  [Impact]
  
  This release comes with bug fixes that we would like to make available
  to our users.
  
  The following packages come in this set of point releases:
  
  * neutron 19.2.0
  * nova 24.1.0
  * cinder 19.1.0
+ * horizon 20.1.2
  
  [Test Case]
  The following SRU process was followed:
  https://wiki.ubuntu.com/OpenStack/StableReleaseUpdates
  
  In order to avoid regression of existing consumers, the OpenStack team will
  run their continuous integration test against the packages that are in
  -proposed. A successful run of all available tests will be required before the
  proposed packages can be let into -updates.
  
  The OpenStack team will be in charge of attaching the output summary of the
  executed tests. The OpenStack team members will not mark ‘verification-done’ 
until
  this has happened.
  
  [Regression Potential]
  In order to mitigate the regression potential, the results of the
  aforementioned tests are attached to this bug.
  
  [Discussion]


[Bug 1972665] Re: [SRU] Xena stable releases

2022-05-09 Thread Felipe Reyes
** Merge proposal linked:
   
https://code.launchpad.net/~freyes/ubuntu/+source/neutron/+git/neutron/+merge/421764

** Merge proposal linked:
   
https://code.launchpad.net/~freyes/ubuntu/+source/neutron/+git/neutron/+merge/421763


[Bug 1972665] [NEW] [SRU] Xena stable releases

2022-05-09 Thread Felipe Reyes
Public bug reported:

[Impact]

This release comes with bug fixes that we would like to make available
to our users.

The following packages come in this set of point releases:

* neutron 19.2.0
* nova 24.1.0

[Test Case]
The following SRU process was followed:
https://wiki.ubuntu.com/OpenStack/StableReleaseUpdates

In order to avoid regression of existing consumers, the OpenStack team will
run their continuous integration test against the packages that are in
-proposed. A successful run of all available tests will be required before the
proposed packages can be let into -updates.

The OpenStack team will be in charge of attaching the output summary of the
executed tests. The OpenStack team members will not mark ‘verification-done’ 
until
this has happened.

[Regression Potential]
In order to mitigate the regression potential, the results of the
aforementioned tests are attached to this bug.

[Discussion]

** Affects: cloud-archive
 Importance: Undecided
 Status: Invalid

** Affects: cloud-archive/xena
 Importance: Undecided
 Status: New

** Affects: neutron (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: neutron (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Invalid

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Invalid

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: New => Invalid


[Bug 1971565] Re: charm no longer works with latest mysql-router version

2022-05-05 Thread Felipe Reyes
** Changed in: charm-mysql-router
Milestone: None => 22.04

** Changed in: charm-mysql-router
 Assignee: (unassigned) => Felipe Reyes (freyes)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1971565

Title:
  charm no longer works with latest mysql-router version

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-mysql-router/+bug/1971565/+subscriptions



[Bug 1971565] Re: charm no longer works with latest mysql-router version

2022-05-04 Thread Felipe Reyes
From a charm's perspective, I believe we should be setting
[DEFAULT].unknown_config_option to "warning", so (possible) future
changes in this area won't break a running cluster.

From the docs 
https://dev.mysql.com/doc/mysql-router/8.0/en/mysql-router-conf-options.html#option_mysqlrouter_unknown_config_option
 :
"""
A warning is default behavior, and bootstrapping defines it as error in the 
generated configuration file. MySQL Router versions before 8.0.29 ignore 
unknown configuration options. A warning logs a warning message but does not 
halt, whereas an error means Router fails to initialize and exits. 
"""


[Bug 1971565] Re: charm no longer works with latest mysql-router version

2022-05-04 Thread Felipe Reyes
Worth mentioning: the [DEFAULT].name key is set by "mysqlrouter
--bootstrap" and not by the charm; more details at
https://bugs.launchpad.net/charm-mysql-router/+bug/1907250/comments/10


[Bug 1907250] Re: [focal] charm becomes blocked with workload-status "Failed to connect to MySQL"

2022-05-03 Thread Felipe Reyes
I'm seeing this error:

2022-05-03 22:04:39 io INFO [7f91c2e53e00] starting 8 io-threads, using backend 
'linux_epoll'
2022-05-03 22:04:39 main ERROR [7f91c2e53e00] Error: option 'DEFAULT.name' is 
not supported

The error can be reproduced using this bundle
http://paste.ubuntu.com/p/mdY9dJhktH/ + the patch recently merged at
https://review.opendev.org/c/openstack/charm-mysql-router/+/834359
(probably at this point in time the change should be available in the
latest/edge channel).

When generating the bootstrap files with mysqlrouter, we can see that
the `name` key is added by mysqlrouter itself and not by the charm; see
below:


root@juju-b70e35-0-lxd-6:/var/lib/mysql/vault-mysql-router# 
/usr/bin/mysqlrouter --user mysql --name keystone-mysql-router --bootstrap 
mysqlrouteruser:3f4m6w6r2HFjGkfXnbHP3Mr6mphcpxys@10.246.114.60 --direc
tory /var/lib/mysql/keystone-mysql-router --conf-use-sockets 
--conf-bind-address 127.0.0.1  --conf-base-port 3306 --disable-rest --force
# Bootstrapping MySQL Router instance at 
'/var/lib/mysql/keystone-mysql-router'...

- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /var/lib/mysql/keystone-mysql-router/mysqlrouter.conf

# MySQL Router 'keystone-mysql-router' configured for the InnoDB Cluster
'jujuCluster'

After this MySQL Router has been started with the generated
configuration

$ /usr/bin/mysqlrouter -c /var/lib/mysql/keystone-mysql-
router/mysqlrouter.conf

InnoDB Cluster 'jujuCluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:3306, 
/var/lib/mysql/keystone-mysql-router/mysql.sock
- Read/Only Connections:  localhost:3307, 
/var/lib/mysql/keystone-mysql-router/mysqlro.sock

## MySQL X protocol

- Read/Write Connections: localhost:3308, 
/var/lib/mysql/keystone-mysql-router/mysqlx.sock
- Read/Only Connections:  localhost:3309, 
/var/lib/mysql/keystone-mysql-router/mysqlxro.sock

root@juju-b70e35-0-lxd-6:/var/lib/mysql/vault-mysql-router# less 
/var/lib/mysql/keystone-mysql-router/mysqlrouter.conf
root@juju-b70e35-0-lxd-6:/var/lib/mysql/vault-mysql-router# grep -B1 name 
/var/lib/mysql/keystone-mysql-router/mysqlrouter.conf
[DEFAULT]
name=keystone-mysql-router
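
As a hypothetical workaround sketch (not what the charm actually does),
the bootstrap-generated key could be stripped from the file with
configparser; the sample content below mirrors the grep output above:

```python
import configparser
import os
import tempfile

# Hypothetical workaround sketch: drop the bootstrap-generated
# [DEFAULT].name key that newer MySQL Router versions reject as an
# unsupported option. Sample content stands in for the real file.
sample = "[DEFAULT]\nname=keystone-mysql-router\nuser=mysql\n"
path = os.path.join(tempfile.mkdtemp(), "mysqlrouter.conf")
with open(path, "w") as fh:
    fh.write(sample)

parser = configparser.ConfigParser()
parser.read(path)
parser.remove_option("DEFAULT", "name")  # drop the unsupported key
with open(path, "w") as fh:
    parser.write(fh)

print("name" in parser["DEFAULT"])  # prints False
```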


** Also affects: mysql-8.0 (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907250

Title:
  [focal] charm becomes blocked with workload-status "Failed to connect
  to MySQL"

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-mysql-router/+bug/1907250/+subscriptions



[Bug 1962582] Re: [SRU] Xena stable releases

2022-04-08 Thread Felipe Reyes
Hello all,

I was able to successfully validate the packages available in the
-proposed pocket; specifically, the following versions were installed in
the deployed cloud:

 ~/sources/ubuntu/lp1962582 $ grep ceilometer-common *.dpkg
machine-23.dpkg:ii  ceilometer-common  2:17.0.1-0ubuntu1  all  ceilometer common files
machine-24.dpkg:ii  ceilometer-common  2:17.0.1-0ubuntu1  all  ceilometer common files
machine-25.dpkg:ii  ceilometer-common  2:17.0.1-0ubuntu1  all  ceilometer common files
machine-2.dpkg:ii   ceilometer-common  2:17.0.1-0ubuntu1  all  ceilometer common files
 ~/sources/ubuntu/lp1962582 $ grep heat-common *.dpkg
machine-15.dpkg:ii  heat-common  1:17.0.1-0ubuntu1  all  OpenStack orchestration service - common files
 ~/sources/ubuntu/lp1962582 $ grep openstack-dashboard-common *.dpkg
machine-27.dpkg:ii  openstack-dashboard-common  4:20.1.1-0ubuntu1  all  Django web interface for OpenStack - common files

These packages were validated by running tempest, specifically using the
charmed-openstack-tester[0] and running "tox -e func-target -- impish-xena".

In the case of Horizon, I logged in, launched a cirros instance that
booted correctly, and navigated through all the panels available on the
left.

[0] https://github.com/openstack-charmers/charmed-openstack-tester

** Tags removed: verification-needed verification-needed-impish
** Tags added: verification-done verification-done-impish

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962582

Title:
  [SRU] Xena stable releases

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1962582/+subscriptions



[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2022-04-08 Thread Felipe Reyes
I have a patch in this branch:
https://git.launchpad.net/~freyes/ubuntu/+source/nova/commit/?id=88c97dc9332b97edf06618b6d4d2c770153821a6
However, I haven't been able to test it, and I'm removing myself from the
bug since I won't have cycles to dedicate to this task in the short
term.
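
For anyone hitting this in the meantime, the underlying requirement is
simply that the private key must not be group/other readable; below is a
generic stdlib sketch (the path handling is illustrative, not nova's or
the patch's actual logic):

```python
import os
import stat
import tempfile

def harden_private_key(path: str) -> None:
    """Clear group/other permission bits on an SSH private key; OpenSSH
    refuses keys that are readable by anyone but the owner."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600

# Demonstration on a throwaway file created with the too-open 0644 mode:
with tempfile.NamedTemporaryFile(delete=False) as f:
    key = f.name
os.chmod(key, 0o644)
harden_private_key(key)
assert stat.S_IMODE(os.stat(key).st_mode) == 0o600
os.unlink(key)
```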

** Changed in: nova (Ubuntu)
 Assignee: Felipe Reyes (freyes) => (unassigned)

** Changed in: nova (Ubuntu)
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904580

Title:
  Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1904580/+subscriptions



[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2022-03-25 Thread Felipe Reyes
** Changed in: charm-nova-compute
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904580

Title:
  Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1904580/+subscriptions



[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2022-03-25 Thread Felipe Reyes
** Changed in: nova (Ubuntu)
 Assignee: (unassigned) => Felipe Reyes (freyes)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904580

Title:
  Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1904580/+subscriptions



[Bug 1904745] Re: File permissions in /var/lib/nova/.ssh broken in upgrade

2022-03-25 Thread Felipe Reyes
*** This bug is a duplicate of bug 1904580 ***
https://bugs.launchpad.net/bugs/1904580

Hello everyone, thanks for reporting this bug. I'm going to mark it as a
duplicate of https://bugs.launchpad.net/charm-nova-compute/+bug/1904580,
which tracked the analysis and workarounds.

** This bug has been marked a duplicate of bug 1904580
   Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904745

Title:
  File permissions in /var/lib/nova/.ssh broken in upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1904745/+subscriptions



[Bug 1966512] Re: FTBS in jammy: FAIL: magnumclient.tests.test_shell.ShellTest.test_help_on_subcommand

2022-03-25 Thread Felipe Reyes
** Patch added: "lp1966512_jammy.patch"
   
https://bugs.launchpad.net/ubuntu/+source/python-magnumclient/+bug/1966512/+attachment/5573002/+files/lp1966512_jammy.patch

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966512

Title:
  FTBS in jammy: FAIL:
  magnumclient.tests.test_shell.ShellTest.test_help_on_subcommand

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-magnumclient/+bug/1966512/+subscriptions



[Bug 1966512] [NEW] FTBS in jammy: FAIL: magnumclient.tests.test_shell.ShellTest.test_help_on_subcommand

2022-03-25 Thread Felipe Reyes
Public bug reported:

This failure is due to a change in Python 3.10 where the string
"optional arguments" was changed to "options"; more info at
https://docs.python.org/3/whatsnew/3.10.html#argparse and
https://bugs.python.org/issue9694

Upstream bug: https://storyboard.openstack.org/#!/story/2009946
Upstream fix: 
https://review.opendev.org/c/openstack/python-magnumclient/+/835217
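
The behavior change is easy to reproduce with argparse directly; a
minimal sketch (the prog and option names are illustrative, not the real
client code):

```python
import argparse
import sys

# Build a tiny parser resembling the magnum subcommand.
parser = argparse.ArgumentParser(prog="magnum bay-create")
parser.add_argument("--name", help="Name of the bay to create.")

help_text = parser.format_help()

# Python >= 3.10 titles the section "options:"; earlier versions used
# "optional arguments:", which is the string the test's regex expected.
if sys.version_info >= (3, 10):
    assert "options:" in help_text
else:
    assert "optional arguments:" in help_text
```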

Test failures:

==
FAIL: magnumclient.tests.test_shell.ShellTest.test_help_on_subcommand
magnumclient.tests.test_shell.ShellTest.test_help_on_subcommand
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/<>/magnumclient/tests/test_shell.py", line 127, in 
test_help_on_subcommand
self.assertThat((stdout + stderr),
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 480, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'usage: magnum bay-create [--name 
] --baymodel \n [--node-count 
]\n [--master-count ]\n   
  [--discovery-url ]\n 
[--timeout ]\n\nCreate a bay. (Deprecated in favor of 
cluster-create.)\n\nOptions:\n  --name  Name of the bay to 
create.\n  --baymodel \nID or name of the 
baymodel.\n  --node-count \nThe bay node 
count.\n  --master-count \nThe number of 
master nodes for the bay.\n  --discovery-url \n  
  Specifies custom discovery url for node discovery.\n  --timeout  
  The timeout for bay creation in minutes. The default\n
is 60 minutes.\n' does not match /.*?^Optional arguments:/


==
FAIL: magnumclient.tests.test_shell.ShellTestKeystoneV3.test_help_on_subcommand
magnumclient.tests.test_shell.ShellTestKeystoneV3.test_help_on_subcommand
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/<>/magnumclient/tests/test_shell.py", line 127, in 
test_help_on_subcommand
self.assertThat((stdout + stderr),
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 480, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'usage: magnum bay-create [--name 
] --baymodel \n [--node-count 
]\n [--master-count ]\n   
  [--discovery-url ]\n 
[--timeout ]\n\nCreate a bay. (Deprecated in favor of 
cluster-create.)\n\nOptions:\n  --name  Name of the bay to 
create.\n  --baymodel \nID or name of the 
baymodel.\n  --node-count \nThe bay node 
count.\n  --master-count \nThe number of 
master nodes for the bay.\n  --discovery-url \n  
  Specifies custom discovery url for node discovery.\n  --timeout  
  The timeout for bay creation in minutes. The default\n
is 60 minutes.\n' does not match /.*?^Optional arguments:/

** Affects: python-magnumclient (Ubuntu)
 Importance: Undecided
 Assignee: Felipe Reyes (freyes)
 Status: New

** Changed in: python-magnumclient (Ubuntu)
 Assignee: (unassigned) => Felipe Reyes (freyes)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966512

Title:
  FTBS in jammy: FAIL:
  magnumclient.tests.test_shell.ShellTest.test_help_on_subcommand

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-magnumclient/+bug/1966512/+subscriptions



[Bug 1966442] Re: AttributeError: module 'collections' has no attribute 'Iterable'

2022-03-25 Thread Felipe Reyes
** Patch added: "lp1966442_jammy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-os-win/+bug/1966442/+attachment/5572999/+files/lp1966442_jammy.debdiff
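
For reference, this class of failure comes from Python 3.10 removing the
ABC aliases from the `collections` top level; the portable spelling is
`collections.abc.Iterable`. A minimal illustration (not taken from the
attached debdiff):

```python
import collections.abc

# Since Python 3.10, `collections.Iterable` no longer exists; the
# `collections.abc.Iterable` spelling has worked since Python 3.3,
# so it is safe across all supported releases.
assert isinstance([1, 2, 3], collections.abc.Iterable)
assert not isinstance(42, collections.abc.Iterable)
```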

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966442

Title:
  AttributeError: module 'collections' has no attribute 'Iterable'

To manage notifications about this bug go to:
https://bugs.launchpad.net/os-win/+bug/1966442/+subscriptions



[Bug 1966442] Re: AttributeError: module 'collections' has no attribute 'Iterable'

2022-03-25 Thread Felipe Reyes
** Also affects: python-os-win (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: python-os-win (Ubuntu)
 Assignee: (unassigned) => Felipe Reyes (freyes)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966442

Title:
  AttributeError: module 'collections' has no attribute 'Iterable'

To manage notifications about this bug go to:
https://bugs.launchpad.net/os-win/+bug/1966442/+subscriptions



[Bug 1962582] [NEW] [SRU] Xena stable releases

2022-03-01 Thread Felipe Reyes
Public bug reported:

[Impact]

Users will get the latest stable bug fixes for this release of
OpenStack. Point releases included in this SRU:

ceilometer 17.0.1,
heat 17.0.1
horizon 20.1.1

[Test Case]

Testing of point releases for OpenStack packages is covered by:

 https://wiki.ubuntu.com/OpenStack/StableReleaseUpdates

[Regression Potential]

Very low as this SRU is releasing stable point releases that upstream
has already released.

** Affects: cloud-archive
 Importance: Undecided
 Status: Invalid

** Affects: cloud-archive/xena
 Importance: Undecided
 Status: New

** Affects: ceilometer (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: heat (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: horizon (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: ceilometer (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Affects: heat (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Affects: horizon (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Invalid

** Also affects: ceilometer (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: ceilometer (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Changed in: ceilometer (Ubuntu)
   Status: New => Invalid

** Also affects: heat (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: heat (Ubuntu)
   Status: New => Invalid

** Changed in: horizon (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962582

Title:
  [SRU] Xena stable releases

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1962582/+subscriptions



[Bug 1943765] Re: ipmitool "timing" flags are not working as expected causing failure to manage power of baremetal nodes

2022-02-23 Thread Felipe Reyes
** Changed in: charm-ironic-conductor
Milestone: None => 22.04

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1943765

Title:
  ipmitool "timing" flags are not working as expected causing failure to
  manage power of baremetal nodes

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ironic-conductor/+bug/1943765/+subscriptions



[Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-13 Thread Felipe Reyes
Hi Frode,

I went through the captures I have and, indeed, if we filter out the
packets where srcport != 53 or dstport != 6081, we end up with only
valid DNS traffic.

I believe this explains what we are seeing, and together with the other
findings you made regarding the missing flows, we can say with certainty
that the DNS issues we are seeing are only a symptom of instances losing
North-South traffic. I'm closing the bug as Invalid.

$ for PCAP in *.pcap; do echo -e "$PCAP\t$(tshark -r $PCAP | grep -i "unknown" | wc -l)\t$(tshark -r $PCAP 'udp.dstport ne 6081' | grep -i unknown | wc -l)\t$(tshark -r $PCAP 'udp.srcport ne 53' | grep -i unknown | wc -l)"; done
dns-port-machine-0.pcap 90  0   0
dns-port-machine-10.pcap104 0   0
dns-port-machine-11.pcap26  0   0
dns-port-machine-12.pcap0   0   0
dns-port-machine-13.pcap0   0   0
dns-port-machine-2.pcap 6   0   0
dns-port-machine-3.pcap 72  0   0
dns-port-machine-4.pcap 0   0   0
dns-port-machine-5.pcap 59  0   0
dns-port-machine-6.pcap 68  0   0
dns-port-machine-7.pcap 46  0   0
dns-port-machine-8.pcap 104 0   0
dns-port-machine-9.pcap 32  0   0
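
For what it's worth, the classification applied by those filters can be
restated as a tiny predicate; this is just an illustrative sketch, not
code from any of the tools involved:

```python
GENEVE_PORT = 6081  # OVN tunnel traffic (Geneve encapsulation)
DNS_PORT = 53

def is_real_dns(src_port: int, dst_port: int) -> bool:
    """True only for frames that are genuine DNS traffic: anything
    destined to the Geneve port was tunnel payload that the DNS
    dissector mis-parsed as a query."""
    if dst_port == GENEVE_PORT:
        return False
    return DNS_PORT in (src_port, dst_port)

# The streams in question were 53 -> 6081, i.e. encapsulated traffic:
assert not is_real_dns(53, 6081)
assert is_real_dns(53, 34567)   # reply to a client
assert is_real_dns(51812, 53)   # query to a resolver
```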


** Changed in: ovn (Ubuntu)
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-10 Thread Felipe Reyes
I have a capture taken using `tshark -i bond0 port 53`, and a couple of
streams in it exhibit this issue; I would like to highlight this one in
particular:

$ tshark -x -r dns-port-machine-10.pcap "udp.stream eq 9" | pastebinit
https://paste.ubuntu.com/p/yS74KCG2bY/
$ tshark -Tfields -e udp.port -e ip.host -r dns-port-machine-10.pcap "udp.stream eq 9"
53,6081 10.245.160.3,10.245.160.5
53,6081 10.245.160.3,10.245.160.5
53,6081 10.245.160.3,10.245.160.5
53,6081 10.245.160.3,10.245.160.5
53,6081 10.245.160.3,10.245.160.5

The pcap can be found at https://private-fileshare.canonical.com/~freyes/dns-port-20220210_154629.tar.bz2

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-10 Thread Felipe Reyes
An older capture:

[...]
.?.V...#[...
8.0.27-0ubuntu0.20.04.1...%..D/D).1[.[g=WEU(.K_.caching_sha2_password..@eX...$..>M'...>.t...E.@.@.KG...f..t
.?.V...#[...
8.0.27-0ubuntu0.20.04.1...%..D/D).1[.[g=WEU(.K_.caching_sha2_password..@eX...$..>M'...>.t...E.@.@.KG...f..t
.?.V...#[...
8.0.27-0ubuntu0.20.04.1...%..D/D).1[.[g=WEU(.K_.caching_sha2_password..@eX...$..>M'...>.t...E..4..@.@.LV...G...f...z..u
.?.W...$.@eX...$..>M'...>.t...E..4..@.@.LV...G...f...z..u
[...]

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-10 Thread Felipe Reyes
Something I saw in a new data capture: when I made Wireshark display
the conversation of the stream that was showing "Unknown operation
(12)" packets, it displayed this:

[...]
..O...p<...a_.q.U.
...@..J.[...
8.0.28-0ubuntu0.20.04.3.1i..jA.;.F_b...8#NEqB`U~H6..caching_sha2_password..@eX..O...>Z.B..>FE.@.@.1.
...
[...]

The two suspicious strings here are "8.0.28-0ubuntu0.20.04.3" and
"caching_sha2_password": the former is the version of Ubuntu's mysql-8.0
package, and the latter is a MySQL authentication plugin. I have no idea
how/why that data would be present in a capture taken with "tcpdump -i
any port 53".

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1925233] Re: [bionic-ussuri] designate-mdns.service: Failed with result 'start-limit-hit'

2022-02-08 Thread Felipe Reyes
Here is an example of how to reuse the config.rendered flag:
https://github.com/openstack/charm-manila/blob/b75e6ed3ce6b2c061105ac8226a778e1ec3685d4/src/reactive/manila_handlers.py#L178

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925233

Title:
  [bionic-ussuri] designate-mdns.service: Failed with result 'start-
  limit-hit'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1925233/+subscriptions



[Bug 1925233] Re: [bionic-ussuri] designate-mdns.service: Failed with result 'start-limit-hit'

2022-02-08 Thread Felipe Reyes
We discussed this issue with the team, and the agreement is that the
charm should disable/mask the service until the configuration has been
rendered; this is a pattern already used in other charms. For reactive
charms there is a handler for this:
https://github.com/openstack/charm-layer-openstack/blob/master/reactive/layer_openstack.py#L161-L167

** Changed in: charm-designate
   Status: Incomplete => Triaged

** Changed in: designate (Ubuntu)
   Status: New => Invalid

** Changed in: charm-designate
   Importance: Undecided => High

** Tags added: good-first-bug

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925233

Title:
  [bionic-ussuri] designate-mdns.service: Failed with result 'start-
  limit-hit'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1925233/+subscriptions



[Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-08 Thread Felipe Reyes
** Bug watch added: github.com/systemd/systemd/issues #12841
   https://github.com/systemd/systemd/issues/12841

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



Re: [Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-08 Thread Felipe Reyes
On Tue, 2022-02-08 at 15:11 +, Frode Nordahl wrote:
> Thank you for your bug report, I wonder if this proposed fix [0] could
> be related?

I don't believe so; IIUC, this would affect how the replies are
assembled and passed to the guest. I was looking at a similar issue (RR
counters) on the systemd-resolved side[0], and systemd would be logging
"invalid reply" while we see ETIMEDOUT.

> 
> Does tshark provide any detail about what exactly it finds malformed
> about the packets?

Yes: the opcode is 12, whereas a query's opcode is 0. Opcode 12 is
unassigned according to the spec:
https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-5

[0] https://github.com/systemd/systemd/issues/12841
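
For context on where that opcode lives: it is the 4-bit field following
the QR bit in the second 16-bit word of the DNS header (RFC 1035). A
small illustrative sketch, with fabricated header bytes:

```python
import struct

def dns_opcode(packet: bytes) -> int:
    """Return the opcode from a raw DNS message (RFC 1035 header)."""
    # Second header word: QR(1) Opcode(4) AA TC RD RA Z(3) RCODE(4)
    (flags,) = struct.unpack_from("!H", packet, 2)
    return (flags >> 11) & 0xF

# A standard query header: ID=0x1234, flags=0x0100 (RD set, opcode 0).
query = bytes.fromhex("123401000001000000000000")
assert dns_opcode(query) == 0

# A flags word of 0x6000 encodes opcode 12, the unassigned value that
# tshark reports as "Unknown operation (12)" on the mis-parsed frames.
bogus = bytes.fromhex("123460000000000000000000")
assert dns_opcode(bogus) == 12
```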

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1925233] Re: [bionic-ussuri] next charm fails to start mdns service

2022-02-08 Thread Felipe Reyes
Checking designate/2.

 ~/Downloads/f1badd08-a9a6-405a-b0be-d7f49ece023c/designate_2 $ tail var/log/designate/designate-mdns.log -n 2
2021-04-19 16:04:24.389 64928 ERROR designate
oslo_db.exception.CantStartEngineError: No sql_connection parameter is
established
2021-04-19 16:04:24.389 64928 ERROR designate 

The last line logged by designate-mdns was at 16:04, while the
configuration files were rendered at 16:05:12:

$ grep -C 2 base-config.rendered var/log/juju/unit-designate-2.log | head -n 5
2021-04-19 16:05:12 DEBUG jujuc server.go:211 running hook tool "juju-log" for designate/2-identity-service-relation-changed-330171960584494981
2021-04-19 16:05:12 DEBUG juju-log identity-service:105: tracer>
tracer: set flag base-config.rendered
tracer: ++   queue handler
reactive/designate_handlers.py:144:run_db_migration
tracer: ++   queue handler
reactive/designate_handlers.py:155:sync_pool_manager_cache


designate-mdns failed to start because the systemd service hit
'start-limit-hit':

Apr 19 16:04:24 juju-9669d8-4-lxd-3 systemd[1]: designate-mdns.service:
Failed with result 'exit-code'.
Apr 19 16:04:24 juju-9669d8-4-lxd-3 systemd[1]: designate-mdns.service:
Service hold-off time over, scheduling restart.
Apr 19 16:04:24 juju-9669d8-4-lxd-3 systemd[1]: designate-mdns.service:
Scheduled restart job, restart counter is at 481.
Apr 19 16:04:24 juju-9669d8-4-lxd-3 systemd[1]: designate-mdns.service:
Failed to reset devices.list: Operation not permitted
Apr 19 16:04:24 juju-9669d8-4-lxd-3 systemd[1]: designate-mdns.service:
Start request repeated too quickly.
Apr 19 16:04:24 juju-9669d8-4-lxd-3 systemd[1]: designate-mdns.service:
Failed with result 'start-limit-hit'.

I think this is something that should be fixed in the package, since we
may need to make adjustments to designate-mdns.service; see
https://www.freedesktop.org/software/systemd/man/systemd.unit.html#StartLimitIntervalSec=interval
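
As an illustration (the values are made up, not a tested
recommendation), a drop-in relaxing the rate limit might look like:

```ini
# Hypothetical drop-in: /etc/systemd/system/designate-mdns.service.d/override.conf
[Unit]
# Allow up to 10 start attempts within 60 seconds before systemd puts
# the unit into the 'start-limit-hit' failed state.
StartLimitIntervalSec=60
StartLimitBurst=10
```

After adding such a drop-in, `systemctl daemon-reload` and `systemctl
reset-failed designate-mdns` would be needed for it to take effect.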



** Also affects: designate (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- [bionic-ussuri] next charm fails to start mdns service
+ [bionic-ussuri] designate-mdns.service Failed with result 'start-limit-hit'

** Summary changed:

- [bionic-ussuri] designate-mdns.service Failed with result 'start-limit-hit'
+ [bionic-ussuri] designate-mdns.service: Failed with result 'start-limit-hit'

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925233

Title:
  [bionic-ussuri] designate-mdns.service: Failed with result 'start-
  limit-hit'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1925233/+subscriptions



[Bug 1959847] Re: Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-02 Thread Felipe Reyes
Syslog from a VM where some DNS queries failed with ETIMEDOUT, e.g.:

Feb  2 15:56:08 juju-6c8a68-zaza-1e093df25cbd-7 systemd-resolved[1352]:
Transaction 55701 for  on scope dns on ens3/*
now complete with  from none (unsigned).


** Attachment added: "syslog"
   
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+attachment/5558790/+files/syslog

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1959847] [NEW] Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

2022-02-02 Thread Felipe Reyes
Public bug reported:

There have been DNS issues while running CI jobs for the OpenStack
charms. When I captured the DNS traffic on one of the nova-compute
units[0][1], it can be seen that certain queries are malformed:
 ~ $ tshark -r dns-port-53.pcap | grep -i malformed | tail -n 2
15403 3602.706021 10.245.160.32 → 10.245.160.114 DNS 126 Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]
15404 3602.706023 10.245.160.32 → 10.245.160.114 DNS 126 Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]
 ~ $ tshark -r dns-port-53.pcap | grep -i malformed | wc -l
408
 ~ $ tshark -r dns-port-53.pcap | wc -l
15728

Another symptom was found within the VM's systemd-resolved: it logs
queries timing out, see https://pastebin.ubuntu.com/p/pJnd9sprpF/ [2]


Packages installed:

# dpkg -l ovn-common
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name   VersionArchitecture Description
+++-==-==--=
ii  ovn-common 20.03.2-0ubuntu0.20.04.2.0 amd64OVN common components

# apt policy ovn-common
ovn-common:
  Installed: 20.03.2-0ubuntu0.20.04.2.0
  Candidate: 21.12.0-0ubuntu1.0~20.04.0
  Version table:
 21.12.0-0ubuntu1.0~20.04.0 500
500 http://ppa.launchpad.net/fnordahl/serverstack/ubuntu focal/main amd64 Packages
 *** 20.03.2-0ubuntu0.20.04.2.0 500
500 http://ppa.launchpad.net/fnordahl/lp1857026/ubuntu focal/main amd64 Packages
100 /var/lib/dpkg/status
 20.03.2-0ubuntu0.20.04.2 500
500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
 20.03.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

[0] tcpdump -i any -ln port 53
[1] https://private-fileshare.canonical.com/~freyes/dns-port-53.pcap
[2] note: this is a different run from the tcpdump capture, so they could be 
different root causes

** Affects: ovn (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  There have been DNS issues while running CI jobs for the openstack
  charms, when I captured the dns traffic in one of the nova-compute
  units[0][1], it can be seen how certain queries are malformed:
  
-  ~ $ tshark -r dns-port-53.pcap  | grep -i malformed | tail -n 2
+  ~ $ tshark -r dns-port-53.pcap  | grep -i malformed | tail -n 2
  15403 3602.706021 10.245.160.32 → 10.245.160.114 DNS 126 Unknown operation 
(12) 0x0240 Unknown (7680)[Malformed Packet]
  15404 3602.706023 10.245.160.32 → 10.245.160.114 DNS 126 Unknown operation 
(12) 0x0240 Unknown (7680)[Malformed Packet]
-  ~ $ tshark -r dns-port-53.pcap  | grep -i malformed | wc -l
+  ~ $ tshark -r dns-port-53.pcap  | grep -i malformed | wc -l
  408
-  ~ $ tshark -r dns-port-53.pcap  | wc -l
+  ~ $ tshark -r dns-port-53.pcap  | wc -l
  15728
  
  Another symptom found is within the VM's systemd-resolved, it logs
  queries timing out https://pastebin.ubuntu.com/p/pJnd9sprpF/ [2]
  
+ 
+ Packages installed:
+ 
+ # dpkg -l ovn-common
+ Desired=Unknown/Install/Remove/Purge/Hold
+ | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
+ |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
+ ||/ Name   VersionArchitecture Description
+ 
+++-==-==--=
+ ii  ovn-common 20.03.2-0ubuntu0.20.04.2.0 amd64OVN common 
components
+ 
+ # apt policy ovn-common
+ ovn-common:
+   Installed: 20.03.2-0ubuntu0.20.04.2.0
+   Candidate: 21.12.0-0ubuntu1.0~20.04.0
+   Version table:
+  21.12.0-0ubuntu1.0~20.04.0 500
+ 500 http://ppa.launchpad.net/fnordahl/serverstack/ubuntu focal/main 
amd64 Packages
+  *** 20.03.2-0ubuntu0.20.04.2.0 500
+ 500 http://ppa.launchpad.net/fnordahl/lp1857026/ubuntu focal/main 
amd64 Packages
+ 100 /var/lib/dpkg/status
+  20.03.2-0ubuntu0.20.04.2 500
+ 500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
+  20.03.0-0ubuntu1 500
+ 500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
+ 
  [0] tcpdump -i any -ln port 53
  [1] https://private-fileshare.canonical.com/~freyes/dns-port-53.pcap
  [2] note: this is a different run from the tcpdump capture, so they could be 
different root causes

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959847

Title:
  Unknown operation (12) 0x0240 Unknown (7680)[Malformed Packet]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1959847/+subscriptions



[Bug 1951841] Re: [SRU] ovn metadata agent randomly timing out

2022-01-31 Thread Felipe Reyes
The fix for this bug will be available in neutron-19.1.0 which at the
moment is available in the -proposed pockets for Impish and Xena, more
details on the progress of the point release can be found at
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1956991

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  [SRU] ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1951841/+subscriptions



[Bug 1951841] Re: [SRU] ovn metadata agent randomly timing out

2022-01-06 Thread Felipe Reyes
Marking jammy as fix committed since package
neutron_19.0.0+git2022010514.7aba1bddab-0ubuntu1[0] contains the fix
that was merged upstream[1].

[0] 
https://launchpad.net/ubuntu/+source/neutron/2:19.0.0+git2022010514.7aba1bddab-0ubuntu1
[1] 
https://opendev.org/openstack/neutron/commit/79037c951637dc06d47b6d354776d116a1d2a9ad

** Changed in: neutron (Ubuntu)
   Status: New => Fix Committed

** Changed in: cloud-archive
   Status: New => Fix Committed

** Changed in: cloud-archive/xena
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  [SRU] ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+subscriptions



[Bug 1951841] Re: [SRU] ovn metadata agent randomly timing out

2022-01-06 Thread Felipe Reyes
** Patch added: "lp1951841_impish.debdiff"
   
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+attachment/5552146/+files/lp1951841_impish.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  [SRU] ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+subscriptions



[Bug 1951841] Re: [SRU] ovn metadata agent randomly timing out

2022-01-05 Thread Felipe Reyes
** Description changed:

+ [Impact]
+ 
+ When the ovn-controller cluster elects a new leader, clients are
+ expected to reconnect to the new instance. On Xena the reconnect
+ attempt also calls register_metadata_agent()[0], and this method
+ enforces that the OVS system-id is formatted as a UUID, which is not
+ the case for Charmed OpenStack deployed with OVN. As a result the
+ neutron-ovn-metadata-agent daemon stays running but disconnected, and
+ newly launched VMs won't have access to the metadata service.
+ 
+ [0]
+ 
https://github.com/openstack/neutron/blob/stable/xena/neutron/agent/ovn/metadata/agent.py#L157
+ 
+ 
+ [Test Plan]
+ 
+ 1. Deploy an OpenStack cloud using OVN
+ 
+ ```
+ git clone https://git.launchpad.net/stsstack-bundles
+ cd stsstack-bundles/openstack
+ ```
+ 
+ Focal Xena:
+ ./generate-bundle.sh --series focal --release xena --ovn --name focal-xena --run
+ 
+ Impish:
+ ./generate-bundle.sh --series impish --ovn --name impish --run
+ 
+ 2. Configure the cloud creating networks, subnets, etc.
+ 
+ ```
+ source ~/novarc
+ ./configure
+ ```
+ 
+ 3. Launch an instance
+ 
+ ```
+ source ./novarc
+ ./tools/instance_launch 1 focal
+ ```
+ 
+ 4. Check the net namespace was correctly provisioned
+ 
+ ```
+ juju ssh nova-compute/0 sudo ip netns
+ ```
+ 
+ Example output:
+ 
+ $ juju ssh nova-compute/0 sudo ip netns | grep ovnmeta
+ ovnmeta-0211506b-233e-4773-a034-3950dfefe23d (id: 0)
+ 
+ 5. Delete the instance: `openstack server delete focal-150930`
+ 
+ 6. Check the netns was removed.
+ 
+ $ juju ssh nova-compute/0 sudo ip netns | grep ovnmeta
+ Connection to 10.5.2.148 closed.
+ 
+ 7. Restart ovn controller leader unit to force a new leader.
+ 
+ juju ssh $(juju status ovn-central | grep leader | tail -n 1 | awk
+ '{print $1}' | tr -d '*') sudo reboot
+ 
+ 8. Wait a few minutes and then launch a new instance
+ ```
+ source ./novarc
+ ./tools/instance_launch 1 focal
+ ```
+ 
+ 9. Wait a few minutes (~5m) and check cloud-init's output and the
+ ovnmeta netns
+ 
+ ```
+ openstack console log show 
+ juju ssh nova-compute/0 sudo ip netns | grep ovnmeta
+ ```
+ 
+ Expected result:
+ * The launched instance is able to read its configuration from the
+ metadata service without timing out.
+ * The ovnmeta- namespace gets created.
+ 
+ Actual result:
+ 
+ * The launched instance can't be accessed via SSH, because cloud-init
+ timed out trying to access the metadata service.
+ * The ovnmeta- namespace is missing from the nova-compute unit.
+ 
+ 
+ [Where problems could occur]
+ 
+ * This patch changes the way the UUID used to identify the neutron-ovn-
+ metadata-agent service is generated, hence issues would manifest as the
+ daemon not starting (check `systemctl status neutron-ovn-metadata-
+ agent`) or starting but not being able to connect and provision the
+ datapath needed when launching new instances in the faulty compute unit
+ and those instances would have cloud-init timing out.
+ 
+ [Other Info]
+ 
+ 
+ [Original Description]
+ 
  When creating VMs, they will randomly not get access to metadata
  service.
  
  Openstack focal/Xena, with stock OVN 21.09.0-0ubuntu1~cloud0.
  
  For testing, I created 32 instances (at once), and 19 have access to
  metadata service and the other 13 do not. The proportion will vary
  depending on the iteration and tend to be about 50%.
  
  Because of that, I cannot enter those machines via SSH (I can see in the
  console logs they are not able to get anything from the agent). If I
  create all of them using "ConfigDrive" option then all of them get SSH
  keys. When entering them and trying to 'curl' the metadata ip address, I
  get the correct response on some and timeout on others.
  
  I don't see any correlation between the failures and specific compute
  hosts.
  
  I don't see any suspecting messages in {nova,ovn,neutron,openvswitch}
  logs for the hypervisor that have a problematic vm or for the dedicated
  gateway.
  
  Note: this cloud has 2 extra nodes running ovn-dedicated-chassis and
  those two are the only nodes that have a way out to provider-networks.
  Network tests, except for the metadata problem, seem to be ok, including
  routers and security groups.
  
  This has been very consistent between batches of vm deploys and even
  across redeploys of the cloud.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  [SRU] ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+subscriptions



[Bug 1951841] Re: ovn metadata agent randomly timing out

2022-01-04 Thread Felipe Reyes
** Changed in: neutron (Ubuntu Impish)
 Assignee: (unassigned) => Felipe Reyes (freyes)

** Changed in: neutron (Ubuntu Impish)
   Status: New => Triaged

** Summary changed:

- ovn metadata agent randomly timing out
+ [SRU] ovn metadata agent randomly timing out

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  [SRU] ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+subscriptions



[Bug 1951841] Re: ovn metadata agent randomly timing out

2022-01-04 Thread Felipe Reyes
- Marking charm-ovn-chassis as invalid since the issue is in neutron itself and 
not the charm or how it drives the workload.
- Adding tasks for Ubuntu/Impish and cloud-archive/xena.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+subscriptions



[Bug 1951841] Re: ovn metadata agent randomly timing out

2022-01-04 Thread Felipe Reyes
This change[0] was merged into stable/xena. It relaxes the check that
the OVS system-id must be a UUID-formatted string: when the system-id
can't be parsed as a UUID, neutron now (re)generates one with
uuid.uuid5(), using a hardcoded UUID as the namespace and the chassis
name string as the input.

[0]
https://opendev.org/openstack/neutron/commit/6da4432fed255f3bcf3831f5d0520ab389ce36e5
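The uuid.uuid5() fallback described above can be sketched as follows. This is an illustrative sketch, not neutron's actual code: the namespace constant and the helper name here are made up, and neutron's real hardcoded namespace UUID differs.

```python
import uuid

# Illustrative namespace UUID; neutron uses its own hardcoded value.
OVN_AGENT_NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000001")


def agent_chassis_uuid(system_id: str) -> uuid.UUID:
    """Return the OVS system-id as a UUID, deriving one deterministically
    when the system-id is not UUID-formatted (e.g. a charm-set hostname)."""
    try:
        return uuid.UUID(system_id)
    except ValueError:
        # uuid5 is deterministic: the same namespace + chassis name always
        # yields the same UUID, so the agent keeps a stable identity.
        return uuid.uuid5(OVN_AGENT_NAMESPACE, system_id)
```

Because the derivation is deterministic, a reconnect after a leader election registers the metadata agent under the same UUID instead of failing the format check.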

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Changed in: charm-ovn-chassis
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951841

Title:
  ovn metadata agent randomly timing out

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1951841/+subscriptions



[Bug 1949120] [NEW] Missing PPMTUD for ICMP and UDP when there are dedicated gateways

2021-10-28 Thread Felipe Reyes
Public bug reported:

According to this bug[0] and these patches[1][2][3], support for
emitting an ICMP "need to fragment" packet was added in ovn-21.09,
while the Wallaby UCA carries ovn-20.12.

This limitation becomes a problem when the overlay network is
configured to use jumbo frames and the external network uses an MTU of
1500.

[Environment]

Focal Wallaby with a dedicated ovn-chassis application to act as gateway

  ovn-chassis-gw:
bindings:
  ? ''
  : oam-space
  data: overlay-space
charm: cs:ovn-dedicated-chassis
options:
  source: cloud:focal-wallaby/proposed
  bridge-interface-mappings: br-data:bond0.3811
  ovn-bridge-mappings: physnet1:br-data
  prefer-chassis-as-gw: true
num_units: 2
to:
- 1001
- 1002

[0] https://bugzilla.redhat.com/show_bug.cgi?id=1547074#c5
[1] 
https://github.com/ovn-org/ovn/commit/2c2f1802dcfc6f7d3e3a25a24e0b8f4c7c7f39d8
[2] 
https://github.com/ovn-org/ovn/commit/1c9e46ab5c05043a8cd6c47b5fec2e1ac4c962db
[3] 
https://github.com/ovn-org/ovn/commit/947e8d450ebaa8ce4ab81cb480a419618f1508c7

** Affects: ovn (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1949120

Title:
  Missing PPMTUD for ICMP and UDP when there are dedicated gateways

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1949120/+subscriptions



[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal

2021-10-05 Thread Felipe Reyes
** Changed in: charm-neutron-gateway
Milestone: None => 21.10

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944424

Title:
  AppArmor causing HA routers to be in backup state on wallaby-focal

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-gateway/+bug/1944424/+subscriptions



[Bug 1943765] Re: ipmitool "timing" flags are not working as expected causing failure to manage power of baremetal nodes

2021-09-16 Thread Felipe Reyes
** Also affects: ironic (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1943765

Title:
  ipmitool "timing" flags are not working as expected causing failure to
  manage power of baremetal nodes

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ironic-conductor/+bug/1943765/+subscriptions



[Bug 1874719] Re: Focal/Groovy deploy creates a 'node1' node

2021-06-09 Thread Felipe Reyes
** Tags added: seg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1874719

Title:
  Focal/Groovy deploy creates a 'node1' node

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1874719/+subscriptions


[Bug 1927219] Re: context deadline exceeded: unknown in containerd with latest runc version

2021-05-28 Thread Felipe Reyes
** Tags added: seg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1927219

Title:
  context deadline exceeded: unknown in containerd with latest runc
  version

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/runc/+bug/1927219/+subscriptions


[Bug 1783184] Re: neutron-ovs-cleanup can have unintended side effects

2021-05-20 Thread Felipe Reyes
** Tags added: seg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1783184

Title:
  neutron-ovs-cleanup can have unintended side effects

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1783184/+subscriptions


[Bug 1880495] Re: Upgrade from 1.95 to 2.35 failure because column maasserver_domain.ttl does not exist

2021-04-20 Thread Felipe Reyes
** Changed in: maas
 Assignee: Felipe Reyes (freyes) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1880495

Title:
  Upgrade from 1.95 to 2.35 failure because column maasserver_domain.ttl
  does not exist

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1880495/+subscriptions


Re: [Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-20 Thread Felipe Reyes
Hi,

This is a workaround you can use temporarily:

$ wget -O- http://ddebs.ubuntu.com/dbgsym-release-key.asc | sudo apt-key add -
$ sudo apt update

The key's expiry was extended temporarily.

Best,

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1920640

Title:
  EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic
  Signing Key (2016) 

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-keyring/+bug/1920640/+subscriptions


[Bug 1920640] [NEW] EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-20 Thread Felipe Reyes
Public bug reported:

The public key used by the debugging symbols repository
/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg from the package ubuntu-
dbgsym-keyring expired.

$ apt policy ubuntu-dbgsym-keyring
ubuntu-dbgsym-keyring:
  Installed: 2020.02.11.2
  Candidate: 2020.02.11.2
  Version table:
 *** 2020.02.11.2 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu focal/main i386 Packages
100 /var/lib/dpkg/status
$ gpg --no-default-keyring --keyring 
/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg --list-keys
/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg
-
pub   rsa4096 2016-03-21 [SC] [expired: 2021-03-20]
  F2EDC64DC5AEE1F6B9C621F0C8CAB6595FDFF622
uid   [ expired] Ubuntu Debug Symbol Archive Automatic Signing Key 
(2016) 


Error message on "apt update":

E: The repository 'http://ddebs.ubuntu.com bionic-updates Release' is not 
signed.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
W: GPG error: http://ddebs.ubuntu.com bionic Release: The following signatures 
were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic 
Signing Key (2016) 
E: The repository 'http://ddebs.ubuntu.com bionic Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
W: GPG error: http://ddebs.ubuntu.com bionic-proposed Release: The following 
signatures were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive 
Automatic Signing Key (2016) 
E: The repository 'http://ddebs.ubuntu.com bionic-proposed Release' is not 
signed.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.

** Affects: ubuntu-keyring (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts

** Summary changed:

- W: GPG error: http://ddebs.ubuntu.com bionic Release: The following 
signatures were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive 
Automatic Signing Key (2016) 
+ EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key 
(2016) 

** Description changed:

  The public key used by the debugging symbols repository
  /usr/share/keyrings/ubuntu-dbgsym-keyring.gpg from the package ubuntu-
  dbgsym-keyring expired.
  
  $ apt policy ubuntu-dbgsym-keyring
  ubuntu-dbgsym-keyring:
-   Installed: 2020.02.11.2
-   Candidate: 2020.02.11.2
-   Version table:
-  *** 2020.02.11.2 500
- 500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
- 500 http://archive.ubuntu.com/ubuntu focal/main i386 Packages
- 100 /var/lib/dpkg/status
- $ gpg --no-default-keyring --keyring 
/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg --list-keys 
+   Installed: 2020.02.11.2
+   Candidate: 2020.02.11.2
+   Version table:
+  *** 2020.02.11.2 500
+ 500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
+ 500 http://archive.ubuntu.com/ubuntu focal/main i386 Packages
+ 100 /var/lib/dpkg/status
+ $ gpg --no-default-keyring --keyring 
/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg --list-keys
  /usr/share/keyrings/ubuntu-dbgsym-keyring.gpg
  -
  pub   rsa4096 2016-03-21 [SC] [expired: 2021-03-20]
-   F2EDC64DC5AEE1F6B9C621F0C8CAB6595FDFF622
+   F2EDC64DC5AEE1F6B9C621F0C8CAB6595FDFF622
  uid   [ expired] Ubuntu Debug Symbol Archive Automatic Signing Key 
(2016) 
+ 
+ 
+ Error message on "apt update":
+ 
+ E: The repository 'http://ddebs.ubuntu.com bionic-updates Release' is not 
signed.
+ N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
+ N: See apt-secure(8) manpage for repository creation and user configuration 
details.
+ W: GPG error: http://ddebs.ubuntu.com bionic Release: The following 
signatures were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive 
Automatic Signing Key (2016) 
+ E: The repository 'http://ddebs.ubuntu.com bionic Release' is not signed.
+ N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
+ N: See apt-secure(8) manpage for repository creation and user configuration 
details.
+ W: GPG error: http://ddebs.ubuntu.com bionic-proposed Release: The following 
signatures were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive 
Automatic Signing Key (2016) 
+ E: The repository 'http://ddebs.ubuntu.com bionic-proposed Release' is not 
signed.
+ N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
+ N: See apt-secure(8) manpage for repository creation and user 

[Bug 1913583] Re: [plugin][k8s] Canonical Distribution of Kubernetes fixes

2021-02-21 Thread Felipe Reyes
To test this I deployed a Focal-based CDK environment, then launched a
machine running Groovy, scp'ed /root from kubernetes-master/0 to that
new machine, and executed sosreport; the verification completed
correctly. Here is the evidence.

Before the patch:
root@juju-321ff4-k8s-11:~# cat 
sosreport-juju-321ff4-k8s-11-2021-02-21-ogpqrgd/sos_commands/kubernetes/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_get_namespaces.1
Error from server (Forbidden): namespaces is forbidden: User 
"system:kube-proxy" cannot list resource "namespaces" in API group "" at the 
cluster scope

versus

After patch:
root@juju-321ff4-k8s-11:~# cat 
sosreport-juju-321ff4-k8s-11-2021-02-21-lmlgkbh/sos_commands/kubernetes/kubectl_--kubeconfig_.root.cdk.cdk_addons_kubectl_config_get_namespaces.1
 
NAME  STATUS   AGE
default   Active   46m
ingress-nginx-kubernetes-worker   Active   43m
kube-node-lease   Active   46m
kube-public   Active   46m
kube-system   Active   46m
kubernetes-dashboard  Active   46m


$ juju add-machine --series groovy 
created machine 11
$ juju ssh kubernetes-master/0 sudo -i
root@juju-321ff4-k8s-4:~# tar czf /tmp/root.tgz /root
tar: Removing leading `/' from member names
tar: /root/cdk/audit/audit.log: file changed as we read it
root@juju-321ff4-k8s-4:~# logout
Connection to 10.7.1.146 closed.
$ juju scp kubernetes-master/0:/tmp/root.tgz ./
$ juju scp root.tgz 11:
$ juju ssh 11
Welcome to Ubuntu 20.10 (GNU/Linux 5.8.0-43-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support:https://ubuntu.com/advantage

  System information as of Sun Feb 21 19:02:28 UTC 2021

  System load:  0.04  Processes: 98
  Usage of /:   8.6% of 19.21GB   Users logged in:   0
  Memory usage: 12%   IPv4 address for ens3: 10.7.1.51
  Swap usage:   0%


0 updates can be installed immediately.
0 of these updates are security updates.


*** System restart required ***
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@juju-321ff4-k8s-11:~$ ls
root.tgz
ubuntu@juju-321ff4-k8s-11:~$ sudo tar xzf root.tgz -C /
ubuntu@juju-321ff4-k8s-11:~$ sudo snap install kubectl
error: This revision of snap "kubectl" was published using classic confinement 
and thus may perform
   arbitrary system changes outside of the security sandbox that snaps are 
usually confined to,
   which may put your system at risk.

   If you understand and want to proceed repeat the command including 
--classic.
ubuntu@juju-321ff4-k8s-11:~$ sudo snap install kubectl --classic
kubectl 1.20.4 from Canonical✓ installed
ubuntu@juju-321ff4-k8s-11:~$ sudo -i
root@juju-321ff4-k8s-11:~# ls cdk/
audit   ca.crt client.key  known_tokens.csv 
kubeproxyconfig   rbac-proxy.yaml  serviceaccount.key
auth-webhookcdk_addons_kubectl_config  etcd
kube-scheduler-config.yaml   kubeschedulerconfig   server.crt   
system-monitoring-rbac-role.yaml
basic_auth.csv  client.crt keystone
kubecontrollermanagerconfig  pod-security-policy.yaml  server.key
root@juju-321ff4-k8s-11:~# kubectl get pods -A
NAMESPACE NAME  
READY   STATUSRESTARTS   AGE
ingress-nginx-kubernetes-worker   
default-http-backend-kubernetes-worker-6494cbc7fd-jr7g4   1/1 Running   0   
   34m
ingress-nginx-kubernetes-worker   
nginx-ingress-controller-kubernetes-worker-jbvgh  1/1 Running   0   
   33m
ingress-nginx-kubernetes-worker   
nginx-ingress-controller-kubernetes-worker-kj8x5  1/1 Running   0   
   34m
kube-system   coredns-7bb4d77796-q6sck  
1/1 Running   0  36m
kube-system   csi-cinder-controllerplugin-0 
5/5 Running   0  36m
kube-system   csi-cinder-nodeplugin-8bdl4   
2/2 Running   0  33m
kube-system   csi-cinder-nodeplugin-n825s   
2/2 Running   0  34m
kube-system   k8s-keystone-auth-5976c99b8b-2zx25
1/1 Running   0  36m
kube-system   k8s-keystone-auth-5976c99b8b-pr9w6
1/1 Running   0  36m
kube-system   kube-state-metrics-6f586bb967-f5jt7   
1/1 Running   0  36m
kube-system   metrics-server-v0.3.6-f6cf867b4-87dxm 
2/2 Running   0  31m
kube-system   openstack-cloud-controller-manager-rcsx8  
1/1 Running   0  34m
kube-system   

[Bug 1913583] Re: [plugin][k8s] Canonical Distribution of Kubernetes fixes

2021-02-19 Thread Felipe Reyes
Flannel doesn't like to be installed on Groovy; I will need to give
this verification some extra cycles.

unit-flannel-2: 15:59:56 INFO unit.flannel/2.juju-log Invoking reactive 
handler: reactive/flannel.py:228:set_flannel_version
unit-flannel-2: 15:59:56 ERROR unit.flannel/2.juju-log Hook error:
Traceback (most recent call last):
  File 
"/var/lib/juju/agents/unit-flannel-2/.venv/lib/python3.8/site-packages/charms/reactive/__init__.py",
 line 74, in main
bus.dispatch(restricted=restricted_mode)
  File 
"/var/lib/juju/agents/unit-flannel-2/.venv/lib/python3.8/site-packages/charms/reactive/bus.py",
 line 390, in dispatch
_invoke(other_handlers)
  File 
"/var/lib/juju/agents/unit-flannel-2/.venv/lib/python3.8/site-packages/charms/reactive/bus.py",
 line 359, in _invoke
handler.invoke()
  File 
"/var/lib/juju/agents/unit-flannel-2/.venv/lib/python3.8/site-packages/charms/reactive/bus.py",
 line 181, in invoke
self._action(*args)
  File "/var/lib/juju/agents/unit-flannel-2/charm/reactive/flannel.py", line 
233, in set_flannel_version
version = check_output(split(cmd), stderr=STDOUT).decode('utf-8')
  File "/usr/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['flanneld', '-version']' returned 
non-zero exit status 2.
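The traceback shows check_output() raising CalledProcessError when `flanneld -version` exits non-zero, which aborts the whole reactive hook. A more defensive version probe could capture and log the failure instead; this is a sketch of that idea, not the flannel charm's actual code, and `probe_version` is a hypothetical helper name:

```python
import logging
from shlex import split
from subprocess import CalledProcessError, check_output, STDOUT
from typing import Optional


def probe_version(cmd: str = "flanneld -version") -> Optional[str]:
    """Return the tool's version string, or None when the binary fails."""
    try:
        return check_output(split(cmd), stderr=STDOUT).decode("utf-8").strip()
    except (CalledProcessError, FileNotFoundError) as exc:
        # Log the failure (including any captured output) rather than
        # letting the exception propagate and fail the hook.
        output = getattr(exc, "output", b"") or b""
        logging.warning("%r failed: %s %s", cmd, exc,
                        output.decode("utf-8", "replace"))
        return None
```

With this shape, a broken or missing binary yields None (which the caller can treat as "version unknown") rather than a hook error like the one above.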

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1913583

Title:
  [plugin][k8s] Canonical Distribution of Kubernetes fixes

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913583/+subscriptions


[Bug 1913583] Re: [plugin][k8s] Canonical Distribution of Kubernetes fixes

2021-02-19 Thread Felipe Reyes
I tested the package available in focal-proposed, and it correctly
captured information that it couldn't before the patch. For example:

Before the patch:
root@juju-867473-k8s-4:~# cat 
sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_get_namespaces.1
Error from server (Forbidden): namespaces is forbidden: User 
"system:kube-proxy" cannot list resource "namespaces" in API group "" at the 
cluster scope

versus

After the patch:
root@juju-867473-k8s-4:~# cat 
sosreport-juju-867473-k8s-4-2021-02-19-snotach/sos_commands/kubernetes/kubectl_--kubeconfig_.root.cdk.cdk_addons_kubectl_config_get_namespaces.1
NAME  STATUS   AGE
default   Active   26h
ingress-nginx-kubernetes-worker   Active   26h
kube-node-lease   Active   26h
kube-public   Active   26h
kube-system   Active   26h
kubernetes-dashboard  Active   26h


Evidence:

root@juju-867473-k8s-4:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 20.04.2 LTS
Release:20.04
Codename:   focal
root@juju-867473-k8s-4:~# apt policy sosreport 
sosreport:
  Installed: 4.0-1~ubuntu0.20.04.3
  Candidate: 4.0-1~ubuntu0.20.04.3
  Version table:
 *** 4.0-1~ubuntu0.20.04.3 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
100 /var/lib/dpkg/status
 3.9-1ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages

root@juju-867473-k8s-4:~# sosreport -o kubernetes
Please note the 'sosreport' command has been deprecated in favor of the new 
'sos' command, E.G. 'sos report'.
Redirecting to 'sos report -o kubernetes'

sosreport (version 4.0)

This command will collect system configuration and diagnostic
information from this Ubuntu system.

For more information on Canonical visit:

  https://www.ubuntu.com/

The generated archive may contain data considered sensitive and its
content should be reviewed by the originating organization before being
passed to any third party.

No changes will be made to system configuration.


Press ENTER to continue, or CTRL-C to quit.

Please enter the case id that you are generating this report for []:

 Setting up archive ...
 Setting up plugins ...
 Running plugins. Please wait ...

  Starting 1/1   kubernetes  [Running: kubernetes]  
  
  Finished running plugins  
 
Creating compressed archive...

Your sosreport has been generated and saved in:
/tmp/sosreport-juju-867473-k8s-4-2021-02-19-uskifug.tar.xz

 Size   2.75MiB
 Owner  root
 md5402d7a949075fe9a06aca191413d7406

Please send this file to your support representative.

root@juju-867473-k8s-4:~# tar xJf 
/tmp/sosreport-juju-867473-k8s-4-2021-02-19-uskifug.tar.xz
root@juju-867473-k8s-4:~# find sosreport-*/ -type d -name kubernetes -exec grep 
-H -i forbidden {} \;
grep: sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes: 
Is a directory
root@juju-867473-k8s-4:~# find sosreport-*/ -type d -name kubernetes -exec grep 
-r -H -i forbidden {} \;
sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes/limitranges/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_get_--all-namespaces_true_limitranges:Error
 from server (Forbidden): limitranges is forbidden: User "system:kube-proxy" 
cannot list resource "limitranges" in API group "" at the cluster scope
sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes/nodes/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_describe_node_juju-867473-k8s-6:Lease:
  Failed to get lease: leases.coordination.k8s.io 
"juju-867473-k8s-6" is forbidden: User "system:kube-proxy" cannot get resource 
"leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes/nodes/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_describe_node_juju-867473-k8s-5:Lease:
  Failed to get lease: leases.coordination.k8s.io 
"juju-867473-k8s-5" is forbidden: User "system:kube-proxy" cannot get resource 
"leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes/ingresses/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_get_--all-namespaces_true_ingresses:Error
 from server (Forbidden): ingresses.networking.k8s.io is forbidden: User 
"system:kube-proxy" cannot list resource "ingresses" in API group 
"networking.k8s.io" at the cluster scope
sosreport-juju-867473-k8s-4-2021-02-19-uskifug/sos_commands/kubernetes/pvc/kubectl_--kubeconfig_.root.cdk.kubeproxyconfig_get_--all-namespaces_true_pvc:Error
 from server (Forbidden): persistentvolumeclaims is forbidden: User 
"system:kube-proxy" cannot list resource 

[Bug 1913583] Re: [plugin][k8s] Canonical Distribution of Kubernetes fixes

2021-02-09 Thread Felipe Reyes
On Sat, 2021-02-06 at 16:01 +, Eric Desrochers wrote:
> @freyes,
> 
> Can you please fill the SRU template ? And I'll proceed with the
> sponsoring along with all the other patches waiting for SRU.

done ;-)


** Description changed:

+ [Impact]
+ 
+ Running sosreport in a CDK-deployed environment won't collect as much
+ information as the plugin could, because the kubectl calls use the wrong
+ paths for the kubeconfig files. This prevents sosreports from capturing
+ a more detailed view of the cluster state, which leads to a back and
+ forth of running extra commands to collect the rest of the data.
+ 
+ [Test Case]
+ 
+ * Deploy CDK: juju deploy charmed-kubernetes  # https://ubuntu.com/kubernetes/docs/quickstart
+ * ssh into the kubernetes-master/0
+ * Run sosreport
+ 
+ Expected result:
+ 
+ The sosreport contains a 'kubernetes' directory where all the commands
+ executed successfully.
+ 
+ Actual result:
+ 
+ The sosreport contains a 'kubernetes' directory where some of the
+ commands contain "Forbidden" errors.
+ 
+ find sosreport-*/ -type d -name kubernetes -exec grep -r -H -i forbidden
+ {} \;
+ 
+ 
+ [Where problems could occur]
+ 
+ Any issues with this SRU should show themselves as failures in the
+ execution of the kubernetes plugin, which can be verified in the
+ sos.log file.
+ 
+ [Other Info]
+ 
  Upstream:
  https://github.com/sosreport/sos/pull/2387
  https://github.com/sosreport/sos/pull/2387/commits

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1913583

Title:
  [plugin][k8s] Canonical Distribution of Kubernetes fixes

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913583/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906266] Re: After upgrade: "libvirt.libvirtError: Requested operation is not valid: format of backing image %s of image %s was not specified"

2021-01-04 Thread Felipe Reyes
Adding a task for libvirt at Ubuntu/focal since this patch[0] might need
to be backported.

[0]
https://github.com/libvirt/libvirt/commit/ae9e6c2a2b75d958995c661f7bb64ed4353a6404

** Also affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: libvirt (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: libvirt (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Changed in: libvirt (Ubuntu Groovy)
   Status: New => Invalid


[Bug 1903745] Re: upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters

2020-11-11 Thread Felipe Reyes
Just some history: in the past we attempted to disable unattended-upgrades
(as a config option) in the hacluster charm, but it was decided that it
wasn't the right place to address this.

Bug https://bugs.launchpad.net/charm-hacluster/+bug/1826898


** Tags added: sts


[Bug 1899455] Re: [SRU] openvswitch 2.12.1

2020-10-14 Thread Felipe Reyes
** Tags added: sts


[Bug 1890846] [NEW] add output of: rabbitmqctl eval 'rabbit_diagnostics:maybe_stuck().'

2020-08-07 Thread Felipe Reyes
Public bug reported:

Extend the rabbitmq plugin to include the output of:

sudo rabbitmqctl eval 'rabbit_diagnostics:maybe_stuck().'

This information is useful to get insights of the state of the erlang
virtual machine.

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: seg

** Tags added: seg


[Bug 1873368] Re: ssshuttle server fails to connect endpoints with python 3.8

2020-07-14 Thread Felipe Reyes
Attaching debdiff for focal that contains a backport of commit
https://github.com/sshuttle/sshuttle/commit/9c873579348e3308123d1bf2a917b0c2f82b9dae

** Patch added: "lp1873368_focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/sshuttle/+bug/1873368/+attachment/5392645/+files/lp1873368_focal.debdiff


[Bug 1873368] Re: ssshuttle server fails to connect endpoints with python 3.8

2020-07-02 Thread Felipe Reyes
I created a ppa that contains the backported patch in case anyone wants
to test it - https://launchpad.net/~freyes/+archive/ubuntu/lp1873368


[Bug 1880495] Re: Upgrade from 1.95 to 2.35 failure because column maasserver_domain.ttl does not exist

2020-05-26 Thread Felipe Reyes
** Changed in: maas
 Assignee: (unassigned) => Felipe Reyes (freyes)


[Bug 1880495] Re: Upgrade from 1.95 to 2.35 failure because column maasserver_domain.ttl does not exist

2020-05-25 Thread Felipe Reyes
I could reproduce this issue locally. What I believe is happening in this
environment (and not in other upgrades) is that the database has no
domains defined for the related interfaces, so this piece of code [0]
falls into the "else" branch, which calls the get_default_domain() method
and internally ends up querying the 'ttl' column. Migration step 0011
thus relies on that column and expects it to have been created in step
0010, but it wasn't; it was only added much later, in migration step
0023.

[0]
https://github.com/maas/maas/blob/2.3/src/maasserver/migrations/builtin/maasserver/0011_domain_data.py#L106-L109
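
The ordering bug described above can be sketched as follows. This is a
minimal, hypothetical illustration using SQLite (MAAS actually uses
PostgreSQL and Django migrations, and the real code paths are in the
linked migration files); the table and column names mirror the bug report.

```python
import sqlite3

# Sketch of the migration-ordering bug: step 0010 creates the table
# without the 'ttl' column, step 0011 queries 'ttl', but 'ttl' is only
# added in step 0023 -- too late for 0011.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

def migration_0010(cur):
    # Creates the domain table, but without a 'ttl' column.
    cur.execute(
        "CREATE TABLE maasserver_domain (id INTEGER PRIMARY KEY, name TEXT)")

def migration_0011(cur):
    # get_default_domain()-style code path: selects 'ttl', which does
    # not exist yet at this point in the migration sequence.
    cur.execute("SELECT id, name, ttl FROM maasserver_domain")

migration_0010(cur)
try:
    migration_0011(cur)
except sqlite3.OperationalError as exc:
    # Fails with "no such column: ttl", analogous to the PostgreSQL
    # "column maasserver_domain.ttl does not exist" error in the report.
    print("migration 0011 failed:", exc)
```

Environments that already had a domain row never hit the
get_default_domain() fallback, which is why most upgrades succeed.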

** Also affects: maas (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: maas (Ubuntu Xenial)
   Importance: Undecided
   Status: New


[Bug 1802407] Re: ssl_ca not supported

2020-03-10 Thread Felipe Reyes
** Tags added: seg


[Bug 1802407] Re: ssl_ca not supported

2020-03-10 Thread Felipe Reyes
** Also affects: simplestreams (Ubuntu Bionic)
   Importance: Undecided
   Status: New


[Bug 1862836] Re: neutron-fwaas Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_strict_mode = ENFO

2020-02-12 Thread Felipe Reyes
** No longer affects: neutron


[Bug 1862836] Re: neutron-fwaas Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_strict_mode = ENFO

2020-02-11 Thread Felipe Reyes
This error is similar to the one reported in Octavia
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1826875 , but in
the case of neutron the table firewall_group_port_associations_v2 has a
primary key defined:

firewall_group_port_associations_v2 | CREATE TABLE `firewall_group_port_associations_v2` (
  `firewall_group_id` varchar(36) NOT NULL,
  `port_id` varchar(36) NOT NULL,
  PRIMARY KEY (`firewall_group_id`,`port_id`),
  UNIQUE KEY `uniq_firewallgroupportassociation0port_id` (`port_id`),
  CONSTRAINT `firewall_group_port_associations_v2_ibfk_1` FOREIGN KEY (`firewall_group_id`) REFERENCES `firewall_groups_v2` (`id`) ON DELETE CASCADE,
  CONSTRAINT `firewall_group_port_associations_v2_ibfk_2` FOREIGN KEY (`port_id`) REFERENCES `ports` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8

** Tags added: sts

** Description changed:

- When trying to delete a heat stack because neutron-server couldn't
- update firewall groups, the stack trace found in the logs is:
+ Deleting a heat stack in Stein fails because neutron-server couldn't
+ update firewall groups; the stack trace found in the logs is:
  
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 197, in 
_notify_loop
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/fwaas_plugin_v2.py",
 line 307, in handle_update_port
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
{'firewall_group': {'ports': port_ids}})
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/oslo_log/helpers.py", line 67, in wrapper
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
return method(*args, **kwargs)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/fwaas_plugin_v2.py",
 line 369, in update_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
return self.driver.update_firewall_group(context, id, firewall_group)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/service_drivers/driver_api.py",
 line 211, in update_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
context, id, firewall_group_delta)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_fwaas/db/firewall/v2/firewall_db_v2.py",
 line 981, in update_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
self._delete_ports_in_firewall_group(context, id)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_fwaas/db/firewall/v2/firewall_db_v2.py",
 line 832, in _delete_ports_in_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
firewall_group_id=firewall_group_id).delete()
  [...]
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager raise 
errorclass(errno, errval)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager 
oslo_db.exception.DBError: (pymysql.err.InternalError) (1105, 
'Percona-XtraDB-Cluster prohibits use of DML command on a table 
(neutron.firewall_group_port_associations_v2) without an explicit primary key 
with pxc_strict_mode = ENFORCING or MASTER') [SQL: 'DELETE FROM 
firewall_group_port_associations_v2 WHERE 
firewall_group_port_associations_v2.firewall_group_id = 
%(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1': 
'8da85bcb-1e1d-4d5a-b508-25c1d4c85d50'}] (Background on this error at: 
http://sqlalche.me/e/2j85)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: neutron (Ubuntu)

** Also affects: neutron-fwaas (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd

2020-02-05 Thread Felipe Reyes
I think the charm is the one responsible for disabling chrony; something
like this would do the trick (note that subprocess.check_call() needs the
command as a list, and is_container() comes from charmhelpers.core.host):

from charmhelpers.core.host import is_container
import subprocess

if is_container():
    subprocess.check_call(["sudo", "timedatectl", "set-ntp", "off"])


[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd

2020-02-05 Thread Felipe Reyes
** Tags added: sts


[Bug 1859649] Re: neutron 2:14.0.3-0ubuntu1~cloud0 and 2:14.0.0-0ubuntu1.1~cloud0 not compatible

2020-01-15 Thread Felipe Reyes
Taking a closer look, this could have happened because the neutron-gateway
(server side) had an older version that is not aware of 1.5 objects, while
the neutron-ovs-agent (client side) requested a 1.5 versioned object. The
backwards-compatibility layer is meant to be used the other way around:
the server is aware of newer versions while the client is not, so it can
remove fields from the response to downgrade the object and hand back a
compatible one.
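
That server-side downgrade pattern can be sketched like this. It is a
simplified, hypothetical illustration, not the actual neutron code (the
real mechanism is obj_make_compatible() in oslo.versionedobjects); the
class, method, and field names here are made up for the example.

```python
# Sketch of server-side versioned-object downgrading: a server that
# knows version 1.5 strips newer fields when a 1.4 client asks for the
# object, so the client always receives something it can parse.
class Port:
    VERSION = "1.5"

    def __init__(self):
        # 'resource_request' stands in for a field added in 1.5 that
        # older (<= 1.4) consumers do not know about.
        self.fields = {"id": "p1", "name": "eth0",
                       "resource_request": {"bw": 100}}

    def make_compatible(self, target_version):
        data = dict(self.fields)
        if target_version < "1.5":
            # Downgrade: drop the field a 1.4 client cannot handle.
            data.pop("resource_request", None)
        return data

server_obj = Port()
print(server_obj.make_compatible("1.4"))  # old client: field stripped
print(server_obj.make_compatible("1.5"))  # new client: full object
```

In the failure described here the roles were inverted: the old side was
the server, so it had no 1.5 definition to downgrade from and raised
InvalidTargetVersion instead.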


[Bug 1859649] Re: neutron 2:14.0.3-0ubuntu1~cloud0 and 2:14.0.0-0ubuntu1.1~cloud0 not compatible

2020-01-15 Thread Felipe Reyes
According to the comments in https://review.opendev.org/#/c/669360/ the
backport mentioned earlier shouldn't have broken things.


[Bug 1859649] Re: neutron 2:14.0.3-0ubuntu1~cloud0 and 2:14.0.0-0ubuntu1.1~cloud0 not compatible

2020-01-15 Thread Felipe Reyes
About "neutron.agent.rpc oslo_messaging.rpc.client.RemoteError: Remote
error: InvalidTargetVersion Invalid target version 1.5"

Commit b452c508b62 landed in 14.0.3, bumping the Port object's version to
1.5 and making it incompatible with older versions ->
https://github.com/openstack/neutron/commit/b452c508b62

This commit was backported to fix bug 1834484 ([QoS]
qos_plugin._extend_port_resource_request is killing port retrieval
performance)


[Bug 1850634] Re: queens regresion: _dn_to_id() not using utf8_encode/decode

2020-01-13 Thread Felipe Reyes
Verified on xenial-queens; no regressions detected. Testing journal:

$  time tox -e func-smoke
func-smoke installed: DEPRECATION: Python 2.7 will reach the end of its life on 
January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained 
after that date. A future version of pip will drop support for Python 2.7. More 
details about Python 2 support in pip, can be found at 
https://pip.pypa.io/en/latest/development/release-process/#python-2-support,amulet==1.21.0,aodhclient==1.5.0,appdirs==1.4.3,Babel==2.8.0,backports.os==0.1.1,blessings==1.6,bundletester==0.12.2,certifi==2019.11.28,cffi==1.13.2,chardet==3.0.4,charm-tools==2.7.2,charmhelpers==0.20.7,Cheetah3==3.2.4,cliff==2.18.0,cmd2==0.8.9,colander==1.7.0,configparser==4.0.2,contextlib2==0.6.0.post1,coverage==5.0.3,cryptography==2.8,debtcollector==1.22.0,decorator==4.4.1,dict2colander==0.2,distro==1.4.0,distro-info==0.0.0,dogpile.cache==0.9.0,entrypoints==0.3,enum34==1.1.6,extras==1.0.0,fasteners==0.15,fixtures==3.0.0,flake8==2.4.1,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.18.2,futures==3.3.0,futurist==1.10.0,gnocchiclient==3.1.1,httplib2==0.15.0,idna==2.8,importlib-metadata==1.4.0,ipaddress==1.0.23,iso8601==0.1.12,Jinja2==2.10.3,jmespath==0.9.4,jsonpatch==1.24,jsonpointer==2.0,jsonschema==2.5.1,juju-deployer==0.11.0,juju-wait==2.5.0,jujubundlelib==0.5.6,jujuclient==0.54.0,keyring==18.0.1,keystoneauth1==3.18.0,launchpadlib==1.10.9,lazr.authentication==0.1.3,lazr.restfulclient==0.14.2,lazr.uri==1.0.3,libcharmstore==0.0.9,linecache2==1.0.0,macaroonbakery==1.2.3,MarkupSafe==1.1.1,mccabe==0.3.1,mock==3.0.5,monotonic==1.5,more-itertools==5.0.0,msgpack==0.6.2,munch==2.5.0,netaddr==0.7.19,netifaces==0.10.9,nose==1.3.7,oauth==1.0.1,oauthlib==3.1.0,openstacksdk==0.39.0,os-client-config==2.0.0,os-service-types==1.7.0,osc-lib==1.15.0,oslo.concurrency==3.31.0,oslo.config==7.0.0,oslo.context==2.23.0,oslo.i18n==3.25.1,oslo.log==3.45.2,oslo.serialization==2.29.2,oslo.utils==3.42.1,osprofiler==2.9.0,otherstuf==1.1.0,parse==1.14.0,path.py==11.5.2,pathlib2==2.3.5,pathspec==0.3.4,pbr==5.4.4,pep8==1.7.1,pika==0.13.1,pkg-resources==0.0.0,prettytable==0.7.2,protobuf==3.11.2,pycparser==2.19,pyflakes==0.8.1,pyinotify==0.9.6,pymacaroons==0.13.0,PyNaCl==1.3.0,pyOpenSSL==19.1.0,pyparsing==2.4.6,pyperclip==1.7.0,pyRFC3339==1.1,python-barbicanclient==4.9.0,python-ceilometerclient==2.9.0,python-cinderclie
nt==4.3.0,python-dateutil==2.8.1,python-designateclient==3.0.0,python-glanceclient==2.17.0,python-heatclient==1.18.0,python-keystoneclient==3.22.0,python-manilaclient==1.29.0,python-mimeparse==1.6.0,python-neutronclient==6.14.0,python-novaclient==16.0.0,python-openstackclient==4.0.0,python-subunit==1.3.0,python-swiftclient==3.8.1,pytz==2019.3,pyudev==0.21.0,PyYAML==3.13,requests==2.22.0,requestsexceptions==1.4.0,rfc3986==1.3.2,ruamel.ordereddict==0.4.14,ruamel.yaml==0.15.100,scandir==1.10.0,SecretStorage==2.3.1,simplejson==3.17.0,six==1.13.0,stestr==2.6.0,stevedore==1.31.0,stuf==0.9.16,subprocess32==3.5.4,Tempita==0.5.2,testresources==2.0.1,testtools==2.3.0,theblues==0.5.2,traceback2==1.4.0,translationstring==1.3,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.25.7,vergit==1.0.2,virtualenv==16.7.9,voluptuous==0.11.7,wadllib==1.3.3,warlock==1.3.3,wcwidth==0.1.8,WebOb==1.8.5,websocket-client==0.40.0,wrapt==1.11.2,wsgi-intercept==1.9.1,zipp==0.6.0,zope.interface==4.7.1
func-smoke run-test-pre: PYTHONHASHSEED='0'
func-smoke runtests: commands[0] | bundletester -vl DEBUG -r json -o 
func-results.json gate-basic-xenial-queens --no-destroy
DEBUG:bundletester.utils:Updating JUJU_MODEL: "" -> 
"stsstack-stsstack:admin/lp1850634"
DEBUG:root:Bootstrap environment: stsstack-stsstack:admin/lp1850634
DEBUG:deployer.env:Connecting to stsstack-stsstack:admin/lp1850634...
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.14:17070/model/5758a1f7-8fb1-42a8-8df9-d19c6bec7804/api
DEBUG:deployer.env:Connected.
DEBUG:deployer.env: Terminating machines forcefully
INFO:deployer.env:  Waiting for machine termination
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.14:17070/model/5758a1f7-8fb1-42a8-8df9-d19c6bec7804/api
DEBUG:root:Waiting for applications to be removed...
DEBUG:runner:call 
['/home/freyes/Projects/charms/openstack/builds/keystone-ldap/.tox/func-smoke/bin/charm-proof']
 (cwd: /tmp/bundletester-V3u4BE/keystone-ldap)
DEBUG:runner:I: `display-name` not provided, add for custom naming in the UI
DEBUG:runner:I: config.yaml: option ssl_key has no default value
DEBUG:runner:I: config.yaml: option ssl_cert has no default value
DEBUG:runner:I: config.yaml: option ldap-user has no default value
DEBUG:runner:I: config.yaml: option ldap-server has no default value
DEBUG:runner:I: config.yaml: option ssl_ca has no default value
DEBUG:runner:I: config.yaml: option ldap-password has no default value
DEBUG:runner:I: config.yaml: option domain-name has no default value
DEBUG:runner:I: config.yaml: option ldap-suffix has no default value
DEBUG:runner:I: config.yaml: option 

[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2020-01-13 Thread Felipe Reyes
@Corey, verification done ;-)


[Bug 1850634] Re: queens regresion: _dn_to_id() not using utf8_encode/decode

2020-01-13 Thread Felipe Reyes
I went through the test case using the package available in -proposed
and everything worked fine; no regressions were detected when using
keystone either.

Here is the journal of my testing.

$  time tox -e func-smoke
func-smoke installed: DEPRECATION: Python 2.7 will reach the end of its life on 
January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained 
after that date. A future version of pip will drop support for Python 2.7. More 
details about Python 2 support in pip, can be found at 
https://pip.pypa.io/en/latest/development/release-process/#python-2-support,amulet==1.21.0,aodhclient==1.5.0,appdirs==1.4.3,Babel==2.8.0,backports.os==0.1.1,blessings==1.6,bundletester==0.12.2,certifi==2019.11.28,cffi==1.13.2,chardet==3.0.4,charm-tools==2.7.2,charmhelpers==0.20.7,Cheetah3==3.2.4,cliff==2.18.0,cmd2==0.8.9,colander==1.7.0,configparser==4.0.2,contextlib2==0.6.0.post1,coverage==5.0.3,cryptography==2.8,debtcollector==1.22.0,decorator==4.4.1,dict2colander==0.2,distro==1.4.0,distro-info==0.0.0,dogpile.cache==0.9.0,entrypoints==0.3,enum34==1.1.6,extras==1.0.0,fasteners==0.15,fixtures==3.0.0,flake8==2.4.1,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.18.2,futures==3.3.0,futurist==1.10.0,gnocchiclient==3.1.1,httplib2==0.15.0,idna==2.8,importlib-metadata==1.4.0,ipaddress==1.0.23,iso8601==0.1.12,Jinja2==2.10.3,jmespath==0.9.4,jsonpatch==1.24,jsonpointer==2.0,jsonschema==2.5.1,juju-deployer==0.11.0,juju-wait==2.5.0,jujubundlelib==0.5.6,jujuclient==0.54.0,keyring==18.0.1,keystoneauth1==3.18.0,launchpadlib==1.10.9,lazr.authentication==0.1.3,lazr.restfulclient==0.14.2,lazr.uri==1.0.3,libcharmstore==0.0.9,linecache2==1.0.0,macaroonbakery==1.2.3,MarkupSafe==1.1.1,mccabe==0.3.1,mock==3.0.5,monotonic==1.5,more-itertools==5.0.0,msgpack==0.6.2,munch==2.5.0,netaddr==0.7.19,netifaces==0.10.9,nose==1.3.7,oauth==1.0.1,oauthlib==3.1.0,openstacksdk==0.39.0,os-client-config==2.0.0,os-service-types==1.7.0,osc-lib==1.15.0,oslo.concurrency==3.31.0,oslo.config==7.0.0,oslo.context==2.23.0,oslo.i18n==3.25.1,oslo.log==3.45.2,oslo.serialization==2.29.2,oslo.utils==3.42.1,osprofiler==2.9.0,otherstuf==1.1.0,parse==1.14.0,path.py==11.5.2,pathlib2==2.3.5,pathspec==0.3.4,pbr==5.4.4,pep8==1.7.1,pika==0.13.1,pkg-resources==0.0.0,prettytable==0.7.2,protobuf==3.11.2,pycparser==2.19,pyflakes==0.8.1,pyinotify==0.9.6,pymacaroons==0.13.0,PyNaCl==1.3.0,pyOpenSSL==19.1.0,pyparsing==2.4.6,pyperclip==1.7.0,pyRFC3339==1.1,python-barbicanclient==4.9.0,python-ceilometerclient==2.9.0,python-cinderclie
nt==4.3.0,python-dateutil==2.8.1,python-designateclient==3.0.0,python-glanceclient==2.17.0,python-heatclient==1.18.0,python-keystoneclient==3.22.0,python-manilaclient==1.29.0,python-mimeparse==1.6.0,python-neutronclient==6.14.0,python-novaclient==16.0.0,python-openstackclient==4.0.0,python-subunit==1.3.0,python-swiftclient==3.8.1,pytz==2019.3,pyudev==0.21.0,PyYAML==3.13,requests==2.22.0,requestsexceptions==1.4.0,rfc3986==1.3.2,ruamel.ordereddict==0.4.14,ruamel.yaml==0.15.100,scandir==1.10.0,SecretStorage==2.3.1,simplejson==3.17.0,six==1.13.0,stestr==2.6.0,stevedore==1.31.0,stuf==0.9.16,subprocess32==3.5.4,Tempita==0.5.2,testresources==2.0.1,testtools==2.3.0,theblues==0.5.2,traceback2==1.4.0,translationstring==1.3,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.25.7,vergit==1.0.2,virtualenv==16.7.9,voluptuous==0.11.7,wadllib==1.3.3,warlock==1.3.3,wcwidth==0.1.8,WebOb==1.8.5,websocket-client==0.40.0,wrapt==1.11.2,wsgi-intercept==1.9.1,zipp==0.6.0,zope.interface==4.7.1
func-smoke run-test-pre: PYTHONHASHSEED='0'
func-smoke runtests: commands[0] | bundletester -vl DEBUG -r json -o 
func-results.json gate-basic-bionic-queens --no-destroy
DEBUG:bundletester.utils:Updating JUJU_MODEL: "" -> 
"stsstack-stsstack:admin/lp1850634"
DEBUG:root:Bootstrap environment: stsstack-stsstack:admin/lp1850634
DEBUG:deployer.env:Connecting to stsstack-stsstack:admin/lp1850634...
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.14:17070/model/8a5aca16-9818-419d-8c01-0839c05d5897/api
DEBUG:deployer.env:Connected.
DEBUG:deployer.env: Destroying application keystone-ldap
DEBUG:deployer.env: Destroying application keystone
DEBUG:deployer.env: Destroying application ldap-server
DEBUG:deployer.env: Destroying application percona-cluster
DEBUG:deployer.env:  No unit errors found.
DEBUG:deployer.env: Terminating machines forcefully
DEBUG:deployer.env:  Terminating machine 0
DEBUG:deployer.env:  Terminating machine 1
DEBUG:deployer.env:  Terminating machine 2
INFO:deployer.env:  Waiting for machine termination
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.14:17070/model/8a5aca16-9818-419d-8c01-0839c05d5897/api
DEBUG:root:Waiting for applications to be removed...
DEBUG:root: Remaining applications: [u'percona-cluster']
DEBUG:runner:call 
['/home/freyes/Projects/charms/openstack/builds/keystone-ldap/.tox/func-smoke/bin/charm-proof']
 (cwd: /tmp/bundletester-ogQiBL/keystone-ldap)
DEBUG:runner:I: `display-name` not provided, add for custom naming in the UI

[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2019-11-27 Thread Felipe Reyes
Tested the package that fixes this bug following the instructions at
https://launchpadlibrarian.net/449185359/bug-1782922-testing.txt;
everything works OK, and no regressions were detected.

testing bed log:

$  tox -e func-smoke
func-smoke installed: DEPRECATION: Python 2.7 will reach the end of its life on 
January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained 
after that date. A future version of pip will drop support for Python 2.7. More 
details about Python 2 support in pip, can be found at 
https://pip.pypa.io/en/latest/development/release-process/#python-2-support,amulet==1.21.0,aodhclient==1.3.0,appdirs==1.4.3,Babel==2.7.0,backports.os==0.1.1,blessings==1.6,bundletester==0.12.2,certifi==2019.9.11,cffi==1.13.1,chardet==3.0.4,charm-tools==2.7.2,charmhelpers==0.20.4,Cheetah3==3.2.4,cliff==2.16.0,cmd2==0.8.9,colander==1.7.0,configparser==4.0.2,contextlib2==0.6.0.post1,coverage==4.5.4,cryptography==2.8,debtcollector==1.22.0,decorator==4.4.0,dict2colander==0.2,distro==1.4.0,distro-info==0.0.0,dogpile.cache==0.8.0,entrypoints==0.3,enum34==1.1.6,extras==1.0.0,fasteners==0.15,fixtures==3.0.0,flake8==2.4.1,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.18.1,futures==3.3.0,futurist==1.9.0,gnocchiclient==3.1.1,httplib2==0.14.0,idna==2.8,importlib-metadata==0.23,ipaddress==1.0.23,iso8601==0.1.12,Jinja2==2.10.3,jmespath==0.9.4,jsonpatch==1.24,jsonpointer==2.0,jsonschema==2.5.1,juju-deployer==0.11.0,juju-wait==2.5.0,jujubundlelib==0.5.6,jujuclient==0.54.0,keyring==18.0.1,keystoneauth1==3.18.0,launchpadlib==1.10.7,lazr.authentication==0.1.3,lazr.restfulclient==0.14.2,lazr.uri==1.0.3,libcharmstore==0.0.9,linecache2==1.0.0,macaroonbakery==1.2.3,MarkupSafe==1.1.1,mccabe==0.3.1,mock==3.0.5,monotonic==1.5,more-itertools==5.0.0,msgpack==0.6.2,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.9,nose==1.3.7,oauth==1.0.1,oauthlib==3.1.0,openstacksdk==0.36.0,os-client-config==1.33.0,os-service-types==1.7.0,osc-lib==1.14.1,oslo.concurrency==3.30.0,oslo.config==6.11.1,oslo.context==2.23.0,oslo.i18n==3.24.0,oslo.log==3.44.1,oslo.serialization==2.29.2,oslo.utils==3.41.2,osprofiler==2.8.2,otherstuf==1.1.0,parse==1.12.1,path.py==11.5.2,pathlib2==2.3.5,pathspec==0.3.4,pbr==5.4.3,pep8==1.7.1,pika==0.13.1,pkg-resources==0.0.0,prettytable==0.7.2,protobuf==3.10.0,pycparser==2.19,pyflakes==0.8.1,pyinotify==0.9.6,pymacaroons==0.13.0,PyNaCl==1.3.0,pyOpenSSL==19.0.0,pyparsing==2.4.2,pyperclip==1.7.0,pyRFC3339==1.1,python-barbicanclient==4.9.0,python-ceilometerclient==2.9.0,python-cinderclien
t==4.3.0,python-dateutil==2.8.0,python-designateclient==3.0.0,python-glanceclient==2.17.0,python-heatclient==1.18.0,python-keystoneclient==3.22.0,python-manilaclient==1.29.0,python-mimeparse==1.6.0,python-neutronclient==6.14.0,python-novaclient==16.0.0,python-openstackclient==4.0.0,python-subunit==1.3.0,python-swiftclient==3.8.1,pytz==2019.3,pyudev==0.21.0,PyYAML==3.13,requests==2.22.0,requestsexceptions==1.4.0,rfc3986==1.3.2,ruamel.ordereddict==0.4.14,ruamel.yaml==0.15.100,scandir==1.10.0,SecretStorage==2.3.1,simplejson==3.16.0,six==1.12.0,stestr==2.5.1,stevedore==1.31.0,stuf==0.9.16,subprocess32==3.5.4,Tempita==0.5.2,testresources==2.0.1,testtools==2.3.0,theblues==0.5.2,traceback2==1.4.0,translationstring==1.3,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.25.6,vergit==1.0.2,virtualenv==16.7.7,voluptuous==0.11.7,wadllib==1.3.3,warlock==1.3.3,wcwidth==0.1.7,WebOb==1.8.5,websocket-client==0.40.0,wrapt==1.11.2,wsgi-intercept==1.9.0,zipp==0.6.0,zope.interface==4.6.0
func-smoke run-test-pre: PYTHONHASHSEED='0'
func-smoke runtests: commands[0] | bundletester -vl DEBUG -r json -o 
func-results.json gate-basic-xenial-queens --no-destroy
DEBUG:bundletester.utils:Updating JUJU_MODEL: "" -> 
"laptop:admin/lp1782922-xenial"
DEBUG:root:Bootstrap environment: laptop:admin/lp1782922-xenial
DEBUG:deployer.env:Connecting to laptop:admin/lp1782922-xenial...
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.7:17070/model/a92a4e4e-4efa-48c7-8682-62cfbc070af8/api
DEBUG:deployer.env:Connected.
DEBUG:deployer.env: Terminating machines forcefully
INFO:deployer.env:  Waiting for machine termination
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.7:17070/model/a92a4e4e-4efa-48c7-8682-62cfbc070af8/api
DEBUG:root:Waiting for applications to be removed...
DEBUG:runner:call 
['/home/freyes/Projects/charms/openstack/builds/keystone-ldap/.tox/func-smoke/bin/charm-proof']
 (cwd: /tmp/bundletester-j7cjEm/keystone-ldap)
DEBUG:runner:I: `display-name` not provided, add for custom naming in the UI
DEBUG:runner:I: config.yaml: option ssl_key has no default value
DEBUG:runner:I: config.yaml: option ssl_cert has no default value
DEBUG:runner:I: config.yaml: option ldap-user has no default value
DEBUG:runner:I: config.yaml: option ldap-server has no default value
DEBUG:runner:I: config.yaml: option ssl_ca has no default value
DEBUG:runner:I: config.yaml: option ldap-password has no default value
DEBUG:runner:I: config.yaml: option domain-name has no 

[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2019-11-26 Thread Felipe Reyes
I tested the fix for this bug following the instructions at
https://launchpadlibrarian.net/449185359/bug-1782922-testing.txt;
everything works OK, and no regressions were detected.

testing bed log:

$  tox -e func-smoke
func-smoke installed: DEPRECATION: Python 2.7 will reach the end of its life on 
January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained 
after that date. A future version of pip will drop support for Python 2.7. More 
details about Python 2 support in pip, can be found at 
https://pip.pypa.io/en/latest/development/release-process/#python-2-support,amulet==1.21.0,aodhclient==1.3.0,appdirs==1.4.3,Babel==2.7.0,backports.os==0.1.1,blessings==1.6,bundletester==0.12.2,certifi==2019.9.11,cffi==1.13.1,chardet==3.0.4,charm-tools==2.7.2,charmhelpers==0.20.4,Cheetah3==3.2.4,cliff==2.16.0,cmd2==0.8.9,colander==1.7.0,configparser==4.0.2,contextlib2==0.6.0.post1,coverage==4.5.4,cryptography==2.8,debtcollector==1.22.0,decorator==4.4.0,dict2colander==0.2,distro==1.4.0,distro-info==0.0.0,dogpile.cache==0.8.0,entrypoints==0.3,enum34==1.1.6,extras==1.0.0,fasteners==0.15,fixtures==3.0.0,flake8==2.4.1,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.18.1,futures==3.3.0,futurist==1.9.0,gnocchiclient==3.1.1,httplib2==0.14.0,idna==2.8,importlib-metadata==0.23,ipaddress==1.0.23,iso8601==0.1.12,Jinja2==2.10.3,jmespath==0.9.4,jsonpatch==1.24,jsonpointer==2.0,jsonschema==2.5.1,juju-deployer==0.11.0,juju-wait==2.5.0,jujubundlelib==0.5.6,jujuclient==0.54.0,keyring==18.0.1,keystoneauth1==3.18.0,launchpadlib==1.10.7,lazr.authentication==0.1.3,lazr.restfulclient==0.14.2,lazr.uri==1.0.3,libcharmstore==0.0.9,linecache2==1.0.0,macaroonbakery==1.2.3,MarkupSafe==1.1.1,mccabe==0.3.1,mock==3.0.5,monotonic==1.5,more-itertools==5.0.0,msgpack==0.6.2,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.9,nose==1.3.7,oauth==1.0.1,oauthlib==3.1.0,openstacksdk==0.36.0,os-client-config==1.33.0,os-service-types==1.7.0,osc-lib==1.14.1,oslo.concurrency==3.30.0,oslo.config==6.11.1,oslo.context==2.23.0,oslo.i18n==3.24.0,oslo.log==3.44.1,oslo.serialization==2.29.2,oslo.utils==3.41.2,osprofiler==2.8.2,otherstuf==1.1.0,parse==1.12.1,path.py==11.5.2,pathlib2==2.3.5,pathspec==0.3.4,pbr==5.4.3,pep8==1.7.1,pika==0.13.1,pkg-resources==0.0.0,prettytable==0.7.2,protobuf==3.10.0,pycparser==2.19,pyflakes==0.8.1,pyinotify==0.9.6,pymacaroons==0.13.0,PyNaCl==1.3.0,pyOpenSSL==19.0.0,pyparsing==2.4.2,pyperclip==1.7.0,pyRFC3339==1.1,python-barbicanclient==4.9.0,python-ceilometerclient==2.9.0,python-cinderclien
t==4.3.0,python-dateutil==2.8.0,python-designateclient==3.0.0,python-glanceclient==2.17.0,python-heatclient==1.18.0,python-keystoneclient==3.22.0,python-manilaclient==1.29.0,python-mimeparse==1.6.0,python-neutronclient==6.14.0,python-novaclient==16.0.0,python-openstackclient==4.0.0,python-subunit==1.3.0,python-swiftclient==3.8.1,pytz==2019.3,pyudev==0.21.0,PyYAML==3.13,requests==2.22.0,requestsexceptions==1.4.0,rfc3986==1.3.2,ruamel.ordereddict==0.4.14,ruamel.yaml==0.15.100,scandir==1.10.0,SecretStorage==2.3.1,simplejson==3.16.0,six==1.12.0,stestr==2.5.1,stevedore==1.31.0,stuf==0.9.16,subprocess32==3.5.4,Tempita==0.5.2,testresources==2.0.1,testtools==2.3.0,theblues==0.5.2,traceback2==1.4.0,translationstring==1.3,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.25.6,vergit==1.0.2,virtualenv==16.7.7,voluptuous==0.11.7,wadllib==1.3.3,warlock==1.3.3,wcwidth==0.1.7,WebOb==1.8.5,websocket-client==0.40.0,wrapt==1.11.2,wsgi-intercept==1.9.0,zipp==0.6.0,zope.interface==4.6.0
func-smoke run-test-pre: PYTHONHASHSEED='0'
func-smoke runtests: commands[0] | bundletester -vl DEBUG -r json -o 
func-results.json gate-basic-bionic-queens --no-destroy
DEBUG:bundletester.utils:Updating JUJU_MODEL: "" -> 
"laptop:admin/lp1782922-bionic"
DEBUG:root:Bootstrap environment: laptop:admin/lp1782922-bionic
DEBUG:deployer.env:Connecting to laptop:admin/lp1782922-bionic...
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.7:17070/model/9869a39e-c6c2-4ecd-8e7d-e5736d15ca51/api
DEBUG:deployer.env:Connected.
DEBUG:deployer.env: Terminating machines forcefully
INFO:deployer.env:  Waiting for machine termination
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.7:17070/model/9869a39e-c6c2-4ecd-8e7d-e5736d15ca51/api
DEBUG:root:Waiting for applications to be removed...
DEBUG:runner:call 
['/home/freyes/Projects/charms/openstack/builds/keystone-ldap/.tox/func-smoke/bin/charm-proof']
 (cwd: /tmp/bundletester-AmwJen/keystone-ldap)
DEBUG:runner:I: `display-name` not provided, add for custom naming in the UI
DEBUG:runner:I: config.yaml: option ssl_key has no default value
DEBUG:runner:I: config.yaml: option ssl_cert has no default value
DEBUG:runner:I: config.yaml: option ldap-user has no default value
DEBUG:runner:I: config.yaml: option ldap-server has no default value
DEBUG:runner:I: config.yaml: option ssl_ca has no default value
DEBUG:runner:I: config.yaml: option ldap-password has no default value
DEBUG:runner:I: config.yaml: option domain-name has no default 

[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2019-10-23 Thread Felipe Reyes
Hello Corey,

I was trying to verify the SRU that is in disco-proposed, without success.
IIUC, the commands "openstack user list" and "openstack group list" should fail
when the installed package is 2:15.0.0-0ubuntu1.1. Here is the output of my
terminal; could you help me understand if I'm doing something wrong?


$  juju add-model lp1782922 && sleep 5 && tox -e func-smoke
Added 'lp1782922' model on stsstack/stsstack with credential 'laptop' for user 
'laptop'
func-smoke installed: DEPRECATION: Python 2.7 will reach the end of its life on 
January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained 
after that date. A future version of pip will drop support for Python 2.7. More 
details about Python 2 support in pip, can be found at 
https://pip.pypa.io/en/latest/development/release-process/#python-2-support,amulet==1.21.0,aodhclient==1.3.0,appdirs==1.4.3,Babel==2.7.0,backports.os==0.1.1,blessings==1.6,bundletester==0.12.2,certifi==2019.9.11,cffi==1.13.1,chardet==3.0.4,charm-tools==2.7.2,charmhelpers==0.20.4,Cheetah3==3.2.4,cliff==2.16.0,cmd2==0.8.9,colander==1.7.0,configparser==4.0.2,contextlib2==0.6.0.post1,coverage==4.5.4,cryptography==2.8,debtcollector==1.22.0,decorator==4.4.0,dict2colander==0.2,distro==1.4.0,distro-info==0.0.0,dogpile.cache==0.8.0,entrypoints==0.3,enum34==1.1.6,extras==1.0.0,fasteners==0.15,fixtures==3.0.0,flake8==2.4.1,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.18.1,futures==3.3.0,futurist==1.9.0,gnocchiclient==3.1.1,httplib2==0.14.0,idna==2.8,importlib-metadata==0.23,ipaddress==1.0.23,iso8601==0.1.12,Jinja2==2.10.3,jmespath==0.9.4,jsonpatch==1.24,jsonpointer==2.0,jsonschema==2.5.1,juju-deployer==0.11.0,juju-wait==2.5.0,jujubundlelib==0.5.6,jujuclient==0.54.0,keyring==18.0.1,keystoneauth1==3.18.0,launchpadlib==1.10.7,lazr.authentication==0.1.3,lazr.restfulclient==0.14.2,lazr.uri==1.0.3,libcharmstore==0.0.9,linecache2==1.0.0,macaroonbakery==1.2.3,MarkupSafe==1.1.1,mccabe==0.3.1,mock==3.0.5,monotonic==1.5,more-itertools==5.0.0,msgpack==0.6.2,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.9,nose==1.3.7,oauth==1.0.1,oauthlib==3.1.0,openstacksdk==0.36.0,os-client-config==1.33.0,os-service-types==1.7.0,osc-lib==1.14.1,oslo.concurrency==3.30.0,oslo.config==6.11.1,oslo.context==2.23.0,oslo.i18n==3.24.0,oslo.log==3.44.1,oslo.serialization==2.29.2,oslo.utils==3.41.2,osprofiler==2.8.2,otherstuf==1.1.0,parse==1.12.1,path.py==11.5.2,pathlib2==2.3.5,pathspec==0.3.4,pbr==5.4.3,pep8==1.7.1,pika==0.13.1,pkg-resources==0.0.0,prettytable==0.7.2,protobuf==3.10.0,pycparser==2.19,pyflakes==0.8.1,pyinotify==0.9.6,pymacaroons==0.13.0,PyNaCl==1.3.0,pyOpenSSL==19.0.0,pyparsing==2.4.2,pyperclip==1.7.0,pyRFC3339==1.1,python-barbicanclient==4.9.0,python-ceilometerclient==2.9.0,python-cinderclien
t==4.3.0,python-dateutil==2.8.0,python-designateclient==3.0.0,python-glanceclient==2.17.0,python-heatclient==1.18.0,python-keystoneclient==3.22.0,python-manilaclient==1.29.0,python-mimeparse==1.6.0,python-neutronclient==6.14.0,python-novaclient==16.0.0,python-openstackclient==4.0.0,python-subunit==1.3.0,python-swiftclient==3.8.1,pytz==2019.3,pyudev==0.21.0,PyYAML==3.13,requests==2.22.0,requestsexceptions==1.4.0,rfc3986==1.3.2,ruamel.ordereddict==0.4.14,ruamel.yaml==0.15.100,scandir==1.10.0,SecretStorage==2.3.1,simplejson==3.16.0,six==1.12.0,stestr==2.5.1,stevedore==1.31.0,stuf==0.9.16,subprocess32==3.5.4,Tempita==0.5.2,testresources==2.0.1,testtools==2.3.0,theblues==0.5.2,traceback2==1.4.0,translationstring==1.3,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.25.6,vergit==1.0.2,virtualenv==16.7.7,voluptuous==0.11.7,wadllib==1.3.3,warlock==1.3.3,wcwidth==0.1.7,WebOb==1.8.5,websocket-client==0.40.0,wrapt==1.11.2,wsgi-intercept==1.9.0,zipp==0.6.0,zope.interface==4.6.0
func-smoke run-test-pre: PYTHONHASHSEED='0'
func-smoke runtests: commands[0] | bundletester -vl DEBUG -r json -o 
func-results.json dev-basic-disco-stein --no-destroy
DEBUG:bundletester.utils:Updating JUJU_MODEL: "" -> 
"stsstack-stsstack:laptop/lp1782922"
DEBUG:root:Bootstrap environment: stsstack-stsstack:laptop/lp1782922
DEBUG:deployer.env:Connecting to stsstack-stsstack:laptop/lp1782922...
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.5:17070/model/e7ab1a55-5cb4-4787-827f-72c414ce7443/api
DEBUG:deployer.env:Connected.
DEBUG:deployer.env: Terminating machines forcefully
INFO:deployer.env:  Waiting for machine termination
DEBUG:jujuclient.connector:Connecting to 
wss://10.5.0.5:17070/model/e7ab1a55-5cb4-4787-827f-72c414ce7443/api
DEBUG:root:Waiting for applications to be removed...
DEBUG:runner:call 
['/home/freyes/Projects/charms/openstack/builds/keystone-ldap/.tox/func-smoke/bin/charm-proof']
 (cwd: /tmp/bundletester-0AQeci/keystone-ldap)
DEBUG:runner:I: `display-name` not provided, add for custom naming in the UI
DEBUG:runner:I: config.yaml: option ssl_key has no default value
DEBUG:runner:I: config.yaml: option ssl_cert has no default value
DEBUG:runner:I: config.yaml: option ldap-user has no default value

[Bug 1829987] Re: phpldapadmin incompatible with php-7.2 in bionic

2019-10-23 Thread Felipe Reyes
I prepared a PPA for testing purposes. If someone is interested in giving
it a try, please report back on how it works for you:
https://launchpad.net/~freyes/+archive/ubuntu/lp1829987

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1829987

Title:
  phpldapadmin incompatible with php-7.2 in bionic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/phpldapadmin/+bug/1829987/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1845234] Re: do-release-upgrade reinstalls netplan.io, breaking systemd-networkd configuration

2019-09-25 Thread Felipe Reyes
do-release-upgrade has two modes in this respect: server and desktop. Desktop
mode is used whenever any of the packages defined in the MetaPkgs[0] section
is installed[2].

So when server mode is detected, the presence of the packages defined in
BaseMetaPkgs is NOT enforced[1].

[0] 
https://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu-release-upgrader/trunk/view/head:/data/DistUpgrade.cfg.bionic#L13
[1] 
https://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu-release-upgrader/trunk/view/head:/DistUpgrade/DistUpgradeCache.py#L624
[2] 
https://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu-release-upgrader/trunk/view/head:/DistUpgrade/DistUpgradeCache.py#L359
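The detection logic described above can be sketched roughly as follows (a
minimal, illustrative approximation only: the package names here are examples,
and the real lists and checks live in DistUpgrade.cfg and DistUpgradeCache.py
linked at [0]-[2]):

```python
# Illustrative sketch of desktop/server mode detection in
# ubuntu-release-upgrader. Package names are examples, not the
# authoritative MetaPkgs/BaseMetaPkgs lists from DistUpgrade.cfg.
META_PKGS = {"ubuntu-desktop", "kubuntu-desktop", "xubuntu-desktop"}
BASE_META_PKGS = {"ubuntu-minimal", "ubuntu-standard"}

def detect_mode(installed_pkgs):
    """Desktop mode if any MetaPkgs entry is installed; server otherwise."""
    if META_PKGS & set(installed_pkgs):
        return "desktop"
    return "server"

def base_pkgs_enforced(installed_pkgs):
    """In server mode the presence of BaseMetaPkgs is NOT enforced."""
    return detect_mode(installed_pkgs) == "desktop"
```

This is why a server upgraded without any MetaPkgs entry installed does not
get the BaseMetaPkgs set reinstalled, while a desktop system does.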

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1845234

Title:
  do-release-upgrade reinstalls netplan.io, breaking systemd-networkd
  configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1845234/+subscriptions


[Bug 1834978] Re: [2.5] too many rndc reload during commissioning

2019-08-01 Thread Felipe Reyes
*** This bug is a duplicate of bug 1710278 ***
https://bugs.launchpad.net/bugs/1710278

** This bug has been marked a duplicate of bug 1710278
   [2.3a1] named stuck on reload, DNS broken

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1834978

Title:
  [2.5] too many rndc reload during commissioning

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1834978/+subscriptions


[Bug 1834978] [NEW] [2.5] too many rndc reload during commissioning

2019-07-01 Thread Felipe Reyes
Public bug reported:

I've been analyzing an environment that is issuing "rndc reload" too many
times, which eventually leads to the same symptoms seen in bug 1710278. These
are the numbers from when the operator attempted to commission a bit more
than 50 machines at the same time.

In the regiond.log* log files, when filtering for a single day (the day the
test mentioned above was executed), 5032 events were found where a new IP was
assigned to an existing hostname, and the only 2 hostnames that appear in
this list are "ubuntu.maas" and "maas-enlist.maas".

As the DHCP server assigns a 600-second lease time, it's not rare for a
single host to renew its IP (and consequently generate a new event) during
the commissioning stage, depending on which tests the operator decided to
run.

This behavior of MAAS induces a high load on "named", which ultimately gets
stuck and stops processing requests; only a restart brings the service back.
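To put the 600-second lease in perspective: a DHCP client typically attempts
renewal at T1, half the lease time, so each host being commissioned can
generate a renewal (and thus a potential DNS update event) roughly every 5
minutes. A back-of-the-envelope sketch, with illustrative numbers only
(actual renewal timing varies by client):

```python
# Rough estimate of how often commissioning hosts can trigger DNS
# update events under a short DHCP lease. Numbers are illustrative.
LEASE_SECS = 600          # lease time handed out by the MAAS DHCP server
T1 = LEASE_SECS // 2      # typical renewal point: half the lease (300 s)

def renewals_per_host(window_secs, t1=T1):
    """Renewals one host can generate during a commissioning window."""
    return window_secs // t1

# ~50 machines commissioning for an hour:
events_per_hour = 50 * renewals_per_host(3600)  # 50 hosts * 12 renewals
```

Even this conservative estimate yields hundreds of events per hour across the
fleet, consistent with the thousands of hostname/IP events seen per day in
regiond.log*.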

[Environment]

MAAS 2.5.3-7533-g65952b418-0ubuntu1~18.04.1
bind9 1:9.11.3+dfsg-1ubuntu1.8
Ubuntu 18.04

** Affects: maas
 Importance: Undecided
 Status: New

** Affects: bind9 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts

** Tags added: sts

** Summary changed:

- too many rndc reload during commissioning
+ [2.5too many rndc reload during commissioning

** Summary changed:

- [2.5too many rndc reload during commissioning
+ [2.5] too many rndc reload during commissioning

** Also affects: bind9 (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

  I've been analyzing an environment that it's issuing too many times
  "rndc reload" which eventually leads to the same symptoms seen in bug
  1710278 , these are the numbers when the operator attempted to
  commission a bit more than 50 machines at the same time.
  
  In regiond.log* log files when filtering for a single day (where the
  test mentioned about above was executed) it was found 5032 events where
  a new IP was assigned to an existing hostname, and the only 2 hostnames
  that appear in this list are "ubuntu.maas" and "maas-enlist.maas".
  
  As the DHCP assigns a 600 secs lease time, it's not rare that a single
  host will renew its IP (and by consequence generate a new event) during
  the commissioning stage depending on what tests the operator decided to
  run.
  
  This behavior of MAAS is inducing a high load on "named" which
  ultimately gets stuck and it stops processing requests and only a
  restart will take the service back.
+ 
+ [Environment]
+ 
+ MAAS 2.5.3-7533-g65952b418-0ubuntu1~18.04.1
+ bind9 1:9.11.3+dfsg-1ubuntu1.8
+ Ubuntu 18.04

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1834978

Title:
  [2.5] too many rndc reload during commissioning

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1834978/+subscriptions


[Bug 1823718] Re: Installation fails with Python3.7 SyntaxError on Ubuntu Disco

2019-06-27 Thread Felipe Reyes
** Tags added: seg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1823718

Title:
  Installation fails with Python3.7 SyntaxError on Ubuntu Disco

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1823718/+subscriptions


[Bug 1816721] Re: [SRU] Python3 librados incompatibility

2019-06-07 Thread Felipe Reyes
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816721

Title:
  [SRU] Python3 librados incompatibility

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1816721/+subscriptions


[Bug 1819453] Re: keystone-ldap TypeError: cannot concatenate 'str' and 'NoneType' object

2019-03-19 Thread Felipe Reyes
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819453

Title:
  keystone-ldap TypeError: cannot concatenate 'str' and 'NoneType'
  object

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1819453/+subscriptions


Re: [Bug 1818239] Re: scheduler: build failure high negative weighting

2019-03-05 Thread Felipe Reyes
On Tue, 2019-03-05 at 18:30 +, Corey Bryant wrote:
> @Jeremy, I think it's more of limited denial of service (if we can
> call
> it that) where a certain amount of computes could get negative weight
> and not considered for scheduling. I don't think it's a complete
> denial
> of service.

I believe the term you are looking for is "degradation of service" - 
https://en.wikipedia.org/wiki/Denial-of-service_attack#Degradation-of-service_attacks

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1818239

Title:
  scheduler: build failure high negative weighting

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1818239/+subscriptions


[Bug 1578622] Re: [SRU] glance do not require hypervisor_mapping

2019-02-14 Thread Felipe Reyes
** Description changed:

  [Impact]
  currently the glance mirror requires hypervisor_mapping config in the api.
  Better to not required that as a library consumer would not necessarily 
provide it.
  
  [Test Case]
  
- * deploy a openstack environment with keystone v3 enabled
-   - get a copy of the bundle available at 
http://paste.ubuntu.com/p/6VktZ4N34k/ , this bundle deploys a minimal version 
of xenial-mitaka.
+ * deploy a openstack environment with keystone v2 enabled
+   - get a copy of the bundle available at 
http://paste.ubuntu.com/p/qxwSDtDZ52/ , this bundle deploys a minimal version 
of xenial-mitaka.
  
  Expected Result:
  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )
  
  Actual result:
  
  - "glance image-list" is empty
  - glance-simplestreams-sync/0 is in blocked state and message "Image sync 
failed, retrying soon."
+ 
+ In /var/log/glance-simplestreams-sync.log:
+ ERROR * 02-14 15:46:07 [PID:1898] * root * Exception during syncing:
+ Traceback (most recent call last):
+   File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 462, in main
+ do_sync(charm_conf, status_exchange)
+   File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 273, in do_sync
+ tmirror.sync(smirror, path=initial_path)
+   File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 91, in sync
+ return self.sync_index(reader, path, data, content)
+   File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 254, in sync_index
+ self.sync(reader, path=epath)
+   File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 89, in sync
+ return self.sync_products(reader, path, data, content)
+   File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 341, in sync_products
+ self.insert_item(item, src, target, pgree, ipath_cs)
+   File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 242, in insert_item
+ if self.config['hypervisor_mapping'] and 'ftype' in flat:
+ KeyError: 'hypervisor_mapping'
+ 
  
  [Regression Potential]
  
  * This patch makes an argument optional only, there is no expected
  regressions in users of this library.
  
  [Other Info]
  
  The bundle used in the test case uses a modified version of the glance-
  simplestreams-sync charm that removes the hypervisor_mapping parameter
  when using simplestreams library.
  https://pastebin.ubuntu.com/p/Ny7jFnGfnY/
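The KeyError at the end of the traceback above comes from indexing a config
key that a library consumer may not have provided. A minimal sketch of the
kind of fix described under [Regression Potential] (a hypothetical function
body that mirrors the insert_item check in the traceback, not the actual
simplestreams patch) falls back to a default when the key is absent:

```python
def insert_item(config, flat):
    """Sketch: treat a missing hypervisor_mapping key as False
    instead of raising KeyError, making the parameter optional."""
    # dict.get returns the default when the key is absent, so consumers
    # that never set hypervisor_mapping no longer crash here.
    if config.get('hypervisor_mapping', False) and 'ftype' in flat:
        return 'mapped'
    return 'unmapped'
```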

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1578622

Title:
  [SRU] glance do not require hypervisor_mapping

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1578622/+subscriptions


[Bug 1578622] Re: glance do not require hypervisor_mapping

2019-02-13 Thread Felipe Reyes
** Description changed:

+ [Impact]
  currently the glance mirror requires hypervisor_mapping config in the api.
  Better to not required that as a library consumer would not necessarily 
provide it.
+ 
+ [Test Case]
+ 
+ * deploy a openstack environment with keystone v3 enabled
+   - get a copy of the bundle available at 
http://paste.ubuntu.com/p/6VktZ4N34k/ , this bundle deploys a minimal version 
of xenial-mitaka.
+ 
+ Expected Result:
+ - "glance image-list" lists trusty and xenial images
+ - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )
+ 
+ Actual result:
+ 
+ - "glance image-list" is empty
+ - glance-simplestreams-sync/0 is in blocked state and message "Image sync 
failed, retrying soon."

** Description changed:

  [Impact]
  currently the glance mirror requires hypervisor_mapping config in the api.
  Better to not required that as a library consumer would not necessarily 
provide it.
  
  [Test Case]
  
  * deploy a openstack environment with keystone v3 enabled
-   - get a copy of the bundle available at 
http://paste.ubuntu.com/p/6VktZ4N34k/ , this bundle deploys a minimal version 
of xenial-mitaka.
+   - get a copy of the bundle available at 
http://paste.ubuntu.com/p/6VktZ4N34k/ , this bundle deploys a minimal version 
of xenial-mitaka.
  
  Expected Result:
  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )
  
  Actual result:
  
  - "glance image-list" is empty
  - glance-simplestreams-sync/0 is in blocked state and message "Image sync 
failed, retrying soon."
+ 
+ [Regression Potential]
+ 
+ * This patch makes an argument optional only, there is no expected
+ regressions in users of this library.

** Summary changed:

- glance do not require hypervisor_mapping
+ [SRU] glance do not require hypervisor_mapping

** Description changed:

  [Impact]
  currently the glance mirror requires hypervisor_mapping config in the api.
  Better to not required that as a library consumer would not necessarily 
provide it.
  
  [Test Case]
  
  * deploy a openstack environment with keystone v3 enabled
    - get a copy of the bundle available at 
http://paste.ubuntu.com/p/6VktZ4N34k/ , this bundle deploys a minimal version 
of xenial-mitaka.
  
  Expected Result:
  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )
  
  Actual result:
  
  - "glance image-list" is empty
  - glance-simplestreams-sync/0 is in blocked state and message "Image sync 
failed, retrying soon."
  
  [Regression Potential]
  
  * This patch makes an argument optional only, there is no expected
  regressions in users of this library.
+ 
+ [Other Info]
+ 
+ The bundle used in the test case uses a modified version of the glance-
+ simplestreams-sync charm that removes the hypervisor_mapping parameter
+ when using simplestreams library.
+ https://pastebin.ubuntu.com/p/Ny7jFnGfnY/

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1578622

Title:
  [SRU] glance do not require hypervisor_mapping

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1578622/+subscriptions


[Bug 1815101] Re: netplan removes keepalived configuration

2019-02-07 Thread Felipe Reyes
** Also affects: netplan
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1815101

Title:
  netplan removes keepalived configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/netplan/+bug/1815101/+subscriptions


[Bug 1686086] Re: glance mirror and nova-lxd need support for squashfs images

2018-10-16 Thread Felipe Reyes
Does anyone have a reproducer for this bug? I've been having a hard time
coming up with one on my own.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1686086

Title:
  glance mirror and nova-lxd need support for squashfs images

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-lxd/+bug/1686086/+subscriptions


[Bug 1783203] Re: Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

2018-10-05 Thread Felipe Reyes
For people being hit by this bug, it would be helpful if you could
generate a core file while rabbit is still stuck; this may give us some
insight into where the process is looping.

$ sudo gcore -o lp1783203.core $PID

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1783203

Title:
  Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1783203/+subscriptions


[Bug 1783203] Re: Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

2018-07-26 Thread Felipe Reyes
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1783203

Title:
  Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1783203/+subscriptions

  1   2   3   4   5   >