[Bug 2063221] Re: Drop libglib2.0-0 transitional package

2024-05-08 Thread Liam Proven
You have a typo in the description:

> the massive Y2028 time_t transition

I think you mean Y20*3*8.

Not a big deal, but just for clarity you should probably fix that.
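(For reference, the Y2038 problem referred to above is the overflow of a signed 32-bit time_t; a quick Python illustration, not part of the bug report itself:)

```python
# A signed 32-bit time_t tops out at 2**31 - 1 seconds after the Unix
# epoch; one second later a 32-bit counter wraps negative.
from datetime import datetime, timezone

LAST_32BIT_SECOND = 2**31 - 1  # 2147483647
rollover = datetime.fromtimestamp(LAST_32BIT_SECOND, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```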

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2063221

Title:
  Drop libglib2.0-0 transitional package

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/glib2.0/+bug/2063221/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2043863] Re: Keyboard Settings Won't Open (Ubuntu Unity 23.10)

2024-04-14 Thread Liam Proven
Further info may be relevant:

https://www.reddit.com/r/UbuntuUnity/comments/1axuqy5/2310_cant_open_keyboard_settings/


[Bug 2043863] Re: Keyboard Settings Won't Open (Ubuntu Unity 23.10)

2024-04-14 Thread Liam Proven
Also affects the right Shift key, but the left one still works. I had to
use cut and paste even to log in to register that this affects me too.

No Ctrl keys, only 1 shift, can't open keyboard settings.

Goes away on reboot but soon recurs.


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-06-02 Thread Liam Young
+1 makes sense. Thanks for doing this validation @chris.macnaughton


[Bug 1973678] Re: CIFS crash mounting DFS share in 22.04

2022-05-30 Thread Liam Baker
I have the same problem in Windows Subsystem for Linux on Ubuntu 20.04.
I have a CIFS share containing 24 DFS folders.
Opening any subfolder in the share causes an instant kernel panic.
I do not have this problem on embedded hardware reading from the same share
running the Xilinx 4.6.0 kernel and a 16.04 LTS derivative (Petalinux 2016).


'Virtual Machine' has encountered a fatal error.  The guest operating system 
reported that it failed with the following error codes: ErrorCode0: 0x0, 
ErrorCode1: 0x0, ErrorCode2: 0x0, ErrorCode3: 0x0, ErrorCode4: 0x0.  If the 
problem persists, contact Product Support for the guest operating system.  
(Virtual machine ID 2AE8F1B1-E89F-426B-867C-E089D530D127)

Guest message:
[ 5664.306032] CR2: ffd6 CR3: 0001844ee004 CR4: 003706a0
[ 5664.306033] DR0:  DR1:  DR2: 
[ 5664.306035] DR3:  DR6: fffe0ff0 DR7: 0400
[ 5664.306036] Call Trace:
[ 5664.306549]  __traverse_mounts+0x8f/0x220
[ 5664.306885]  step_into+0x430/0x6c0
[ 5664.307085]  ? cifs_d_revalidate+0x49/0xd0
[ 5664.307088]  walk_component+0x72/0x1b0
[ 5664.307107]  path_lookupat.isra.0+0x6e/0x150
[ 5664.307109]  ? cifs_revalidate_dentry_attr+0x3f/0x230
[ 5664.307111]  filename_lookup+0xae/0x140
[ 5664.307157]  ? __check_object_size+0x136/0x150
[ 5664.307337]  ? strncpy_from_user+0x4e/0x140
[ 5664.307340]  __x64_sys_chdir+0x3e/0xe0
[ 5664.307621]  do_syscall_64+0x33/0x80
[ 5664.307846]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 5664.307906] RIP: 0033:0x7f03cf10ba1b
[ 5664.307908] Code: c3 48 8b 15 77 d4 0d 00 f7 d8 64 89 02 b8 ff ff ff ff eb 
c6 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 50 00 00 00 0f 05 <48> 3d 01 
f0 ff ff 73 01 c3 48 8b 0d 45 d4 0d 00 f7 d8 64 89 01 48
[ 5664.307911] RSP: 002b:7ffdfac2d908 EFLAGS: 0246 ORIG_RAX: 
0050
[ 5664.307913] RAX: ffda RBX:  RCX: 7f03cf10ba1b
[ 5664.307914] RDX: 561b7d79f360 RSI: 561b7d88e310 RDI: 561b7d892df0
[ 5664.307915] RBP: 561b7d892df0 R08: 0003 R09: 0001
[ 5664.307917] R10:  R11: 0246 R12: 561b7d98e1d0
[ 5664.307918] R13:  R14: 000a R15: 
[ 5664.307982] Modules linked in:
[ 5664.308086] CR2: 
[ 5664.308088] ---[ end trace d8722f6ff345c4cf ]---
[ 5664.308289] RIP: 0010:0x0
[ 5664.308291] Code: Unable to access opcode bytes at RIP 0xffd6.
[ 5664.308292] RSP: 0018:c90002b8bca0 EFLAGS: 00010293
[ 5664.308312] RAX:  RBX: c90002b8bd10 RCX: 0001
[ 5664.308314] RDX:  RSI: 0002 RDI: c90002b8bd10
[ 5664.308316] RBP: c90002b8be40 R08: 0002 R09: 0064
[ 5664.308317] R10: 8883e1b9ba80 R11: 432f6b630061 R12: 0002
[ 5664.308318] R13:  R14: 002a0044 R15: 
[ 5664.308320] FS:  7f03ceffa740() GS:8883f7d0() 
knlGS:
[ 5664.308322] CS:  0010 DS:  ES:  CR0: 80050033
[ 5664.308323] CR2: ffd6 CR3: 0001844ee004 CR4: 003706a0
[ 5664.308325] DR0:  DR1:  DR2: 
[ 5664.308326] DR3:  DR6: fffe0ff0 DR7: 0400
[ 5664.308327] Kernel panic - not syncing: Fatal exception


[Bug 1907250] Re: [focal] charm becomes blocked with workload-status "Failed to connect to MySQL"

2022-05-12 Thread Liam Young
I've filed https://bugs.launchpad.net/charm-mysql-router/+bug/1973177 to
track this separately.


[Bug 1907250] Re: [focal] charm becomes blocked with workload-status "Failed to connect to MySQL"

2022-05-12 Thread Liam Young
One of the causes of a charm going into a "Failed to connect to MySQL"
state is that a connection to the database failed when the db-router
charm attempted to restart the db-router service. Currently the charm
will only retry the connection in response to one return code from
mysql: 2013, "Lost connection to MySQL server during query" *1.
However, if the connection cannot be established in the first place,
the error returned is 2003, "Can't connect to MySQL server on...".



*1 https://dev.mysql.com/doc/mysql-errors/8.0/en/client-error-reference.html
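A sketch of the broader retry suggested above — the error codes 2013 and 2003 are real MySQL client errors, but the exception class and retry helper here are hypothetical illustrations, not the actual charm code:

```python
import time

class MySQLError(Exception):
    """Hypothetical stand-in for a MySQL client error carrying an errno."""
    def __init__(self, errno, msg):
        super().__init__(msg)
        self.errno = errno

# Retry on both "lost connection mid-query" (2013) and "can't connect
# at all" (2003), instead of only 2013 as described above.
RETRYABLE_ERRORS = {2003, 2013}

def connect_with_retry(connect, attempts=3, delay=1.0):
    """Call connect(), retrying on known-transient MySQL client errors."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except MySQLError as err:
            if err.errno not in RETRYABLE_ERRORS or attempt == attempts:
                raise
            time.sleep(delay)
```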


[Bug 1969775] [NEW] rbd-target-api crashes with `blacklist removal failed`

2022-04-21 Thread Liam Young
Public bug reported:

[Impact]
 * ceph-iscsi on Focal talking to a Pacific or later Ceph cluster

 * rbd-target-api service fails to start if there is a blocklist
   entry for the unit.

 * When the rbd-target-api service starts it checks if any of the
   ip addresses on the machine it is running on are listed as
   blocked. If there are entries it tries to remove them. When it
   issues the block removal command it checks stdout from the
   removal command looking for the string `un-blacklisting`.
   However from Pacific onward a successful unblocking returns
   `un-blocklisting` instead 
(https://github.com/ceph/ceph/commit/dfd01d765304ed8783cef613930e65980d9aee23)
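A version-tolerant check could accept both markers. A minimal sketch (the helper name is hypothetical; the two strings are the real pre- and post-Pacific outputs):

```python
# Accept both the pre-Pacific ("un-blacklisting") and Pacific-onward
# ("un-blocklisting") success markers when parsing the removal output.
SUCCESS_MARKERS = ("un-blacklisting", "un-blocklisting")

def removal_succeeded(output: bytes) -> bool:
    # check_output returns bytes under Python 3, so decode before the
    # substring test.
    text = output.decode("utf-8", errors="replace")
    return any(marker in text for marker in SUCCESS_MARKERS)
```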

[Test Plan]

 If an existing ceph-iscsi deployment is available then skip to
 step 3.

 1) Deploy the bundle below (tested with OpenStack provider).

series: focal
applications:
  ceph-iscsi:
    charm: cs:ceph-iscsi
    num_units: 2
  ceph-osd:
    charm: ch:ceph-osd
    num_units: 3
    storage:
      osd-devices: 'cinder,10G'
    options:
      osd-devices: '/dev/test-non-existent'
      source: yoga
    channel: latest/edge
  ceph-mon:
    charm: ch:ceph-mon
    num_units: 3
    options:
      monitor-count: '3'
      source: yoga
    channel: latest/edge
relations:
  - - 'ceph-mon:client'
    - 'ceph-iscsi:ceph-client'
  - - 'ceph-osd:mon'
    - 'ceph-mon:osd'

 2) Connect to ceph-iscsi unit:

juju ssh -m zaza-a1d88053ab85 ceph-iscsi/0

 3) Stop rbd-target-api via systemd to make test case clearer:

sudo systemctl stop rbd-target-api

 4) Add 2 blocklist entries for this unit. (Due to another issue, the
ordering of the output from `osd blacklist ls` matters, which can make
reproduction of this bug intermittent. To avoid this, add two entries,
which ensures there is always an entry for this node in the list of
blocklist entries to be removed.)

sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/1
sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/2
sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist ls
  listed 2 entries
  172.20.0.135:0/2 2022-02-23T11:14:54.850352+0000
  172.20.0.135:0/1 2022-02-23T11:14:52.502592+0000

 5) Attempt to start service:

sudo /usr/bin/python3 /usr/bin/rbd-target-api

At this point the process should be running in the foreground but instead
it will die. The log from the service will have an entry like:

2022-04-21 12:35:21,695 CRITICAL [gateway.py:51:ceph_rm_blacklist()] - blacklist removal failed. Run 'ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist rm 172.20.0.156:0/1'

[Where problems could occur]

 * Problems could occur with the service starting as this blocklist
check is done at startup.

 * Blocklist entries could fail to be removed.


This issue is very similar to Bug #1883112

** Affects: ceph-iscsi (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  [Impact]
-  * ceph-iscsi on Focal talking to a Pacific or later Ceph cluster
+  * ceph-iscsi on Focal talking to a Pacific or later Ceph cluster
  
-  * rbd-target-api service fails to start if there is a blocklist
-entry for the unit.
+  * rbd-target-api service fails to start if there is a blocklist
+    entry for the unit.
  
-  * When the rbd-target-api service starts it checks if any of the
-ip addresses on the machine it is running on are listed as
-blocked. If there are entries it tries to remove them. When it
-issues the block removal command it checks stdout from the
-removal command looking for the string `un-blacklisting`.
-However from Pacific onward a successful unblocking returns
-`un-blocklisting` instead 
(https://github.com/ceph/ceph/commit/dfd01d765304ed8783cef613930e65980d9aee23)
- 
+  * When the rbd-target-api service starts it checks if any of the
+    ip addresses on the machine it is running on are listed as
+    blocked. If there are entries it tries to remove them. When it
+    issues the block removal command it checks stdout from the
+    removal command looking for the string `un-blacklisting`.
+    However from Pacific onward a successful unblocking returns
+    `un-blocklisting` instead 
(https://github.com/ceph/ceph/commit/dfd01d765304ed8783cef613930e65980d9aee23)
  
  [Test Plan]
  
-  If an existing ceph-iscsi deployment is available then skip to
-  step 3.
+  If an existing ceph-iscsi deployment is available then skip to
+  step 3.
  
-  1) Deploy the bundle below (tested with OpenStack provider).
+  1) Deploy the bundle below (tested with OpenStack provider).
  
  series: focal
  applications:
-   ceph-iscsi:
- charm: cs:ceph-iscsi
- num_units: 2
-   ceph-osd:
- charm: ch:ceph-osd
- num_units: 3
- storage:
-   osd-devices: 'cinder,10G'
- options:
-   osd-devices: '/dev/test-non-existent'
-   source: 

[Bug 1909399] Re: Exception during removal of OSD blacklist entries

2022-04-21 Thread Liam Young
*** This bug is a duplicate of bug 1883112 ***
https://bugs.launchpad.net/bugs/1883112

** This bug has been marked a duplicate of bug 1883112
   rbd-target-api crashes with python TypeError


[Bug 1968586] [NEW] apparmor rules block socket and log creation

2022-04-11 Thread Liam Young
Public bug reported:

While testing using openstack, guests failed to launch and these denied
messages were logged:

[ 8307.089627] audit: type=1400 audit(1649684291.592:109):
apparmor="DENIED" operation="mknod" profile="swtpm"
name="/run/libvirt/qemu/swtpm/11-instance-000b-swtpm.sock"
pid=141283 comm="swtpm" requested_mask="c" denied_mask="c" fsuid=117
ouid=117

[10363.999211] audit: type=1400 audit(1649686348.455:115):
apparmor="DENIED" operation="open" profile="swtpm"
name="/var/log/swtpm/libvirt/qemu/instance-000e-swtpm.log"
pid=184479 comm="swtpm" requested_mask="ac" denied_mask="ac" fsuid=117
ouid=117

Adding 
  /run/libvirt/qemu/swtpm/* rwk,
  /var/log/swtpm/libvirt/qemu/* rwk,


to /etc/apparmor.d/usr.bin.swtpm and reloading the profile seems to fix the 
issue.

(Note: This is very similar to existing Bug #1968335)

** Affects: swtpm (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1965280] Re: rbd-target-api will not start AttributeError: 'Context' object has no attribute 'wrap_socket'

2022-03-17 Thread Liam Young
** Patch added: "ceph-iscsi-deb.diff"
   
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280/+attachment/5569987/+files/ceph-iscsi-deb.diff


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-03-17 Thread Liam Young
Verification on impish failed due to
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280


[Bug 1965280] [NEW] rbd-target-api will not start AttributeError: 'Context' object has no attribute 'wrap_socket'

2022-03-17 Thread Liam Young
Public bug reported:

The rbd-target-api service fails to start on Ubuntu Impish (21.10) and
later. This appears to be caused by a werkzeug package version check in
rbd-target-api. The check is used to decide whether to add an
OpenSSL.SSL.Context or an ssl.SSLContext. The code comment suggests
that ssl.SSLContext is used from werkzeug 0.9 onward so that TLSv1.2
can be used. It is also worth noting that support for
OpenSSL.SSL.Context was dropped in werkzeug 0.10. The intention of the
check appears to be to use OpenSSL.SSL.Context if the version of
werkzeug is below 0.9 and ssl.SSLContext otherwise. However, when
rbd-target-api checks the werkzeug version it only looks at the minor
revision number, and Ubuntu Impish contains werkzeug 1.0.1, whose minor
revision number is 0. This causes rbd-target-api to use an
OpenSSL.SSL.Context, which is no longer supported by werkzeug, and the
startup fails with:

# /usr/bin/rbd-target-api
 * Serving Flask app 'rbd-target-api' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 3022, in <module>
    main()
  File "/usr/bin/rbd-target-api", line 2952, in main
    app.run(host=settings.config.api_host,
  File "/usr/lib/python3/dist-packages/flask/app.py", line 922, in run
    run_simple(t.cast(str, host), port, self, **options)
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 1010, in run_simple
    inner()
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 950, in inner
    srv = make_server(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 782, in make_server
    return ThreadedWSGIServer(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 708, in __init__
    self.socket = ssl_context.wrap_socket(self.socket, server_side=True)
AttributeError: 'Context' object has no attribute 'wrap_socket'

Reported upstream here: https://github.com/ceph/ceph-iscsi/issues/255
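The version check described above can be sketched as follows; the function names are illustrative, not the actual rbd-target-api code:

```python
# Buggy approach: inspect only the minor component, so werkzeug 1.0.1
# (minor == 0) is wrongly treated like a pre-0.9 release and gets an
# OpenSSL.SSL.Context.
def uses_pyopenssl_buggy(werkzeug_version: str) -> bool:
    minor = int(werkzeug_version.split(".")[1])
    return minor < 9

# Safer approach: compare the whole (major, minor) pair against (0, 9).
def uses_pyopenssl_fixed(werkzeug_version: str) -> bool:
    major, minor = (int(p) for p in werkzeug_version.split(".")[:2])
    return (major, minor) < (0, 9)
```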

** Affects: ceph-iscsi (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-03-16 Thread Liam Young
Tested successfully on focal with 3.4-0ubuntu2.1

Tested with ceph-iscsi charms functional tests which were previously
failing.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 20.04.4 LTS
Release:20.04
Codename:   focal

$ apt-cache policy ceph-iscsi
ceph-iscsi:
  Installed: 3.4-0ubuntu2.1
  Candidate: 3.4-0ubuntu2.1
  Version table:
 *** 3.4-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/universe amd64 
Packages
100 /var/lib/dpkg/status
 3.4-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 
Packages


** Tags removed: verification-needed-focal
** Tags added: verification-done-focal


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-03-15 Thread Liam Young
** Patch added: "gw-deb.diff"
   
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5569162/+files/gw-deb.diff


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-03-15 Thread Liam Young
Thank you for the update Robie. I proposed the deb diff based on the fix
that had landed upstream because I (wrongly) thought that was what the
SRU policy required. I think it makes more sense to go for the minimal
fix you suggest.


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-02-23 Thread Liam Young
** Description changed:

+ [Impact]
+ 
+  * rbd-target-api service fails to start if there is a blocklist
+entry for the unit making the service unavailable.
+ 
+  * When the rbd-target-api service starts it checks if any of the
+ip addresses on the machine it is running on are listed as
+blocked. If there are entries it tries to remove them. In the
+process of removing the entries the code attempts to test whether
+a string is in the result of a subprocess.check_output call. This 
+would have worked in python2 but with python3 a byte like object
+is returned and check now throws a TypeError. This fix, taken from
+upstream, changes the code to remove the `in` check and replace it
+with a try/except
+ 
+ [Test Plan]
+ 
+  If an existing ceph-iscsi deployment is available then skip to
+  step 3.
+ 
+  1) Deploy the bundle below (tested with OpenStack providor).
+  
+ series: focal
+ applications:
+   ceph-iscsi:
+     charm: cs:ceph-iscsi
+     num_units: 2
+   ceph-osd:
+     charm: ch:ceph-osd
+     num_units: 3
+     storage:
+       osd-devices: 'cinder,10G'
+     options:
+       osd-devices: '/dev/test-non-existent'
+     channel: latest/edge
+   ceph-mon:
+     charm: ch:ceph-mon
+     num_units: 3
+     options:
+       monitor-count: '3'
+     channel: latest/edge
+ relations:
+   - - 'ceph-mon:client'
+     - 'ceph-iscsi:ceph-client'
+   - - 'ceph-osd:mon'
+     - 'ceph-mon:osd'
+ 
+ 
+  2) Connect to ceph-iscsi unit:
+  
+ juju ssh -m zaza-a1d88053ab85 ceph-iscsi/0
+ 
+  3) Stop rbd-target-api via systemd to make test case clearer:
+ 
+ sudo systemctl stop rbd-target-api
+ 
+  4) Add 2 blocklist entries for this unit (due to another issue the
+ ordering of the output from `osd blacklist ls` matters which can lead to
+ the reproduction of this bug being intermittent. To avoid this add two
+ entries which ensures there is always an entry for this node in the list
+ of blocklist entries to be removed).
+ 
+ sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/1
+ sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/2
+ sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist ls
+   listed 2 entries
+   172.20.0.135:0/2 2022-02-23T11:14:54.850352+0000
+   172.20.0.135:0/1 2022-02-23T11:14:52.502592+0000
+ 
+ 
+  5) Attempt to start service:
+ 
+ sudo /usr/bin/python3 /usr/bin/rbd-target-api
+ Traceback (most recent call last):
+   File "/usr/bin/rbd-target-api", line 2952, in <module>
+     main()
+   File "/usr/bin/rbd-target-api", line 2862, in main
+     osd_state_ok = ceph_gw.osd_blacklist_cleanup()
+   File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 111, in osd_blacklist_cleanup
+     rm_ok = self.ceph_rm_blacklist(blacklist_entry.split(' ')[0])
+   File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 46, in ceph_rm_blacklist
+     if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
+ TypeError: a bytes-like object is required, not 'str'
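The TypeError above can be reproduced in isolation: under Python 3, subprocess.check_output returns bytes, so a str membership test raises exactly this error:

```python
import subprocess

# check_output returns bytes under Python 3 (it was str under Python 2).
result = subprocess.check_output(["echo", "un-blacklisting"])
assert isinstance(result, bytes)

try:
    "un-blacklisting" in result  # str-in-bytes: raises TypeError
except TypeError as err:
    print(err)  # a bytes-like object is required, not 'str'

# Decoding first (or comparing bytes to bytes) avoids the crash.
assert b"un-blacklisting" in result
assert "un-blacklisting" in result.decode()
```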
+ 
+ 
+ [Where problems could occur]
+ 
+  * Problems could occur with the service starting as this blocklist check is 
done at startup.
+
+  * Blocklist entries could fail to be removed.
+ 
+ Old bug description:
+ 
  $ lsb_release -rd
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  
  $ dpkg -S /usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py
  ceph-iscsi: /usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py
  
  $ apt-cache policy ceph-iscsi
  ceph-iscsi:
-   Installed: 3.4-0ubuntu2
-   Candidate: 3.4-0ubuntu2
-   Version table:
-  *** 3.4-0ubuntu2 500
- 500 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 Packages
- 500 http://de.archive.ubuntu.com/ubuntu focal/universe i386 Packages
- 100 /var/lib/dpkg/status
+   Installed: 3.4-0ubuntu2
+   Candidate: 3.4-0ubuntu2
+   Version table:
+  *** 3.4-0ubuntu2 500
+ 500 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 Packages
+ 500 http://de.archive.ubuntu.com/ubuntu focal/universe i386 Packages
+ 100 /var/lib/dpkg/status
  
  On second startup after a reboot, rbd-target-api crashes with a
  TypeError:
  
  Traceback (most recent call last):
-   File "/usr/bin/rbd-target-api", line 2952, in <module>
-     main()
-   File "/usr/bin/rbd-target-api", line 2862, in main
-     osd_state_ok = ceph_gw.osd_blacklist_cleanup()
-   File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 110, in osd_blacklist_cleanup
-     rm_ok = self.ceph_rm_blacklist(blacklist_entry.split(' ')[0])
-   File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 46, in ceph_rm_blacklist
-     if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
+   File "/usr/bin/rbd-target-api", line 2952, in <module>
+     main()
+   File 

[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-02-22 Thread Liam Young
** Patch added: "deb.diff"
   
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5562748/+files/deb.diff


[Bug 1883112] Re: rbd-target-api crashes with python TypeError

2022-02-22 Thread Liam Young
** Changed in: ceph-iscsi (Ubuntu)
   Status: New => Confirmed


[Bug 1954306] Re: Action `remove-instance` works but appears to fail

2021-12-16 Thread Liam Young
s/The issue appears when using the mysql to/The issue appears when using
the mysql shell to/


[Bug 1954306] Re: Action `remove-instance` works but appears to fail

2021-12-16 Thread Liam Young
I don't think this is a charm bug. The issue appears when using the
mysql shell to remove a node from the cluster. From what I can see you
cannot persist group_replication_force_members, and it is correctly
unset. So the error being reported seems wrong.

https://pastebin.ubuntu.com/p/sx6ZB3rs6r/

root@juju-1f04f3-zaza-90b9e082f2aa-2:/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm#
 /snap/bin/mysqlsh 
Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
MySQL Shell 8.0.23

Copyright (c) 2016, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
mysql-py> shell.connect('clusteruser:d2Z27kpxZmJ826tSVWL6SVV4LYZhZwwryHtM@172.20.0.111')
Creating a session to 'clusteruser@172.20.0.111'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 1644 (X protocol)
Server version: 8.0.27-0ubuntu0.20.04.1 (Ubuntu)
No default schema selected; type \use <schema> to set one.

mysql-py []> cluster = dba.get_cluster('jujuCluster')
mysql-py []> cluster.remove_instance('clusteruser@172.20.0.166', {'force': False})
The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.

Instance '172.20.0.166:3306' is attempting to leave the cluster...
ERROR: Instance '172.20.0.166:3306' failed to leave the cluster: Variable 'group_replication_force_members' is a non persistent variable
Traceback (most recent call last):
  File "<string>", line 1, in <module>
mysqlsh.DBError: MySQL Error (1238): Cluster.remove_instance: Variable 'group_replication_force_members' is a non persistent variable
mysql-py []> \sql show variables like 'group_replication_force_members';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| group_replication_force_members |       |
+---------------------------------+-------+
1 row in set (0.0086 sec)


[Bug 1954306] Re: Action `remove-instance` works but appears to fail

2021-12-16 Thread Liam Young
** Also affects: mysql-8.0 (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: charm-mysql-innodb-cluster
   Status: New => Invalid


[Bug 1944080] Re: [fan-network] Race-condition between "apt update" and dhcp request causes cloud-init error

2021-12-13 Thread Liam Young
Perhaps I'm missing something but this does not seem to be a bug in the
rabbitmq-server charm. It may be easier to observe there but the root
cause is elsewhere.

** Changed in: charm-rabbitmq-server
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944080

Title:
  [fan-network] Race-condition between "apt update" and dhcp request
  causes cloud-init error

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1944080/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-11-02 Thread Liam Young
Tested successfully on focal victoria using 1:11.0.0-0ubuntu1~cloud1 . I
created an encrypted volume and attached it to a VM.

cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create --volume-type LUKS --poll --name testvol 1
openstack keypair show guests || openstack keypair create --public-key ~/.ssh/id_rsa_guests.pub guests
openstack flavor create --id 8 --ram 1024 --disk 8 --vcpus 1 --public m1.ly
openstack server create --image bionic --flavor m1.ly --network private --key-name guests --wait test3
openstack floating ip create ext_net
openstack server add floating ip test3 172.20.0.235
openstack server add volume --device /dev/vdb test3 testvol

cinder list
WARNING:cinderclient.shell:API version 3.64 requested, 
WARNING:cinderclient.shell:downgrading to 3.62 based on server support.
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| 7ea1296e-a478-4aea-ade0-49f00034b58b | in-use | testvol | 1    | LUKS        | false    | e1b2c025-0ede-4330-9129-80f6c281ac4d |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

cinder show 7ea1296e-a478-4aea-ade0-49f00034b58b
WARNING:cinderclient.shell:API version 3.64 requested, 
WARNING:cinderclient.shell:downgrading to 3.62 based on server support.
+--------------------------------+------------------------------------------+
| Property                       | Value                                    |
+--------------------------------+------------------------------------------+
| attached_servers   | ['e1b2c025-0ede-4330-9129-80f6c281ac4d'] |
| attachment_ids | ['c4410464-ff27-4234-9f5f-c5a7b094463b'] |
| availability_zone  | nova |
| bootable   | false|
| cluster_name   | None |
| consistencygroup_id| None |
| created_at | 2021-11-02T11:23:28.00   |
| description| None |
| encrypted  | True |
| group_id   | None |
| id | 7ea1296e-a478-4aea-ade0-49f00034b58b |
| metadata   |  |
| migration_status   | None |
| multiattach| False|
| name   | testvol  |
| os-vol-host-attr:host  | juju-4766ac-zaza-f0e92451c718-11@LVM#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id   | 92c507c64e5b47d886e68b0a874499e6 |
| provider_id| None |
| replication_status | None |
| service_uuid   | 4e51ffb9-c259-4647-9a9a-d0adb19d0f6d |
| shared_targets | False|
| size   | 1|
| snapshot_id| None |
| source_volid   | None |
| status | in-use   |
| updated_at | 2021-11-02T11:40:16.00   |
| user_id| 0f41207ddcfd4bd5ab8ac694c772b709 |
| volume_type| LUKS |
+--------------------------------+------------------------------------------+


** Tags removed: verification-victoria-needed
** Tags added: verification-victoria-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-11-01 Thread Liam Young
Tested successfully on focal wallaby using  2:12.0.0-0ubuntu2~cloud0 . I
created an encrypted volume and attached it to a VM.

cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create --volume-type LUKS --poll --name testvol 1
openstack keypair show guests || openstack keypair create --public-key ~/.ssh/id_rsa_guests.pub guests
openstack flavor create --id 8 --ram 1024 --disk 8 --vcpus 1 --public m1.ly
openstack server create --image bionic --flavor m1.ly --network private --key-name guests --wait test3
openstack floating ip create ext_net
openstack server add floating ip test3 172.20.0.207
openstack server add volume --device /dev/vdb test3 testvol
cinder list
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ebf6c7d9-aac4-440e-b29f-c4ddd6a3e544 | in-use | testvol | 1    | LUKS        | false    | 6c47befa-4b32-4d87-9a03-c23e26ed9255 |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

cinder show testvol

+--------------------------------+------------------------------------------+
| Property                       | Value                                    |
+--------------------------------+------------------------------------------+
| attached_servers   | ['6c47befa-4b32-4d87-9a03-c23e26ed9255'] |
| attachment_ids | ['c6653494-c23e-4312-a441-f86eba08794f'] |
| availability_zone  | nova |
| bootable   | false|
| cluster_name   | None |
| consistencygroup_id| None |
| created_at | 2021-11-01T18:15:41.00   |
| description| None |
| encrypted  | True |
| encryption_key_id  | dde779f5-ad06-45e8-979c-37dd3cea8505 |
| group_id   | None |
| id | ebf6c7d9-aac4-440e-b29f-c4ddd6a3e544 |
| metadata   |  |
| migration_status   | None |
| multiattach| False|
| name   | testvol  |
| os-vol-host-attr:host  | juju-9ce866-zaza-17f25c1dd768-11@LVM#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id   | 9a7dac3a794f42f79bf32707ebbffb5f |
| provider_id| None |
| replication_status | None |
| service_uuid   | 86a123a1-3845-4099-8b37-52cec2a787de |
| shared_targets | False|
| size   | 1|
| snapshot_id| None |
| source_volid   | None |
| status | in-use   |
| updated_at | 2021-11-01T18:33:38.00   |
| user_id| 6f0383a710674745aaffbf083c101f52 |
| volume_type| LUKS |
| volume_type_id | 25408c30-0ffc-4584-99cd-dc834962bab7 |
+--------------------------------+------------------------------------------+


** Tags removed: verification-wallaby-needed
** Tags added: verification-wallaby-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-11-01 Thread Liam Young
Tested successfully on hirsute using 2:12.0.0-0ubuntu2 . I created an
encrypted volume and attached it to a VM.


cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create --volume-type LUKS --poll --name testvol 1
openstack keypair show guests || openstack keypair create --public-key ~/.ssh/id_rsa_guests.pub guests
openstack flavor create --id 8 --ram 1024 --disk 8 --vcpus 1 --public m1.ly
openstack server create --image bionic --flavor m1.ly --network private --key-name guests --wait test3
openstack floating ip create ext_net
openstack server add floating ip test3 172.20.0.207
openstack server add volume --device /dev/vdb test3 testvol
cinder list
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| 67564b48-54b7-47bf-ac95-d701b455cb7d | in-use | testvol | 1    | LUKS        | false    | 6c43fed1-a195-47d8-b5a9-dc7fd166bf58 |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

cinder show testvol
+--------------------------------+------------------------------------------+
| Property                       | Value                                    |
+--------------------------------+------------------------------------------+
| attached_servers   | ['6c43fed1-a195-47d8-b5a9-dc7fd166bf58'] |
| attachment_ids | ['f0c3ed24-2973-407a-b6f6-afcef999ed43'] |
| availability_zone  | nova |
| bootable   | false|
| cluster_name   | None |
| consistencygroup_id| None |
| created_at | 2021-11-01T16:38:32.00   |
| description| None |
| encrypted  | True |
| encryption_key_id  | c6079e38-fe86-4e16-aee0-09d07fdfc719 |
| group_id   | None |
| id | 67564b48-54b7-47bf-ac95-d701b455cb7d |
| metadata   |  |
| migration_status   | None |
| multiattach| False|
| name   | testvol  |
| os-vol-host-attr:host  | juju-86a900-zaza-c440171f601b-11@LVM#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id   | 6485c947c61046c99b88e8f5f3bcae9a |
| provider_id| None |
| replication_status | None |
| service_uuid   | 5a4cf232-59a0-4cd9-8d3f-badd74e9a5e8 |
| shared_targets | False|
| size   | 1|
| snapshot_id| None |
| source_volid   | None |
| status | in-use   |
| updated_at | 2021-11-01T17:27:36.00   |
| user_id| d16ea8b7d0d542d8b2f36f6a121434bc |
| volume_type| LUKS |
| volume_type_id | 2bfe04b8-3e70-412f-a348-f6f5ff359991 |
+--------------------------------+------------------------------------------+


** Tags removed: verification-needed-hirsute
** Tags added: verification-done-hirsute

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1938299] Re: Unable to SSH Into Instance when deploying Impish 21.10

2021-10-11 Thread Liam Hopkins
Just to add some info on guest agent here:

* the guest agent does not set up the primary interface
* there should be no race between guest agent and cloud-init for the primary interface
* the guest agent does not start any dhclient process for the primary interface, and should not care if any dhclient process on the system is killed

So a number of comments in this bug, such as "killing dhclient leaves guest agent dead", are not true.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1938299

Title:
  Unable to SSH Into Instance when deploying Impish 21.10

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1938299/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1943863] Re: DPDK instances are failing to start: Failed to bind socket to /run/libvirt-vhost-user/vhu3ba44fdc-7c: No such file or directory

2021-09-22 Thread Liam Young
https://github.com/openstack-charmers/charm-layer-ovn/pull/52

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron

** No longer affects: neutron (Ubuntu)

** Also affects: charm-layer-ovn
   Importance: Undecided
   Status: New

** Changed in: charm-layer-ovn
   Status: New => Confirmed

** Changed in: charm-layer-ovn
   Importance: Undecided => High

** Changed in: charm-layer-ovn
 Assignee: (unassigned) => Liam Young (gnuoy)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1943863

Title:
  DPDK instances are failing to start: Failed to bind socket to
  /run/libvirt-vhost-user/vhu3ba44fdc-7c: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-layer-ovn/+bug/1943863/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal

2021-09-22 Thread Liam Young
** Changed in: charm-neutron-gateway
 Assignee: (unassigned) => Liam Young (gnuoy)

** Changed in: charm-neutron-gateway
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944424

Title:
  AppArmor causing HA routers to be in backup state on wallaby-focal

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-gateway/+bug/1944424/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal

2021-09-22 Thread Liam Young
** Changed in: charm-neutron-gateway
   Status: Invalid => Confirmed

** Changed in: neutron (Ubuntu)
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944424

Title:
  AppArmor causing HA routers to be in backup state on wallaby-focal

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-gateway/+bug/1944424/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal

2021-09-22 Thread Liam Young
A patch was introduced [0] "..which sets the backup gateway
device link down by default. When the VRRP sets the master state in
one host, the L3 agent state change procedure will
do link up action for the gateway device.".

This change causes an issue when using keepalived 2.X (focal+) which
is fixed by patch [1] which adds a new 'no_track' option to all VIPs
and routes in keepalived's config file.

Patch [1], which fixed keepalived 2.X, broke keepalived 1.X.

[0] https://review.opendev.org/c/openstack/neutron/+/707406
[1] https://review.opendev.org/c/openstack/neutron/+/721799
[2] https://review.opendev.org/c/openstack/neutron/+/745641
[3] https://review.opendev.org/c/openstack/neutron/+/757620
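For illustration, the 'no_track' keyword added by patch [1] ends up in a keepalived 2.x VRRP configuration roughly like this (the interface name, router id and address below are invented for the example, not taken from the bug):

```
vrrp_instance VR_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    virtual_ipaddress {
        # no_track: keepalived 2.x will not track this VIP's link state,
        # so a deliberately link-down backup gateway device no longer
        # drags the router into FAULT state
        169.254.0.1/24 dev eth0 no_track
    }
}
```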


** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Confirmed

** Changed in: charm-neutron-gateway
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944424

Title:
  AppArmor causing HA routers to be in backup state on wallaby-focal

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-gateway/+bug/1944424/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1943863] Re: DPDK instances are failing to start: Failed to bind socket to /run/libvirt-vhost-user/vhu3ba44fdc-7c: No such file or directory

2021-09-22 Thread Liam Young
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1943863

Title:
  DPDK instances are failing to start: Failed to bind socket to
  /run/libvirt-vhost-user/vhu3ba44fdc-7c: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1943863/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1743798] Re: Kernel sometimes panics during early boot if CPU microcode archive prepended to initramfs

2021-07-28 Thread Liam Proven
I had the same issue with 20.04 on a Thinkpad X220.

I managed to resolve it by installing the HWE kernel, adding a dedicated
swap partition on another drive, purging ZRAM, and rebuilding my
`initrd`.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1743798

Title:
  Kernel sometimes panics during early boot if CPU microcode archive
  prepended to initramfs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-hwe/+bug/1743798/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1893964] Re: Installation of Ubuntu Groovy with manual partitioning without an EFI System Partition fails on 'grub-install /dev/sda' even on non-UEFI systems

2021-04-18 Thread Liam Proven
@jeremie2

Ah, fair enough. Mostly I use Ventoy these days, and once the USB key is
formatted with Ventoy, you just copy .ISO files onto it and they
automagically appear in the Ventoy boot menu. So no need for Balena
Etcher etc. any more. Ventoy itself is bootable on BIOS and UEFI PCs and
on Intel Macs.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1893964

Title:
  Installation of Ubuntu Groovy with manual partitioning without an EFI
  System Partition fails on 'grub-install /dev/sda' even on non-UEFI
  systems

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1893964/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1893964] Re: Installation of Ubuntu Groovy with manual partitioning without an EFI System Partition fails on 'grub-install /dev/sda' even on non-UEFI systems

2021-04-18 Thread Liam Proven
In reply to @jeremie2 in comment #24:

I don't think this is a general description of the problem, because for
me, my USB boot keys don't have separate EFI boot partitions.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1893964

Title:
  Installation of Ubuntu Groovy with manual partitioning without an EFI
  System Partition fails on 'grub-install /dev/sda' even on non-UEFI
  systems

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1893964/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1915152] Re: Installation of Ubuntu Unity 20.10 always fails if there is no EFI partition

2021-04-10 Thread Liam Proven
*** This bug is a duplicate of bug 1893964 ***
https://bugs.launchpad.net/bugs/1893964

** This bug has been marked a duplicate of bug 1893964
   Installation of Ubuntu Groovy with manual partitioning without an EFI System 
Partition fails on 'grub-install /dev/sda' even on non-UEFI systems

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915152

Title:
  Installation of Ubuntu Unity 20.10 always fails if there is no EFI
  partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1915152/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-03-18 Thread Liam Young
I have tested the rocky scenario that was failing for me. Trilio on
Train + OpenStack on Rocky. The Trilio functional test to snapshot a
server failed without the fix and passed once python3-oslo.messaging
8.1.0-0ubuntu1~cloud2.2 was installed and services restarted.

** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1917485] [NEW] Adding RBAC role to connection does not affect existing connections

2021-03-02 Thread Liam Young
Public bug reported:

It seems that updating the role attribute of a connection has no effect
on existing connections. For example when investigating another bug I
needed to disable rbac but to get that to take effect I needed to either
restart the southbound listener or the ovn-controller.

fwiw these are the steps I took to disable rbac (excluding the restart):

# ovn-sbctl find connection
_uuid   : a3b68994-4376-4506-81eb-e23d15641305
external_ids: {}
inactivity_probe: 6
is_connected: false
max_backoff : []
other_config: {}
read_only   : false
role: ""
status  : {}
target  : "pssl:16642"

_uuid   : ee53c2b6-ed8b-4b21-9825-a4ecaf2bdc95
external_ids: {}
inactivity_probe: 6
is_connected: false
max_backoff : []
other_config: {}
read_only   : false
role: ovn-controller
status  : {}
target  : "pssl:6642"

# ovn-sbctl set connection ee53c2b6-ed8b-4b21-9825-a4ecaf2bdc95 role='""'
# ovn-sbctl find connection
_uuid   : a3b68994-4376-4506-81eb-e23d15641305
external_ids: {}
inactivity_probe: 6
is_connected: false
max_backoff : []
other_config: {}
read_only   : false
role: ""
status  : {}
target  : "pssl:16642"

_uuid   : ee53c2b6-ed8b-4b21-9825-a4ecaf2bdc95
external_ids: {}

[Bug 1917475] [NEW] RBAC Permissions too strict for Port_Binding table

2021-03-02 Thread Liam Young
Public bug reported:

When using Openstack Ussuri with OVN 20.03 and adding a floating IP
address to a port the ovn-controller on the hypervisor repeatedly
reports:

2021-03-02T10:33:35.517Z|35359|ovsdb_idl|WARN|transaction error: {"details":"RBAC rules for client \"juju-eab186-zaza-d26c8c079cc7-11.project.serverstack\" role \"ovn-controller\" prohibit modification of table \"Port_Binding\".","error":"permission error"}
2021-03-02T10:33:35.518Z|35360|main|INFO|OVNSB commit failed, force recompute next time.

This seems to be because the ovn-controller needs to update the
virtual_parent attribute of the port binding *2, but that is not included
in the list of permissions allowed by the ovn-controller role *1.


*1 
https://github.com/ovn-org/ovn/blob/aa8ef5588c119fa8615d78288a7db7e3df2d6fbe/northd/ovn-northd.c#L11331-L11332
*2 https://pastebin.ubuntu.com/p/4CfcxgDgdm/

Disabling rbac by changing the role to "" and stopping and starting the
southbound db listener results in the port being immediately updated and
the floating IP can be accessed.

** Affects: ovn (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1917475

Title:
  RBAC Permissions too strict for Port_Binding table

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1917475/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation

2021-02-28 Thread Liam Young
I have tested the package in victoria proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
focal victoria functional tests which create an ovn loadbalancer and
check it is functional.

The log of the test run is here:

https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_smoke/openstack/charm-octavia/775364/4/22201/consoleText.test_charm_func_smoke_21480.txt

** Tags removed: verification-victoria-needed
** Tags added: verification-victoria-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603

Title:
  ovn-octavia-provider: Cannot create listener due to alowed_cidrs
  validation

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1896603/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation

2021-02-28 Thread Liam Young
I have tested the package in groovy proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
groovy victoria functional tests which create an ovn loadbalancer and
check it is functional.

The log of the test run is here:

https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_smoke/openstack/charm-octavia/775364/4/22201/consoleText.test_charm_func_smoke_21480.txt


** Tags removed: verification-needed-groovy
** Tags added: verification-done-groovy

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603

Title:
  ovn-octavia-provider: Cannot create listener due to alowed_cidrs
  validation

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1896603/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1915152] Re: Installation of Ubuntu Unity 20.10 always fails if there is no EFI partition

2021-02-12 Thread Liam Proven
Confirmed and reproduced in Xubuntu 20.10 as well. This issue is _not_
confined to Ubuntu Unity and is also present in an official remix.

Steps taken to try to resolve it:
* updated system BIOS (machine is a Lenovo Thinkpad W500; was on 3.18, now on 3.23, latest) -> no change
* tried 2 different flavours of 20.10 -> no change
* placed a bootable DOS primary partition (C:), set active, tested OK -> no change
* installed Windows 7 Enterprise SP1. No UEFI detected, no EFI partition created, no separate system partition needed or used.

Ubuntu Unity 20.04 went on this machine without a glitch. openSUSE Leap
15.2 was also fine.

The machine does not have UEFI, as far as I can tell. There is no option
to boot in legacy or UEFI mode.

The only way I have discovered to install 20.10 was to tell it my DOS
partition was the EFI System Partition. This worked fine, installed GRUB
into /dev/sda and now it boots.

However this has rendered my DOS partition unbootable, and now it is
mounted at /boot/efi which is not what I want.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915152

Title:
  Installation of Ubuntu Unity 20.10 always fails if there is no EFI
  partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1915152/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1915152] [NEW] Installation of Ubuntu Unity 20.10 always fails if there is no EFI partition

2021-02-09 Thread Liam Proven
Public bug reported:

Even on BIOS systems with no UEFI

ProblemType: Bug
DistroRelease: Ubuntu 20.10
Package: ubiquity 20.10.13
ProcVersionSignature: Ubuntu 5.8.0-25.26-generic 5.8.14
Uname: Linux 5.8.0-25-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu50
Architecture: amd64
CasperMD5CheckMismatches: 
./pool/main/b/binutils/binutils-common_2.35.1-1ubuntu1_amd64.deb
CasperMD5CheckResult: skip
CasperVersion: 1.455
Date: Tue Feb  9 15:38:57 2021
InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz boot=casper 
file=/cdrom/preseed/ubuntu.seed maybe-ubiquity ignore_uuid quiet splash ---
LiveMediaBuild: Ubuntu Unity 20.10
RebootRequiredPkgs:
 linux-image-5.8.0-25-generic
 linux-base
SourcePackage: ubiquity
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: ubiquity (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug groovy ubiquity-20.10.13 ubuntu

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915152

Title:
  Installation of Ubuntu Unity 20.10 always fails if there is no EFI
  partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1915152/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation

2021-01-27 Thread Liam Young
https://code.launchpad.net/~gnuoy/ubuntu/+source/ovn-octavia-provider/+git/ovn-octavia-provider/+merge/397023

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603

Title:
  ovn-octavia-provider: Cannot create listener due to alowed_cidrs
  validation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896603/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation

2021-01-27 Thread Liam Young
** Description changed:

- Kuryr-Kubernetes tests running with ovn-octavia-provider started to fail
- with "Provider 'ovn' does not support a requested option: OVN provider
- does not support allowed_cidrs option" showing up in the o-api logs.
+ [Impact]
  
- We've tracked that to check [1] getting introduced. Apparently it's
- broken and makes the request explode even if the property isn't set at
- all. Please take a look at output from python-openstackclient [2] where
- body I used is just '{"listener": {"loadbalancer_id": "faca9a1b-30dc-
- 45cb-80ce-2ab1c26b5521", "protocol": "TCP", "protocol_port": 80,
- "admin_state_up": true}}'.
+  * Users cannot add listeners to an Octavia loadbalancer if it was created using the ovn provider
+  * This makes the ovn provider unusable in Victoria and will force people to use the more painful alternative of using the Amphora driver
  
- Also this is all over your gates as well, see o-api log [3]. Somehow
- ovn-octavia-provider tests skip 171 results there, so that's why it's
- green.
+ [Test Case]
  
- [1] https://opendev.org/openstack/ovn-octavia-provider/src/branch/master/ovn_octavia_provider/driver.py#L142
- [2] http://paste.openstack.org/show/798197/
- [3] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4ba/751085/7/gate/ovn-octavia-provider-v2-dsvm-scenario/4bac575/controller/logs/screen-o-api.txt
+ $ openstack loadbalancer create --provider ovn --vip-subnet-id f92fa6ca-0f29-4b61-aeb6-db052caceff5 --name test-lb
+ $ openstack loadbalancer show test-lb -c provisioning_status (Repeat until it shows as ACTIVE)
+ $ openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 test-lb
+ Provider 'ovn' does not support a requested option: OVN provider does not support allowed_cidrs option (HTTP 501) (Request-ID: req-52a10944-951d-4414-8441-fe743444ed7c)
+ 
+ Alternatively run the focal-victoria-ha-ovn functional test in the
+ octavia charm
+ 
+ 
+ [Where problems could occur]
+ 
+  * Problems would be isolated to the management of octavia loadbalancers
+ within an openstack cloud. Specifically the patch fixes the checking of
+ the allowed_cidr option when a listener is created or updated.
+ 
+ 
+ [Other Info]

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603

Title:
  ovn-octavia-provider: Cannot create listener due to alowed_cidrs
  validation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896603/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation

2021-01-27 Thread Liam Young
** Also affects: ovn-octavia-provider (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603

Title:
  ovn-octavia-provider: Cannot create listener due to alowed_cidrs
  validation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896603/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904199] Re: [groovy-victoria] "gwcli /iscsi-targets/ create ..." fails with 1, GatewayError

2021-01-19 Thread Liam Young
I have tested focal and groovy and it is only happening on groovy. I
have not tried Hirsute.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904199

Title:
  [groovy-victoria] "gwcli /iscsi-targets/ create ..." fails with 1,
  GatewayError

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-iscsi/+bug/1904199/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904199] Re: [groovy-victoria] "gwcli /iscsi-targets/ create ..." fails with 1, GatewayError

2021-01-18 Thread Liam Young
I don't think this is a charm issue. It looks like an incompatibility
between ceph-iscsi and python3-werkzeug in groovy.

# /usr/bin/rbd-target-api
 * Serving Flask app "rbd-target-api" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production 
deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 2952, in <module>
    main()
  File "/usr/bin/rbd-target-api", line 2889, in main
    app.run(host=settings.config.api_host,
  File "/usr/lib/python3/dist-packages/flask/app.py", line 990, in run
    run_simple(host, port, self, **options)
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 1052, in run_simple
    inner()
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 996, in inner
    srv = make_server(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 847, in make_server
    return ThreadedWSGIServer(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 766, in __init__
    self.socket = ssl_context.wrap_socket(sock, server_side=True)
AttributeError: 'Context' object has no attribute 'wrap_socket'
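
The AttributeError boils down to an API mismatch: werkzeug wraps the
listening socket itself via context.wrap_socket(), a method only the
standard-library ssl.SSLContext provides; a pyOpenSSL OpenSSL.SSL.Context
(which rbd-target-api appears to pass) has no such method. A minimal
illustration of the interface werkzeug expects:

```python
import ssl

# make_server() in werkzeug calls ssl_context.wrap_socket(sock, ...),
# so the context it is handed must be a stdlib ssl.SSLContext.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
print(hasattr(ctx, "wrap_socket"))  # prints True
```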


** Also affects: ceph-iscsi (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: charm-ceph-iscsi
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904199

Title:
  [groovy-victoria] "gwcli /iscsi-targets/ create ..." fails with 1,
  GatewayError

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-iscsi/+bug/1904199/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1910656] [NEW] Crashed during install.

2021-01-07 Thread Liam Newsam
Public bug reported:

Crashed during install.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubiquity 20.04.15.2
ProcVersionSignature: Ubuntu 5.4.0-42.46-generic 5.4.44
Uname: Linux 5.4.0-42-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu27.4
Architecture: amd64
CasperMD5CheckResult: pass
CasperVersion: 1.445.1
Date: Thu Jan  7 20:40:32 2021
InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
maybe-ubiquity quiet splash nomodeset ---
LiveMediaBuild: Ubuntu 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
SourcePackage: ubiquity
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: ubiquity (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal ubiquity-20.04.15.2 ubuntu

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1910656

Title:
  Crashed during install.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1910656/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905986] Re: Don't provide transitional package for python3-google-compute-engine, add Breaks: in google-guest-agent instead

2020-12-10 Thread Liam Hopkins
I've never heard of the 'empty python3-google-compute-engine
transitional package'; for upstream packaging, we use "Conflicts:
python3-google-compute-engine" and this will cause the top level package
(called google-compute-engine upstream, I think called gce-compute-
image-packages in Ubuntu) to be skipped in upgrades, and an
administrator would have to issue dist-upgrade to enable the automatic
removal of the conflicting package.
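
The two packaging approaches under discussion look roughly like this in
debian/control (illustrative fragments, not the actual Ubuntu packaging):

```
Package: google-compute-engine
Conflicts: python3-google-compute-engine

Package: google-guest-agent
Breaks: python3-google-compute-engine
Replaces: python3-google-compute-engine
```

Conflicts: forbids co-installation outright, which is why plain upgrades
skip the package as described above; Breaks: is the weaker relationship,
typically paired with Replaces: when files move between packages, and gives
apt more room to resolve the upgrade automatically.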

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905986

Title:
  Don't provide transitional package for python3-google-compute-engine,
  add Breaks: in google-guest-agent instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gce-compute-image-packages/+bug/1905986/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1900897] Re: Please build the package as upstream does

2020-11-30 Thread Liam Hopkins
Please also apply this change to the google-guest-agent package

** Also affects: google-guest-agent (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900897

Title:
  Please build the package as upstream does

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/google-guest-agent/+bug/1900897/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1901248] Re: Please build the package as upstream does Edit

2020-11-30 Thread Liam Hopkins
*** This bug is a duplicate of bug 1900897 ***
https://bugs.launchpad.net/bugs/1900897

** Also affects: google-guest-agent (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1901248

Title:
  Please build the package as upstream does Edit

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/google-guest-agent/+bug/1901248/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1901248] [NEW] Please build the package as upstream does Edit

2020-10-23 Thread Liam Hopkins
Public bug reported:

Upstream's build parameters:

override_dh_auto_build:
   dh_auto_build -O--buildsystem=golang -- -ldflags="-s -w -X 
main.version=$(VERSION)-$(RELEASE)" -mod=readonly

 - Strip the binary
 - Set main.version

** Affects: google-osconfig-agent (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1901248

Title:
  Please build the package as upstream does Edit

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/google-osconfig-agent/+bug/1901248/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870314] Re: [needs-packaging] google-guest-agent

2020-08-26 Thread Liam Hopkins
It's a complicated situation, but I'll try to highlight some of the
reasons.

First, there is the complexity of existing files. We will only copy the
file if no file already exists, because it may exist from the previous
python guest, which automatically generated this file. There are also the
.template and .distro files, which may exist but which we never ship, and
so will never be package-owned.

Now, we want to support only ever having the user create or edit this
file (rather than generating it), so we will never attempt to modify the
file again after the first install case. So there is no value in marking
it as a config file. Also, since we support many distributions and not
all distributions support such upgrade paths with user-editable files,
we can't perform such upgrades even if we wanted to.

I think it's somewhat normal to have unowned files for certain cases
like this. I found a modern 20.04 image has 7 unowned files in
/etc/default already. However, if you really think it's against Ubuntu
policy to do this, we would prefer the Ubuntu variance be to not copy
the file at all. The code is written with defaults built in, the file
does not even need to exist. We copy it purely for customer convenience.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870314

Title:
  [needs-packaging] google-guest-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1870314/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870314] Re: [needs-packaging] google-guest-agent

2020-08-18 Thread Liam Hopkins
Systemd provides that functionality itself, internally. We don't want to
use UCF or mark this as a config file. We want to copy the file once on
installation iff it doesn't exist. It is otherwise an 'example' file.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870314

Title:
  [needs-packaging] google-guest-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1870314/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870314] Re: [needs-packaging] google-guest-agent

2020-08-14 Thread Liam Hopkins
The way that this file is managed has changed as part of this
replacement, and many customers have automatic updates enabled. We chose
not to mark this file as a config file, as we don't want that dialog to
appear. We only ever copy the file into place if it doesn't already
exist, and after that, it's up to the user to edit. This is similar to
how the SSHD package handles its configuration file.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870314

Title:
  [needs-packaging] google-guest-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1870314/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870314] Re: [needs-packaging] google-guest-agent

2020-08-13 Thread Liam Hopkins
I have looked at this package on a testing image in GCE. The instance
configs file has been shipped differently in this package vs ours - here
you are shipping it as /etc/defaults/instance_configs.cfg, we ship to
/usr/share/google-guest-agent/instance_configs.cfg

There are two problems with this change. First, the directory is
incorrect and should be /etc/default not /etc/defaults. This will be a
breaking change for new installs and new images. Second, we ship the
file to the /usr/share directory and copy it during new install ONLY if
it doesn't exist. This file may already exist from legacy software and
must not be modified if so.
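
The copy-once behaviour described above can be sketched like this (a minimal
stand-in using a temp directory; the real logic lives in the package's
maintainer scripts, and the paths here merely mirror the /usr/share and
/etc/default locations):

```python
import pathlib
import shutil
import tempfile

def install_example_config(src: pathlib.Path, dst: pathlib.Path) -> None:
    # Copy the shipped example into place only if no file exists yet;
    # an existing file (user-edited, or generated by the legacy
    # python guest) must never be touched.
    if not dst.exists():
        shutil.copy(src, dst)

tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / "instance_configs.cfg"          # stands in for /usr/share/...
dst = tmp / "default_instance_configs.cfg"  # stands in for /etc/default/...
src.write_text("shipped defaults\n")

install_example_config(src, dst)   # first install: file is copied
dst.write_text("user edit\n")      # administrator customises it
install_example_config(src, dst)   # reinstall/upgrade: left alone
print(dst.read_text())             # prints "user edit"
```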

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870314

Title:
  [needs-packaging] google-guest-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1870314/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1882900] Re: Missing sqlalchemy-utils dep on ussuri

2020-06-10 Thread Liam Young
Yep thats the traceback I'm seeing.

Charm shows:

2020-06-10 12:45:57 ERROR juju-log amqp:40: Hook error:
Traceback (most recent call last):
  File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py",
 line 74, in main
bus.dispatch(restricted=restricted_mode)
  File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py",
 line 390, in dispatch
_invoke(other_handlers)
  File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py",
 line 359, in _invoke
handler.invoke()
  File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py",
 line 181, in invoke
self._action(*args)
  File 
"/var/lib/juju/agents/unit-masakari-0/charm/reactive/masakari_handlers.py", 
line 50, in init_db
charm_class.db_sync()
  File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms_openstack/charm/core.py",
 line 849, in db_sync
subprocess.check_call(self.sync_cmd)
  File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['masakari-manage', '--config-file', 
'/etc/masakari/masakari.conf', 'db', 'sync']' returned non-zero exit status 1.

2020-06-10 12:45:57 DEBUG amqp-relation-changed Traceback (most recent call 
last):
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/charm/hooks/amqp-relation-changed", line 
22, in <module>
2020-06-10 12:45:57 DEBUG amqp-relation-changed main()
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py",
 line 74, in main
2020-06-10 12:45:57 DEBUG amqp-relation-changed 
bus.dispatch(restricted=restricted_mode)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py",
 line 390, in dispatch
2020-06-10 12:45:57 DEBUG amqp-relation-changed _invoke(other_handlers)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py",
 line 359, in _invoke
2020-06-10 12:45:57 DEBUG amqp-relation-changed handler.invoke()
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py",
 line 181, in invoke
2020-06-10 12:45:57 DEBUG amqp-relation-changed self._action(*args)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/charm/reactive/masakari_handlers.py", 
line 50, in init_db
2020-06-10 12:45:57 DEBUG amqp-relation-changed charm_class.db_sync()
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms_openstack/charm/core.py",
 line 849, in db_sync
2020-06-10 12:45:57 DEBUG amqp-relation-changed 
subprocess.check_call(self.sync_cmd)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File 
"/usr/lib/python3.6/subprocess.py", line 311, in check_call
2020-06-10 12:45:57 DEBUG amqp-relation-changed raise 
CalledProcessError(retcode, cmd)
2020-06-10 12:45:57 DEBUG amqp-relation-changed subprocess.CalledProcessError: 
Command '['masakari-manage', '--config-file', '/etc/masakari/masakari.conf', 
'db', 'sync']' returned non-zero exit status 1.


And manual run of masakari-manage returns:
root@juju-656c93-zaza-74a8633f51ae-9:~# masakari-manage --config-file 
/etc/masakari/masakari.conf db sync
2020-06-10 12:59:29.604 6755 INFO migrate.versioning.api [-] 5 -> 6... 
2020-06-10 12:59:29.606 6755 INFO masakari.engine.driver [-] Loading masakari 
notification driver 'taskflow_driver'
2020-06-10 12:59:29.681 6755 INFO keyring.backend [-] Loading Gnome
2020-06-10 12:59:29.695 6755 INFO keyring.backend [-] Loading Google
2020-06-10 12:59:29.697 6755 INFO keyring.backend [-] Loading Windows (alt)
2020-06-10 12:59:29.699 6755 INFO keyring.backend [-] Loading file
2020-06-10 12:59:29.700 6755 INFO keyring.backend [-] Loading keyczar
2020-06-10 12:59:29.700 6755 INFO keyring.backend [-] Loading multi
2020-06-10 12:59:29.701 6755 INFO keyring.backend [-] Loading pyfs
Invalid input received: No module named 'sqlalchemy_utils'

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1882900

Title:
  Missing sqlalchemy-utils dep on ussuri

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-taskflow/+bug/1882900/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1882900] Re: Missing sqlalchemy-utils dep on ussuri

2020-06-10 Thread Liam Young
It seems sqlalchemy-utils may have been removed recently in error
https://git.launchpad.net/ubuntu/+source/masakari/tree/debian/changelog?id=4d933765965f3d02cd68c696cc69cf53b7c6390d#n3

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1882900

Title:
  Missing sqlalchemy-utils dep on ussuri

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/masakari/+bug/1882900/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1882900] [NEW] Missing sqlalchemy-utils dep on ussuri

2020-06-10 Thread Liam Young
Public bug reported:

Package seems to be missing a dependency on sqlalchemy-utils *1. The
issue shows itself when running masakari-manage with the new 'taskflow'
section enabled *2

*1 
https://opendev.org/openstack/masakari/src/branch/stable/ussuri/requirements.txt#L29
*2 https://review.opendev.org/734450

I saw this with bionic ussuri but I assume it affects focal too.
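
The failure only surfaces at runtime, when the taskflow driver tries to
import sqlalchemy_utils. A quick way to check whether a module is installed
without importing it (shown with made-up module names so the sketch is
self-contained):

```python
import importlib.util

def module_available(name: str) -> bool:
    # find_spec() returns None when the module cannot be found on
    # sys.path, without actually importing it.
    return importlib.util.find_spec(name) is not None

print(module_available("json"))                  # prints True
print(module_available("sqlalchemy_utils_xyz"))  # prints False
```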

** Affects: masakari (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1882900

Title:
  Missing sqlalchemy-utils dep on ussuri

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/masakari/+bug/1882900/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1877547] [NEW] Dead cursor on 125% scaling on x11

2020-05-08 Thread Liam Demafelix
Public bug reported:

Opening a bug for this since all other bugs that reported this have been
closed.

On an X11 session, a dead secondary mouse is displayed when the scaling
for a user session has been set to 125% (fractional scaling).
Presumably, the dead cursor is a left-over from the login screen with
100% scaling.

I've attached a photo for reference. Screen capturing/print screen
functionality does not capture the dead cursor.

Going back to 100% after logging in, then reverting to 125% removes the
dead cursor.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: gnome-shell 3.36.1-5ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-29.33-generic 5.4.30
Uname: Linux 5.4.0-29-generic x86_64
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Fri May  8 17:44:03 2020
DisplayManager: gdm3
InstallationDate: Installed on 2020-05-08 (0 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
RelatedPackageVersions: mutter-common 3.36.1-3ubuntu3
SourcePackage: gnome-shell
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: gnome-shell (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal

** Attachment added: "20200508_174734.jpg"
   
https://bugs.launchpad.net/bugs/1877547/+attachment/5368243/+files/20200508_174734.jpg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1877547

Title:
  Dead cursor on 125% scaling on x11

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1877547/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1874719] Re: Focal deploy creates a 'node1' node

2020-04-24 Thread Liam Young
Having looked into it further, it seems to be the name of the node that
has changed.

juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu

juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes crmsh pacemaker"


$ juju run --unit focal-ubuntu/0 "sudo crm status"
Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Apr 24 15:03:52 2020
  * Last change:  Fri Apr 24 15:02:20 2020 by hacluster via crmd on node1
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 ]

Full List of Resources:
  * No resources


$ juju run --unit bionic-ubuntu/0 "sudo crm status"
Stack: corosync
Current DC: juju-27f7a7-hatest2-0 (version 1.1.18-2b07d5c5a9) - partition 
WITHOUT quorum
Last updated: Fri Apr 24 15:04:05 2020
Last change: Fri Apr 24 15:00:43 2020 by hacluster via crmd on 
juju-27f7a7-hatest2-0

1 node configured
0 resources configured

Online: [ juju-27f7a7-hatest2-0 ]

No resources
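
If the generic 'node1' name comes from corosync supplying no node name, one
way to pin the hostname would be an explicit nodelist entry in corosync.conf
(hypothetical fragment with a made-up address; whether this is the actual
cause of the change on focal is unconfirmed):

```
nodelist {
    node {
        ring0_addr: 10.0.0.5
        name: juju-27f7a7-hatest2-0
        nodeid: 1
    }
}
```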

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1874719

Title:
  Focal deploy creates a 'node1' node

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1874719/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1874719] [NEW] Focal deploy creates a 'node1' node

2020-04-24 Thread Liam Young
Public bug reported:

Testing of masakari on focal failed because the zaza test checks
that all pacemaker nodes are online. This check failed due to the
appearance of a new node called 'node1' which was marked as offline. I
don't know where that node came from or what it is supposed to
represent, but it seems like an unwanted change in behaviour.

** Affects: charm-hacluster
 Importance: Undecided
 Status: New

** Affects: pacemaker (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: pacemaker (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1874719

Title:
  Focal deploy creates a 'node1' node

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1874719/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1873741] Re: Using ceph as a backing store fails on ussuri

2020-04-20 Thread Liam Young
The source option was not set properly for the ceph application, leading
to the python rbd lib being way ahead of the ceph cluster.

** Changed in: charm-glance
 Assignee: Liam Young (gnuoy) => (unassigned)

** Changed in: charm-glance
   Status: New => Invalid

** Changed in: glance (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1873741

Title:
  Using ceph as a backing store fails on ussuri

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-glance/+bug/1873741/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1873741] Re: Using ceph as a backing store fails on ussuri

2020-04-20 Thread Liam Young
** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1873741

Title:
  Using ceph as a backing store fails on ussuri

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-glance/+bug/1873741/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870318] Re: Handbrake Crash when selecting source after fresh 20.04 install

2020-04-02 Thread Liam Bennett
** Summary changed:

- Handbrake Crash when selecting source after Xubuntu install
+ Handbrake Crash when selecting source after fresh 20.04 install

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870318

Title:
  Handbrake Crash when selecting source after fresh 20.04 install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/handbrake/+bug/1870318/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870318] Re: Handbrake Crash when selecting source after Xubuntu install

2020-04-02 Thread Liam Bennett
Program terminated with signal SIGSEGV, Segmentation fault.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870318

Title:
  Handbrake Crash when selecting source after Xubuntu install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/handbrake/+bug/1870318/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870318] Re: Handbrake Crash when selecting source after Xubuntu install

2020-04-02 Thread Liam Bennett
I repeated the above with fresh 20.04 install (Gnome), and get the same
issue.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870318

Title:
  Handbrake Crash when selecting source after Xubuntu install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/handbrake/+bug/1870318/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1870318] [NEW] Handbrake Crash when selecting source after Xubuntu install

2020-04-02 Thread Liam Bennett
Public bug reported:

I had an up to date install of Ubuntu 20.04 (as of 1st April), I had
used Handbrake several times successfully.

I then installed Xubuntu core over the top.

Handbrake still opens, but upon selecting the DVD source it crashes
instead of loading/processing.


Description:Ubuntu Focal Fossa (development branch)
Release:20.04

handbrake:
  Installed: 1.3.1+ds1-1build1
  Candidate: 1.3.1+ds1-1build1
  Version table:
 *** 1.3.1+ds1-1build1 500
500 http://gb.archive.ubuntu.com/ubuntu focal/universe amd64 Packages
100 /var/lib/dpkg/status

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: handbrake 1.3.1+ds1-1build1
ProcVersionSignature: User Name 5.4.0-21.25-generic 5.4.27
Uname: Linux 5.4.0-21-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu22
Architecture: amd64
CurrentDesktop: XFCE
Date: Thu Apr  2 11:59:05 2020
InstallationDate: Installed on 2019-01-13 (444 days ago)
InstallationMedia: Ubuntu 18.10 "Cosmic Cuttlefish" - Release amd64 (20181017.3)
SourcePackage: handbrake
UpgradeStatus: Upgraded to focal on 2020-03-02 (30 days ago)

** Affects: handbrake (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870318

Title:
  Handbrake Crash when selecting source after Xubuntu install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/handbrake/+bug/1870318/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1864838] Re: Checks fail when creating an iscsi target

2020-02-26 Thread Liam Young
** Summary changed:

- rbd pool name is hardcoded
+ Checks fail when creating an iscsi target

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1864838

Title:
  skipchecks=true is needed when deployed on Ubuntu

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1864838/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1864838] [NEW] rbd pool name is hardcoded

2020-02-26 Thread Liam Young
Public bug reported:

See https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ and the
line:

"If not using RHEL/CentOS or using an upstream or ceph-iscsi-test
kernel, the skipchecks=true argument must be used. This will avoid the
Red Hat kernel and rpm checks:"

** Affects: ceph-iscsi (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1864838

Title:
  rbd pool name is hardcoded

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1864838/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1861321] [NEW] ceilometer-collector fails to stop if cannot connect to message broker

2020-01-29 Thread Liam Young
Public bug reported:

ceilometer-collector fails to stop if it cannot connect to the message
broker.

To reproduce (assuming amqp is running on localhost):
1) Comment out the 'oslo_messaging_rabbit' section from 
/etc/ceilometer/ceilometer.conf. This will trigger ceilometer-collector to look 
locally for a rabbit connection
2) Start ceilometer-collector 
3) Observe errors like below in /var/log/ceilometer/ceilometer-collector.log

2020-01-29 18:28:35.848 11808 ERROR oslo.messaging._drivers.impl_rabbit
[-] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] Connection
refused. Trying again in 32 seconds.

4) Stop ceilometer-collector 
5) Check if ceilometer-collector processes have gone
   


Getting ceilometer from the cloud archive mitaka pocket.

# apt-cache policy ceilometer-collector
ceilometer-collector:
  Installed: 1:6.1.5-0ubuntu1~cloud0
  Candidate: 1:6.1.5-0ubuntu1~cloud0
  Version table:
 *** 1:6.1.5-0ubuntu1~cloud0 0
500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ 
trusty-updates/mitaka/main amd64 Packages
100 /var/lib/dpkg/status
 2014.1.5-0ubuntu2 0
500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty-updates/main 
amd64 Packages
 2014.1.2-0ubuntu1.1 0
500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 
Packages
 2014.1-0ubuntu1 0
500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/main amd64 
Packages

** Affects: ceilometer (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1861321

Title:
  ceilometer-collector fails to stop if cannot connect to message broker

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1861321/+subscriptions


[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config

2019-12-05 Thread Liam Young
Sahid pointed out that swift-init will traverse a search path and start a
daemon for every config file it finds, so no change to the init script is
needed. Initial tests suggest this completely covers my use case. I will
continue testing and report back. I will mark the bug as invalid for the
moment. Thanks, Sahid!
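
The search-path behaviour described above can be sketched as follows. This is a simulation of the traversal, not swift-init itself; the directory layout matches the per-daemon config files shown later in this thread, with a temp dir standing in for /etc/swift:

```shell
# Simulation of the swift-init search path: for a given server type it scans
# <swift dir>/<type>-server/*.conf and starts one daemon per file found.
swift_dir=$(mktemp -d)
mkdir -p "$swift_dir/account-server"
touch "$swift_dir/account-server/1.conf" "$swift_dir/account-server/2.conf"

started=0
for conf in "$swift_dir"/account-server/*.conf; do
    echo "would start: swift-account-server with $conf"
    started=$((started + 1))
done
echo "daemons started: $started"
```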

** Changed in: cloud-archive/mitaka
   Status: Triaged => Invalid

** Changed in: cloud-archive/ocata
   Status: Triaged => Invalid

** Changed in: cloud-archive/queens
   Status: Triaged => Invalid

** Changed in: cloud-archive/rocky
   Status: Triaged => Invalid

** Changed in: cloud-archive/stein
   Status: Triaged => Invalid

** Changed in: cloud-archive/train
   Status: Triaged => Invalid

** Changed in: cloud-archive/ussuri
   Status: Triaged => Invalid

** Changed in: swift (Ubuntu Xenial)
   Status: Triaged => Invalid

** Changed in: swift (Ubuntu Bionic)
   Status: Triaged => Invalid

** Changed in: swift (Ubuntu Disco)
   Status: Triaged => Invalid

** Changed in: swift (Ubuntu Eoan)
   Status: Triaged => Invalid

** Changed in: swift (Ubuntu Focal)
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854718

Title:
  Groups of swift daemons are all forced to use the same config

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1854718/+subscriptions


[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config

2019-12-04 Thread Liam Young
Hi Sahid,

In our deployment for swift global replication we have two account services.
One for local and one for replication:

# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1


[pipeline:main]
pipeline = recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:account-server]
use = egg:swift#account

[account-auditor]

[account-reaper]
#
# cat /etc/swift/account-server/2.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6012
workers = 1


[pipeline:main]
pipeline = recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:account-server]
use = egg:swift#account
replication_server = true
#

I believe these two config files are mutually exclusive as they have different
values for the same key in both the 'DEFAULT' and 'app:account-server'
sections.

Similarly, I believe the config file for the local account service is
incompatible with the config file for the local container service.

# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1


[pipeline:main]
pipeline = recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:account-server]
use = egg:swift#account

[account-auditor]

[account-reaper]
#
# cat /etc/swift/container-server/1.conf 
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6001
workers = 1


[pipeline:main]
pipeline = recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:container-server]
use = egg:swift#container
allow_versions = true

[container-updater]

[container-auditor]

I believe these two config files are mutually exclusive as they have different
values for the same key in both the 'DEFAULT' and 'pipeline:main' sections.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854718

Title:
  Groups of swift daemons are all forced to use the same config

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1854718/+subscriptions


[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config

2019-12-03 Thread Liam Young
Hi Cory, the init script update is to support swift global replication.
The upstream code and the proposed changes to the charm support the
feature in mitaka, so ideally the support would go right back to
trusty-mitaka.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854718

Title:
  Groups of swift daemons are all forced to use the same config

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1854718/+subscriptions


[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config

2019-12-02 Thread Liam Young
** Description changed:

- On swift proxy servers there are three groups of services: account,
+ On swift storage servers there are three groups of services: account,
  container and object.
  
  Each of these groups is comprised of a number of services, for instance:
  server, auditor, replicator etc
  
  Each service has its own init script but all the services in a group are
  configured to use the same group config file eg swift-account, swift-
  account-auditor, swift-account-reaper & swift-account-replicator all use
  /etc/swift/account-server.conf.
  
  Obviously this causes a problem when different services need different
  config. In the case of a swift cluster performing global replication the
  replication server needs "replication_server = true" whereas the auditor
  needs "replication_server = false".

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854718

Title:
  Groups of swift daemons are all forced to use the same config

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1854718/+subscriptions


[Bug 1854718] [NEW] Groups of swift daemons are all forced to use the same config

2019-12-02 Thread Liam Young
Public bug reported:

On swift proxy servers there are three groups of services: account,
container and object.

Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator etc

Each service has its own init script but all the services in a group are
configured to use the same group config file eg swift-account, swift-
account-auditor, swift-account-reaper & swift-account-replicator all use
/etc/swift/account-server.conf.

Obviously this causes a problem when different services need different
config. In the case of a swift cluster performing global replication the
replication server needs "replication_server = true" whereas the auditor
needs "replication_server = false".
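
A minimal sketch of the conflict (the file contents are illustrative fragments, not complete swift configs): the replicator and the auditor need opposite values for the same key in what is today a single shared file.

```shell
# Two per-daemon config fragments that cannot live in one shared file.
tmp=$(mktemp -d)
cat > "$tmp/replicator.conf" <<'EOF'
[app:account-server]
use = egg:swift#account
replication_server = true
EOF
cat > "$tmp/auditor.conf" <<'EOF'
[app:account-server]
use = egg:swift#account
replication_server = false
EOF
# Show the conflicting values side by side:
grep -H 'replication_server' "$tmp"/*.conf
```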

** Affects: swift (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854718

Title:
  Groups of swift daemons are all forced to use the same config

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1854718/+subscriptions


[Bug 1834565] Re: python 3.7: wrap_socket() got an unexpected keyword argument '_context'

2019-11-02 Thread Liam Young
I can confirm that the disco proposed repository fixes this issue.

I have run the openstack teams mojo spec for disco stein which fails due
to this bug. I then reran the test with the charms configured to install
from the disco proposed repository and the bug was fixed and the tests
passed.

Log from test: http://paste.ubuntu.com/p/brSgbmsDpB/


** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1834565

Title:
  python 3.7: wrap_socket() got an unexpected keyword argument
  '_context'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1834565/+subscriptions


[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-15 Thread Liam Young
Hi Christian,
Thanks for your comments. I'm sure you spotted it but just to make it 
clear, the issue occurs with bonded and unbonded dpdk interfaces. I've emailed 
upstream here *1.

Thanks
Liam


*1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713

Title:
  Metadata is broken with dpdk bonding, jumbo frames and metadata from
  qdhcp

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions


[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-09 Thread Liam Young
** Changed in: dpdk (Ubuntu)
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713

Title:
  Metadata is broken with dpdk bonding, jumbo frames and metadata from
  qdhcp

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions


[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic

If a server has an ovs bridge with a dpdk device for external
network access and a network namespace attached then sending data out of
the namespace fails if jumbo frames are enabled. 

Setup:

root@node-licetus:~# uname -r
5.0.0-20-generic

root@node-licetus:~# ovs-vsctl show
523eab62-8d03-4445-a7ba-7570f5027ff6
Bridge br-test
Port "tap1"
Interface "tap1"
type: internal
Port br-test
Interface br-test
type: internal
Port "dpdk-nic1"
Interface "dpdk-nic1"
type: dpdk
options: {dpdk-devargs="0000:03:00.0"}
ovs_version: "2.11.0"

root@node-licetus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000

root@node-licetus:~# ip netns exec ns1 ip addr show tap1
12: tap1:  mtu 9000 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether 0a:dd:76:38:52:54 brd ff:ff:ff:ff:ff:ff
inet 10.246.112.101/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::8dd:76ff:fe38:5254/64 scope link 
   valid_lft forever preferred_lft forever


* Using iperf to send data out of the netns fails:

root@node-licetus:~# ip netns exec ns1 iperf -c 10.246.114.29

Client connecting to 10.246.114.29, TCP port 5001
TCP window size:  325 KByte (default)

[  3] local 10.246.112.101 port 51590 connected with 10.246.114.29 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.3 sec   323 KBytes   257 Kbits/sec

root@node-hippalus:~# iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

root@node-hippalus:~# 

* Switching the direction of flow and sending data into the namespace
works:

root@node-licetus:~# ip netns exec ns1 iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.246.112.101 port 5001 connected with 10.246.114.29 port 59454
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.0 sec  6.06 GBytes  5.20 Gbits/sec
[  4] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)

root@node-hippalus:~# iperf -c 10.246.112.101

Client connecting to 10.246.112.101, TCP port 5001
TCP window size:  942 KByte (default)

[  3] local 10.246.114.29 port 59454 connected with 10.246.112.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  6.06 GBytes  5.20 Gbits/sec

* Using iperf to send data out of the netns after dropping tap mtu
works:


root@node-licetus:~# ip netns exec ns1 ip link set dev tap1 mtu 1500
root@node-licetus:~# ip netns exec ns1 iperf -c 10.246.114.29

Client connecting to 10.246.114.29, TCP port 5001
TCP window size:  845 KByte (default)

[  3] local 10.246.112.101 port 51594 connected with 10.246.114.29 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   508 MBytes   426 Mbits/sec

root@node-hippalus:~# iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.246.114.29 port 5001 connected with 10.246.112.101 port 51594
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.1 sec   508 MBytes   424 Mbits/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
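
The MSS figures iperf prints line up with the MTU once the fixed IPv4 and TCP headers are subtracted (20 bytes each; TCP options such as timestamps reduce the MSS further):

```shell
# Maximum MSS before TCP options: MTU minus 20-byte IP and 20-byte TCP headers.
mss_for_mtu() {
    echo $(( $1 - 20 - 20 ))
}
mss_for_mtu 1500   # 1460; iperf above reports 1448 because of 12 bytes of TCP options
mss_for_mtu 8988   # 8948, matching the jumbo-frame run above
```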

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713

Title:
  Metadata is broken with dpdk bonding, jumbo frames and metadata from
  qdhcp

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions


[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
Ubuntu: eoan
DPDK pkg: 18.11.1-3 
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic

If two servers each have an ovs bridge with a dpdk device for external
network access and a network namespace attached then communication
between taps in the namespaces fails if jumbo frames are enabled. If on one of 
the servers the external nic is switched so it is no longer managed by
dpdk then service is restored.

Server 1:

root@node-licetus:~# ovs-vsctl show
1fed66c2-b7af-477d-b035-0e1d78451f6e
Bridge br-test
Port br-test
Interface br-test
type: internal
Port "tap1"
Interface "tap1"
type: internal
Port "dpdk-nic1"
Interface "dpdk-nic1"
type: dpdk
options: {dpdk-devargs="0000:03:00.0"}
ovs_version: "2.11.0"

root@node-licetus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000

root@node-licetus:~# ip netns exec ns1 ip addr show tap1
11: tap1:  mtu 9000 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether 56:b1:9c:a3:de:81 brd ff:ff:ff:ff:ff:ff
inet 10.246.112.101/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::54b1:9cff:fea3:de81/64 scope link 
   valid_lft forever preferred_lft forever

Server 2:

root@node-hippalus:~# ovs-vsctl show
cd383272-d341-4be8-b2ab-17ea8cb63ae6
Bridge br-test
Port "dpdk-nic1"
Interface "dpdk-nic1"
type: dpdk
options: {dpdk-devargs="0000:03:00.0"}
Port br-test
Interface br-test
type: internal
Port "tap1"
Interface "tap1"
type: internal
ovs_version: "2.11.0"

root@node-hippalus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000

root@node-hippalus:~# ip netns exec ns1 ip addr show tap1
11: tap1:  mtu 9000 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether a6:f2:d8:59:d5:7d brd ff:ff:ff:ff:ff:ff
inet 10.246.112.102/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::a4f2:d8ff:fe59:d57d/64 scope link 
   valid_lft forever preferred_lft forever


Test:

root@node-licetus:~# ip netns exec ns1 iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)


root@node-hippalus:~# ip netns exec ns1 iperf -c 10.246.112.101

Client connecting to 10.246.112.101, TCP port 5001
TCP window size:  325 KByte (default)

[  3] local 10.246.112.102 port 52848 connected with 10.246.112.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.4 sec   323 KBytes   256 Kbits/sec


* If the mtu of either tap device is dropped to 1500 then the tests pass:

root@node-licetus:~# ip netns exec ns1 ip link set dev tap1 mtu 9000
root@node-licetus:~# ip netns exec ns1 iperf -s -m

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.246.112.101 port 5001 connected with 10.246.112.102 port 52850
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.1 sec   502 MBytes   418 Mbits/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

root@node-hippalus:~# ip netns exec ns1 ip link set dev tap1 mtu 1500
root@node-hippalus:~# ip netns exec ns1 iperf -c 10.246.112.101

Client connecting to 10.246.112.101, TCP port 5001
TCP window size:  748 KByte (default)

[  3] local 10.246.112.102 port 52850 connected with 10.246.112.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   502 MBytes   420 Mbits/sec


* If on server 2 the dpdk device is replaced with the same physical
  device, no longer managed by dpdk, then the jumbo frame tests pass:

root@node-hippalus:~# ls -dl 
/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/net/enp3s0f0
drwxr-xr-x 6 root root 0 Jul  8 14:04 
/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/net/enp3s0f0

root@node-hippalus:~# ovs-vsctl show
cd383272-d341-4be8-b2ab-17ea8cb63ae6
Bridge br-test
Port "tap1"
Interface "tap1"
type: internal
Port br-test
Interface br-test
type: internal
Port "enp3s0f0"
Interface "enp3s0f0"
ovs_version: "2.11.0"

root@node-hippalus:~# ip netns exec ns1 ip addr show tap1
10: tap1:  mtu 9000 qdisc noqueue state 
UNKNOWN group default qlen 1000
link/ether ba:39:55:e2:b8:81 brd ff:ff:ff:ff:ff:ff
inet 10.246.112.102/21 scope global tap1
   valid_lft forever preferred_lft forever
inet6 fe80::b839:55ff:fee2:b881/64 scope link 
  

[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
At some point, while attempting to simplify the test case, I dropped
setting the mtu on the dpdk devices via ovs, so the above test is
invalid. I've marked the bug against dpdk as invalid while I redo the
tests.
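
For reference, the dropped step can be expressed through OVS's writable mtu_request column (the plain mtu column only reports status). The sketch below just prints the command rather than running it, since it needs a live OVS with the DPDK port; the port name dpdk-nic1 is taken from the earlier transcripts.

```shell
# Build and show the command that requests a 9000-byte MTU on the DPDK port.
cmd="ovs-vsctl set Interface dpdk-nic1 mtu_request=9000"
echo "$cmd"
```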

** Changed in: dpdk (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713

Title:
  Metadata is broken with dpdk bonding, jumbo frames and metadata from
  qdhcp

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions


[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp

2019-07-08 Thread Liam Young
Given the above I'm am going to mark this as affecting the dpdk package
rather than the charm

** Also affects: dpdk (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713

Title:
  Metadata is broken with dpdk bonding, jumbo frames and metadata from
  qdhcp

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions


[Bug 1828534] Re: [19.04][Queens -> Rocky] Upgrading to Rocky resulted in "Services not running that should be: designate-producer"

2019-07-02 Thread Liam Young
I think this is a packaging bug

** Also affects: designate (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: charm-designate
   Status: Triaged => Invalid

** Changed in: charm-designate
 Assignee: Liam Young (gnuoy) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1828534

Title:
  [19.04][Queens -> Rocky] Upgrading to Rocky resulted in "Services not
  running that should be: designate-producer"

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1828534/+subscriptions


[Bug 1832075] Re: [19.04][Queens -> Rocky] python3-pymysql is not installed before use

2019-06-17 Thread Liam Young
I haven't been able to reproduce this. Could you retry it? Also, could
you confirm the version being upgraded to, as it's slightly unclear
whether the error occurred on upgrade from Queens to Rocky (as the bug
title says) or Rocky to Stein (as the bug description implies: "Setting
up openstack-dashboard (3:15.0.0-0ubuntu1~cloud0) ..."). Thanks.


** Changed in: charm-openstack-dashboard
 Assignee: Liam Young (gnuoy) => (unassigned)

** Changed in: charm-openstack-dashboard
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1832075

Title:
  [19.04][Queens -> Rocky] python3-pymysql is not installed before use

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1832075/+subscriptions


[Bug 1832075] Re: [19.04][Queens -> Rocky] python3-pymysql is not installed before use

2019-06-17 Thread Liam Young
** Changed in: charm-openstack-dashboard
 Assignee: (unassigned) => Liam Young (gnuoy)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1832075

Title:
  [19.04][Queens -> Rocky] python3-pymysql is not installed before use

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1832075/+subscriptions


[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing

2019-05-22 Thread Liam Young
The package from rocky-proposed worked for me. Version info below:
python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1~cloud0
  Candidate: 0.26.1-0ubuntu2.1~cloud0
  Version table:
 *** 0.26.1-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/rocky/main amd64 Packages
100 /var/lib/dpkg/status
 0.26.1-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/rocky/main amd64 Packages
 0.23.0-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 
Packages

Test output:

$ openstack image create --public --file 
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)
  
$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2~cloud0
  Candidate: 0.26.1-0ubuntu2~cloud0
  Version table:
 *** 0.26.1-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/rocky/main amd64 Packages
100 /var/lib/dpkg/status
 0.23.0-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 
Packages


$ juju run --unit glance/0 "add-apt-repository cloud-archive:rocky-proposed 
--yes --update"   
...
$ juju run --unit glance/0 "apt install --yes python3-glance-store; systemctl 
restart glance-api"
...
(clients) ubuntu@gnuoy-bastion2:~/branches/nova-compute$ juju run --unit 
glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1~cloud0
  Candidate: 0.26.1-0ubuntu2.1~cloud0
  Version table:
 *** 0.26.1-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/rocky/main amd64 Packages
100 /var/lib/dpkg/status
 0.26.1-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/rocky/main amd64 Packages
 0.23.0-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 
Packages

$ openstack image create --public --file 
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | c8994590c7d61dc68922e461686ef936                     |
| container_format | bare                                                 |
| created_at       | 2019-05-22T07:41:28Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/788db968-ea48-4b4f-8c91-4e15d23dbe4c/file |
| id               | 788db968-ea48-4b4f-8c91-4e15d23dbe4c                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | bionic-test                                          |
| owner            | 3d4ca9d5799546bd852db00ee6d5d4c0                     |

[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing

2019-05-22 Thread Liam Young
The cosmic package worked for me too. Version info below:

python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1
  Candidate: 0.26.1-0ubuntu2.1
  Version table:
 *** 0.26.1-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64 
Packages
100 /var/lib/dpkg/status
 0.26.1-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu cosmic/universe amd64 
Packages


Test output:

$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2
  Candidate: 0.26.1-0ubuntu2
  Version table:
 *** 0.26.1-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu cosmic/universe amd64 
Packages
100 /var/lib/dpkg/status

$ openstack image create --public --file 
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

* Enable proposed, upgrade python3-glance-store and restart glance-api
service

$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1
  Candidate: 0.26.1-0ubuntu2.1
  Version table:
 *** 0.26.1-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64 
Packages
100 /var/lib/dpkg/status
 0.26.1-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu cosmic/universe amd64 
Packages

$ openstack image create --public --file 
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | c8994590c7d61dc68922e461686ef936                     |
| container_format | bare                                                 |
| created_at       | 2019-05-22T07:10:26Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/eca7aeb5-4c16-4bb2-ad9a-53acfb3c18ca/file |
| id               | eca7aeb5-4c16-4bb2-ad9a-53acfb3c18ca                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | bionic-test                                          |
| owner            | 6c8b914f26bc40d9aae58729b818e398                     |
| properties       | os_hash_algo='sha512', os_hash_value='be4993640deb7eb99b07667213b1fe3a9145df2c0ed5c72cf786a621fe64e93fb543cbb3fafa9a130988b684da432d2a55493c50e77a9dfe336e7ed996be92d9', os_hidden='False' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size

[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing

2019-05-15 Thread Liam Young
The disco package worked for me too. Version info below:

# apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 0.28.0-0ubuntu1.1
  Candidate: 0.28.0-0ubuntu1.1
  Version table:
 *** 0.28.0-0ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 0.28.0-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu disco/main amd64 
Packages


** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1805332

Title:
  [Swift backend] Upload image hit error: Unicode-objects must be
  encoded before hashing

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1805332/+subscriptions


[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing

2019-05-15 Thread Liam Young
Looks good to me. Tested 0.28.0-0ubuntu1.1~cloud0 from
cloud-archive:stein-proposed

$ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

$ juju run --unit glance/0 "add-apt-repository cloud-archive:stein-proposed --yes --update"
Reading package lists...
Building dependency tree...
Reading state information...
ubuntu-cloud-keyring is already the newest version (2018.09.18.1~18.04.0).
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:2 http://nova.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Ign:4 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein InRelease
Ign:5 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein InRelease
Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:7 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein Release [7882 B]
Get:8 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein Release [7884 B]
Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein Release.gpg [543 B]
Get:10 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein Release.gpg [543 B]
Get:11 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein/main amd64 Packages [179 kB]
Fetched 448 kB in 1s (358 kB/s)
Reading package lists...

$ juju run --unit glance/0 "apt install --yes python3-glance-store; systemctl restart glance-api"
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'apt autoremove' to 

[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing

2019-05-14 Thread Liam Young
It does not appear to have been fixed upstream yet as this patch is
still in place at master:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L1635

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1805332

Title:
  [Swift backend] Upload image hit error: Unicode-objects must be
  encoded before hashing

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1805332/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing

2019-05-14 Thread Liam Young
** Description changed:

  [Impact]
  If we upload a large image (larger than 1G), the glance_store will hit a 
Unicode error. To fix this a patch has been merged in upstream master and 
backported to stable rocky.
  
  [Test Case]
+ Deploy glance related to swift-proxy using the object-store relation. Then 
attempt to upload a large image (not cirros; the bug only triggers for large 
images).
+ 
+ $ openstack image create --public --file 
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
+ 500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)
+ 
+ If the patch is manually applied and glance-api restarted then the above
+ command succeeds.
+ 
  In order to avoid regression of existing consumers, the OpenStack team will
  run their continuous integration test against the packages that are in
  -proposed. A successful run of all available tests will be required before the
  proposed packages can be let into -updates.
  
  The OpenStack team will be in charge of attaching the output summary of the
  executed tests. The OpenStack team members will not mark ‘verification-done’ 
until
  this has happened.
  
  [Regression Potential]
  In order to mitigate the regression potential, the results of the
  aforementioned tests are attached to this bug.
  
  [Discussion]
  n/a
  
  [Original Description]
  
  env: master branch, Glance using swift backend.
  
  We hit a strange error: if we upload a large image (larger than 1G), the
  glance_store will hit an error: Unicode-objects must be encoded before
  hashing. But if the image is small enough, the error won't happen.
  
  error log:
  https://www.irccloud.com/pastebin/jP3DapNy/
  
  After digging into the code, it appears that when chunk-reading the image
  data, the data piece may be non-bytes, so updating the checksum will
  raise the error.
  
  Encoding the data piece to ensure it is bytes can solve the problem.
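The description boils down to a Python 3 rule: hashlib refuses str input. A minimal standalone sketch (not glance_store code) of both the failure and the encode-before-hashing approach the fix takes:

```python
import hashlib

# Reproduce the error: hashlib.update() only accepts bytes in Python 3.
try:
    hashlib.md5().update("abc")
except TypeError as exc:
    print(exc)  # e.g. "Unicode-objects must be encoded before hashing"

# The approach of the fix: encode any str piece before updating the
# checksum, so mixed str/bytes chunks hash consistently.
checksum = hashlib.md5()
for piece in ["abc", b"def"]:        # mixed types, as seen with large uploads
    if isinstance(piece, str):
        piece = piece.encode("utf-8")
    checksum.update(piece)
print(checksum.hexdigest())          # same digest as hashing b"abcdef" directly
```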

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1805332

Title:
  [Swift backend] Upload image hit error: Unicode-objects must be
  encoded before hashing

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1805332/+subscriptions


[Bug 1825356] Re: libvirt silently fails to attach a cinder ceph volume

2019-04-29 Thread Liam Young
Hi koalinux, can you please provide the requested logs or remove the
field-critical tag?

** Changed in: cloud-archive
   Status: New => Incomplete

** Changed in: ceph (Ubuntu)
   Status: New => Incomplete

** Changed in: libvirt (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1825356

Title:
  libvirt silently fails to attach a cinder ceph volume

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1825356/+subscriptions


[Bug 1808951] Re: python3 + Fedora + SSL + wsgi nova deployment, nova api returns RecursionError: maximum recursion depth exceeded while calling a Python object

2019-04-25 Thread Liam Young
** Description changed:

  Description:-
  
  So while testing python3 with Fedora in [1], Found an issue while
  running nova-api behind wsgi. It fails with below Traceback:-
  
  2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog 
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] 127.0.0.1 "GET 
/v2.1/servers/detail?all_tenants=True=True" status: 500 len: 0 
microversion: - time: 0.007297
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] Caught error: maximum 
recursion depth exceeded while calling a Python object: RecursionError: maximum 
recursion depth exceeded while calling a Python object
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack Traceback (most recent 
call last):
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/__init__.py", line 94, in 
__call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
req.get_response(self.application)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **kw)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 92, 
in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack self._log_req(req, 
res, start)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
self.force_reraise()
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
six.reraise(self.type_, self.value, self.tb)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack raise value
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 87, 
in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack res = 
req.get_response(self.application)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
resp(environ, start_response)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **kw)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/osprofiler/web.py", line 112, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
request.get_response(self.application)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
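The frames above repeat because the request keeps re-entering the same middleware chain. A minimal, self-contained sketch (plain WSGI conventions, not nova code; all names are illustrative) of how a mis-wired chain produces exactly this RecursionError:

```python
class Middleware:
    """Toy WSGI middleware that simply delegates to the wrapped app."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        # Each call re-enters the wrapped application...
        return self.application(environ, start_response)


app = Middleware(None)
app.application = app  # mis-wired: the chain points back at itself

try:
    app({}, lambda status, headers: None)
except RecursionError as exc:
    print(exc)  # maximum recursion depth exceeded ...
```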

[Bug 73536] Re: MASTER Firefox crashes on instant X server shutdown

2019-03-12 Thread Liam-0
I'd like this feature, however if this is difficult to implement then a
workaround for my use case would be if the firefox command could support
a '--close' or similar option to exit gracefully, even if handled
asynchronously and I had to poll to wait for exit.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/73536

Title:
  MASTER Firefox crashes on instant X server shutdown

To manage notifications about this bug go to:
https://bugs.launchpad.net/firefox/+bug/73536/+subscriptions


[Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2019-02-14 Thread Liam Young
I don't think this is related to the charm, it looks like a bug in
upstream nova.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: nova (Ubuntu)

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1815844/+subscriptions


[Bug 1798868] Re: Compiz Crashed with SIGSEGV [Radeon]

2018-12-29 Thread Liam McDonagh-Greaves
** Description changed:

+ SEE WORKAROUND AT END OF POST
+ 
  Symptoms:
  On login, the desktop takes a long time to load, and when it does, everything 
blinks. It is possible (but difficult) to open a terminal and execute commands, 
through the right-click desktop menu.
  
  I first noticed the issue in mid-July of 2018.
  
  System:
  Acer 5100
  Ubuntu 16.04 (64-bit)
  Graphics card: VGA compatible controller: Advanced Micro Devices, Inc. 
[AMD/ATI] RS482M [Mobility Radeon Xpress 200]
  
  Believed to be mesa-related, as symptoms very similar to bug #1741447
  and #1735594. I have been informed by a mesa developer that my issue is
  different because that bug affects Intel GPUs. This system has a Radeon
  GPU.
  
  Log files are as attached to bug #1795709. That bug was declared invalid
  by Apport because it says I don't have the necessary debug packages
  installed. However, I could not find the debug packages in the
  repository, so that is why I am submitting this report manually.
  
  Many thanks for any help.
+ 
+ Edit 29/12/2018:
+ WORKAROUND
+ This bug is still present. However, I present a workaround. It is aimed at 
relatively inexperienced users like me. It guides the user through 
downgrading/rolling back affected packages to the most recent known working 
versions.
+ 1. Navigate to /var/log/apt in a file explorer. Open the logs until you find 
the offending mesa updates. In my case, the updates were installed on 15/07/18.
+ 2. Copy and paste the log into a text file on your desktop. Close the log.
+ 3. In the text file, do a ctrl+F search on "mesa" to highlight all instances 
of mesa-related packages. Keep these lines and delete the others. Each line 
should have this kind of format:
+   libgles2-mesa:amd64 (17.2.8-0ubuntu0~16.04.1, 18.0.5-0ubuntu0~16.04.1) 
(exact package name/version numbers may differ). The first number in brackets 
is the version of the package that you had before, the second number is the 
version it was upgraded to. You should end up with a list of such lines.
+ 4. The earlier versions of these packages can be downloaded from Launchpad. I 
don't know how to find them using links/searches, but the URLs look like this:
+ 
https://launchpad.net/ubuntu/xenial/amd64/libgles2-mesa/17.2.8-0ubuntu0~16.04.1.
 You will have to alter the url according to your architecture, package name 
and package version required.
+ 5. You will see a page with a download link on the right-hand side for a .deb 
file. Make a new folder on your desktop called "mesa-install" and download the 
.deb file there.
+ 6. Scroll down, and on the left, you will see any dependencies of this 
package. If any of them say "<package name> (=version number)" then you must 
make a note of these packages on your list of packages to download and install. 
If they have a ">=" sign, or no version number given, ignore them.
+ 7. Repeat steps 4 - 6 for all the packages on your list, placing the .deb 
files in the "mesa-install" folder.
+ 8. MAKE SURE ALL IMPORTANT FILES ON YOUR SYSTEM ARE BACKED UP.
+ 9. Open a terminal (Ctrl + Alt + T).
+ 10. Type "sudo dpkg -R --install /home/<username>/Desktop/mesa-install/"
+ Replace "<username>" with your username. This command installs all .deb files 
in that folder.
+ 11. Reboot PC.
+ 12. Log in to a Unity session. Rejoice in the fact that it now works 
(hopefully). Assuming it is now working, you need to make sure that the 
packages you downgraded are not subsequently upgraded automatically by Ubuntu. 
Ignore any offers of automatic updates until you have completed all the steps.
+ 13. Open a terminal and type:
+ sudo apt-mark hold <package-name>
+ where "<package-name>" is the name of the first package you downgraded. Press 
enter, enter password, etc. You should receive confirmation that the package 
has been held.
+ 14. Repeat step 13 for all packages on your list.
+ 
+ Good luck to all others who have faced this problem, and fingers crossed
+ for a bug fix!

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1798868

Title:
  Compiz Crashed with SIGSEGV [Radeon]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mesa/+bug/1798868/+subscriptions


[Bug 1799406] Re: [SRU] Alarms fail on Rocky

2018-11-23 Thread Liam Young
** Changed in: charm-aodh
   Status: New => Invalid

** Changed in: oslo.i18n
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1799406

Title:
  [SRU] Alarms fail on Rocky

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1799406/+subscriptions


[Bug 1800601] Re: [SRU] Infinite recursion in Python 3

2018-11-14 Thread Liam Young
I have successfully run the mojo spec which was failing
(specs/full_stack/next_openstack_upgrade/queens). This boots an instance
on rocky which indirectly queries glance:
https://pastebin.canonical.com/p/7sVjF6QSNm/

** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1800601

Title:
  [SRU] Infinite recursion in Python 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1800601/+subscriptions

