You have a typo in the description:
> the massive Y2028 time_t transition
I think you mean Y20*3*8.
Not a big deal, but just for clarity you should probably fix that.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Further info may be relevant:
https://www.reddit.com/r/UbuntuUnity/comments/1axuqy5/2310_cant_open_keyboard_settings/
--
https://bugs.launchpad.net/bugs/2043863
Title:
Keyboard Settings
It also affects the right Shift key, but the left one still works. I had
to use cut and paste even to log in to register that this affects me too.
No Ctrl keys, only 1 shift, can't open keyboard settings.
Goes away on reboot but soon recurs.
--
+1 makes sense. Thanks for doing this validation @chris.macnaughton
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
To manage
I have the same problem in Windows Subsystem for Linux, Ubuntu 20.04
I have a CIFS share containing 24 DFS folders.
Opening any subfolder in the share causes an instant kernel panic.
I do not have this problem on embedded hardware reading from the same share
running the xilinx 4.6.0 kernel and
I've filed https://bugs.launchpad.net/charm-mysql-router/+bug/1973177 to
track this separately.
--
https://bugs.launchpad.net/bugs/1907250
Title:
[focal] charm becomes blocked with
One of the causes of a charm going into a "Failed to connect to MySQL"
state is that a connection to the database failed when the db-router
charm attempted to restart the db-router service. Currently the charm
will only retry the connection in response to one return code from the
mysql. The return
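The single-return-code retry described above could be generalized to a small retry loop over a set of transient MySQL error codes. A minimal sketch (the codes, the `restart_with_retry` name, and the `restart` callable are illustrative assumptions, not the charm's actual code):

```python
import time

# Illustrative transient MySQL client error codes (assumed):
# 2003 = can't connect to server, 2013 = lost connection during query.
TRANSIENT_ERRORS = {2003, 2013}

def restart_with_retry(restart, attempts=3, delay=1.0):
    """Call restart(), retrying when it fails with any transient error
    code instead of giving up after the first non-matching code."""
    for attempt in range(1, attempts + 1):
        try:
            return restart()
        except ConnectionError as exc:
            code = getattr(exc, "errno", None)
            if code not in TRANSIENT_ERRORS or attempt == attempts:
                raise
            time.sleep(delay)
```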
Public bug reported:
[Impact]
* ceph-iscsi on Focal talking to a Pacific or later Ceph cluster
* rbd-target-api service fails to start if there is a blocklist
entry for the unit.
* When the rbd-target-api service starts it checks if any of the
ip addresses on the machine it is running
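The startup check described can be sketched roughly as follows. The function and parameter names are illustrative, not the actual rbd-target-api internals; Ceph blocklist entries take the form `addr:port/nonce`:

```python
def is_blocklisted(local_addresses, blocklist_entries):
    """Return True if any local IP address appears in the blocklist.

    `blocklist_entries` are strings like "192.168.1.10:0/123456";
    only the address part is compared (IPv4-style entries assumed).
    """
    blocked = {entry.split(":", 1)[0] for entry in blocklist_entries}
    return any(addr in blocked for addr in local_addresses)
```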
*** This bug is a duplicate of bug 1883112 ***
https://bugs.launchpad.net/bugs/1883112
** This bug has been marked a duplicate of bug 1883112
rbd-target-api crashes with python TypeError
--
Public bug reported:
While testing using openstack, guests failed to launch and these denied
messages were logged:
[ 8307.089627] audit: type=1400 audit(1649684291.592:109):
apparmor="DENIED" operation="mknod" profile="swtpm"
name="/run/libvirt/qemu/swtpm/11-instance-000b-swtpm.sock"
** Patch added: "ceph-iscsi-deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280/+attachment/5569987/+files/ceph-iscsi-deb.diff
--
Verification on impish failed due to
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with
Public bug reported:
The rbd-target-api fails to start on Ubuntu Impish (21.10) and later.
This appears to be caused by a werkzeug package revision check in rbd-
target-api. The check is used to decide whether to add an
OpenSSL.SSL.Context or a ssl.SSLContext. The code comment suggests that
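A version gate of the kind described might look like the sketch below. This is a hedged illustration only: the `(0, 15)` threshold is an assumed example, and the real code constructs an `OpenSSL.SSL.Context` via pyOpenSSL where this sketch returns a placeholder string to stay stdlib-only:

```python
import ssl

def make_context(werkzeug_version):
    """Pick an SSL context based on the installed werkzeug version.
    The (0, 15) cutoff is an assumed example, not the real check."""
    major, minor = (int(p) for p in werkzeug_version.split(".")[:2])
    if (major, minor) < (0, 15):
        # Older werkzeug: the real code builds an OpenSSL.SSL.Context
        # here; a string stands in for it in this sketch.
        return "OpenSSL.SSL.Context"
    return ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
```

The hazard the bug describes follows directly from this pattern: a string-based package revision check can misclassify newer werkzeug releases and take the wrong branch.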
Tested successfully on focal with 3.4-0ubuntu2.1
Tested with ceph-iscsi charms functional tests which were previously
failing.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.4 LTS
Release: 20.04
Codename: focal
$ apt-cache policy
** Patch added: "gw-deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5569162/+files/gw-deb.diff
--
https://bugs.launchpad.net/bugs/1883112
Title:
Thank you for the update Robie. I proposed the deb diff based on the fix
that had landed upstream because I (wrongly) thought that was what the
SRU policy required. I think it makes more sense to go for the minimal
fix you suggest.
--
** Description changed:
+ [Impact]
+
+ * rbd-target-api service fails to start if there is a blocklist
+entry for the unit making the service unavailable.
+
+ * When the rbd-target-api service starts it checks if any of the
+ip addresses on the machine it is running on are listed as
+
** Patch added: "deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5562748/+files/deb.diff
--
https://bugs.launchpad.net/bugs/1883112
Title:
** Changed in: ceph-iscsi (Ubuntu)
Status: New => Confirmed
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
To manage notifications
s/The issue appears when using the mysql to/The issue appears when using
the mysql shell to/
--
https://bugs.launchpad.net/bugs/1954306
Title:
Action `remove-instance` works but appears
I don't think this is a charm bug. The issue appears when using the
mysql to remove a node from the cluster. From what I can see you cannot
persist group_replication_force_members, and it is correctly unset. So the
error being reported seems wrong
https://pastebin.ubuntu.com/p/sx6ZB3rs6r/
** Also affects: mysql-8.0 (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-mysql-innodb-cluster
Status: New => Invalid
--
Perhaps I'm missing something but this does not seem to be a bug in the
rabbitmq-server charm. It may be easier to observe there but the root
cause is elsewhere.
** Changed in: charm-rabbitmq-server
Status: New => Invalid
--
Tested successfully on focal victoria using 1:11.0.0-0ubuntu1~cloud1. I
created an encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
Tested successfully on focal wallaby using 2:12.0.0-0ubuntu2~cloud0. I
created an encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
Tested successfully on hirsute using 2:12.0.0-0ubuntu2. I created an
encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create
Just to add some info on guest agent here:
the guest agent does not set up the primary interface
there should be no race between guest agent and cloud-init for the
primary interface
the guest agent does not start any dhclient process for primary
interface, and should not care if any dhclient
in: charm-layer-ovn
Status: New => Confirmed
** Changed in: charm-layer-ovn
Importance: Undecided => High
** Changed in: charm-layer-ovn
Assignee: (unassigned) => Liam Young (gnuoy)
--
** Changed in: charm-neutron-gateway
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: charm-neutron-gateway
Importance: Undecided => High
--
https://bugs.launchp
** Changed in: charm-neutron-gateway
Status: Invalid => Confirmed
** Changed in: neutron (Ubuntu)
Status: Confirmed => Invalid
--
https://bugs.launchpad.net/bugs/1944424
A patch was introduced [0] "..which sets the backup gateway
device link down by default. When the VRRP sets the master state in
one host, the L3 agent state change procedure will
do link up action for the gateway device.".
This change causes an issue when using keepalived 2.X (focal+) which
is
** Also affects: neutron (Ubuntu)
Importance: Undecided
Status: New
** Changed in: neutron (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1943863
I had the same issue with 20.04 on a Thinkpad X220.
I managed to resolve it by installing the HWE kernel, adding a dedicated
swap partition on another drive, purging ZRAM, and rebuilding my
`initrd`.
--
@jeremie2
Ah, fair enough. Mostly I use Ventoy these days, and once the USB key is
formatted with Ventoy, you just copy .ISO files onto it and they
automagically appear in the Ventoy boot menu. So no need for Balena
Etcher etc. any more. Ventoy itself is bootable on BIOS and UEFI PCs and
on Intel
In reply to @jeremie2 in comment #24:
I don't think this is a general description of the problem, because for
me, my USB boot keys don't have separate EFI boot partitions.
--
*** This bug is a duplicate of bug 1893964 ***
https://bugs.launchpad.net/bugs/1893964
** This bug has been marked a duplicate of bug 1893964
Installation of Ubuntu Groovy with manual partitioning without an EFI System
Partition fails on 'grub-install /dev/sda' even on non-UEFI systems
I have tested the rocky scenario that was failing for me. Trilio on
Train + OpenStack on Rocky. The Trilio functional test to snapshot a
server failed without the fix and passed once python3-oslo.messaging
8.1.0-0ubuntu1~cloud2.2 was installed and services restarted
** Tags removed:
Public bug reported:
It seems that updating the role attribute of a connection has no effect
on existing connections. For example when investigating another bug I
needed to disable rbac but to get that to take effect I needed to either
restart the southbound listener or the ovn-controller.
fwiw
Public bug reported:
When using Openstack Ussuri with OVN 20.03 and adding a floating IP
address to a port the ovn-controller on the hypervisor repeatedly
reports:
2021-03-02T10:33:35.517Z|35359|ovsdb_idl|WARN|transaction error:
{"details":"RBAC rules for client
I have tested the package in victoria proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
focal victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-
I have tested the package in groovy proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
groovy victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-
Confirmed and reproduced in Xubuntu 20.10 as well. This issue is _not_
confined to Ubuntu Unity and is also present in an official remix.
Steps taken to try to resolve it:
* updated system BIOS (machine is a Lenovo Thinkpad W500; was on 3.18, now on
3.23, latest) -> no change
* tried 2 different
Public bug reported:
Even on BIOS systems with no UEFI
ProblemType: Bug
DistroRelease: Ubuntu 20.10
Package: ubiquity 20.10.13
ProcVersionSignature: Ubuntu 5.8.0-25.26-generic 5.8.14
Uname: Linux 5.8.0-25-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion:
https://code.launchpad.net/~gnuoy/ubuntu/+source/ovn-octavia-
provider/+git/ovn-octavia-provider/+merge/397023
--
https://bugs.launchpad.net/bugs/1896603
Title:
ovn-octavia-provider:
** Description changed:
- Kuryr-Kubernetes tests running with ovn-octavia-provider started to fail
- with "Provider 'ovn' does not support a requested option: OVN provider
- does not support allowed_cidrs option" showing up in the o-api logs.
+ [Impact]
- We've tracked that to check [1]
** Also affects: ovn-octavia-provider (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1896603
Title:
ovn-octavia-provider: Cannot create listener
I have tested focal and groovy and it is only happening on groovy. I have
not tried Hirsute.
--
https://bugs.launchpad.net/bugs/1904199
Title:
[groovy-victoria] "gwcli /iscsi-targets/
I don't think this is a charm issue. It looks like an incompatibility
between ceph-iscsi and python3-werkzeug in groovy.
# /usr/bin/rbd-target-api
* Serving Flask app "rbd-target-api" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production
Public bug reported:
Crashed during install.
ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubiquity 20.04.15.2
ProcVersionSignature: Ubuntu 5.4.0-42.46-generic 5.4.44
Uname: Linux 5.4.0-42-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion:
I've never heard of the 'empty python3-google-compute-engine
transitional package'; for upstream packaging, we use "Conflicts:
python3-google-compute-engine" and this will cause the top level package
(called google-compute-engine upstream, I think called gce-compute-
image-packages in Ubuntu) to
Please also apply this change to the google-guest-agent package
** Also affects: google-guest-agent (Ubuntu)
Importance: Undecided
Status: New
--
*** This bug is a duplicate of bug 1900897 ***
https://bugs.launchpad.net/bugs/1900897
** Also affects: google-guest-agent (Ubuntu)
Importance: Undecided
Status: New
--
Public bug reported:
Upstream's build parameters;
override_dh_auto_build:
dh_auto_build -O--buildsystem=golang -- -ldflags="-s -w -X
main.version=$(VERSION)-$(RELEASE)" -mod=readonly
- Strip the binary
- Set main.version
** Affects: google-osconfig-agent (Ubuntu)
Importance:
It's a complicated situation, but I'll try to highlight some of the
reasons.
First, there is the complexity of existing files. We will only copy the
file if no file already exists because it may exist from the previous,
python guest which automatically generated this file. There are also the
Systemd provides that functionality itself, internally. We don't want to
use UCF or mark this as a config file. We want to copy the file once on
installation iff it doesn't exist. It is otherwise an 'example' file.
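The copy-once-on-install behaviour described above can be sketched as a small guard (a sketch only; the function name and paths are illustrative, and the real logic lives in the package's maintainer scripts rather than Python):

```python
import os
import shutil

def install_example_config(src, dest):
    """Copy src to dest only if dest does not already exist, so a file
    left by the previous (python) guest package, or one edited by the
    admin, is never overwritten; src otherwise stays an example file."""
    if os.path.exists(dest):
        return False
    shutil.copyfile(src, dest)
    return True
```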
--
The way that this file is managed has changed as part of this
replacement, and many customers have automatic updates enabled. We chose
not to mark this file as a config file, as we don't want that dialog to
appear. We only ever copy the file into place if it doesn't already
exist, and after that,
I have looked at this package on a testing image in GCE. The instance
configs file has been shipped differently in this package vs ours - here
you are shipping it as /etc/defaults/instance_configs.cfg, we ship to
/usr/share/google-guest-agent/instance_configs.cfg
There are two problems with this
Yep, that's the traceback I'm seeing.
Charm shows:
2020-06-10 12:45:57 ERROR juju-log amqp:40: Hook error:
Traceback (most recent call last):
File
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py",
line 74, in main
It seems sqlalchemy-utils may have been removed recently in error
https://git.launchpad.net/ubuntu/+source/masakari/tree/debian/changelog?id=4d933765965f3d02cd68c696cc69cf53b7c6390d#n3
--
Public bug reported:
Package seems to be missing a dependency on sqlalchemy-utils *1. The
issue shows itself when running masakari-manage with the new 'taskflow'
section enabled *2
*1
https://opendev.org/openstack/masakari/src/branch/stable/ussuri/requirements.txt#L29
*2
Public bug reported:
Opening a bug for this since all other bugs that reported this have been
closed.
On an X11 session, a dead secondary mouse is displayed when the scaling
for a user session has been set to 125% (fractional scaling).
Presumably, the dead cursor is a left-over from the login
Having looked into it further, it seems to be the name of the node that
has changed.
juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu
juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes
Public bug reported:
Testing of masakari on focal zaza tests failed because the test checks
that all pacemaker nodes are online. This check failed due the
appearance of a new node called 'node1' which was marked as offline. I
don't know where that node came from or what is supposed to represent
The source option was not set properly for the ceph application leading
to the python rbd lib being way ahead of the ceph cluster.
** Changed in: charm-glance
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-glance
Status: New => Invalid
** Changed in:
** Also affects: glance (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1873741
Title:
Using ceph as a backing store fails on ussuri
To manage
** Summary changed:
- Handbrake Crash when selecting source after Xubuntu install
+ Handbrake Crash when selecting source after fresh 20.04 install
--
Program terminated with signal SIGSEGV, Segmentation fault.
--
https://bugs.launchpad.net/bugs/1870318
Title:
Handbrake Crash when selecting source after Xubuntu install
To manage
I repeated the above with fresh 20.04 install (Gnome), and get the same
issue.
--
https://bugs.launchpad.net/bugs/1870318
Title:
Handbrake Crash when selecting source after Xubuntu
Public bug reported:
I had an up to date install of Ubuntu 20.04 (as of 1st April), I had
used Handbrake several times successfully.
I then install Xubuntu core over the top.
Handbrake still opens but upon selecting the DVD source, it crashes.
Instead of loading/processing.
Description:
** Summary changed:
- rbd pool name is hardcoded
+ Checks fail when creating an iscsi target
--
https://bugs.launchpad.net/bugs/1864838
Title:
skipchecks=true is needed when deployed on
Public bug reported:
See https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ and the
line:
"If not using RHEL/CentOS or using an upstream or ceph-iscsi-test
kernel, the skipchecks=true argument must be used. This will avoid the
Red Hat kernel and rpm checks:"
** Affects: ceph-iscsi (Ubuntu)
Public bug reported:
ceilometer-collector fails to stop if it cannot connect to message
broker.
To reproduce (assuming amqp is running on localhost):
1) Comment out the 'oslo_messaging_rabbit' section from
/etc/ceilometer/ceilometer.conf. This will trigger ceilometer-collector to look
locally
Sahid pointed out that the swift-init will traverse a search path and
start a daemon for every config file it finds so no change to the init
script is needed. Initial tests suggest this completely covers my use
case. I will continue testing and report back. I will mark the bug as
invalid for the
Hi Sahid,
In our deployment for swift global replication we have two account services.
One for local and one for replication:
# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1
[pipeline:main]
pipeline = recon account-server
[filter:recon]
use =
Hi Cory, the init script update is to support swift global replication.
The upstream code and the proposed changes to the charm support the
feature in mitaka so ideally the support would go right back to trusty-
mitaka.
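The swift-init search-path behaviour Sahid pointed out a few comments up can be sketched like this (the directory layout is taken from the `/etc/swift/account-server/1.conf` example above; the function name is illustrative, not swift's actual code):

```python
import glob
import os

def server_conf_files(server, base="/etc/swift"):
    """Collect the config files swift-init would start daemons for:
    a single <server>-server.conf plus every *.conf under the
    <server>-server/ directory (layout assumed from the comments)."""
    confs = []
    single = os.path.join(base, f"{server}-server.conf")
    if os.path.exists(single):
        confs.append(single)
    pattern = os.path.join(base, f"{server}-server", "*.conf")
    confs.extend(sorted(glob.glob(pattern)))
    return confs
```

With the two account-server configs from the global-replication setup above, one daemon would be started per file, which is why no init script change is needed.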
--
** Description changed:
- On swift proxy servers there are three groups of services: account,
+ On swift storage servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator
Public bug reported:
On swift proxy servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator etc
Each service has its own init script but all the services in a group are
I can confirm that the disco proposed repository fixes this issue.
I have run the openstack teams mojo spec for disco stein which fails due
to this bug. I then reran the test with the charms configured to install
from the disco proposed repository and the bug was fixed and the tests
passed.
Log
Hi Christian,
Thanks for your comments. I'm sure you spotted it but just to make it
clear, the issue occurs with bonded and unbonded dpdk interfaces. I've emailed
upstream here *1.
Thanks
Liam
*1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html
--
** Changed in: dpdk (Ubuntu)
Status: Invalid => New
--
https://bugs.launchpad.net/bugs/1833713
Title:
Metadata is broken with dpdk bonding, jumbo frames and metadata from
qdhcp
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If a server has an ovs bridge with a dpdk device for external
network access and a network namespace attached then sending data out of
the namespace fails if jumbo frames are enabled.
Setup:
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If two servers each have an ovs bridge with a dpdk device for external
network access and a network namespace attached then communication
between taps in the namespaces fails if jumbo frames are enabled. If
At some point when I was attempting to simplify the test case I
dropped setting the mtu on the dpdk devices via ovs so the above test is
invalid. I've marked the bug against dpdk as invalid while I redo the
tests.
** Changed in: dpdk (Ubuntu)
Status: New => Invalid
--
Given the above I am going to mark this as affecting the dpdk package
rather than the charm
** Also affects: dpdk (Ubuntu)
Importance: Undecided
Status: New
--
I think this is a packaging bug
** Also affects: designate (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-designate
Status: Triaged => Invalid
** Changed in: charm-designate
Assignee: Liam Young (gnuoy) => (unassigned)
--
board (3:15.0.0-0ubuntu1~cloud0) ...", thanks.
** Changed in: charm-openstack-dashboard
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-openstack-dashboard
Status: New => Incomplete
--
** Changed in: charm-openstack-dashboard
Assignee: (unassigned) => Liam Young (gnuoy)
--
https://bugs.launchpad.net/bugs/1832075
Title:
[19.04][Queens -> Rocky] python3-p
The package from rocky-proposed worked for me. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1~cloud0
Candidate: 0.26.1-0ubuntu2.1~cloud0
Version table:
*** 0.26.1-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu
The cosmic package worked for me too. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1
Candidate: 0.26.1-0ubuntu2.1
Version table:
*** 0.26.1-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64
Packages
100
The disco package worked for me too. Version info below:
# apt-cache policy python3-glance-store
python3-glance-store:
Installed: 0.28.0-0ubuntu1.1
Candidate: 0.28.0-0ubuntu1.1
Version table:
*** 0.28.0-0ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64
Looks good to me. Tested 0.28.0-0ubuntu1.1~cloud0 from cloud-archive
:stein-proposed
$ openstack image create --public --file
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has
It does not appear to have been fixed upstream yet as this patch is
still in place at master:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L1635
--
** Description changed:
[Impact]
If we upload a large image (larger than 1G), the glance_store will hit a
Unicode error. To fix this a patch has been merged in upstream master and
backported to stable rocky.
[Test Case]
+ Deploy glance related to swift-proxy using the object-store
Hi koalinux, please can you provide the requested logs or remove the
field-critical tag please ?
** Changed in: cloud-archive
Status: New => Incomplete
** Changed in: ceph (Ubuntu)
Status: New => Incomplete
** Changed in: libvirt (Ubuntu)
Status: New => Incomplete
--
** Description changed:
Description:-
So while testing python3 with Fedora in [1], Found an issue while
running nova-api behind wsgi. It fails with below Traceback:-
2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - -
I'd like this feature, however if this is difficult to implement then a
workaround for my use case would be if the firefox command could support
a '--close' or similar option to exit gracefully, even if handled
asynchronously and I had to poll to wait for exit.
--
I don't think this is related to the charm, it looks like a bug in
upstream nova.
** Also affects: nova (Ubuntu)
Importance: Undecided
Status: New
** No longer affects: nova (Ubuntu)
** Also affects: nova
Importance: Undecided
Status: New
--
** Description changed:
+ SEE WORKAROUND AT END OF POST
+
Symptoms:
On login, the desktop takes a long time to load, and when it does, everything
blinks. It is possible (but difficult) to open a terminal and execute commands,
through the right-click desktop menu.
I first noticed the
** Changed in: charm-aodh
Status: New => Invalid
** Changed in: oslo.i18n
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1799406
Title:
[SRU] Alarms fail on
I have successfully run the mojo spec which was failing
(specs/full_stack/next_openstack_upgrade/queens). This boots an instance
on rocky which indirectly queries glance:
https://pastebin.canonical.com/p/7sVjF6QSNm/
** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done