[Bug 1907686] Re: ovn: instance unable to retrieve metadata
Verified focal-ussuri in a converged networking environment.

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907686

Title:
  ovn: instance unable to retrieve metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1907686] Re: ovn: instance unable to retrieve metadata
@Robie, I have been unable to validate, as the amd64 package is failing to build:
https://launchpad.net/ubuntu/+source/openvswitch/2.13.3-0ubuntu0.20.04.1/+build/21397125
[Bug 1899104] Re: [SRU] barbican-manage db upgrade fails with MySQL8
Verification on bionic-ussuri is successful:

# sudo -u barbican barbican-manage db upgrade
2021-03-30 16:20:55.622 28161 WARNING oslo_db.sqlalchemy.engines [-] MySQL SQL mode is 'ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION', consider enabling TRADITIONAL or STRICT_ALL_TABLES
2021-03-30 16:20:55.640 28161 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2021-03-30 16:20:55.641 28161 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.

# dpkg -l | grep barbican
ii  barbican-api            1:10.0.0-0ubuntu0.20.04.3~cloud0  all  OpenStack Key Management Service - API Server
ii  barbican-common         1:10.0.0-0ubuntu0.20.04.3~cloud0  all  OpenStack Key Management Service - common files
ii  barbican-worker         1:10.0.0-0ubuntu0.20.04.3~cloud0  all  OpenStack Key Management Service - Worker Node
ii  python3-barbican        1:10.0.0-0ubuntu0.20.04.3~cloud0  all  OpenStack Key Management Service - Python 3 files
ii  python3-barbicanclient  4.10.0-0ubuntu1~cloud0            all  OpenStack Key Management API client - Python 3.x

# apt-cache policy python3-barbican
python3-barbican:
  Installed: 1:10.0.0-0ubuntu0.20.04.3~cloud0
  Candidate: 1:10.0.0-0ubuntu0.20.04.3~cloud0
  Version table:
 *** 1:10.0.0-0ubuntu0.20.04.3~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/ussuri/main amd64 Packages
        100 /var/lib/dpkg/status

** Tags removed: verification-ussuri-needed
** Tags added: verification-done-ussuri
[Bug 1899104] Re: [SRU] barbican-manage db upgrade fails with MySQL8
Łukasz, I have verified the 10.0.0-0ubuntu0.20.04.3 version of barbican on Focal. Please complete the SRU.

** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done-focal
[Bug 1912844] Re: Bond with OVS bridging RuntimeError: duplicate mac found!
With the latest update to the PPA [0] I can deploy a full OpenStack with machines that each have two VLAN interfaces and respective spaces. An OVN deploy currently requires two PPAs, [0] and [1].

[0] https://launchpad.net/~oddbloke/+archive/ubuntu/lp1912844
[1] https://launchpad.net/~fnordahl/+archive/ubuntu/ovs
[Bug 1912844] Re: Bond with OVS bridging RuntimeError: duplicate mac found!
Running with Dan's PPA [0], cloud-init fails on the final reboot with the following (see attached image):

Stderr: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

I'll try to get the rest of the error content (not from the console) for further debugging, but it seems the ovs-vsctl command is being called before OVS is up, perhaps?

[0] https://launchpad.net/~oddbloke/+archive/ubuntu/lp1912844

** Attachment added: "ovs-vsctl-failure.png"
   https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1912844/+attachment/5467297/+files/ovs-vsctl-failure.png
Re: [Bug 1899104] Re: [SRU] barbican-manage db upgrade fails with MySQL8
On Tue, Feb 16, 2021 at 12:00 PM Brian Murray <1899...@bugs.launchpad.net> wrote:
>
> Hello David, or anyone else affected,
>
> Accepted barbican into focal-proposed. The package will build now and be
> available at
> https://launchpad.net/ubuntu/+source/barbican/1:10.0.0-0ubuntu0.20.04.2
> in a few hours, and then in the -proposed repository.
>
> Please help us by testing this new package. See
> https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
> to enable and use -proposed. Your feedback will aid us getting this
> update out to other Ubuntu users.
>
> If this package fixes the bug for you, please add a comment to this bug,
> mentioning the version of the package you tested, what testing has been
> performed on the package and change the tag from verification-needed-
> focal to verification-done-focal. If it does not fix the bug for you,
> please add a comment stating that, and change the tag to verification-
> failed-focal. In either case, without details of your testing we will
> not be able to proceed.

Unfortunately, we seem to still have a problem.

apt-cache policy python3-barbican
python3-barbican:
  Installed: 1:10.0.0-0ubuntu0.20.04.2
  Candidate: 1:10.0.0-0ubuntu0.20.04.2
  Version table:
 *** 1:10.0.0-0ubuntu0.20.04.2 500
        500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     1:10.0.0-0ubuntu0.20.04.1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
     1:10.0.0~b2~git2020020508.7b14d983-0ubuntu3 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 Packages

# sudo -u barbican barbican-manage db upgrade
/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)
2021-02-17 17:12:26.700 232723 WARNING oslo_db.sqlalchemy.engines [-] MySQL SQL mode is 'ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION', consider enabling TRADITIONAL or STRICT_ALL_TABLES
2021-02-17 17:12:26.728 232723 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2021-02-17 17:12:26.729 232723 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
/usr/lib/python3/dist-packages/alembic/script/revision.py:152: UserWarning: Revision 39cf2e645cba referenced from 39cf2e645cba -> 0f8c192a061f (head), Add Secret Consumers table is not present
  util.warn(
ERROR: '39cf2e645cba'

--
David Ames
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
** Changed in: charm-barbican
    Milestone: 21.01 => None
[Bug 1906280] Re: [SRU] Add support for disabling mlockall() calls in ovs-vswitchd
** Changed in: charm-ovn-chassis
       Status: Fix Committed => Fix Released

** Changed in: charm-neutron-openvswitch
       Status: Fix Committed => Fix Released
[Bug 1907081] Re: Clustered OVN database is not upgraded on package upgrade
** Changed in: charm-ovn-central
       Status: Fix Committed => Fix Released
[Bug 1912844] Re: Bond with OVS bridging RuntimeError: duplicate mac found!
I am not sure I have any definitive answers, but here are my thoughts.

Compare a VLAN device created with `ip link add`:

ip link add link enp6s0 name enp6s0.100 type vlan id 100

cat /sys/class/net/enp6s0.100/uevent
DEVTYPE=vlan
INTERFACE=enp6s0.100
IFINDEX=3

To an OVS VLAN interface created with ovs-vsctl:

ovs-vsctl add-port br-ex vlan100 tag=200 -- set Interface vlan100 type=internal

cat /sys/class/net/br-ex.100/uevent
INTERFACE=br-ex.100
IFINDEX=7

I suspect this is down to the tooling: OVS is creating virtual devices, so they may not look like what `ip link` would create. Could the `is_vlan` function check for a '.' followed by an integer, which is the indication of a VLAN in all cases?
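A minimal sketch of the suggested fallback (a hypothetical helper, not cloud-init's current `is_vlan` implementation): trust DEVTYPE=vlan when the kernel reports it in uevent, and otherwise fall back to the dotted "<parent>.<vlan-id>" naming convention that the OVS internal port above follows.

```python
import re

# Hypothetical fallback for is_vlan(): the kernel's native VLAN devices
# carry DEVTYPE=vlan in their uevent file, but OVS internal VLAN ports
# (e.g. br-ex.100) do not, so also accept the "<name>.<id>" convention.
VLAN_NAME = re.compile(r"^.+\.\d{1,4}$")

def is_vlan(devname, uevent_text):
    if "DEVTYPE=vlan" in uevent_text.splitlines():
        return True
    return bool(VLAN_NAME.match(devname))
```

With the two uevent files quoted above, this would classify both enp6s0.100 and br-ex.100 as VLANs while leaving br-ex itself alone.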
[Bug 1912844] Re: Bond with OVS bridging RuntimeError: duplicate mac found!
Cloud-init version:

$ dpkg -l | grep cloud-ini
ii  cloud-init                   20.4.1-0ubuntu1~20.04.1  all  initialization and customization tool for cloud instances
ii  cloud-initramfs-copymods     0.45ubuntu1              all  copy initramfs modules into root filesystem for later use
ii  cloud-initramfs-dyn-netconf  0.45ubuntu1              all  write a network interface file in /run for BOOTIF
[Bug 1912844] Re: Bond with OVS bridging RuntimeError: duplicate mac found!
** Attachment added: "Screenshot from 2021-01-22 12-02-05.png"
   https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1912844/+attachment/5455815/+files/Screenshot%20from%202021-01-22%2012-02-05.png
[Bug 1912844] Re: Bond with OVS bridging RuntimeError: duplicate mac found!
Screen capture of the network config in MAAS
[Bug 1912844] [NEW] Bond with OVS bridging RuntimeError: duplicate mac found!
Public bug reported:

When using bonds and OVS bridging, cloud-init fails with:

2021-01-22 18:44:08,094 - util.py[WARNING]: failed stage init
failed run of stage init
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 653, in status_wrapper
    ret = functor(name, args)
  File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 362, in main_init
    init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 699, in apply_network_config
    self.distro.networking.wait_for_physdevs(netcfg)
  File "/usr/lib/python3/dist-packages/cloudinit/distros/networking.py", line 147, in wait_for_physdevs
    present_macs = self.get_interfaces_by_mac().keys()
  File "/usr/lib/python3/dist-packages/cloudinit/distros/networking.py", line 75, in get_interfaces_by_mac
    return net.get_interfaces_by_mac(
  File "/usr/lib/python3/dist-packages/cloudinit/net/__init__.py", line 769, in get_interfaces_by_mac
    return get_interfaces_by_mac_on_linux(
  File "/usr/lib/python3/dist-packages/cloudinit/net/__init__.py", line 839, in get_interfaces_by_mac_on_linux
    raise RuntimeError(
RuntimeError: duplicate mac found! both 'br-ex.100' and 'br-ex' have mac 'e2:86:e6:60:4c:44'

snap-id:      shY22YTZ3RhJJDOj0MfmShTNZTEb1Jiq
tracking:     2.9/candidate
refresh-date: 3 days ago, at 20:03 UTC
channels:
  2.9/stable:       2.9.1-9153-g.66318f531          2021-01-19 (11322) 150MB -
  2.9/candidate:    ↑
  2.9/beta:         ↑
  2.9/edge:         2.9.1-9156-g.fe186aec0          2021-01-21 (11371) 150MB -
  latest/stable:    –
  latest/candidate: –
  latest/beta:      –
  latest/edge:      2.10.0~alpha1-9367-g.e3a85359d  2021-01-22 (11396) 151MB -
  2.8/stable:       2.8.2-8577-g.a3e674063          2020-09-01 (8980)  140MB -
  2.8/candidate:    2.8.3~rc1-8583-g.9ddc8051f      2020-11-19 (10539) 137MB -
  2.8/beta:         2.8.3~rc1-8583-g.9ddc8051f      2020-11-19 (10539) 137MB -
  2.8/edge:         2.8.3~rc1-8587-g.0ebf4fb25      2021-01-07 (11161) 139MB -
  2.7/stable:       2.7.3-8290-g.ebe2b9884          2020-08-21 (8724)  144MB -
  2.7/candidate:    ↑
  2.7/beta:         ↑
  2.7/edge:         2.7.3-8294-g.85233d83e          2020-11-03 (10385) 143MB -
installed:          2.9.1-9153-g.66318f531 (11322) 150MB -

** Affects: cloud-init (Ubuntu)
     Importance: Undecided
     Assignee: Dan Watkins (oddbloke)
         Status: New

** Attachment added: "Log files"
   https://bugs.launchpad.net/bugs/1912844/+attachment/5455814/+files/duplicate-mac-addr-logs.tar.gz
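For illustration, the check that raises here boils down to mapping MAC address to interface name and refusing any collision. The following is a simplified sketch of that logic (not cloud-init's exact code): an OVS bridge and its internal VLAN port share one MAC, so the second device trips the collision check.

```python
# Simplified sketch of the duplicate-MAC detection that fails in this bug;
# not cloud-init's actual implementation. OVS internal ports inherit the
# bridge's MAC, so (br-ex, br-ex.100) collide on the same address.
def interfaces_by_mac(devices):
    """devices: iterable of (name, mac) pairs, e.g. read from sysfs."""
    by_mac = {}
    for name, mac in devices:
        if mac in by_mac:
            raise RuntimeError(
                "duplicate mac found! both %r and %r have mac %r"
                % (name, by_mac[mac], mac))
        by_mac[mac] = name
    return by_mac
```

Feeding it the two interfaces from the traceback reproduces the RuntimeError, which is why filtering out OVS-internal devices (or treating dotted names as VLANs) avoids the failure.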
[Bug 1902951] [NEW] swift-bench appears to be a python2 script with a python3 shebang
Public bug reported:

swift-bench on Focal appears to be a python2 script with a python3 shebang:

ii  swift-bench  1.2.0-5  all  benchmarking tool for Swift

# swift-bench
/usr/bin/swift-bench:152: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if options.concurrency is not '':
Usage: swift-bench [OPTIONS] [CONF_FILE]

# head /usr/bin/swift-bench
#!/usr/bin/python3
# Copyright (c) 2010-2012 OpenStack Foundation

The script needs to be updated to python3 syntax.

** Affects: swift-bench (Ubuntu)
     Importance: Undecided
         Status: New
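The warning is well founded: `is not` compares object identity rather than value, so `options.concurrency is not ''` only works by accident of CPython's string interning. A standalone demonstration (the variable names mirror the script, but this is not the swift-bench code itself):

```python
# "is" compares object identity, "==" compares values; two equal strings
# built at runtime need not be the same object, which is why Python 3.8+
# emits SyntaxWarning for "is not" against a literal.
n = 100
a = "x" * n                  # built at runtime, so not constant-folded
b = "x" * n
same_value = (a == b)        # True: equal contents
same_object = (a is b)       # False in CPython: distinct objects

concurrency = ""
fixed_check = concurrency != ''   # the correct replacement for "is not ''"
```

The one-character fix in the script is therefore `if options.concurrency != '':`.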
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
** Changed in: charm-barbican
    Milestone: 20.10 => 21.01
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
The fix [0] will resolve this bug but is currently blocked by LP Bug #1899104 [1].

[0] https://review.opendev.org/756931
[1] https://bugs.launchpad.net/ubuntu/+source/barbican/+bug/1899104
[Bug 1899104] Re: [SRU] barbican-manage db upgrade fails with MySQL8
Attempting to validate the package change ran into another failure mode:

# apt-cache policy barbican-common
barbican-common:
  Installed: 1:10.0.0-0ubuntu0.20.04.2~ubuntu20.04.1~ppa202010131146
  Candidate: 1:10.0.0-0ubuntu0.20.04.2~ubuntu20.04.1~ppa202010131146
  Version table:
 *** 1:10.0.0-0ubuntu0.20.04.2~ubuntu20.04.1~ppa202010131146 500
        500 http://ppa.launchpad.net/chris.macnaughton/focal-ussuri/ubuntu focal/main amd64 Packages
        100 /var/lib/dpkg/status
     1:10.0.0-0ubuntu0.20.04.1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
     1:10.0.0~b2~git2020020508.7b14d983-0ubuntu3 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 Packages

# sudo -u barbican barbican-manage db upgrade
/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)
2020-10-13 15:45:16.211 28298 WARNING oslo_db.sqlalchemy.engines [-] MySQL SQL mode is 'ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION', consider enabling TRADITIONAL or STRICT_ALL_TABLES
2020-10-13 15:45:16.217 28298 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2020-10-13 15:45:16.217 28298 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2020-10-13 15:45:16.232 28298 INFO alembic.runtime.migration [-] Running upgrade 1bc885808c76 -> 161f8aceb687, fill project_id to secrets where missing
2020-10-13 15:45:16.399 28298 WARNING oslo_db.sqlalchemy.exc_filters [-] DBAPIError exception wrapped.: pymysql.err.InternalError: (3098, 'The table does not comply with the requirements by an external plugin.')
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1245, in _execute_context
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     self.dialect.do_execute(
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 170, in execute
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     result = self._query(query)
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 328, in _query
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     conn.query(q)
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 517, in query
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 732, in _read_query_result
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     result.read()
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1075, in read
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     first_packet = self.connection._read_packet()
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     packet.check_error()
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     err.raise_mysql_exception(self._data)
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters     raise errorclass(errno, errval)
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters pymysql.err.InternalError: (3098, 'The table does not comply with the requirements by an external plugin.')
2020-10-13 15:45:16.399 28298 ERROR oslo_db.sqlalchemy.exc_filters
ERROR: (pymysql.err.InternalError) (3098, 'The table does not comply with the requirements by an external plugin.') [SQL: UPDATE secrets,
[Bug 1899104] [NEW] barbican-manage db upgrade fails with MySQL8
Public bug reported:

Running `barbican-manage db upgrade` fails with the following traceback when the DB is MySQL 8:

2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1245, in _execute_context
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     self.dialect.do_execute(
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 170, in execute
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     result = self._query(query)
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 328, in _query
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     conn.query(q)
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 517, in query
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 732, in _read_query_result
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     result.read()
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1075, in read
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     first_packet = self.connection._read_packet()
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     packet.check_error()
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     err.raise_mysql_exception(self._data)
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters     raise errorclass(errno, errval)
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters pymysql.err.InternalError: (3959, "Check constraint 'secret_acls_chk_2' uses column 'creator_only', hence column cannot be dropped or renamed.")
2020-10-08 22:31:32.028 28131 ERROR oslo_db.sqlalchemy.exc_filters
ERROR: (pymysql.err.InternalError) (3959, "Check constraint 'secret_acls_chk_2' uses column 'creator_only', hence column cannot be dropped or renamed.") [SQL: ALTER TABLE secret_acls CHANGE creator_only project_access BOOL NULL]

Seems this is a known issue with alembic [0].

[0] https://github.com/sqlalchemy/alembic/issues/699

** Affects: barbican (Ubuntu)
     Importance: Undecided
         Status: New
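A hedged sketch of the shape of the workaround (a hypothetical helper, not the actual upstream patch): MySQL 8 refuses to rename a column while a CHECK constraint references it, so the constraint has to be dropped before the CHANGE and recreated afterwards. This helper only builds the DDL strings; the table, column, and constraint names below come from the traceback in this report.

```python
# Hedged sketch: the DDL sequence MySQL 8 needs to get past error 3959.
# Dropping the CHECK constraint first frees the column for renaming;
# recreating it (not shown) would be a follow-up statement.
def mysql8_safe_rename(table, old, new, constraint, coltype="BOOL"):
    return [
        "ALTER TABLE %s DROP CHECK %s" % (table, constraint),
        "ALTER TABLE %s CHANGE %s %s %s NULL" % (table, old, new, coltype),
    ]

stmts = mysql8_safe_rename("secret_acls", "creator_only", "project_access",
                           "secret_acls_chk_2")
```

The second statement is exactly the ALTER that fails in the log above; the first is the extra step MySQL 8 requires.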
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
** Changed in: charm-barbican
     Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-barbican
    Milestone: None => 20.10

** Changed in: charm-barbican
       Status: New => Triaged
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
Doing some research.

1) I don't think this is more than one charm unit trying to run migrations at the same time (comment #2).
2) The problem is barbican itself failing, per the traceback.
3) My current theory is a failed migration attempt that leaves behind two alembic versions. Per [1] we should only ever expect one alembic version at a time.

As a workaround we may need to do some DB surgery per [1], after a backup of course. Finding what gets barbican into this state will take a bit more digging through logs.

[1] https://stackoverflow.com/questions/42424320/how-do-i-fix-alembics-requested-revision-overlaps-with-other-requested-revisio
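The DB surgery in question amounts to deleting the stale row from alembic's version table so that a single revision remains. A minimal sketch against SQLite for illustration only (the real database here is MySQL, and which revision to keep depends on the deployment's actual migration history, so the choice below is hypothetical):

```python
import sqlite3

# Illustration only: alembic records the current revision(s) in the
# alembic_version table; an interrupted migration can leave two rows,
# producing the "requested revision overlaps" error.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.executemany("INSERT INTO alembic_version VALUES (?)",
                 [("1a0c2cdafb38",), ("39cf2e645cba",)])

# After a backup, delete every row except the revision you have verified
# is the real current one (a hypothetical choice is shown here).
keep = "1a0c2cdafb38"
conn.execute("DELETE FROM alembic_version WHERE version_num != ?", (keep,))
rows = [r[0] for r in conn.execute("SELECT version_num FROM alembic_version")]
print(rows)  # ['1a0c2cdafb38']
```

On the real MySQL database the equivalent would be a single DELETE against the barbican schema's alembic_version table, executed only after backing up and confirming which revision the schema actually reflects.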
[Bug 1882844] Re: No way to set SSL parameters with pymysql
Michael, I have confirmed you are correct. Apologies for the noise; I swear I tested that. Closing this bug.

For completeness, the check_hostname parameter comes from pymysql [0].

[0] https://github.com/PyMySQL/PyMySQL/blob/master/pymysql/connections.py#L336

** Changed in: oslo.db
       Status: In Progress => Invalid

** Changed in: python-oslo.db (Ubuntu)
       Status: Triaged => Invalid
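For readers landing here later, a small illustration (file paths and the helper are made up, not oslo.db or pymysql API): SSL material can be handed to pymysql.connect() through its `ssl` argument as a dict, and pymysql derives attributes such as check_hostname internally from those values (see the connections.py line referenced above).

```python
# Hypothetical helper: build the `ssl` keyword argument that
# pymysql.connect() accepts. Paths below are placeholders.
def pymysql_ssl_kwargs(ca, cert=None, key=None):
    ssl = {"ca": ca}
    if cert and key:
        ssl["cert"] = cert
        ssl["key"] = key
    return {"ssl": ssl}

kwargs = pymysql_ssl_kwargs("/etc/mysql/ca.pem")
# e.g. pymysql.connect(host="db.example.com", user="barbican", **kwargs)
```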
[Bug 1838109] Re: civetweb does not allow tuning of maximum socket connections
** Changed in: charm-ceph-radosgw
    Milestone: 20.05 => 20.08
[Bug 1774279] Re: unable to create pools before OSD's are up and running
** Changed in: charm-ceph-mon
     Milestone: 20.05 => 20.08

** Changed in: charm-ceph-osd
     Milestone: 20.05 => 20.08
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
** Changed in: charm-neutron-openvswitch
     Milestone: 20.05 => 20.08
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
** Changed in: charm-barbican
     Milestone: 20.05 => 20.08
[Bug 1859844] Re: Impossible to rename the Default domain id to the string 'default.'
** Changed in: charm-keystone
       Status: Fix Committed => Fix Released
[Bug 1861457] Re: pyroute2 0.5.2 doesn't support neutron-common 14.0.4
FYI, Canonical QA has seen this in a duplicate bug with a clean deploy: https://bugs.launchpad.net/charm-neutron-gateway/+bug/1862200. It presented as FIPs missing associated iptables NAT rules on the gateway node.
[Bug 1861216] [NEW] Dummy Output for audio output device after upgrade to focal
Public bug reported:

After an upgrade to Focal, audio output is not working: the sound settings show only a Dummy Output device. It seems the drivers are loaded:

00:1f.3 Audio device: Intel Corporation Device 02c8 (prog-if 80)
        Subsystem: Lenovo Device 2292
        Flags: bus master, fast devsel, latency 64, IRQ 177
        Memory at ea23c000 (64-bit, non-prefetchable) [size=16K]
        Memory at ea00 (64-bit, non-prefetchable) [size=1M]
        Capabilities:
        Kernel driver in use: sof-audio-pci
        Kernel modules: snd_hda_intel, snd_sof_pci

#champagne

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: pulseaudio 1:13.0-3ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-12.15-generic 5.4.8
Uname: Linux 5.4.0-12-generic x86_64
ApportVersion: 2.20.11-0ubuntu15
Architecture: amd64
AudioDevicesInUse:
 Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
CurrentDesktop: ubuntu:GNOME
Date: Tue Jan 28 10:02:41 2020
InstallationDate: Installed on 2019-12-25 (33 days ago)
InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: pulseaudio
UpgradeStatus: Upgraded to focal on 2020-01-27 (0 days ago)
dmi.bios.date: 10/18/2019
dmi.bios.vendor: LENOVO
dmi.bios.version: N2QET15W (1.09 )
dmi.board.asset.tag: Not Available
dmi.board.name: 20R1S05A00
dmi.board.vendor: LENOVO
dmi.board.version: SDK0J40697 WIN
dmi.chassis.asset.tag: No Asset Information
dmi.chassis.type: 10
dmi.chassis.vendor: LENOVO
dmi.chassis.version: None
dmi.modalias: dmi:bvnLENOVO:bvrN2QET15W(1.09):bd10/18/2019:svnLENOVO:pn20R1S05A00:pvrThinkPadX1Carbon7th:rvnLENOVO:rn20R1S05A00:rvrSDK0J40697WIN:cvnLENOVO:ct10:cvrNone:
dmi.product.family: ThinkPad X1 Carbon 7th
dmi.product.name: 20R1S05A00
dmi.product.sku: LENOVO_MT_20R1_BU_Think_FM_ThinkPad X1 Carbon 7th
dmi.product.version: ThinkPad X1 Carbon 7th
dmi.sys.vendor: LENOVO

** Affects: pulseaudio (Ubuntu)
     Importance: Undecided
         Status: New

** Tags: amd64 apport-bug focal
[Bug 1854880] [NEW] Conflicting Transaction Sets following Complete Outage of InnoDB Cluster
Public bug reported:

The 8.0.18 version of mysql-8.0 has this upstream bug: https://bugs.mysql.com/bug.php?id=97279. Note that 8.0.17 did not display this bug.

After a complete outage, dba.rebootClusterFromCompleteOutage() errors with:

ERROR: Conflicting transaction sets between 10.5.0.55:3306 and 10.5.0.52:3306
Dba.rebootClusterFromCompleteOutage: Conflicting transaction sets between 10.5.0.55:3306 and 10.5.0.52:3306 (MYSQLSH 51152)
 at /tmp/tmpn2y5qo9l.js:3:12
 in dba.rebootClusterFromCompleteOutage();

Steps to reproduce:
 * Build a cluster
 * Shut each node of the cluster down (reboot)
 * Start mysql on each node
 * Run dba.rebootClusterFromCompleteOutage()

The cs:~openstack-charmers-next/mysql-innodb-cluster charm has a test, zaza.openstack.charm_tests.mysql.tests.MySQLInnoDBClusterColdStartTest, that re-creates the failure. Note: it is currently disabled due to this bug.

** Affects: mysql-8.0 (Ubuntu)
     Importance: Undecided
         Status: New
[Bug 1790904] Re: Glance v2 required by newer versions of OpenStack
Timo, Confirmed with 0.1.0~bzr460-0ubuntu1.1 using our charm test on glance-simplestreams-sync. This test used to fail with the previous bionic version.
[Bug 1834213] Re: After kernel upgrade, nf_conntrack_ipv4 module unloaded, no IP traffic to instances
** Changed in: charm-neutron-openvswitch
       Status: Fix Committed => Fix Released
[Bug 1828293] Re: [Queens -> Rocky Upgrade] python3-neutron-fwaas-dashboard installation: trying to overwrite '/etc/openstack-dashboard/neutron-fwaas-policy.json', which is also in package python-neut
** Changed in: charm-openstack-dashboard
     Milestone: 19.10 => 20.01
[Bug 1774279] Re: unable to create pools before OSD's are up and running
** Changed in: charm-ceph-mon
     Milestone: 19.10 => 20.01

** Changed in: charm-ceph-osd
     Milestone: 19.10 => 20.01
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
** Changed in: charm-neutron-openvswitch
     Milestone: 19.10 => 20.01
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
** Changed in: charm-barbican
     Milestone: 19.10 => 20.01
[Bug 1834213] Re: After kernel upgrade, nf_conntrack_ipv4 module unloaded, no IP traffic to instances
** Changed in: charm-neutron-openvswitch
     Milestone: None => 19.10
[Bug 1846548] Re: Glance manage db_sync fails with MySQL 8
Upstream patch proposed: https://review.opendev.org/686461
[Bug 1846548] [NEW] Glance manage db_sync fails with MySQL 8
Public bug reported:

[Steps to recreate]
Configure glance to use a MySQL 8 database.
Run glance-manage db_sync.

[Error Output]
https://paste.ubuntu.com/p/NbTQgsxJZw/

# glance-manage db_sync
/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py:1374: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
2019-10-03 18:13:32.980 23795 WARNING oslo_config.cfg [-] Deprecated: Option "idle_timeout" from group "database" is deprecated. Use option "connection_recycle_time" from group "database".
2019-10-03 18:13:33.013 23795 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2019-10-03 18:13:33.014 23795 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2019-10-03 18:13:33.025 23795 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2019-10-03 18:13:33.025 23795 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade -> liberty, liberty initial
/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)
/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)
/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)
CRITICAL [glance] Unhandled error
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1236, in _execute_context
    cursor, statement, parameters, context
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 536, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 170, in execute
    result = self._query(query)
  File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 328, in _query
    conn.query(q)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 517, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 732, in _read_query_result
    result.read()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1075, in read
    first_packet = self.connection._read_packet()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet
    packet.check_error()
  File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'member VARCHAR(255) NOT NULL, \n\tcan_share BOOL NOT NULL, \n\tcreated_at DATETIME N' at line 4")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/bin/glance-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 563, in main
    return CONF.command.action_fn()
  File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 395, in sync
    self.command_object.sync(CONF.command.version)
  File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 165, in sync
    self.expand(online_migration=False)
  File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 222, in expand
    self._sync(version=expand_head)
  File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 180, in _sync
    alembic_command.upgrade(a_config, version)
  File "/usr/lib/python3/dist-packages/alembic/command.py", line 254, in upgrade
    script.run_env()
  File "/usr/lib/python3/dist-packages/alembic/script/base.py", line 427, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python3/dist-packages/alembic/util/pyfiles.py", line 81, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/lib/python3/dist-packages/alembic/util/compat.py", line 82, in load_module_py
    spec.loader.exec_module(module)
  File "", line 728, in exec_module
  File "",
[Bug 1846548] Re: Glance manage db_sync fails with MySQL 8
The following patch allows glance to create its database. It seems subsequent queries must be quoted correctly, as the service functions once the DB is created.

https://paste.ubuntu.com/p/yZ7yJwHsqQ/

Index: glance/glance/db/sqlalchemy/alembic_migrations/add_images_tables.py
===================================================================
--- glance.orig/glance/db/sqlalchemy/alembic_migrations/add_images_tables.py
+++ glance/glance/db/sqlalchemy/alembic_migrations/add_images_tables.py
@@ -134,7 +134,7 @@ def _add_image_members_table():
     op.create_table('image_members',
                     Column('id', Integer(), nullable=False),
                     Column('image_id', String(length=36), nullable=False),
-                    Column('member', String(length=255), nullable=False),
+                    Column('`member`', String(length=255), nullable=False),
                     Column('can_share', Boolean(), nullable=False),
                     Column('created_at', DateTime(), nullable=False),
                     Column('updated_at', DateTime(), nullable=True),
@@ -147,7 +147,7 @@ def _add_image_members_table():
                     ForeignKeyConstraint(['image_id'], ['images.id'], ),
                     PrimaryKeyConstraint('id'),
                     UniqueConstraint('image_id',
-                                     'member',
+                                     '`member`',
                                      'deleted_at',
                                      name=deleted_member_constraint),
                     mysql_engine='InnoDB',
@@ -164,7 +164,7 @@ def _add_image_members_table():
                     unique=False)
     op.create_index('ix_image_members_image_id_member',
                     'image_members',
-                    ['image_id', 'member'],
+                    ['image_id', '`member`'],
                     unique=False)
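The underlying issue is that the unquoted column name member collides with a word MySQL 8 reserves (the 1064 syntax error points at 'member VARCHAR(255)...'), so the DDL needs backtick quoting. A small sketch of that idea; the reserved-word set below is a tiny illustrative subset of MySQL 8's list, not the real thing:

```python
# Backtick-quote identifiers that collide with MySQL 8 reserved words.
# MYSQL8_RESERVED is an illustrative subset only; consult the MySQL 8.0
# keyword documentation for the authoritative list.
MYSQL8_RESERVED = {"member", "rank", "lateral", "grouping"}

def quote_ident(name: str) -> str:
    """Backtick-quote an identifier if MySQL 8 reserves it."""
    return f"`{name}`" if name.lower() in MYSQL8_RESERVED else name

cols = ["id", "image_id", "member", "can_share"]
ddl_cols = ", ".join(quote_ident(c) for c in cols)
print(ddl_cols)  # id, image_id, `member`, can_share
```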
[Bug 1774279] Re: unable to create pools before OSD's are up and running
** Changed in: charm-ceph-mon
     Milestone: 19.07 => 19.10

** Changed in: charm-ceph-osd
     Milestone: 19.07 => 19.10
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
** Changed in: charm-barbican
     Milestone: 19.07 => 19.10
[Bug 1828293] Re: [Queens -> Rocky Upgrade] python3-neutron-fwaas-dashboard installation: trying to overwrite '/etc/openstack-dashboard/neutron-fwaas-policy.json', which is also in package python-neut
** Changed in: charm-openstack-dashboard
     Milestone: 19.07 => 19.10
[Bug 1825843] Re: systemd issues with bionic-rocky causing nagios alert and can't restart daemon
** Changed in: charm-ceph-radosgw
       Status: Fix Committed => Fix Released
[Bug 1831181] Re: [aodh.notifier] Not setting user_domain_id raises keystone error: The resource could not be found.
** Changed in: charm-aodh
       Status: Fix Committed => Fix Released
[Bug 1830950] Re: Percona cluster with pc.recovery=true failes to automatically recover
"If you starting cluster nodes directly (w/o mysqld_safe) or through systemd (which seems to have some limitation with the invocation of --wsrep_recover) this feature will not work."

https://jira.percona.com/browse/PXC-881?focusedCommentId=224039&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-224039
[Bug 1830950] [NEW] Percona cluster with pc.recovery=true failes to automatically recover
Public bug reported:

Starting this bug as a point of discussion. Per [0], when pc.recovery = true (the default) the cluster should be able to automatically recover itself after a power outage. It is possible there is a discrepancy between expectation and reality. This bug is to determine what we can expect from automatic recovery.

In re-creating a power outage scenario, percona fails to restore the primary component from disk:

[Warning] WSREP: Fail to access the file (/var/lib/percona-xtradb-cluster//gvwstate.dat) error (No such file or directory). It is possible if node is booting for first time or re-booting after a graceful shutdown
[Note] WSREP: Restoring primary-component from disk failed. Either node is booting for first time or re-booting after a graceful shutdown

Furthermore, the cluster appears to time out in attempting to talk to each of its nodes:

[ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view (pc.wait_prim_timeout): 110 (Connection timed out) at gcomm/src/pc.cpp:connect():159
[ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
[ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1514: Failed to open channel 'juju_cluster' at 'gcomm://10.5.0.49,10.5.0.9': -110 (Connection timed out)
[ERROR] WSREP: gcs connect failed: Connection timed out
[ERROR] WSREP: Provider/Node (gcomm://10.5.0.49,10.5.0.9) failed to establish connection with cluster (reason: 7)
[ERROR] Aborting

For Ubuntu devs:

# dpkg -l | grep percona
ii percona-xtrabackup                 2.4.9-0ubuntu2           amd64  Open source backup tool for InnoDB and XtraDB
ii percona-xtradb-cluster-server      5.7.20-29.24-0ubuntu2.1  all    Percona XtraDB Cluster database server
ii percona-xtradb-cluster-server-5.7  5.7.20-29.24-0ubuntu2.1  amd64  Percona XtraDB Cluster database server binaries

root@juju-fa2938-zaza-eeda2892d6b4-1:/var/lib/percona-xtradb-cluster# lsb_release -rd
Description: Ubuntu 18.04.2 LTS
Release: 18.04

# apt-cache policy percona-xtradb-cluster-server
percona-xtradb-cluster-server:
  Installed: 5.7.20-29.24-0ubuntu2.1
  Candidate: 5.7.20-29.24-0ubuntu2.1
  Version table:
 *** 5.7.20-29.24-0ubuntu2.1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages
        100 /var/lib/dpkg/status
     5.7.20-29.24-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

Find attached logs from a 3 node cluster, including etc config, grastate.dat and logs for each node.

[0] https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/

** Affects: percona-xtradb-cluster-5.7 (Ubuntu)
     Importance: Undecided
         Status: New

** Attachment added: "Node logs and files"
   https://bugs.launchpad.net/bugs/1830950/+attachment/5267447/+files/pc-recovery.tar.gz
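When automatic recovery fails like this, the manual procedure from the Percona post in [0] is to bootstrap from the node whose grastate.dat records the highest seqno. A small sketch of inspecting that file (my own illustration, not charm code; the sample contents, including the uuid, are made up but follow the usual grastate.dat layout):

```python
def parse_grastate(text: str) -> dict:
    """Parse Galera's grastate.dat key/value lines into a dict."""
    state = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments like "# GALERA saved state"
        key, _, value = line.partition(":")
        state[key.strip()] = value.strip()
    return state

# Hypothetical /var/lib/percona-xtradb-cluster/grastate.dat contents.
sample = """\
# GALERA saved state
version: 2.1
uuid:    f7e0e9f7-1b6c-11e9-8e2f-2b3c9c6c1e7a
seqno:   42
safe_to_bootstrap: 0
"""

state = parse_grastate(sample)
# The node with the highest seqno across the cluster is the one to
# bootstrap from.
print(int(state["seqno"]))  # 42
```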
[Bug 1828293] Re: [Queens -> Rocky Upgrade] python3-neutron-fwaas-dashboard installation: trying to overwrite '/etc/openstack-dashboard/neutron-fwaas-policy.json', which is also in package python-neut
CHARM TRIAGE: Per comment #1, "The charm bug is about not reporting a failed upgrade in its status."

** Changed in: charm-openstack-dashboard
       Status: New => Triaged

** Changed in: charm-openstack-dashboard
   Importance: Undecided => High

** Changed in: charm-openstack-dashboard
     Milestone: None => 19.07
[Bug 1827690] Re: [19.04][stein] barbican-worker is down: Requested revision 1a0c2cdafb38 overlaps with other requested revisions 39cf2e645cba
TRIAGE: I suspect we are not limiting the DB migration to only the leader. Guarantee that only the leader runs the migration.

** Changed in: charm-barbican
       Status: New => Triaged

** Changed in: charm-barbican
   Importance: Undecided => Critical

** Changed in: charm-barbican
     Milestone: None => 19.07
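The proposed fix gates the migration on Juju leadership so only one unit ever runs it. A minimal sketch of that guard; the is_leader and run_migration callables here stand in for whatever the charm framework actually provides, so the names are illustrative, not real charm code:

```python
from typing import Callable

def maybe_run_migration(is_leader: Callable[[], bool],
                        run_migration: Callable[[], None]) -> bool:
    """Run the DB migration only on the Juju leader unit.

    Non-leader units skip the migration entirely, so two units can
    never race and leave two alembic versions behind.
    """
    if not is_leader():
        return False
    run_migration()
    return True

# Illustrative use: only the leader's call actually migrates.
ran = []
maybe_run_migration(lambda: True, lambda: ran.append("migrated"))
maybe_run_migration(lambda: False, lambda: ran.append("migrated"))
print(ran)  # ['migrated']
```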
[Bug 1796193] Re: unattended do-release-upgrade asks about /etc/cron.daily/apt-compat
Yes, for juju series upgrades. The following will work around the problem:

echo 'DPkg::options { "--force-confdef"; };' > /etc/apt/apt.conf.d/50unattended-upgrades

I'll leave the decision on whether this remains a bug up to others.
[Bug 1774279] Re: unable to create pools before OSD's are up and running
** Changed in: charm-ceph-mon
     Milestone: 19.04 => 19.07

** Changed in: charm-ceph-osd
     Milestone: 19.04 => 19.07
[Bug 1782008] Re: Unable to force delete a volume
** Changed in: charm-cinder
       Status: Fix Committed => Fix Released
[Bug 1802407] Re: ssl_ca not supported
** Changed in: charm-glance-simplestreams-sync
       Status: Fix Committed => Fix Released
[Bug 1808168] Re: Neutron FWaaS panel missing from dashboard on Queens
** Changed in: charm-openstack-dashboard
       Status: Fix Committed => Fix Released
[Bug 1812925] Re: No OSDs has been initialized in random unit with "No block devices detected using current configuration"
** Changed in: charm-ceph-osd Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1812925 Title: No OSDs has been initialized in random unit with "No block devices detected using current configuration" To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-osd/+bug/1812925/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1824154] Re: bionic/stein: python-ceph missing dependencies
** Changed in: charm-ceph-fs Status: Fix Committed => Fix Released ** Changed in: charm-ceph-proxy Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1824154 Title: bionic/stein: python-ceph missing dependencies To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-fs/+bug/1824154/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1793137] Re: [SRU] Fix for KeyError: 'storage.zfs_pool_name' only partially successful -- needs changes
** Changed in: charm-lxd Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1793137 Title: [SRU] Fix for KeyError: 'storage.zfs_pool_name' only partially successful -- needs changes To manage notifications about this bug go to: https://bugs.launchpad.net/charm-lxd/+bug/1793137/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1820279] Re: [FFe] [SRU] build mellon with --enable-diagnostics to ease up SSO debugging
** Description changed: FFE Section for disco -

[Rationale]
This change to mod_auth_mellon adds a very useful capability for enabling diagnostics output from the module:
https://github.com/Uninett/mod_auth_mellon/commit/e8579f6387d9841ce619d836110050fb18117753

It is available as of v0.14.0 (present in Cosmic):
  git --no-pager tag --contains=e8579f6387d9841ce619d836110050fb18117753
  v0.14.0
  v0.14.1

This is generally useful for field engineering and operations teams and other users, as SAML exchanges are difficult to debug.

[Build Verification]
https://paste.ubuntu.com/p/2kt3BsxJKn/

[Installation]
https://paste.ubuntu.com/p/VcfcgyPHqH/

"MellonDiagnosticsEnable Off" is the default setting and it results in am_diag_open_log returning 1, which does NOT result in an error returned from am_diag_log_init. Also installed a package and verified that setting this to off explicitly or implicitly (default) does not result in errors on startup or page access.

https://git.launchpad.net/ubuntu/+source/libapache2-mod-auth-mellon/tree/auth_mellon_diagnostics.c?h=ubuntu/disco=49c8ccfedca2db17d76348573e6daa862e104f6d#n311

int am_diag_log_init(apr_pool_t *pc, apr_pool_t *p, apr_pool_t *pt, server_rec *s)
{
    for ( ; s ; s = s->next) {
        if (!am_diag_open_log(s, p)) {
            return HTTP_INTERNAL_SERVER_ERROR;
        }
    }
    // ...

static int am_diag_open_log(server_rec *s, apr_pool_t *p)
{
    // ...
    if (!(diag_cfg->flags & AM_DIAG_FLAG_ENABLED)) {
        ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, s,
                     "mellon diagnostics disabled for %s", server_desc);
        return 1;
    // ...

[Upgrades]
No impact

SRU section for cosmic and bionic -

[Impact]
See FFE Rationale above. 
[Test Case]
+ To test
+
+ Add the following to /etc/apache2/conf-available/mellon.conf
+
+ MellonDiagnosticsFile /var/log/apache2/mellon_diagnostics.log
+ MellonDiagnosticsEnable On
+
+ a2enconf mellon
+ systemctl reload apache2
+
+ After browsing to a location that is mod_auth_mellon enabled (see the
+ keystone-saml-mellon charm) logging from the mellon module, including
+ environment variables in the SAML messages, will be found in
+ /var/log/apache2/mellon_diagnostics.log.
+
+ Regression testing can be done using the keystone-saml-mellon charm's functional tests.
+ https://github.com/openstack-charmers/charm-keystone-saml-mellon
+ At the time of this writing the functional tests are not fully automated and still require some manual configuration:
+ https://github.com/openstack-charmers/charm-keystone-saml-mellon/blob/master/src/README.md#configuration

[Regression Potential]
As mentioned above in the FFE section, "MellonDiagnosticsEnable Off" can be set in the apache configuration to disable diagnostics. This is also the default setting, so regression potential is certainly limited by this. In particular the cosmic regression potential is much lower than the bionic potential since there is much less involved. For bionic please see [Discussion] below.

[Discussion]
** cosmic SRU **
For the cosmic SRU this will be a fairly straightforward and trivial update to the package to run configure with "--enable-diagnostics". Cosmic is already at version 0.14.0, which has the diagnostics support.

** bionic SRU **
For the bionic SRU, things are more complicated as bionic is at version 0.13.1, which does not include diagnostics support. What I'd like to do is to update the bionic package to 0.14.0. I know this is not business as usual but I think the regression potential is minimized by updating to 0.14.0 rather than risking any missed code when cherry-picking various patches. 
For some analysis regarding updating bionic to 0.14.0, I've analyzed the delta between 0.13.1 and 0.14.0 and I'm seeing mostly bug fixes and 2 new features (1 for diagnostics support, and 1 for MellonSignatureMethod support). Here's the full commit summary between 0.13.1 and 0.14.0:

/tmp/mod_auth_mellon$ git remote -v
origin https://github.com/UNINETT/mod_auth_mellon (fetch)
origin https://github.com/UNINETT/mod_auth_mellon (push)
/tmp/mod_auth_mellon$ git log --no-merges --date-order --pretty=oneline --format=" - [%h] %s" v0.13.1..v0.14.0
 - [29d2872] Bump version to 0.14.0.
 - [21f78ab] Add release notes for version 0.14.0.
 - [262768a] NEWS: Add consistent whitespace between releases.
 - [7bb98cf] Fix config.h.in missing in .tar.gz.
 - [aee068f] Fix typos in the user guide
 - [8abbcf9] Update User Guide on error responses and ADFS issues
 - [9b17e5c] Add MellonSignatureMethod to control signature algorithm
 - [582f283] Log SAML status response information
 - [524d558] convert README to README.md
 - [0851045] Fix consistency, grammar, and usage in user guide
 - [70e8abc] Give clear error if building
[Bug 1809454] Re: [SRU] nova rbd auth fallback uses cinder user with libvirt secret
Adding a bit more context. The original break only occurred with instances launched on Newton with a subsequent upgrade to Ocata. The required fix needs to be in every Ubuntu/OpenStack combination we support from xenial-ocata to cosmic-rocky. I tested the upgrade from xenial-newton to xenial-ocata. I have also tested that no regressions occur with deployments from xenial-pike to cosmic-rocky. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1809454 Title: [SRU] nova rbd auth fallback uses cinder user with libvirt secret To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1809454/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1809454] Re: [SRU] nova rbd auth fallback uses cinder user with libvirt secret
Verified on cosmic. ** Tags removed: verification-needed-cosmic ** Tags added: verification-done-cosmic -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1809454 Title: [SRU] nova rbd auth fallback uses cinder user with libvirt secret To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1809454/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1809454] Re: [SRU] nova rbd auth fallback uses cinder user with libvirt secret
The newton-proposed, ocata-proposed, pike-proposed, queens-proposed, bionic-proposed and rocky-proposed packages have all been tested. Newton to pike upgrades were performed. The bug no longer exists. The fix is verified in the packages. ** Tags removed: verification-needed verification-needed-bionic verification-pike-needed verification-queens-needed verification-rocky-needed ** Tags added: verification-done-bionic verification-newton-done verification-ocata-done verification-pike-done verification-queens-done verification-rocky-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1809454 Title: [SRU] nova rbd auth fallback uses cinder user with libvirt secret To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1809454/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1782008] Re: Unable to force delete a volume
Suggest we add gate >= Queens ** Changed in: charm-cinder Assignee: Chris MacNaughton (chris.macnaughton) => David Ames (thedac) ** Changed in: charm-cinder Status: New => Triaged -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1782008 Title: Unable to force delete a volume To manage notifications about this bug go to: https://bugs.launchpad.net/charm-cinder/+bug/1782008/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1686437] Re: [SRU] glance sync: need keystone v3 auth support
For the keystone v3 fixes revno 454 is the minimum we need SRU'd back to xenial. Bionic 0.1.0~bzr460-0ubuntu1 has these changes. These two merges are the pertinent changes: https://code.launchpad.net/~thedac/simplestreams/keystone-v3-support/+merge/325781 https://code.launchpad.net/~thedac/simplestreams/lp1719879/+merge/333011 A package 0.1.0~bzr454-0ubuntu1 existed in xenial-proposed at one time. Still trying to figure out what happened to that package. It would seem the SRU process needs to occur on: https://bugs.launchpad.net/simplestreams/+bug/1719879 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1686437 Title: [SRU] glance sync: need keystone v3 auth support To manage notifications about this bug go to: https://bugs.launchpad.net/simplestreams/+bug/1686437/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1790904] Re: Glance v2 required by newer versions of OpenStack
This does need to be SRU'd to Bionic. But not Xenial. Rocky is only supported on Bionic. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1790904 Title: Glance v2 required by newer versions of OpenStack To manage notifications about this bug go to: https://bugs.launchpad.net/simplestreams/+bug/1790904/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1793137] Re: [SRU] Fix for KeyError: 'storage.zfs_pool_name' only partially successful -- needs changes
** Changed in: charm-lxd Milestone: None => 19.04 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1793137 Title: [SRU] Fix for KeyError: 'storage.zfs_pool_name' only partially successful -- needs changes To manage notifications about this bug go to: https://bugs.launchpad.net/charm-lxd/+bug/1793137/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1775229] Re: "Delete Groups" button is missing for a domain admin user
** Changed in: charm-openstack-dashboard Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1775229 Title: "Delete Groups" button is missing for a domain admin user To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1775229/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1782008] Re: Unable to force delete a volume
** Changed in: charm-cinder Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1782008 Title: Unable to force delete a volume To manage notifications about this bug go to: https://bugs.launchpad.net/charm-cinder/+bug/1782008/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1775224] Re: "Create User" and "Delete User" buttons are missing for a domain admin user
** Changed in: charm-openstack-dashboard Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1775224 Title: "Create User" and "Delete User" buttons are missing for a domain admin user To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1775224/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1784405] Re: [rocky] ImportError: No module named versions
** Changed in: charm-neutron-api Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1784405 Title: [rocky] ImportError: No module named versions To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-api/+bug/1784405/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1611987] Re: [SRU] glance-simplestreams-sync charm doesn't support keystone v3
** Changed in: charm-glance-simplestreams-sync Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1611987 Title: [SRU] glance-simplestreams-sync charm doesn't support keystone v3 To manage notifications about this bug go to: https://bugs.launchpad.net/charm-glance-simplestreams-sync/+bug/1611987/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1767087] Re: ceph-volume: block device permissions sometimes not set on initial activate call
** Changed in: charm-ceph-osd Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1767087 Title: ceph-volume: block device permissions sometimes not set on initial activate call To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-osd/+bug/1767087/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1772947] Re: You have enabled the binary log, but you haven't provided the mandatory server-id.
** Changed in: charm-percona-cluster Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1772947 Title: You have enabled the binary log, but you haven't provided the mandatory server-id. To manage notifications about this bug go to: https://bugs.launchpad.net/charm-percona-cluster/+bug/1772947/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1772947] Re: You have enabled the binary log, but you haven't provided the mandatory server-id.
** Changed in: percona-xtradb-cluster-5.7 (Ubuntu) Status: New => Invalid -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1772947 Title: You have enabled the binary log, but you haven't provided the mandatory server-id. To manage notifications about this bug go to: https://bugs.launchpad.net/charm-percona-cluster/+bug/1772947/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1772947] Re: You have enabled the binary log, but you haven't provided the mandatory server-id.
** Changed in: charm-percona-cluster Assignee: Corey Bryant (corey.bryant) => David Ames (thedac) ** Changed in: charm-percona-cluster Milestone: None => 18.05 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1772947 Title: You have enabled the binary log, but you haven't provided the mandatory server-id. To manage notifications about this bug go to: https://bugs.launchpad.net/charm-percona-cluster/+bug/1772947/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1750121] Re: Dynamic routing: adding speaker to agent fails
@Jens, Apologies for the delay. Of course, you were correct all along. The neutron-server node had the older version of the package. Though I looked at that 10 times I failed to process it. My further apologies for dragging you along for this process. This bug is resolved as far as I am concerned. Thank you for your work on this. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1750121 Title: Dynamic routing: adding speaker to agent fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1750121/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1750121] Re: Dynamic routing: adding speaker to agent fails
@Jens, Some points of clarification. It is possible this is a different but related bug. First off, no upgrade is occurring. This is a CI environment [0] in which I do a fresh deployment of neutron and neutron-bgp-dragent on Xenial with the Queens UCA. The deployment and initial configuration, including the bgp peer setup, works as expected. The config roughly follows [1]. Subsequent to the working deployment, I see the "KeyError: 'auth_type' Unable to sync BGP speaker state" after a simple restart of the neutron-bgp-dragent. I confirmed the version of the dynamic routing code includes the changes in this bug report.

ii neutron-dynamic-routing-common 2:12.0.0-0ubuntu1.1~cloud0 all OpenStack Neutron Dynamic Routing - common files
ii python-neutron-dynamic-routing 2:12.0.0-0ubuntu1.1~cloud0 all OpenStack Neutron Dynamic Routing - Python 2.7 library

Steps to reproduce are as follows:
1. Deploy stack including neutron, dragent and quagga
2. Configure networking and dynamic routing [1]
3. Validate BGP peering relations via quagga (vtysh -c "show ip route bgp"). Note: all is working at this point.
4. Restart neutron-bgp-dragent
5. See "KeyError: 'auth_type' Unable to sync BGP speaker state" in the neutron-bgp-dragent.log
6. Peering relationship is dead on quagga.

I have uploaded the neutron-bgp-dragent log with debug=True. I added a note where the dragent restart takes place in the log. Regardless of whether this is the same bug or a different one, it is critical, as it means dragent cannot be used in production. Let me know what else I can do to provide information. 
[0] https://github.com/openstack/charm-neutron-dynamic-routing/blob/master/src/tox.ini#L29 [1] https://docs.openstack.org/neutron-dynamic-routing/latest/contributor/testing.html ** Attachment added: "neutron bgp dragent log with debug=True" https://bugs.launchpad.net/neutron/+bug/1750121/+attachment/5140997/+files/neutron-bgp-dragent.log.gz -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1750121 Title: Dynamic routing: adding speaker to agent fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1750121/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1750121] Re: Dynamic routing: adding speaker to agent fails
I still see this failure when the neutron-bgp-dragent is restarted. Bionic proposed: 12.0.0-0ubuntu1.1

The initial setup works fine:

2018-05-07 17:15:59.099 17215 INFO bgpspeaker.api.base [req-9692824c-b285-4304-86d2-00f46df8a216 - - - - -] API method core.start called with args: {'router_id': '10.5.0.82', 'label_range': (100, 10), 'waiter' : , 'bgp_server_port': 0, 'local_as': 12345, 'allow_local_as_in_count': 0, 'refresh_stalepath_time': 0, 'cluster_id': None, 'local_pref': 100, 'refresh_max_eor_time': 0}
2018-05-07 17:15:59.199 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [req-9692824c-b285-4304-86d2-00f46df8a216 - - - - -] Added BGP Speaker for local_as=12345 with router_id=10.5.0.82.
2018-05-07 17:16:00.689 17215 INFO bgpspeaker.api.base [req-6905ebee-595e-4a72-b2ac-d0372116f310 - - - - -] API method network.add called with args: {'prefix': u'192.168.0.0/24', 'next_hop': u'10.5.150.0'}
2018-05-07 17:16:00.691 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [req-6905ebee-595e-4a72-b2ac-d0372116f310 - - - - -] Route cidr=192.168.0.0/24, nexthop=10.5.150.0 is advertised for BGP Speaker running for local_as=12345. 
2018-05-07 17:16:02.152 17215 INFO bgpspeaker.api.base [req-0f7a3721-db7e-418d-8ee7-0b950b4ddc88 103842446d8b4a029c1892ffb576d57d 15205edcf62643d7a3723ff7e23b74fc - - -] API method neighbor.create called with args: {'connect_mode': 'active', 'cap_mbgp_evpn': False, 'remote_as': 1, 'cap_mbgp_vpnv6': False, 'cap_mbgp_l2vpnfs': False, 'cap_four_octet_as_number': True, 'cap_mbgp_ipv6': False, 'is_next_hop_self': False, 'cap_mbgp_ipv4': True, 'cap_mbgp_ipv4fs': False, 'is_route_reflector_client': False, 'cap_mbgp_ipv6fs': False, 'is_route_server_client': False, 'cap_enhanced_refresh': False, 'peer_next_hop': None, 'password': None, 'ip_address': u'10.5.0.79', 'cap_mbgp_vpnv4fs': False, 'cap_mbgp_vpnv4': False, 'cap_mbgp_vpnv6fs': False}
2018-05-07 17:16:02.153 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [req-0f7a3721-db7e-418d-8ee7-0b950b4ddc88 103842446d8b4a029c1892ffb576d57d 15205edcf62643d7a3723ff7e23b74fc - - -] Added BGP Peer 10.5.0.79 for remote_as=1 to BGP Speaker running for local_as=12345.
2018-05-07 17:16:03.158 17215 INFO bgpspeaker.peer [-] Connection to peer: 10.5.0.79 established
2018-05-07 17:16:03.159 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] BGP Peer 10.5.0.79 for remote_as=1 is UP.
2018-05-07 17:16:04.167 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] Best path change observed. cidr=10.5.0.0/16, nexthop=10.5.0.79, remote_as=1, is_withdraw=False
2018-05-07 17:16:04.169 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] Best path change observed. 
cidr=252.0.0.0/8, nexthop=10.5.0.79, remote_as=1, is_withdraw=False
2018-05-07 17:16:08.756 17215 INFO bgpspeaker.api.base [req-55216404-ace9-4f46-8915-c952549a61db - - - - -] API method network.add called with args: {'prefix': u'10.5.150.9/32', 'next_hop': u'10.5.150.0'}
2018-05-07 17:16:08.761 17215 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [req-55216404-ace9-4f46-8915-c952549a61db - - - - -] Route cidr=10.5.150.9/32, nexthop=10.5.150.0 is advertised for BGP Speaker running for local_as=12345.

At this point the peer (quagga) has the expected routes via BGP. The neutron-bgp-dragent is restarted:

2018-05-07 17:20:09.208 17885 INFO neutron.common.config [-] Logging enabled!
2018-05-07 17:20:09.209 17885 INFO neutron.common.config [-] /usr/bin/neutron-bgp-dragent version 12.0.1
2018-05-07 17:20:09.918 17885 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] Initializing Ryu driver for BGP Speaker functionality.
2018-05-07 17:20:09.918 17885 INFO neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] Initialized Ryu BGP Speaker driver interface with bgp_router_id=10.5.0.82
2018-05-07 17:20:09.926 17885 WARNING oslo_config.cfg [req-47f966f9-848c-4f7d-89dd-ff2cc60c001c - - - - -] Option "rabbit_host" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be silently ignored in the future.
2018-05-07 17:20:09.927 17885 WARNING oslo_config.cfg [req-47f966f9-848c-4f7d-89dd-ff2cc60c001c - - - - -] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be silently ignored in the future.
2018-05-07 17:20:09.928 17885 WARNING oslo_config.cfg [req-47f966f9-848c-4f7d-89dd-ff2cc60c001c - - - - -] Option "rabbit_userid" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may
[Bug 1686437] Re: [SRU] glance sync: need keystone v3 auth support
Noting here the released version on xenial does not currently support Keystone v3 and blocks Bug #1611987. For the record, we have been running a bzr branch @455 on serverstack (a Keystone v3 cloud) for months now. So the code in simplestreams works, it just needs to get to xenial. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1686437 Title: [SRU] glance sync: need keystone v3 auth support To manage notifications about this bug go to: https://bugs.launchpad.net/simplestreams/+bug/1686437/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1739585] Re: L2 guest failed to boot under nested KVM: entry failed, hardware error 0x0
This may be a duplicate of [0]. After finding hints in the above bug, we checked whether APICv was enabled. On three of our compute nodes it was:

cat /sys/module/kvm_intel/parameters/enable_apicv
Y

We disabled APICv by setting the following in /etc/modprobe.d/qemu-system-x86.conf and rebooting per [1]:

options kvm-intel nested=y enable_apicv=n

Now:

cat /sys/module/kvm_intel/parameters/enable_apicv
N

In initial testing we have had a number of successful nested KVM guests on the compute nodes in question.

[0] https://bugs.launchpad.net/ubuntu/+source/linux-lts-xenial/+bug/1682077
[1] https://www.juniper.net/documentation/en_US/vsrx/topics/task/installation/security-vsrx-kvm-nested-virt-enable.html

** Changed in: charm-test-infra Status: Confirmed => Fix Released ** Changed in: charm-test-infra Assignee: (unassigned) => David Ames (thedac) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1739585 Title: L2 guest failed to boot under nested KVM: entry failed, hardware error 0x0 To manage notifications about this bug go to: https://bugs.launchpad.net/charm-test-infra/+bug/1739585/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail
** Also affects: corosync (Ubuntu) Importance: Undecided Status: New ** Also affects: pacemaker (Ubuntu) Importance: Undecided Status: New ** Changed in: charm-hacluster Status: New => Invalid -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1740892 Title: corosync upgrade on 2018-01-02 caused pacemaker to fail To manage notifications about this bug go to: https://bugs.launchpad.net/charm-hacluster/+bug/1740892/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1727063] Re: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage
From the charm perspective we need to determine if the charm does anything beyond the packaging that could lead to this. The charm runs:

update-rc.d -f pacemaker defaults

Testing.

** Changed in: charm-hacluster Importance: Undecided => Critical -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1727063 Title: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage To manage notifications about this bug go to: https://bugs.launchpad.net/charm-hacluster/+bug/1727063/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1727063] Re: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage
** Also affects: pacemaker (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1727063 Title: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage To manage notifications about this bug go to: https://bugs.launchpad.net/charm-hacluster/+bug/1727063/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1654403] Re: Race condition in hacluster charm that leaves pacemaker down
** Changed in: hacluster (Juju Charms Collection) Status: Triaged => Fix Committed ** Changed in: hacluster (Juju Charms Collection) Assignee: (unassigned) => David Ames (thedac) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1654403 Title: Race condition in hacluster charm that leaves pacemaker down To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1654403/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1654403] Re: Race condition in hacluster charm that leaves pacemaker down
Additional information from the charm:

Without cluster_count set to the number of units, a race occurs: the relation to the last hacluster node is not yet set, leading to an attempt to start up corosync and pacemaker with only n-1 of n nodes. The last node is aware of only one relation when there should be two:

relation-list -r hanode:0
hacluster/0

corosync.conf looks like the following when there should be 3 nodes:

nodelist {
    node {
        ring0_addr: 10.5.35.235
        nodeid: 1000
    }
    node {
        ring0_addr: 10.5.35.237
        nodeid: 1001
    }
}

The services themselves (not the charm) fail: corosync logs thousands of RETRANSMIT errors; pacemaker eventually times out after waiting on corosync. Adding more documentation to push the setting of cluster_count and updating the amulet tests to include it. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1654403 Title: Race condition in hacluster charm that leaves pacemaker down To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1654403/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
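For contrast, a complete nodelist for a 3-node cluster would carry all three ring addresses. A sketch only; the third address and nodeid below are purely illustrative and do not come from the deployment in this bug:

```
nodelist {
    node {
        ring0_addr: 10.5.35.235
        nodeid: 1000
    }
    node {
        ring0_addr: 10.5.35.237
        nodeid: 1001
    }
    node {
        ring0_addr: 10.5.35.239
        nodeid: 1002
    }
}
```

With all members listed, corosync can form its ring instead of endlessly retransmitting to a peer it never learned about.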
[Bug 1657305] Re: percona cluster getting wrong private ip
According to John Meinel: 'The charms should be updated to use "network-get --preferred-address" instead of just "unit-get private-address". unit-get doesn't pass the information to Juju for us to know which bit of the configuration we're supposed to be reporting.'

I would argue that private-address *should* be predictable. It *should* be the PXE boot IP if not configurable, as expressed in this bug: https://bugs.launchpad.net/juju/+bug/1591962

These two bugs describe the same issue. They are marked Fix Released, but the fixes only deal with the symptom, not the underlying problem:
https://bugs.launchpad.net/juju/+bug/1616098/
https://bugs.launchpad.net/juju/+bug/1603473/

** Also affects: juju-core (Ubuntu) Importance: Undecided Status: New

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1657305

Title: percona cluster getting wrong private ip

To manage notifications about this bug go to:
https://bugs.launchpad.net/opnfv/+bug/1657305/+subscriptions
-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
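A sketch of the charm-side change John describes, with the hook-tool invocation injected so the fallback logic can be exercised outside a Juju hook context. The flag spelling has varied across Juju releases ('--primary-address' here is an assumption; the quote above uses '--preferred-address'), and network-get only exists on newer Juju, hence the fallback to unit-get.

```python
import subprocess

def preferred_address(binding, run=subprocess.check_output):
    """Ask Juju for the address on a space binding, falling back to
    the legacy 'unit-get private-address' on older Juju versions.

    `run` is injectable purely so the logic is testable without Juju.
    """
    try:
        out = run(['network-get', '--primary-address', binding])
    except (OSError, subprocess.CalledProcessError):
        # network-get unavailable (old Juju) or binding unknown.
        out = run(['unit-get', 'private-address'])
    return out.decode().strip()
```

The point is that the address becomes a property of the binding the charm asks about, rather than whatever single private-address the machine happens to report.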
[Bug 1654116] Re: Attempts to write leadership settings when not the leader during relation-changed hooks
This is a juju is-leader bug. I have triple checked that any call to leader-set is gated by an is-leader check in our charms, specifically in rabbitmq-server, percona-cluster and ceilometer.

With juju 2.1b3 and rabbitmq you can see that leadership is bouncing around between the three units. See the timestamps in the following:

rabbitmq-server-0/var/log/juju/unit-rabbitmq-server-0.log:2017-01-04 21:12:16 INFO juju-log Unknown hook leader-elected - skipping.
rabbitmq-server-0/var/log/juju/unit-rabbitmq-server-0.log:2017-01-04 21:47:22 INFO juju-log Unknown hook leader-elected - skipping.
rabbitmq-server-0/var/log/juju/unit-rabbitmq-server-0.log:2017-01-04 22:16:54 INFO juju-log Unknown hook leader-elected - skipping.
rabbitmq-server-0/var/log/juju/unit-rabbitmq-server-0.log:2017-01-04 22:25:38 INFO amqp-relation-changed subprocess.CalledProcessError: Command '['leader-set', 'amqp:62_password=VGYqpSqts4R39S9rcJrSwrB7s9ygd2Xp8cnSwcxbTSRKwBjznhHy7fF6247CCRHC']' returned non-zero exit status 1
rabbitmq-server-1/var/log/juju/unit-rabbitmq-server-1.log:2017-01-04 22:01:25 INFO juju-log Unknown hook leader-elected - skipping.
rabbitmq-server-1/var/log/juju/unit-rabbitmq-server-1.log:2017-01-04 22:13:54 INFO amqp-relation-changed subprocess.CalledProcessError: Command '['leader-set', 'ceilometer.passwd=4rcYrk2FfPNXFVgghdLtpC4VRCyBb4smXKFNHdwFxxdgsfqSrLy85WwW3MCCdPxM']' returned non-zero exit status 1
rabbitmq-server-2/var/log/juju/unit-rabbitmq-server-2.log:2017-01-04 21:39:21 INFO juju-log Unknown hook leader-elected - skipping.

With juju 2.1b4 and percona-cluster, unit 0 is the leader but some time goes by before it attempts leader-set. At the end, unit 2 takes over leadership.

mysql-0/var/log/juju/unit-mysql-0.log:2017-01-12 06:20:33 INFO juju-log Unknown hook leader-elected - skipping.
mysql-0/var/log/juju/unit-mysql-0.log:2017-01-12 06:35:01 DEBUG juju-log cluster:2: Leader unit - bootstrap required=True
mysql-0/var/log/juju/unit-mysql-0.log:2017-01-12 06:35:28 DEBUG juju-log cluster:2: Leader unit - bootstrap required=False
mysql-0/var/log/juju/unit-mysql-0.log:2017-01-12 06:50:55 INFO shared-db-relation-changed subprocess.CalledProcessError: Command '['leader-set', 'shared-db:54_access-network=']' returned non-zero exit status 1
mysql-2/var/log/juju/unit-mysql-2.log:2017-01-12 06:51:43 INFO juju-log Unknown hook leader-elected - skipping.

There are 4 possible problems as I see it:
1) is-leader is giving a false positive
2) is-leader is not in the PATH when it is called by the charms
3) A race during leader election in which one or more units believe they are the leader
4) leader-set fails during a leader election

** Changed in: charm-helpers Status: Triaged => Invalid
** Changed in: ceilometer (Juju Charms Collection) Status: New => Invalid
** Changed in: rabbitmq-server (Juju Charms Collection) Status: Triaged => Invalid
** Also affects: juju-core (Ubuntu) Importance: Undecided Status: New

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1654116

Title: Attempts to write leadership settings when not the leader during relation-changed hooks

To manage notifications about this bug go to:
https://bugs.launchpad.net/autopilot-log-analyser/+bug/1654116/+subscriptions
-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
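For reference, the gating pattern the charms use looks roughly like the sketch below (helper names and the injectable `run`/`check` parameters are hypothetical; charm-helpers wraps the hook tools differently). It illustrates why a failing leader-set is surprising: the write only ever happens after is-leader has just returned true, so a failure implies leadership lapsed in the window between the two calls, which is possibility 3 or 4 above.

```python
import subprocess

def leader_set_safe(settings, run=subprocess.check_call,
                    check=subprocess.check_output):
    """Call leader-set only when is-leader reports true.

    Even with this guard, leadership can lapse between the check and
    the write, so leader-set can still fail during an election.
    """
    if check(['is-leader', '--format=json']).strip() != b'true':
        return False  # not the leader: skip the write entirely
    run(['leader-set'] + ['%s=%s' % kv for kv in settings.items()])
    return True
```

The logs above show exactly the failure mode this guard cannot prevent: leader-set exiting non-zero shortly before another unit logs leader-elected.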
[Bug 1654403] Re: Race condition in hacluster charm that leaves pacemaker down
Corey, This is Mitaka on Xenial. I suspect that the package remains the same on Xenial for the other OpenStack releases. I'll try and confirm this. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1654403 Title: Race condition in hacluster charm that leaves pacemaker down To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1654403/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1654403] Re: Race condition in hacluster charm that leaves pacemaker down
Root cause:
1) When corosync is restarted it may take up to a minute for it to finish setting up.
2) The systemd timeout value is exceeded:
Jan 10 18:57:49 juju-39e3e2-percona-3 systemd[1]: Failed to start Corosync Cluster Engine.
Jan 10 18:57:49 juju-39e3e2-percona-3 systemd[1]: corosync.service: Unit entered failed state.
Jan 10 18:57:49 juju-39e3e2-percona-3 systemd[1]: corosync.service: Failed with result 'timeout'.
3) Pacemaker is then started. The pacemaker systemd unit has a dependency on corosync, which may still be in the process of coming up.
4) Pacemaker fails to start due to the dependency:
Jan 10 18:57:49 juju-39e3e2-percona-3 systemd[1]: pacemaker.service: Job pacemaker.service/start failed with result 'dependency'.
5) Pacemaker remains down.
6) Subsequently, the charm checks for pacemaker health by running `crm node list` in a loop until it succeeds.
7) This is an infinite loop.

Solutions:
1) Adding corosync to this bug for a systemd timeout change.
2) The charm needs to better handle validation of service restarts and better communicate to the end user when an error has occurred.

Current work in progress: https://review.openstack.org/#/c/419204/

** Also affects: corosync (Ubuntu) Importance: Undecided Status: New

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1654403

Title: Race condition in hacluster charm that leaves pacemaker down

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1654403/+subscriptions
-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
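The fix for step 7 amounts to replacing the unbounded `crm node list` loop with a bounded wait, so the charm can surface an error instead of hanging forever. A minimal sketch (the injectable `run` parameter is for testing only; the real charm shells out directly):

```python
import subprocess
import time

def wait_for_pacemaker(timeout=300, interval=5, run=subprocess.call):
    """Poll `crm node list` until it succeeds or the deadline passes.

    Returns True on success, False on timeout, rather than looping
    forever as described in step 7 above.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if run(['crm', 'node', 'list']) == 0:
            return True
        time.sleep(interval)
    return False
```

On timeout the charm can set an error status for the operator, which addresses solution 2.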
[Bug 1552822] Re: apache2 fails to wait on stop/restart
Christian, sorry for taking so long to get back to you. We see this in the OpenStack charming team with openstack-dashboard (horizon) and keystone (when run on apache2, >= liberty). We have had to implement hacky workarounds because apache2 is not waiting until it releases the port(s) before exiting. You can deploy either of these and then test my loop from above:

juju deploy openstack-dashboard

Bug 1428796 is similar in that apache2 is not waiting until it releases ports, but we see this on trusty and xenial, so it is not systemd specific.

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1552822

Title: apache2 fails to wait on stop/restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1552822/+subscriptions
-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1552822] [NEW] apache2 fails to wait on stop/restart
Public bug reported:

Apache2 is failing to wait for all its threads to terminate when stopping. This leaves TCP ports still in use when apache2 tries to restart. This has been seen on Trusty and Xenial.

This becomes a problem on restarts and stop/starts. I have seen this running a simple loop with `service apache2 restart`; however, it is inconsistent. It happens more predictably when wsgi is involved, even if stop and then start are used instead of restart:

 * Stopping web server apache2
 *
 * Starting web server apache2
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
Action 'start' failed. The Apache error log may have more information.

** Affects: apache2 (Ubuntu) Importance: Undecided Status: New

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to apache2 in Ubuntu.
https://bugs.launchpad.net/bugs/1552822

Title: apache2 fails to wait on stop/restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1552822/+subscriptions
-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
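The "hacky workaround" pattern alluded to elsewhere in this bug amounts to waiting for the old process to actually release its listen port before issuing the start. A sketch of such a probe (a plain bind attempt; not part of the charms' actual code):

```python
import socket
import time

def wait_for_port_free(port, host='0.0.0.0', timeout=30):
    """Wait until a TCP port can be bound, i.e. the previous process
    has released it. Returns False if the port stays busy past timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # SO_REUSEADDR skips TIME_WAIT remnants; an *active* listener
        # still makes bind() fail, which is what we want to detect.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            time.sleep(0.5)
        finally:
            s.close()
    return False
```

Calling this between `service apache2 stop` and `service apache2 start` avoids the AH00072 make_sock failures shown above, at the cost of papering over the underlying apache2 bug.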
[Bug 1536401] Re: openvswitch requires service restart after charm upgrade
sudo restart openvswitch-switch and sudo restart neutron-plugin-openvswitch-agent with juju resolved --retries fixes the issue.

** Package changed: charms => neutron-openvswitch (Juju Charms Collection)
** Also affects: neutron-gateway (Ubuntu) Importance: Undecided Status: New
** Also affects: nova-compute (Juju Charms Collection) Importance: Undecided Status: New
** No longer affects: neutron-gateway (Ubuntu)
** Also affects: neutron-gateway (Juju Charms Collection) Importance: Undecided Status: New
** Changed in: neutron-gateway (Juju Charms Collection) Status: New => Triaged
** Changed in: nova-compute (Juju Charms Collection) Status: New => Triaged
** Changed in: nova-compute (Juju Charms Collection) Importance: Undecided => Critical
** Changed in: neutron-gateway (Juju Charms Collection) Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1536401

Title: openvswitch requires service restart after charm upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1536401/+subscriptions
-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1481362] Re: pxc cluster charm on Vivid and Wily point to old mysql datadir /var/lib/mysql
** Changed in: percona-cluster (Juju Charms Collection) Status: Confirmed => Fix Committed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1481362 Title: pxc cluster charm on Vivid and Wily point to old mysql datadir /var/lib/mysql To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/percona-xtradb-cluster-5.6/+bug/1481362/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1445616] Re: crmsh in vivid/wily/xenial is not compatible with pacemaker
This is affecting OpenStack HA deployments and testing for liberty -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1445616 Title: crmsh in vivid/wily/xenial is not compatible with pacemaker To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1445616/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1497308] Re: local repository for all Openstack charms
** Also affects: nova-compute (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: keystone (Ubuntu) Importance: Undecided Status: New ** Also affects: cinder (Juju Charms Collection) Importance: Undecided Status: New ** No longer affects: keystone (Ubuntu) ** Also affects: keystone (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: glance (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: neutron-api (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: neutron-gateway (Juju Charms Collection) Importance: Undecided Status: New ** Also affects: openstack-dashboard (Juju Charms Collection) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1497308 Title: local repository for all Openstack charms To manage notifications about this bug go to: https://bugs.launchpad.net/charms/+source/cinder/+bug/1497308/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1487170] [NEW] Sync python-mysqldb 1.3.4-2 (main) from Debian sid (main)
Public bug reported:

Please sync python-mysqldb 1.3.4-2 (main) from Debian sid (main)

Explanation of the Ubuntu delta and why it can be dropped:
* debian/patches/06_fix_error_checking.patch: Check for error if mysql_store_result returns NULL, taken from upstream at https://github.com/farcepest/MySQLdb1/commit/e6d24c358d0c0ad9249044dad09e63e039c527e1

Patch has been applied upstream.

Changelog entries since current wily version 1.2.3-2ubuntu1:

python-mysqldb (1.3.4-2) unstable; urgency=medium
  * Uploading to unstable.
  * Added myself as uploader.
  * Ran wrap-and-sort -t -a.
  * Uploading to unstable.
  * Now using debhelper 9.
  * Removed version in python-all-dev build-depends.
  * Removed useless X-Python3-Version: = 3.3.
  * Rewrote debian/coypright in parseable format 1.0.
 -- Thomas Goirand z...@debian.org Wed, 29 Jul 2015 18:17:00 +0200

python-mysqldb (1.3.4-1) experimental; urgency=low
  [ Jakub Wilk ]
  * Use canonical URIs for Vcs-* fields.
  * Drop obsolete Conflicts/Replaces with python2.3-mysqldb and python2.4-mysqldb.
  [ Thomas Goirand ]
  * The changelog is now again fully encoded in UTF-8 (Closes: 718699).
  [ Brian May ]
  * Use mysqlclient fork (Closes: #768096).
  * Drop old patches.
  * Add support for Python 3.3 and greater.
 -- Brian May b...@debian.org Thu, 20 Nov 2014 15:10:36 +1100

** Affects: python-mysqldb (Ubuntu) Importance: Undecided Status: New

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1487170

Title: Sync python-mysqldb 1.3.4-2 (main) from Debian sid (main)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-mysqldb/+bug/1487170/+subscriptions
-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1415916] Re: Sync python-mysqldb 1.3.4-1 (main) from Debian experimental (main)
1.3.4-2 version sync request: https://bugs.launchpad.net/bugs/1487170 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1415916 Title: Sync python-mysqldb 1.3.4-1 (main) from Debian experimental (main) To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/python-mysqldb/+bug/1415916/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs