[Bug 1988457] Re: ovsdbapp can time out on raft leadership change

2024-05-14 Thread Edward Hope-Morley
** Also affects: python-ovsdbapp (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: python-ovsdbapp (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1988457

Title:
  ovsdbapp can time out on raft leadership change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1988457/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890858] Re: AppArmor profile causes QEMU/KVM - Not Connected

2024-04-30 Thread Edward Hope-Morley
Comment #58 says "The fix is in Focal and Focal only as that is where the
problem occurs. As the old bug stated, this doesn't affect later Ubuntu
releases.", so there is no need to update the cloud archive (and, for what
it's worth, to update the Yoga UCA you would first have to update Jammy,
but since the Jammy version is not affected there is nothing to be done
here).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890858

Title:
  AppArmor profile causes QEMU/KVM - Not Connected

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1890858/+subscriptions



[Bug 2054799] Re: [SRU] Issue with Project administration at Cloud Admin level

2024-04-29 Thread Edward Hope-Morley
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Mantic)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/bobcat
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/antelope
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/caracal
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2054799

Title:
  [SRU] Issue with Project administration at Cloud Admin level

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2054799/+subscriptions



[Bug 2061837] [NEW] MOK enrollment is not adequately explained

2024-04-16 Thread Edward Schwartz
Public bug reported:

The installer says:

"After installation completes, Ubuntu will assist you in configuring
UEFI Secure Boot.  ..."

In reality, the MOK management tool opens to a menu with four options
and zero context:

1. Continue boot
2. Enroll MOK
3. Enroll key from disk
4. Enroll hash from disk

The user should not have to select option 2 manually, and if they do have
to, the installer should explain that to them beforehand.

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: ubiquity (not installed)
ProcVersionSignature: Ubuntu 6.5.0-27.28~22.04.1-generic 6.5.13
Uname: Linux 6.5.0-27-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia
ApportVersion: 2.20.11-0ubuntu82.5
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Tue Apr 16 09:19:07 2024
InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
maybe-ubiquity quiet splash ---
InstallationDate: Installed on 2024-04-16 (0 days ago)
InstallationMedia: Ubuntu 22.04.3 LTS "Jammy Jellyfish" - Release amd64 
(20230807.2)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: ubiquity
Symptom: installation
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: ubuntu
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug jammy ubiquity-22.04.20

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2061837

Title:
  MOK enrollment is not adequately explained

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2061837/+subscriptions



[Bug 2061837] Re: MOK enrollment is not adequately explained

2024-04-16 Thread Edward Schwartz
** Attachment added: "What the user is told about the MOK management screen"
   
https://bugs.launchpad.net/ubuntu/+bug/2061837/+attachment/5766145/+files/20240416_085001.jpg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2061837

Title:
  MOK enrollment is not adequately explained

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2061837/+subscriptions



[Bug 2061837] Re: MOK enrollment is not adequately explained

2024-04-16 Thread Edward Schwartz
** Attachment added: "MOK management screen immediately after rebooting"
   
https://bugs.launchpad.net/ubuntu/+bug/2061837/+attachment/5766144/+files/20240416_090146.jpg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2061837

Title:
  MOK enrollment is not adequately explained

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2061837/+subscriptions



[Bug 1978489] Re: libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

2024-04-10 Thread Edward Hope-Morley
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1978489

Title:
  libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1978489/+subscriptions



[Bug 2059315] Re: Failure to upgrade to 22.04.3 LTS from 20.04.6 ( _gcry_logv: internal error (fatal or bug))

2024-03-28 Thread James Edward Beck
** Description changed:

  $ lsb_release -rd
  Description:Ubuntu 20.04.6 LTS
  Release:20.04
  
- 
- executed `do-release-upgrade` and expected to successfully upgrade release to 
UBUNTU 22.04.3.  Note: System has FIPS enbaled.
- 
+ executed `do-release-upgrade` and expected to successfully upgrade
+ release to UBUNTU 22.04.3.  Note: System has FIPS enabled.
  
  `do-release-upgrade log` returned:
  
  Fatal: unexpected error from getentropy: Invalid argument
  fatal error in libgcrypt, file ../../src/misc.c, line 146, function 
_gcry_logv: internal error (fatal or bug)
  
  This error seems identical to bug 2055825

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2059315

Title:
  Failure to upgrade to 22.04.3 LTS from 20.04.6 ( _gcry_logv: internal
  error (fatal or bug))

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/2059315/+subscriptions



[Bug 2059315] [NEW] Failure to upgrade to 22.04.3 LTS from 20.04.6 ( _gcry_logv: internal error (fatal or bug))

2024-03-27 Thread James Edward Beck
Public bug reported:

$ lsb_release -rd
Description:Ubuntu 20.04.6 LTS
Release:20.04


executed `do-release-upgrade` and expected to successfully upgrade release to 
UBUNTU 22.04.3.  Note: System has FIPS enabled.


The `do-release-upgrade` log returned:

Fatal: unexpected error from getentropy: Invalid argument
fatal error in libgcrypt, file ../../src/misc.c, line 146, function _gcry_logv: 
internal error (fatal or bug)

This error seems identical to bug 2055825

** Affects: ubuntu
 Importance: Undecided
 Status: New


** Tags: bot-comment

** Attachment added: "script output file capturing do-release-upgrade output"
   
https://bugs.launchpad.net/bugs/2059315/+attachment/5760065/+files/upgrade_22_04_4.script_3.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2059315

Title:
  Failure to upgrade to 22.04.3 LTS from 20.04.6 ( _gcry_logv: internal
  error (fatal or bug))

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2059315/+subscriptions



[Bug 2043299] Re: Linux 6.5 breaks Novation Components web MIDI application

2024-03-23 Thread Edward
As mentioned above, this turns out to have been a Chrome/Chromium bug
exposed by the new kernel. It has been fixed in Chrome 123, which is now
the stable release. This issue can be closed.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2043299

Title:
  Linux 6.5 breaks Novation Components web MIDI application

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2043299/+subscriptions



[Bug 2058286] Re: Requesting SRU for Octavia 10.1.1

2024-03-22 Thread Edward Hope-Morley
** Also affects: octavia
   Importance: Undecided
   Status: New

** No longer affects: octavia

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

** Also affects: octavia (Ubuntu Jammy)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2058286

Title:
  Requesting SRU for Octavia 10.1.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2058286/+subscriptions



[Bug 2017748] Re: [SRU] OVN: ovnmeta namespaces missing during scalability test causing DHCP issues

2024-03-19 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2017748

Title:
  [SRU] OVN:  ovnmeta namespaces missing during scalability test causing
  DHCP issues

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2017748/+subscriptions



[Bug 1978489] Re: libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

2024-03-06 Thread Edward Hope-Morley
Forgot to add to the above: instead of removing the default weight (1024 *
guest.vcpus), might it not have made sense to simply cap it at the maximum
allowed value? Again, perhaps something that could be proposed to Nova as
a new patch.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1978489

Title:
  libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1978489/+subscriptions



[Bug 1978489] Re: libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

2024-03-06 Thread Edward Hope-Morley
As a recap, this patch addresses the problem of moving VMs between hosts
running cgroups v1 (e.g. Ubuntu Focal) and v2 (Ubuntu Jammy), where v2 now
has a cap of 10K [1] for cpu.weight, resulting in VMs with > 9 vcpus not
being able to boot if they use the default Nova weight of 1024 *
guest.vcpus. The patch addresses the problem by no longer applying a
default weight to instances while keeping the option to apply
quota:cpu_shares from a flavor's extra-specs.

The consequences of this are:

1. VMs booted without quota:cpu_shares extra-specs after upgrading to this
patch will have the default cgroups v2 weight of 100.
2. New VMs can get a higher weight if they use a flavor with extra-specs
quota:cpu_shares, BUT this will only apply to existing VMs if they are
resized so as to switch to the new/modified flavor, which requires
workload downtime - a VM reboot will not pick up the new value.
3. VMs created from a flavor with extra-specs quota:cpu_shares set to a
value > 10K will fail to boot; fixing this requires a new/modified flavor
with an adjusted value and then a VM resize to consume it, hence workload
downtime.

It is important to note that point 3 is not a consequence of this patch -
it is neither introduced nor resolved by it - and will require a separate
patch to solve. One way to resolve it could be to have Nova cap
quota:cpu_shares at the cgroup cpu.weight maximum value and log a warning
to say that was done; that way instances will at least boot and have a
maximum weight. I am therefore in favour of proceeding with this SRU to
provide users with a way to migrate from v1 to v2, and suggest we propose
a new patch to address the flavor extra-specs issue. As @jamespage has
pointed out, there are some interim manual solutions that can be used as
a stop-gap until this is fully resolved in Nova.

[1] https://www.kernel.org/doc/Documentation/cgroup-v2.txt
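To make the arithmetic above concrete, here is a minimal shell sketch of
the capping approach suggested as a possible follow-up (variable names are
mine; the 10000 cap is the cgroup v2 cpu.weight maximum referenced in [1]):

```shell
# cgroup v2 accepts cpu.weight values in [1, 10000]; Nova's legacy default
# of 1024 * vcpus exceeds that cap once an instance has more than 9 vCPUs.
cgv2_max=10000
vcpus=16
default_weight=$(( 1024 * vcpus ))   # 16384: rejected by cgroup v2
capped_weight=$default_weight
if [ "$capped_weight" -gt "$cgv2_max" ]; then
    capped_weight=$cgv2_max          # cap instead of failing to boot
fi
echo "default=$default_weight capped=$capped_weight"
```

A 9-vCPU instance (1024 * 9 = 9216) would pass through unchanged.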

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1978489

Title:
  libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1978489/+subscriptions



[Bug 1973347] Re: OVN revision_number infinite update loop

2024-03-01 Thread Edward Hope-Morley
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Fix Released

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1973347

Title:
  OVN revision_number infinite update loop

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1973347/+subscriptions



[Bug 1993480] Re: Multiple ct_clear datapath actions (openvswitch: ovs-system: deferred action limit reached, drop recirc action)

2024-02-29 Thread Edward Hope-Morley
Both of the patches in the description were backported to 2.13.8 (the
version currently in focal-updates).

** Also affects: openvswitch (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: openvswitch (Ubuntu Focal)
   Status: New => Fix Released

** Changed in: cloud-archive/ussuri
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1993480

Title:
  Multiple ct_clear datapath actions (openvswitch: ovs-system: deferred
  action limit reached, drop recirc action)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1993480/+subscriptions



[Bug 1987663] Re: cinder-volume: "Failed to re-export volume, setting to ERROR" with "tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected" on service startup

2024-02-26 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/antelope
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/caracal
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/bobcat
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/caracal
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1987663

Title:
  cinder-volume: "Failed to re-export volume, setting to ERROR" with
  "tgtadm: failed to send request hdr to tgt daemon, Transport endpoint
  is not connected" on service startup

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1987663/+subscriptions



[Bug 1947127] Re: [SRU] Some DNS extensions not working with OVN

2022-05-30 Thread Edward Hope-Morley
This is released in all but Xena, where it will be available in the Ubuntu
Cloud Archive in the upcoming 19.0.3 stable release.

** Changed in: cloud-archive/xena
   Status: Fix Released => New

** Changed in: cloud-archive/yoga
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Kinetic)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1947127

Title:
  [SRU] Some DNS extensions not working with OVN

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1947127/+subscriptions



[Bug 1965297] Re: l3ha don't set backup qg ports down

2022-05-27 Thread Edward Hope-Morley
** Description changed:

  The history to this request is as follows; bug 1916024 fixed an issue
  that subsequently had to be reverted due to a regression that it
  introduced (see bug 1927868) and the original issue can once again
  present itself in that keepalived is unable to send GARP on the qg port
  until the port is marked as UP by neutron which in loaded environments
  can sometimes take longer than keepalived will wait (e.g. when an
  l3-agent is restarted on a host that has hundreds of routers). The
  reason why qg- ports are marked as DOWN is the patch landed as part of
  bug 1859832; as I understand it there is now consensus from upstream [1]
  to revert that patch as well, and a better solution is needed to fix
  that particular issue. I have not found a bug open yet for the revert,
  hence I am opening this one.
  
  [1]
  
https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-03-04-14.03.log.txt
+ 
+ 
+ 
+ [Impact]
+ Please see LP bug description for full details but in short, this patch is a 
revert of a patch that has shown instability in the field for users of Neutron 
L3HA.
+ 
+ [Test Plan]
+   * Deploy Openstack with Neutron L3 HA enabled
+   * Create a number of HA routers
+   * Check all qrouter namespaces and ensure that the qg- port is UP in all
+ 
+ [Regression Potential]
+ Since the original patch was intended to address issues with MLDv2 it is 
possible that reverting it will re-introduce those issues and a new patch will 
need to be proposed to address that.
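For the "check all qrouter namespaces" step in the test plan above, the
port-state check could be scripted along these lines. This is a sketch:
the helper just filters `ip -br link`-style output, demonstrated here on
fabricated interface names and states; on a real host you would feed it
`ip netns exec <qrouter-namespace> ip -br link show` (which requires root):

```shell
# Print any qg- interface whose operational state is not UP.
qg_ports_not_up() {
    awk '$1 ~ /^qg-/ && $2 != "UP" { print $1 }'
}

# Sample text in "ip -br link show" format (names and MACs are made up):
sample='qg-abc123 UP fa:16:3e:00:00:01
qg-def456 DOWN fa:16:3e:00:00:02
qr-xyz789 UP fa:16:3e:00:00:03'

bad=$(printf '%s\n' "$sample" | qg_ports_not_up)
echo "$bad"
```

An empty result across all qrouter namespaces would mean the test plan's
"ensure that the qg- port is UP in all" condition holds.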

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1965297

Title:
  l3ha don't set backup qg ports down

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1965297/+subscriptions



[Bug 1797857] Re: /etc/timezone parsing doesn't deal with comments properly

2022-05-26 Thread Edward Betts
** Changed in: python-tzlocal (Ubuntu)
   Status: New => Fix Committed

** Changed in: python-tzlocal (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1797857

Title:
  /etc/timezone parsing doesn't deal with comments properly

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-tzlocal/+bug/1797857/+subscriptions



[Bug 1973204] [NEW] default journalctl plugin to disabled

2022-05-12 Thread Edward Hope-Morley
Public bug reported:

Sosreports are collecting the systemd journal in two forms which can get
very large i.e. /var/log/journal and sos_commands/logs/journalctl_*. We
would like to disable the sos_commands/logs form since the binary
version is what most/all people use.

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1973204

Title:
  default journalctl plugin to disabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1973204/+subscriptions



[Bug 1971565] Re: charm no longer works with latest mysql-router version

2022-05-04 Thread Edward Hope-Morley
Also, a new build of mysql-8.0 will soon be ready for test -
https://launchpad.net/~ubuntu-security-proposed/+archive/ubuntu/ppa/+packages

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1971565

Title:
  charm no longer works with latest mysql-router version

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-mysql-router/+bug/1971565/+subscriptions



[Bug 1971565] Re: charm no longer works with latest mysql-router version

2022-05-04 Thread Edward Hope-Morley
** Also affects: mysql-8.0 (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1971565

Title:
  charm no longer works with latest mysql-router version

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-mysql-router/+bug/1971565/+subscriptions



[Bug 1940834] Re: Horizon not show flavor details in instance and resize is not possible - Flavor ID is not supported by nova

2022-04-11 Thread Edward Hope-Morley
Impish currently has https://bugs.launchpad.net/cloud-
archive/+bug/1962582 in its queue which will need to be released before
the Xena backport (already in the unapproved queue for I) can be approved
(and this patch is not yet in an upstream PR, hence the need for an SRU).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940834

Title:
  Horizon not show flavor details in instance and resize is not possible
  - Flavor ID is not supported by nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1940834/+subscriptions



[Bug 1956754] Re: [SRU] openvswitch 2.13.5

2022-04-11 Thread Edward Hope-Morley
Package already released to the Ussuri UCA, so marked as Fix Released.

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1956754

Title:
  [SRU] openvswitch 2.13.5

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1956754/+subscriptions



[Bug 1967343] [NEW] package lvm2 2.03.07-1ubuntu1 failed to install/upgrade: installed lvm2 package post-installation script subprocess returned error exit status 1

2022-03-31 Thread James Edward King
Public bug reported:

running software updater.
not sure what impact this error message will have - no issues found so far

ProblemType: Package
DistroRelease: Ubuntu 20.04
Package: lvm2 2.03.07-1ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-104.118-generic 5.4.166
Uname: Linux 5.4.0-104-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.21
AptOrdering:
 firefox-locale-fr:amd64: Install
 NULL: ConfigurePending
Architecture: amd64
CasperMD5CheckResult: skip
Date: Thu Mar 31 19:31:27 2022
ErrorMessage: installed lvm2 package post-installation script subprocess 
returned error exit status 1
InstallationDate: Installed on 2017-11-03 (1609 days ago)
InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 (20160719)
Python3Details: /usr/bin/python3.8, Python 3.8.10, python3-minimal, 
3.8.2-0ubuntu2
PythonDetails: /usr/bin/python2.7, Python 2.7.18, python-is-python2, 2.7.17-4
RelatedPackageVersions:
 dpkg 1.19.7ubuntu3
 apt  2.0.6
SourcePackage: lvm2
Title: package lvm2 2.03.07-1ubuntu1 failed to install/upgrade: installed lvm2 
package post-installation script subprocess returned error exit status 1
UpgradeStatus: Upgraded to focal on 2020-10-02 (545 days ago)

** Affects: lvm2 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package focal need-duplicate-check

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1967343

Title:
  package lvm2 2.03.07-1ubuntu1 failed to install/upgrade: installed
  lvm2 package post-installation script subprocess returned error exit
  status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1967343/+subscriptions



[Bug 1875407] Re: package initramfs-tools 0.136ubuntu6 failed to install/upgrade: installed initramfs-tools package post-installation script subprocess returned error exit status 1

2022-02-16 Thread edward prest
While installing KiCad v6 on a brand-new Ubuntu 20.04 install on an SSD,

using

sudo add-apt-repository --yes ppa:kicad/kicad-6.0-releases
sudo apt update
sudo apt install --install-recommends kicad
# If you want demo projects
sudo apt install kicad-demos

fails with:

Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
Processing triggers for initramfs-tools (0.136ubuntu6.6) ...
update-initramfs: Generating /boot/initrd.img-5.13.0-28-generic
E: /usr/share/initramfs-tools/hooks/iscan failed with return 1.
update-initramfs: failed for /boot/initrd.img-5.13.0-28-generic with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned 
error exit status 1
Errors were encountered while processing:
 initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1875407

Title:
  package initramfs-tools 0.136ubuntu6 failed to install/upgrade:
  installed initramfs-tools package post-installation script subprocess
  returned error exit status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1875407/+subscriptions



[Bug 1960065] Re: cloud archive: apt upgrade from rocky to stein keeps back some ceph packages

2022-02-14 Thread Edward Hope-Morley
While I am aware that not everybody uses the charms to install Ceph, I
checked to see how they do this, and they are in fact using `apt install`
to perform an upgrade (e.g. the ceph-mon charm does [1] using [2]), which
I believe will not behave the same as the "standard" way mentioned in
comment #6 and is also not tested here in this bug.

[1] 
https://github.com/openstack/charm-ceph-mon/blob/05a03bd10d885d161b07e6acf47d030549562768/lib/charms_ceph/utils.py#L2227
[2] 
https://github.com/juju/charm-helpers/blob/b53f741d1c6f34f26f889d79afaad838dc14fdfa/charmhelpers/fetch/ubuntu.py#L361

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960065

Title:
  cloud archive: apt upgrade from rocky to stein keeps back some ceph
  packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1960065/+subscriptions



[Bug 1941745] Re: [sru] sos upstream 4.2

2022-01-27 Thread Edward Hope-Morley
I've verified version 4.2-1ubuntu0.20.04.1 from focal-proposed and it
looks good. I generated a sosreport with 4.1-1ubuntu0.20.04.3 and one with
the newer version, diffed the two, and the contents look sane.

** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1941745

Title:
  [sru] sos upstream 4.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1941745/+subscriptions



[Bug 1939604] Re: [SRU] Cannot create 1vcpu instance with multiqueue image, vif_type=tap (calico)

2022-01-18 Thread Edward Hope-Morley
** Changed in: cloud-archive/ussuri
   Status: Triaged => Fix Released

** Changed in: cloud-archive/victoria
   Status: Triaged => Fix Released

** Changed in: cloud-archive/wallaby
   Status: Triaged => Fix Released

** Changed in: nova (Ubuntu Focal)
   Status: Triaged => Fix Released

** Changed in: nova (Ubuntu Hirsute)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1939604

Title:
  [SRU] Cannot create 1vcpu instance with multiqueue image, vif_type=tap
  (calico)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1939604/+subscriptions



[Bug 1951210] Re: libreoffice help doesn't open in firefox (404 error on file:///tmp/lu417531j7po.tmp/NewHelp0.html)

2021-12-03 Thread Edward
Installed the Firefox snap package, while leaving the Lubuntu
21.10-provided Firefox .deb package installed.

Launched LibreOffice Writer to see what the Help function did.

*It launched a Thunderbird compose window.*

The Firefox snap package was then removed. The Thunderbird installation
is the snap package.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951210

Title:
  libreoffice help doesn't open in firefox (404 error on
  file:///tmp/lu417531j7po.tmp/NewHelp0.html)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1951210/+subscriptions



[Bug 1939604] Re: [SRU] Cannot create 1vcpu instance with multiqueue image, vif_type=tap (calico)

2021-11-29 Thread Edward Hope-Morley
Currently is proposed for focal-updates, victoria and wallaby

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1939604

Title:
  [SRU] Cannot create 1vcpu instance with multiqueue image, vif_type=tap
  (calico)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1939604/+subscriptions



[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-11-25 Thread Edward Hope-Morley
bionic-ussuri-proposed verified using [Test Case] and output is:

# apt-cache policy python3-barbican
python3-barbican:
  Installed: 1:10.1.0-0ubuntu2~cloud0
  Candidate: 1:10.1.0-0ubuntu2~cloud0
  Version table:
 *** 1:10.1.0-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/ussuri/main amd64 Packages
100 /var/lib/dpkg/status


# virsh dumpxml instance-0001| grep -A 10 "device='disk'"| grep encryption
  


$ sudo mkfs.ext4 /dev/vdb
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ac572bfb-074f-485c-8c3f-e2c97cb51d12
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

$ sudo mount /dev/vdb /mnt/
$ echo "I'm feeling luksy"| sudo tee /mnt/secure
I'm feeling luksy
$ cat /mnt/secure
I'm feeling luksy


** Tags removed: verification-needed verification-ussuri-needed
** Tags added: verification-done verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions



[Bug 1952225] Re: Allow setting vswitchd opts

2021-11-25 Thread Edward Hope-Morley
Also adding neutron-openvswitch charm since it currently writes the
/etc/default/openvswitch and does not yet support OVS_VSWITCHD_OPTIONS.

** Also affects: charm-neutron-openvswitch
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1952225

Title:
  Allow setting vswitchd opts

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1952225/+subscriptions



[Bug 1952225] [NEW] Allow setting vswitchd opts

2021-11-25 Thread Edward Hope-Morley
Public bug reported:

/etc/default/openvswitch allows setting OVS_CTL_OPTS, but that gets
applied to all daemons. If we want to set ovs-vswitchd specific options
we need a way to pass them through. ovs-ctl [1] has variables like
OVS_VSWITCHD_OPTIONS that are unconditionally set to '' with no regard
for values exported in the environment, so they cannot currently be set
in /etc/default/openvswitch. We propose the following change to [1] to
allow these overrides to be set in /etc/default/openvswitch:

340c340
< OVS_VSWITCHD_OPTIONS=
---
> OVS_VSWITCHD_OPTIONS=${OVS_VSWITCHD_OPTIONS:-''}

This will allow us to do e.g.

OVS_VSWITCHD_OPTIONS="-vnetdev_offload:dbg -vnetdev_offload_tc:dbg"

in /etc/default/openvswitch.

[1] /usr/share/openvswitch/scripts/ovs-ctl
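As a hedged illustration of why the one-line change is enough, the sketch
below compares the current unconditional reset with the proposed
default-preserving expansion. The set_defaults_old/set_defaults_new
function names are hypothetical stand-ins for the initialisation step in
ovs-ctl; only the ${VAR:-...} expansion is the point.

```shell
# Sketch only: the real initialisation lives in
# /usr/share/openvswitch/scripts/ovs-ctl.

set_defaults_old() {
    # current behaviour: the variable is always cleared, so any value
    # exported from /etc/default/openvswitch is lost
    OVS_VSWITCHD_OPTIONS=
}

set_defaults_new() {
    # proposed behaviour: keep an existing value, default to empty
    OVS_VSWITCHD_OPTIONS=${OVS_VSWITCHD_OPTIONS:-''}
}

OVS_VSWITCHD_OPTIONS="-vnetdev_offload:dbg -vnetdev_offload_tc:dbg"
set_defaults_old
echo "old: [$OVS_VSWITCHD_OPTIONS]"   # old: []

OVS_VSWITCHD_OPTIONS="-vnetdev_offload:dbg -vnetdev_offload_tc:dbg"
set_defaults_new
echo "new: [$OVS_VSWITCHD_OPTIONS]"   # new: [-vnetdev_offload:dbg -vnetdev_offload_tc:dbg]
```

Note that in an unquoted assignment the default word '' undergoes quote
removal, so an unset variable still ends up empty, exactly as before.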

** Affects: openvswitch (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1952225

Title:
  Allow setting vswitchd opts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1952225/+subscriptions



[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-11-23 Thread Edward Hope-Morley
focal-ussuri-proposed verified using [Test Case] and output is:

# apt-cache policy python3-barbican
python3-barbican:
  Installed: 1:10.1.0-0ubuntu2
  Candidate: 1:10.1.0-0ubuntu2
  Version table:
 *** 1:10.1.0-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 1:10.1.0-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
 1:10.0.0~b2~git2020020508.7b14d983-0ubuntu3 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages

# virsh dumpxml instance-0001| grep -A 10 "device='disk'"| grep encryption
  

$ sudo mkfs.ext4 /dev/vdb 
$ sudo mount /dev/vdb /mnt/
$ echo "I'm feeling luksy"| sudo tee /mnt/secure
I'm feeling luksy
$ cat /mnt/secure
I'm feeling luksy


** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions



[Bug 1951210] Re: libreoffice help doesn't open in firefox (404 error on file:///tmp/lu417531j7po.tmp/NewHelp0.html)

2021-11-17 Thread Edward
Issue also occurs in 21.10 release, using Lubuntu.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951210

Title:
  libreoffice help doesn't open in firefox (404 error on
  file:///tmp/lu417531j7po.tmp/NewHelp0.html)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1951210/+subscriptions



[Bug 1839477] Re: Firewall group stuck in PENDING_UPDATE

2021-11-09 Thread Edward Hope-Morley
To close the loop somewhat, since fwaas is deprecated in Neutron it has
been removed entirely for Victoria onwards in Ubuntu and the charms now
also have an option to disable it for earlier releases [1].

[1] https://github.com/openstack/charm-neutron-api/blob/f7d248e6e6dddc24d503c5cd1ab035fecb2a/config.yaml#L25

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1839477

Title:
  Firewall group stuck in PENDING_UPDATE

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron-fwaas/+bug/1839477/+subscriptions



[Bug 1915678] Re: [SRU] iSCSI+Multipath: Volume attachment hungs if sessiong scanning fails

2021-11-09 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915678

Title:
  [SRU] iSCSI+Multipath: Volume attachment hungs if sessiong scanning
  fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1915678/+subscriptions



[Bug 1931696] Re: ovs offload broken from neutron 16.3.0 onwards

2021-11-02 Thread Edward Hope-Morley
** Merge proposal unlinked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410049

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410648

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931696

Title:
  ovs offload broken from neutron 16.3.0 onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1931696/+subscriptions



[Bug 1940456] Re: [SRU] radosgw-admin's diagnostics are confusing if user data exists

2021-11-01 Thread Edward Hope-Morley
** Changed in: ceph (Ubuntu Hirsute)
   Status: New => Fix Released

** Changed in: cloud-archive/xena
   Status: New => Fix Released

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940456

Title:
  [SRU] radosgw-admin's diagnostics are confusing if user data exists

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1940456/+subscriptions



[Bug 1931696] Re: ovs offload broken from neutron 16.3.0 onwards

2021-10-25 Thread Edward Hope-Morley
We've also found bug 1948656, which means that toggling
explicitly_egress_direct back to False does not remove the flow that was
added while it was set to True.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931696

Title:
  ovs offload broken from neutron 16.3.0 onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1931696/+subscriptions



[Bug 1931696] Re: ovs offload broken from neutron 16.3.0 onwards

2021-10-22 Thread Edward Hope-Morley
@moshele I have re-tested without dvr-snat and these are the results:

(agent_mode=dvr, offload=true, explicitly_egress_direct=False):

  switchdev port:
ping between vms same network/separate hypervisors: pass
ping network gateway: fail
ping external address: pass

  normal port:
ping between vms same network/separate hypervisors: pass
ping network gateway: fail
ping external address: pass


Results (agent_mode=dvr, offload=true, explicitly_egress_direct=False, 1897637 
patch reverted):

  switchdev port:
ping between vms same network/separate hypervisors: pass
ping network gateway: pass
ping external address: pass

  normal port:
ping between vms same network/separate hypervisors: pass
ping network gateway: pass
ping external address: pass

So as you can see, with your patch applied in a DVR env (compute
node=dvr, network node=dvr_snat) that has offload enabled, I am unable
to ping my network gateway. I assume this is an unintended side-effect
of your patch, since the problem does not occur if I remove it.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931696

Title:
  ovs offload broken from neutron 16.3.0 onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1931696/+subscriptions



[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-10-21 Thread Edward Hope-Morley
Deployed bionic-victoria with -proposed and looks good to me:

$ juju run -a neutron-api -- dpkg -l| grep fwaas
$ juju run -a neutron-gateway -- dpkg -l| grep fwaas
$ juju run -a neutron-openvswitch -- dpkg -l| grep fwaas
$ juju run -a neutron-api -- dpkg -l| grep neutron-common
ii  neutron-common  2:17.2.1-0ubuntu1~cloud2  all   
   Neutron is a virtual network service for Openstack - common
$ openstack extension list 2>&1| egrep "fwaas|fire"
$ 

Created a VM and can ping its FIP.

** Tags removed: verification-needed verification-victoria-needed
** Tags added: verification-done verification-victoria-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934129

Title:
  disable neutron-fwaas  for >= victoria

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-guide/+bug/1934129/+subscriptions



[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-10-20 Thread Edward Hope-Morley
Deployed bionic-wallaby with -proposed and looks good to me:

$ juju run -a neutron-gateway -- dpkg -l| grep fwaas
$ juju run -a neutron-openvswitch -- dpkg -l| grep fwaas
$ juju run -a neutron-api -- dpkg -l| grep neutron-common
ii  neutron-common  2:18.1.1-0ubuntu2~cloud0  all   
   Neutron is a virtual network service for Openstack - common
$ openstack extension list 2>&1| egrep "fwaas|fire"
$ 

Created a VM and can ping its FIP.

** Tags removed: verification-wallaby-needed
** Tags added: verification-wallaby-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934129

Title:
  disable neutron-fwaas  for >= victoria

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-guide/+bug/1934129/+subscriptions



[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-10-19 Thread Edward Hope-Morley
Deployed hirsute-wallaby with -proposed and looks good to me:

$ juju run -a neutron-gateway -- dpkg -l| grep fwaas
$ juju run -a neutron-openvswitch -- dpkg -l| grep fwaas
$ juju run -a neutron-api -- dpkg -l| grep neutron-common
ii  neutron-common  2:18.1.1-0ubuntu2   
 all  Neutron is a virtual network service for 
Openstack - common
$ openstack extension list 2>&1| egrep "fwaas|fire"
$

Created a VM and can ping its FIP.

** Tags removed: verification-needed-hirsute
** Tags added: verification-done-hirsute

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934129

Title:
  disable neutron-fwaas  for >= victoria

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-guide/+bug/1934129/+subscriptions



[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-10-18 Thread Edward Hope-Morley
** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: barbican (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions



[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding

2021-10-18 Thread Edward Hope-Morley
** Description changed:

  [Impact]
  This SRU corresponds with the following story for upstream barbican
  https://storyboard.openstack.org/#!/story/2008335.
  
  The problem is some secrets were stored in plaintext and some were
  stored encoded. This resulted in the inability to decode some secrets.
  
  This is fixed by always storing secrets in plaintext and decoding
  inconsistently stored data as needed when getting secrets.
  
  [Test Case]
- TBD
+   * deploy Openstack with Barbican using Vault as a backend
+   * openstack volume type create --encryption-provider 
nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 
--encryption-key-size 256 --encryption-control-location front-end LUKS
+   * openstack volume create --size 1 --type LUKS luks_vol1
+   * ensure volume created successfully
+   * openstack volume show luks_vol1
+   * create vm and attach volume
+   * mkfs and mount then test can read/write
+ 
  
  [Where things could go wrong]
  If things were to go wrong it would probably be in the get_secret() method 
which calls _ensure_legacy_base64(). _ensure_legacy_base64() assumes that 
anything that is not a key was stored base64 encoded. Presumably this is 
correct, but there was a path added to catch a UnicodeDecodeError exception to 
handle unexpected non-base64-encoded secrets.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946787

Title:
  [SRU] Fix inconsistent encoding secret encoding

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946787/+subscriptions



[Bug 1931696] Re: ovs offload broken from neutron 16.3.0 onwards

2021-10-15 Thread Edward Hope-Morley
@dragon889 thanks for the info. To be clear, the patch we are reverting
here is not the patch you reference that introduced
explicitly_egress_direct, but a subsequent patch that alters flows for
offloaded ports when explicitly_egress_direct=False and appears to have
unintended side-effects.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931696

Title:
  ovs offload broken from neutron 16.3.0 onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1931696/+subscriptions



[Bug 1911360] Re: Nvidia-Graphics-Drivers-460 Causes System To Not Boot

2021-10-14 Thread Edward Hope-Morley
** Project changed: maas-deployer => maas

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1911360

Title:
  Nvidia-Graphics-Drivers-460 Causes System To Not Boot

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1911360/+subscriptions



[Bug 1931696] Re: ovs offload broken from neutron 16.3.0 onwards

2021-10-12 Thread Edward Hope-Morley
** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410061

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410059

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410055

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410060

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410057

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410054

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/neutron/+git/neutron/+merge/410056

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931696

Title:
  ovs offload broken from neutron 16.3.0 onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1931696/+subscriptions



[Bug 1934912] Re: Router update fails for ports with allowed_address_pairs containg IP range in CIDR notation

2021-09-30 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934912

Title:
  Router update fails for ports with allowed_address_pairs containg IP
  range in CIDR  notation

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1934912/+subscriptions



[Bug 1915480] Re: DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

2021-09-27 Thread Edward Hope-Morley
ussuri 16.4.1 will be included in
https://bugs.launchpad.net/cloud-archive/+bug/1943712 and victoria
17.2.1 in https://bugs.launchpad.net/cloud-archive/+bug/1943711

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915480

Title:
  DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1915480/+subscriptions



[Bug 1915480] Re: DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

2021-09-17 Thread Edward Hope-Morley
** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915480

Title:
  DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1915480/+subscriptions



[Bug 1915480] Re: DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

2021-09-16 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1915480

Title:
  DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1915480/+subscriptions



[Bug 1934129] Re: disable neutron-fwaas for >= victoria

2021-09-10 Thread Edward Hope-Morley
The neutron package still defines fwaas as a dependency of
neutron-l3-agent, which is blocking it from being removed by the charm.
We need to fix that, so I will add the package to this bug too.

https://git.launchpad.net/~ubuntu-openstack-
dev/ubuntu/+source/neutron/tree/debian/control#n152

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934129

Title:
  disable neutron-fwaas  for >= victoria

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-guide/+bug/1934129/+subscriptions



[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.

2021-09-07 Thread Edward Hope-Morley
There are a set of stable release updates pending which will include
this point release - see bug 1941048

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1879798

Title:
  designate-manage pool update doesn't reflects targets master dns
  servers into zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1879798/+subscriptions



[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.

2021-09-06 Thread Edward Hope-Morley
@nicolasbock this needs to be SRU'd to Focal first, before bionic-ussuri

** Changed in: cloud-archive/ussuri
   Status: In Progress => New

** Changed in: cloud-archive/ussuri
 Assignee: Nicolas Bock (nicolasbock) => (unassigned)

** Changed in: cloud-archive/train
   Status: In Progress => New

** Changed in: cloud-archive/train
 Assignee: Nicolas Bock (nicolasbock) => (unassigned)

** Changed in: cloud-archive/stein
   Status: In Progress => New

** Changed in: cloud-archive/stein
 Assignee: Nicolas Bock (nicolasbock) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1879798

Title:
  designate-manage pool update doesn't reflects targets master dns
  servers into zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1879798/+subscriptions



[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2021-09-01 Thread Edward Hope-Morley
Verified rocky-proposed using [Test Plan] with output as follows:

# apt-cache policy nova-common
nova-common:
  Installed: 2:18.3.0-0ubuntu1~cloud3
  Candidate: 2:18.3.0-0ubuntu1~cloud3
  Version table:
 *** 2:18.3.0-0ubuntu1~cloud3 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/rocky/main amd64 Packages
100 /var/lib/dpkg/status
 2:17.0.13-0ubuntu3 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/main 
amd64 Packages
 2:17.0.10-0ubuntu2.1 500
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 
Packages
 2:17.0.1-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/main amd64 
Packages

I also tested by manually deleting the network_info for a VM and then
waiting for the periodic task to run -
https://pastebin.ubuntu.com/p/7gmZQsvC8H/

** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1751923

Title:
  [SRU]_heal_instance_info_cache periodic task bases on port list from
  nova db, not from neutron server

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1751923/+subscriptions



[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.

2021-08-25 Thread Edward Hope-Morley
@niedbalski to start the backport sru we will need an updated sru
template in the description of this bug

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1879798

Title:
  designate-manage pool update doesn't reflects targets master dns
  servers into zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1879798/+subscriptions



[Bug 1871721] Re: Second Monitor on HDMI blank screen/blink screen with Nvidia + Intel & Ubuntu 20.04? (ASUS Laptop)

2021-08-20 Thread Edward Hildum
I am seeing the same problem on a Dell Precision 7750.

Ubuntu 20.04LTS
kernel: 5.11.0-27-generic #29~20.04.1-Ubuntu SMP

graphics: VGA compatible controller: NVIDIA Corporation TU106GLM [Quadro RTX 
3000 Mobile / Max-Q] (rev a1)
plus Intel i915
processor: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz

With nvidia-prime installed, running nvidia-settings presents three options:
NVIDIA (Performance Mode), NVIDIA On-Demand, Power Save
In Performance Mode, only the external HDMI port is active.  Booting with no 
external monitor produces a blank laptop LCD display, but the external monitor 
is active.  Using System Settings / Display, only the external monitor is 
detected.  The keyboard secondary function key Fn F8 has no effect.  Laptop 
screen is blank if booted without an external monitor connected.

In On-Demand mode, only the laptop display is active and the external
monitor has no output.  System Settings / Display detects both monitors.
The Fn F8 key pops up a display mode selection tool to select how the
screens will be configured. If the external-monitor-only mode is
selected, the laptop screen is blank and there is no display on the
external monitor.  All other selections have no effect.

In Power Save mode, only the laptop display is active and the external
monitor has no output. System Settings / Display detects only the laptop
monitor.  The keyboard secondary function key Fn F8 has no effect.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1871721

Title:
  Second Monitor on HDMI blank screen/blink screen with Nvidia + Intel &
  Ubuntu 20.04? (ASUS Laptop)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1871721/+subscriptions



[Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-20 Thread Edward Hope-Morley
** Changed in: neutron
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1927868/+subscriptions



[Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-20 Thread Edward Hope-Morley
@christian-rohmann The problem essentially boils down to the exception
at [1] being raised because prior to that [2] gets called as a result of
a timeout exception but the code is not actually catching the exception.
This was traced to be the result of a privileged call being used as
argument to [3] from [4] (which is in the patch we reverted).

So the *real* problem with privsep code is that if an unexpected
exception is raised, it does not get caught, thus killing the reader
thread and/or never releasing the lock. There is a separate bug
[5] which was raised about the same issue that led to the fix [6] being
added to privsep which, crucially, replaces the raised AttributeError
with a continue thus stopping it from killing the reader thread. I have
not yet tested whether this actually fixes all the agent issues we have
seen. While we should do this, there is still room for improvement in
the privsep code, namely [7], which should have an except clause that,
if nothing else, prints a log message to say that the message timed out.

[1] 
https://github.com/openstack/oslo.privsep/blob/6d41ef9f91b297091aa37721ba10456142fc5107/oslo_privsep/comm.py#L141
[2] 
https://github.com/openstack/oslo.privsep/blob/6d41ef9f91b297091aa37721ba10456142fc5107/oslo_privsep/comm.py#L174
[3] 
https://github.com/openstack/neutron/blob/d4b1b4a0729c187551e1fa2b2855db136456d496/neutron/common/utils.py#L689
[4] 
https://github.com/openstack/neutron/blob/d8f1f1118d3cde0b5264220836a250f14687893e/neutron/agent/linux/interface.py#L328
[5] https://bugs.launchpad.net/neutron/+bug/1930401
[6] 
https://github.com/openstack/oslo.privsep/commit/f7f3349d6a4def52f810ab1728879521c12fe2d0
[7] 
https://github.com/openstack/oslo.privsep/blob/f7f3349d6a4def52f810ab1728879521c12fe2d0/oslo_privsep/comm.py#L189
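The failure mode described above, an uncaught exception terminating a daemon's reader thread so that later replies are never processed, can be reduced to a small sketch. The queue and thread structure below is illustrative only, not oslo.privsep's actual implementation; it simply shows why replacing the raise with a continue keeps the reader alive:

```python
import queue
import threading

def reader_loop(incoming, results, robust):
    """Drain messages; a malformed one raises unless we 'continue'."""
    while True:
        msg = incoming.get()
        if msg is None:            # shutdown sentinel
            return
        try:
            msg_id, payload = msg  # malformed msg raises ValueError here
            results[msg_id] = payload
        except ValueError:
            if robust:
                continue           # skip the bad message, keep thread alive
            raise                  # kills the reader thread silently

def run(robust):
    incoming, results = queue.Queue(), {}
    t = threading.Thread(target=reader_loop, args=(incoming, results, robust))
    t.start()
    incoming.put("garbage")        # malformed message
    incoming.put((1, "reply"))     # a good message that arrives afterwards
    incoming.put(None)
    t.join(timeout=2)
    return results

print(run(robust=False))  # reader died on the bad message: {}
print(run(robust=True))   # reader survived: {1: 'reply'}
```

With robust=False the good reply queued after the bad message is never processed, which is analogous to callers blocking until their timeout expires.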

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1927868/+subscriptions



[Bug 1928010] Re: Occasionally crashes in _relocate() on arm64

2021-08-13 Thread Edward Vielmetti
What are the plans for a release to bionic?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1928010

Title:
  Occasionally crashes in _relocate() on arm64

To manage notifications about this bug go to:
https://bugs.launchpad.net/shim/+bug/1928010/+subscriptions



[Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-04 Thread Edward Hope-Morley
Verified bionic-ussuri/proposed using [Test Case]

** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1927868/+subscriptions



[Bug 1900851] Re: [SRU] Cannot Create Port with Fixed IP Address

2021-08-03 Thread Edward Hope-Morley
Managed to get the Launchpad branch to upload, so deleting the debdiff.

** Merge proposal linked:
   
https://code.launchpad.net/~hopem/ubuntu/+source/horizon/+git/horizon/+merge/406592

** Patch removed: "lp1900851-focal.debdiff"
   
https://bugs.launchpad.net/horizon/+bug/1900851/+attachment/5515565/+files/lp1900851-focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  [SRU] Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions



[Bug 1900851] Re: [SRU] Cannot Create Port with Fixed IP Address

2021-08-03 Thread Edward Hope-Morley
Having issues uploading my branch of ~ubuntu-openstack-
dev/ubuntu/+source/horizon to Launchpad, so using a debdiff for now.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  [SRU] Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions



[Bug 1900851] Re: [SRU] Cannot Create Port with Fixed IP Address

2021-08-03 Thread Edward Hope-Morley
** Patch added: "lp1900851-focal.debdiff"
   
https://bugs.launchpad.net/cloud-archive/ussuri/+bug/1900851/+attachment/5515565/+files/lp1900851-focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  [SRU] Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions



[Bug 1900851] Re: Cannot Create Port with Fixed IP Address

2021-08-03 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ Fixes python 3.8 compatibility issue with port creation code.
+ 
+ [Test Plan]
+  * deploy openstack ussuri
+  * create a network port on horizon
+  * ensure creation successful
+ 
+ [Regression Potential]
+ No unexpected behaviour is anticipated from this patch since it is minor and 
does not impact any code outside of the feature that it fixes.
+ 
+ =
+ 
  With Ussuri on Ubuntu 20.04, I can create port with fixed IP address by
  CLI but I cannot do the same by Horizon GUI. I find some error like
  following on /var/log/apache2/error.log
  
  openstack_dashboard.dashboards.project.networks.ports.workflows Failed
  to create a port for network 91f04dfb-7f69-4050-8b3b-142ee555ae55:
  dictionary keys changed during iteration
  
  On further inspection, I found that horizon never sends the create-port
  request to neutron. So I think it is a horizon problem. Is this the expected
  result or is this a horizon bug? Is this related to policy?
  
  Following debug logs maybe related too.
  
  [Wed Oct 21 17:48:06.123807 2020] [wsgi:error] [pid 3095280:tid 
140002354386688] [remote 192.168.202.12:60886] DEBUG neutronclient.client GET 
call to neutron for http://10.7.55.18:9696/v2.0/extensions used request id 
req-95db8d1f-387b-492b-aff6-8238f09e504d
  [Wed Oct 21 17:48:06.125925 2020] [wsgi:error] [pid 3095280:tid 
140002354386688] [remote 192.168.202.12:60886] DEBUG django.template Exception 
while resolving variable 'add_to_field' in template 
'horizon/common/_workflow.html'.
  [Wed Oct 21 17:48:06.126064 2020] [wsgi:error] [pid 3095280:tid 
140002354386688] [remote 192.168.202.12:60886] 
django.template.base.VariableDoesNotExist: Failed lookup for key [add_to_field] 
in [{'True': True, 'False': False, 'None': None}, {'csrf_token': 
._get_val at 0x7f54c8e30f70>>, 
'LANGUAGES': (('cs', 'Czech'), ('de', 'German'), ('en', 'English'), ('en-au', 
'Australian English'), ('en-gb', 'British English'), ('eo', 'Esperanto'), 
('es', 'Spanish'), ('fr', 'French'), ('id', 'Indonesian'), ('it', 'Italian'), 
('ja', 'Japanese'), ('ko', 'Korean (Korea)'), ('pl', 'Polish'), ('pt-br', 
'Portuguese (Brazil)'), ('ru', 'Russian'), ('tr', 'Turkish'), ('zh-cn', 
'Simplified Chinese'), ('zh-tw', 'Chinese (Taiwan)')), 'LANGUAGE_CODE': 'en', 
'LANGUAGE_BIDI': False, 'request': , 
'MEDIA_URL': '/horizon/media/', 'STATIC_URL': '/horizon/static/', 'messages': 
, 'DEFAULT_MESSAGE_LEVELS': {'DEBUG': 10, 'INFO': 20, 'SUCCESS': 
25, 'WARNING': 30, 'ERROR': 40}, 'HORIZON_CONFIG': , 'True': True, 'False': False, 'authorized_tenants': 
[http://10.7.55.18:5000/v3/projects/84725e39c7a9462495e2cb6ae0cd111b'}, 
name=admin, options={}, parent_id=default, tags=[]>], 'keystone_providers': 
{'support': False}, 'regions': {'support': False, 'current': {'endpoint': 
'http://10.7.55.18:5000/v3/', 'name': 'Default Region'}, 'available': []}, 
'WEBROOT': '/horizon/', 'USER_MENU_LINKS': [{'name': 'OpenStack RC File', 
'icon_classes': ['fa-download'], 'url': 'horizon:project:api_access:openrc'}], 
'LOGOUT_URL': '/horizon/auth/logout/', 'profiler_enabled': False, 'JS_CATALOG': 
'horizon+openstack_dashboard'}, {}, {'network_id': 
'91f04dfb-7f69-4050-8b3b-142ee555ae55', 'view': 
, 'modal_backdrop': 'static', 'workflow': , 'REDIRECT_URL': None, 'layout': ['modal'], 'modal': True}, 
{'entry_point': 'create_info'}]

** Summary changed:

- Cannot Create Port with Fixed IP Address
+ [SRU] Cannot Create Port with Fixed IP Address

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  [SRU] Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions



[Bug 1900851] Re: Cannot Create Port with Fixed IP Address

2021-08-03 Thread Edward Hope-Morley
** Changed in: cloud-archive/ussuri
   Status: Fix Committed => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions



[Bug 1885169] Re: Some arping version only accept integer number as -w argument

2021-07-16 Thread Edward Hope-Morley
For context this somewhat explains what changes occurred in iputils to
lead to this issue - https://github.com/iputils/iputils/issues/267

** Bug watch added: github.com/iputils/iputils/issues #267
   https://github.com/iputils/iputils/issues/267

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1885169

Title:
  Some arping version only accept integer number as -w argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1885169/+subscriptions



[Bug 1885169] Re: Some arping version only accept integer number as -w argument

2021-07-16 Thread Edward Hope-Morley
I just tested arping on Focal and I don't see this issue:

ubuntu@arping:~$ sudo arping -U -I eth0 -c 1 -w 1.5 10.48.98.1
ARPING 10.48.98.1
42 bytes from fe:10:17:12:6a:9c (10.48.98.1): index=0 time=14.516 usec

--- 10.48.98.1 statistics ---
1 packets transmitted, 1 packets received,   0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.015/0.015/0.015/0.000 ms
ubuntu@arping:~$ dpkg -l| grep arping
ii  arping 2.20-1amd64  
  sends IP and/or ARP pings (to the MAC address)

Not sure what I'm missing.
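For callers that cannot control which iputils build of arping is installed, one defensive workaround is to round a fractional -w deadline up to a whole number of seconds before invoking the tool. The helper below is hypothetical (not part of devstack or iputils); it only sketches that coercion:

```python
import math
import shlex

def arping_cmd(iface, target, deadline, count=1):
    """Build an arping command line, coercing a fractional -w deadline to
    an integer (rounded up) for arping builds that reject float values."""
    w = deadline if float(deadline).is_integer() else math.ceil(deadline)
    return shlex.split(
        f"arping -U -I {iface} -c {count} -w {int(w)} {target}")

print(arping_cmd("eth0", "10.48.98.1", 1.5))
# ['arping', '-U', '-I', 'eth0', '-c', '1', '-w', '2', '10.48.98.1']
```

Rounding up rather than down preserves at least the requested wait time on the stricter arping versions.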

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1885169

Title:
  Some arping version only accept integer number as -w argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1885169/+subscriptions



[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2021-06-28 Thread Edward Hope-Morley
Restored the bug description to its original format and updated SRU
info.

** Description changed:

  [Impact]
  
  * During periodic task _heal_instance_info_cache the instance_info_caches are 
not updated using instance port_ids taken from neutron, but from nova db.
  * This causes existing VMs to lose their network interfaces after reboot.
  
  [Test Plan]
  
  * This bug is reproducible on Bionic/Queens clouds.
  
  1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
  2) Run the following script: https://paste.ubuntu.com/p/c4VDkqyR2z/
  3) If the script finishes with "Port not found" , the bug is still present.
  
  [Where problems could occur]
  
- ** No specific regression potential has been identified.
- ** Check the other info section ***
- 
- [Other Info]
+ Instances created prior to the Openstack Newton release that have more
+ than one interface will not have associated information in the
+ virtual_interfaces table that is required to repopulate the cache with
+ interfaces in the same order they were attached prior. In the unlikely
+ event that this occurs and you are using Openstack release Queens or
+ Rocky, it will be necessary to manually populate this table.
+ Openstack Stein has a patch that adds support for generating this data.
+ Since as things stand the guest will be unable to identify its network
+ information at all in the event the cache gets purged, and given the
+ hopefully low risk that a VM was created prior to Newton, we expect the
+ potential for this regression to be very low.
+ 
+ --
+ 
+ Description
+ ===
+ 
+ During periodic task _heal_instance_info_cache the
+ instance_info_caches are not updated using instance port_ids taken
+ from neutron, but from nova db.
+ 
+ Sometimes, perhaps because of some race-condition, its possible to
+ lose some ports from instance_info_caches. Periodic task
+ _heal_instance_info_cache should clean this up (add missing records),
+ but in fact it's not working this way.
  
  How it looks now?
  =
  
  _heal_instance_info_cache during crontask:
  
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525
  
  is using network_api to get instance_nw_info (instance_info_caches):
  
-             try:
-                 # Call to network API to get instance info.. this will
-                 # force an update to the instance's info_cache
-                 self.network_api.get_instance_nw_info(context, instance)
+   try:
+   # Call to network API to get instance info.. this will
+   # force an update to the instance's info_cache
+   self.network_api.get_instance_nw_info(context, instance)
  
  self.network_api.get_instance_nw_info() is listed below:
  
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377
  
  and it uses _build_network_info_model() without networks and port_ids
  parameters (because we're not adding any new interface to instance):
  
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356
  
  Next: _gather_port_ids_and_networks() generates the list of instance
  networks and port_ids:
  
-       networks, port_ids = self._gather_port_ids_and_networks(
-                 context, instance, networks, port_ids, client)
+ networks, port_ids = self._gather_port_ids_and_networks(
+   context, instance, networks, port_ids, client)
  
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390
  
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393
  
- As we see that _gather_port_ids_and_networks() takes the port list from
- DB:
+ As we see that _gather_port_ids_and_networks() takes the port list
+ from DB:
  
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176
  
  And that's it. When we lose a port it's not possible to add it again with this
periodic task.
  The only way is to clean device_id field in neutron port object and re-attach 
the interface using `nova interface-attach`.
  
- When the interface is missing and there is no port configured on compute
- host (for example after compute reboot) - interface is not added to
- instance and from neutron point of 

[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.

2021-06-28 Thread Edward Hope-Morley
Not currently available in an upstream point release prior to Victoria:

$ git branch -r --contains b967e9f706373f1aad6db882c2295fbbe1fadfc9
  gerrit/stable/ussuri
$ git tag --contains b967e9f706373f1aad6db882c2295fbbe1fadfc9
$

** Changed in: cloud-archive/victoria
   Status: Fix Committed => New

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1879798

Title:
  designate-manage pool update doesn't reflects targets master dns
  servers into zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1879798/+subscriptions


[Bug 1832021] Re: Checksum drop of metadata traffic on isolated networks with DPDK

2021-06-28 Thread Edward Hope-Morley
** Tags added: verification-needed-queens

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1832021

Title:
  Checksum drop of metadata traffic on isolated networks with DPDK

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1832021/+subscriptions


[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2021-06-28 Thread Edward Hope-Morley
@coreycb I think we have everything we need to proceed with this SRU
now. Since Queens is the oldest release currently supported on Ubuntu
and support for populating vif attach ordering required to rebuild the
cache has been available since Newton, I think the risk of anyone being
impacted is very small. VMs created prior to Newton would need the patch
[1] and eventually [2] backported from Stein, but I don't see them as
essential, and given the impact of not having this fix ASAP I think it
supersedes those, which we can handle separately.

[1] 
https://github.com/openstack/nova/commit/3534471c578eda6236e79f43153788c4725a5634
[2] https://bugs.launchpad.net/nova/+bug/1825034

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1751923

Title:
  [SRU]_heal_instance_info_cache periodic task bases on port list from
  nova db, not from neutron server

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1751923/+subscriptions


[Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-06-25 Thread Edward Hope-Morley
I have just re-tested all of this as follows:

 * deployed Openstack Train (on Bionic i.e. 2:15.3.3-0ubuntu1~cloud0) with 3 
gateway nodes
 * created one HA router, one vm with one fip
 * can ping fip and confirm single active router
 * upgraded neutron-server (api) to 16.3.0-0ubuntu3~cloud0 (ussuri), stopped 
server, neutron-db-manage upgrade head, start server
 * ping still works
 * upgraded all compute hosts to 16.3.0-0ubuntu3~cloud0, observed vrrp failover 
and short interruption
 * ping still works
 * upgraded one compute to 2:16.3.2-0ubuntu3~cloud0
 * ping still works
 * upgraded neutron-server (api) to 2:16.3.2-0ubuntu3~cloud0, stopped server, 
neutron-db-manage upgrade head (observed no migrations), start server
 * ping still works
 * upgraded remaining compute to 2:16.3.2-0ubuntu3~cloud0
 * ping still works

I noticed that after upgrading to 2:16.3.2-0ubuntu3~cloud0 my interfaces
went from:

root@juju-f0dfb3-lp1927868-6:~# ip netns exec 
qrouter-8b5e4130-6688-45c5-bc8e-ee3781d8719c ip a s; pgrep -alf keepalived| 
grep -v state  
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000  
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00   
 
inet 127.0.0.1/8 scope host lo  
 
   valid_lft forever preferred_lft forever  
 
inet6 ::1/128 scope host
 
   valid_lft forever preferred_lft forever  
 
2: ha-bd1bd9ab-f8@if11:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:6a:ae:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0   
 
inet 169.254.195.91/18 brd 169.254.255.255 scope global ha-bd1bd9ab-f8  
 
   valid_lft forever preferred_lft forever  
 
inet6 fe80::f816:3eff:fe6a:ae8c/64 scope link   
 
   valid_lft forever preferred_lft forever  
 
3: qg-9e134c20-1f@if13:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:c4:cc:84 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: qr-a125b622-2d@if14:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:0b:d3:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0   
 

to:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000  
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00   
 
inet 127.0.0.1/8 scope host lo   
   valid_lft forever preferred_lft forever   
inet6 ::1/128 scope host
 
   valid_lft forever preferred_lft forever
2: ha-bd1bd9ab-f8@if11:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:6a:ae:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.195.91/18 brd 169.254.255.255 scope global ha-bd1bd9ab-f8
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe6a:ae8c/64 scope link 
   valid_lft forever preferred_lft forever
3: qg-9e134c20-1f@if13:  mtu 1500 qdisc noqueue state DOWN 
group default qlen 1000
link/ether fa:16:3e:c4:cc:84 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: qr-a125b622-2d@if14:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:0b:d3:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0

And it remained like that until the router went vrrp master:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00   
 
inet 127.0.0.1/8 scope host lo  
 
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever  
 
2: ha-bd1bd9ab-f8@if11:  mtu 1500 qdisc 

[Bug 1900851] Re: Cannot Create Port with Fixed IP Address

2021-06-25 Thread Edward Hope-Morley
** Changed in: cloud-archive/xena
   Status: New => Fix Released

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

** Changed in: cloud-archive/victoria
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Impish)
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Hirsute)
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Groovy)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1900851] Re: Cannot Create Port with Fixed IP Address

2021-06-25 Thread Edward Hope-Morley
** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Groovy)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900851

Title:
  Cannot Create Port with Fixed IP Address

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1900851/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-06-24 Thread Edward Hope-Morley
I've had a go at deploying Train and upgrading Neutron to latest Ussuri
and I see the same issue. Looking closer what I see is that post-upgrade
Neutron l3-agent has not spawned any keepalived processes hence why no
router goes active. When the agent is restarted it would normally
receive two router updates; first one to spawn_state_change_monitor and
a second to spawn keepalived. In my non-working nodes the second router
update is never received by the l3-agent. Here is an example of a
working agent https://pastebin.ubuntu.com/p/PFb594wkhB vs. a non-working
one https://pastebin.ubuntu.com/p/MtDNrXmvZB/.

I tested restarting all agents and this did not fix things. I then
rebooted one of my upgraded nodes and it resolved the issue for that
node i.e. two updates received and both spawned then router goes active.
I also noticed that on a non-rebooted node, following ovs agent restart
I see https://pastebin.ubuntu.com/p/2n4KxBv8S2/ which again is not
resolved by an agent restart and is fixed by the node reboot. This
latter issue is described on old bugs e.g.
https://bugs.launchpad.net/neutron/+bug/1625305

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1927868/+subscriptions


[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor

2021-06-24 Thread Edward Hope-Morley
This has been released to the ussuri cloud archive (which is currently
on 2:16.3.2-0ubuntu3~cloud0) so marking Fix Released.

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929832

Title:
  stable/ussuri py38 support for keepalived-state-change monitor

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1929832/+subscriptions


[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-11 Thread Edward Hope-Morley
Verified bionic-ussuri-proposed with output:

# apt-cache policy neutron-common 
neutron-common:
  Installed: 2:16.3.2-0ubuntu3~cloud0
  Candidate: 2:16.3.2-0ubuntu3~cloud0
  Version table:
 *** 2:16.3.2-0ubuntu3~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/ussuri/main amd64 Packages
100 /var/lib/dpkg/status
 2:16.3.1-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
 2:12.1.1-0ubuntu7 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
 2:12.0.1-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

I created a two-port sriov vm on bionic-ussuri and it came up in
seconds.

** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-08 Thread Edward Hope-Morley
I think the issue here is basically that the new code relies on [1] to
get the number of worker threads, but that count does not include things
like RPC workers.

https://github.com/openstack/neutron/blob/df94641b43964834ba14c69eb4fb17cc45349117/neutron/service.py#L313

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

[Bug 1931244] Re: ovn sriov broken from ussuri onwards

2021-06-08 Thread Edward Hope-Morley
I believe the following bug may also be related -
https://bugs.launchpad.net/neutron/+bug/1927977

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931244

Title:
  ovn sriov broken from ussuri onwards

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1931244/+subscriptions

[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails

2021-06-07 Thread Edward Hope-Morley
** Changed in: nova/rocky
   Status: Fix Committed => New

** Changed in: nova/queens
   Status: Fix Committed => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1892361

Title:
  SRIOV instance gets type-PF interface, libvirt kvm fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1892361/+subscriptions

[Bug 1908375] Re: ceph-volume lvm list calls blkid numerous times for differrent devices

2021-06-07 Thread Edward Hope-Morley
Uploaded to the bionic unapproved queue on 11 May -
https://launchpadlibrarian.net/538166201/ceph_12.2.13-0ubuntu0.18.04.8_source.changes
- and still awaiting SRU team approval.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1908375

Title:
  ceph-volume lvm list  calls blkid numerous times for
  differrent devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1908375/+subscriptions

[Bug 1908375] Re: ceph-volume lvm list calls blkid numerous times for differrent devices

2021-06-07 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1908375

Title:
  ceph-volume lvm list  calls blkid numerous times for
  differrent devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1908375/+subscriptions

[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor

2021-06-03 Thread Edward Hope-Morley
bionic-ussuri-proposed verified using [Test Plan] and with the following
output:

root@juju-9c4cdb-lp1929832-verify-6:~# apt-cache policy neutron-common 
neutron-common:
  Installed: 2:16.3.2-0ubuntu2~cloud0
  Candidate: 2:16.3.2-0ubuntu2~cloud0
  Version table:
 *** 2:16.3.2-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/ussuri/main amd64 Packages
100 /var/lib/dpkg/status
 2:12.1.1-0ubuntu7 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/main 
amd64 Packages
 2:12.0.1-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/main amd64 
Packages

root@juju-9c4cdb-lp1929832-verify-6:~# grep py38 
/etc/neutron/rootwrap.d/l3.filters 
kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9
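For background, each rootwrap KillFilter line names the user to run as, the executable the target process must be running, and the signals that may be sent; a kill command is authorized only when all of those match. Below is a rough sketch of that matching logic - it is illustrative only, not the actual oslo.rootwrap implementation, and the resolve_exe parameter is an artificial seam so the /proc lookup can be stubbed:

```python
import os

def kill_filter_matches(filter_args, command, resolve_exe=None):
    # filter_args mirrors the config line's arguments, e.g.
    # ('python3.8', '-15', '-9') for kill_keepalived_monitor_py38.
    allowed_exe, *allowed_signals = filter_args
    resolve = resolve_exe or (lambda pid: os.readlink('/proc/%s/exe' % pid))

    if len(command) == 3 and command[0] == 'kill':
        signal, pid = command[1], command[2]
    elif len(command) == 2 and command[0] == 'kill':
        signal, pid = '-15', command[1]   # bare 'kill <pid>' sends SIGTERM
    else:
        return False

    if signal not in allowed_signals:
        return False                      # e.g. 'kill -1' would be rejected

    try:
        target = resolve(pid)             # executable of the target process
    except OSError:
        return False
    return os.path.basename(target) == allowed_exe
```

With the py38 filter line absent, a `kill -15 <pid-of-python3.8>` request matches no filter at all, which is consistent with the "Unauthorized command ... (no filter matched)" error reported in this bug.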


** Tags removed: verification-needed verification-ussuri-needed
** Tags added: verification-done verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929832

Title:
  stable/ussuri py38 support for keepalived-state-change monitor

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1929832/+subscriptions

[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor

2021-05-28 Thread Edward Hope-Morley
focal-proposed verified using [Test Plan] and with the following output:

# apt-cache policy neutron-common
neutron-common:
  Installed: 2:16.3.2-0ubuntu2
  Candidate: 2:16.3.2-0ubuntu2
  Version table:
 *** 2:16.3.2-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-proposed/main 
amd64 Packages
100 /var/lib/dpkg/status
 2:16.3.1-0ubuntu1.1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main 
amd64 Packages
 2:16.0.0~b3~git2020041516.5f42488a9a-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 
Packages

$ grep kill_keepalived_monitor_py38 /etc/neutron/rootwrap.d/l3.filters 
kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9


** Description changed:

  [Impact]
+ Please see original bug description. Without this fix, the neutron-l3-agent 
is unable to tear down an HA router and leaves it partially configured on every 
node it was running on.
+ 
  [Test Case]
- The victoria release of Openstack received patch [1] which allows the 
neutron-l3-agent to SIGKILL or SIGTERM the keepalived-state-change monitor when 
running under py38. This patch is needed in Ussuri for users running with py38 
so we need to backport it.
+ * deploy Openstack ussuri on Ubuntu Focal
+ * enable L3 HA
+ * create a router and vm on network attached to router
+ * disable or delete the router and check for errors like the one below
+ * ensure that the following line exists in /etc/neutron/rootwrap.d/l3.filters:
+ 
+ kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9
+ 
+ -
+ 
+ The victoria release of Openstack received patch [1] which allows the
+ neutron-l3-agent to SIGKILL or SIGTERM the keepalived-state-change
+ monitor when running under py38. This patch is needed in Ussuri for
+ users running with py38 so we need to backport it.
  
  The consequence of not having this is that you get the following when
  you delete or disable a router:
  
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
[req-8c69af29-8f9c-4721-9cba-81ff4e9be92c - 9320f5ac55a04fb280d9ceb0b1106a6e - 
- -] Error while deleting router ab63ccd8-1197-48d0-815e-31adc40e5193: 
neutron_lib.exceptions.ProcessExecutionError: Exit code: 99; Stdin: ; Stdout: ; 
Stderr: /usr/bin/neutron-rootwrap: Unauthorized command: kill -15 2516433 (no 
filter matched)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 512, in 
_safe_router_removed
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self._router_removed(ri, router_id)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 548, in 
_router_removed
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self.router_info[router_id] = ri
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self.force_reraise()
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/six.py", line 703, in reraise
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent raise value
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 545, in 
_router_removed
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent ri.delete()
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/dvr_edge_router.py", line 236, 
in delete
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
super(DvrEdgeRouter, self).delete()
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/ha_router.py", line 492, in 
delete
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self.destroy_state_change_monitor(self.process_monitor)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/ha_router.py", line 438, in 
destroy_state_change_monitor
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
pm.disable(sig=str(int(signal.SIGTERM)))
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 

[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor

2021-05-27 Thread Edward Hope-Morley
stable/ussuri backport -
https://review.opendev.org/c/openstack/neutron/+/793417

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929832

Title:
  stable/ussuri py38 support for keepalived-state-change monitor

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1929832/+subscriptions

[Bug 1907686] Re: ovn: instance unable to retrieve metadata

2021-05-24 Thread Edward Hope-Morley
** Changed in: cloud-archive/victoria
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907686

Title:
  ovn: instance unable to retrieve metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686/+subscriptions

[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2021-05-20 Thread Edward Hope-Morley
Since Queens is populating the virtual_interfaces table as standard, I
think we should proceed with this SRU -
https://pastebin.ubuntu.com/p/BdCPsVKGk5/ - since it will provide a
clean fix for Queens clouds.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1751923

Title:
  [SRU]_heal_instance_info_cache periodic task bases on port list from
  nova db, not from neutron server

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1751923/+subscriptions

[Bug 1927729] Re: Cinder packages should have sysfsutils as a dependency

2021-05-11 Thread Edward Hope-Morley
** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** No longer affects: python-cinderclient

** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** No longer affects: python-cinderclient

** Also affects: charm-cinder
   Importance: Undecided
   Status: New

** Also affects: charm-nova-compute
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1927729

Title:
  Cinder packages should have sysfsutils as a dependency

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-cinder/+bug/1927729/+subscriptions

[Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-11 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1849098

Title:
  ovs agent is stuck with OVSFWTagNotFound when dealing with unbound
  port

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1849098/+subscriptions

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-04-27 Thread Edward Hope-Morley
** Description changed:

  Patch [1] introduced a DB unique-constraint binding for the L3 agent
  gateway. In some extreme cases the DvrFipGatewayPortAgentBinding is in
  the DB while the gateway port is not. The current code path only checks
  for the binding's existence, which passes a "None" port to the following
  code path and results in an AttributeError.
  
  [1] https://review.opendev.org/#/c/702547/
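  As a purely illustrative reduction of the failure mode (hypothetical
  names and data structures; the real logic lives in neutron/db/l3_dvr_db.py):

```python
def fip_agent_gw_port(bindings, ports, network_id, host):
    """Buggy pattern: only the binding's existence is checked."""
    binding = bindings.get((network_id, host))
    if binding is None:
        return None
    # The binding row can outlive the port row, in which case this
    # lookup hands None back to the caller.
    return ports.get(binding['port_id'])

def use_port_buggy(port):
    return port.get('id')      # AttributeError when port is None

def use_port_fixed(port):
    if port is None:           # guard against the stale-binding case
        raise LookupError('agent gateway port binding is stale')
    return port.get('id')
```

  The fix amounts to validating the port itself, not just the binding,
  before dereferencing it.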
  
  Exception log:
  
  2020-06-11 15:39:28.361 1285214 INFO neutron.db.l3_dvr_db [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway 
port for network 3fcb7702-ae0b-46b4-807f-8ae94d656dd3 does not exist on host 
host-compute-1. Creating one.
  2020-06-11 15:39:28.370 1285214 DEBUG neutron.db.l3_dvr_db [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway 
port for network 3fcb7702-ae0b-46b4-807f-8ae94d656dd3 already exists on host 
host-compute-1. Probably it was just created by other worker. 
create_fip_agent_gw_port_if_not_exists 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:927
  2020-06-11 15:39:28.390 1285214 DEBUG neutron.db.l3_dvr_db [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway 
port None found for the destination host: host-compute-1 
create_fip_agent_gw_port_if_not_exists 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:933
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server [None 
req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Exception during message 
handling: AttributeError: 'NoneType' object has no attribute 'get'
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 170, in 
_process_incoming
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, 
in dispatch
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, 
in _do_dispatch
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 91, in wrapped
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
setattr(e, '_RETRY_EXCEEDED', True)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 87, in wrapped
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
ectxt.value = e.inner_exc
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 135, in wrapper
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 126, in wrapped
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server 
LOG.debug("Retry wrapper got retriable exception: %s", e)
  2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 

[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

2021-04-27 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/victoria
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1883089

Title:
  [L3] floating IP failed to bind due to no agent gateway port(fip-ns)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1883089/+subscriptions

[Bug 1895727] Re: OpenSSL.SSL.SysCallError: (111, 'ECONNREFUSED') and Connection thread stops

2021-04-12 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1895727

Title:
  OpenSSL.SSL.SysCallError: (111, 'ECONNREFUSED') and Connection thread
  stops

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1895727/+subscriptions

[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-04-12 Thread Edward Hope-Morley
Hi Pon, if you still need the Bionic SRU for this one, can you attach a
debdiff for bionic? Thanks.

** Changed in: ceph (Ubuntu Bionic)
   Status: In Progress => New

** Changed in: cloud-archive
 Assignee: Ponnuvel Palaniyappan (pponnuvel) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1911900

Title:
  [SRU] Active scrub blocks upmap balancer

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1911900/+subscriptions

[Bug 1922916] [NEW] package python3.8-minimal 3.8.5-1~20.04.2 failed to install/upgrade: installed python3.8-minimal package post-installation script subprocess returned error exit status 139

2021-04-07 Thread Edward
Public bug reported:

Just reinstalled Ubuntu 20.04.2 successfully. Rebooted, and the system
wanted to update after the install. It just failed on this package.

ProblemType: Package
DistroRelease: Ubuntu 20.04
Package: python3.8-minimal 3.8.5-1~20.04.2
ProcVersionSignature: Ubuntu 5.8.0-48.54~20.04.1-generic 5.8.18
Uname: Linux 5.8.0-48-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.16
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Apr  7 07:59:11 2021
DuplicateSignature:
 package:python3.8-minimal:3.8.5-1~20.04.2
 Setting up python3.8-minimal (3.8.5-1~20.04.2) ...
 Segmentation fault (core dumped)
 dpkg: error processing package python3.8-minimal (--configure):
  installed python3.8-minimal package post-installation script subprocess 
returned error exit status 139
ErrorMessage: installed python3.8-minimal package post-installation script 
subprocess returned error exit status 139
InstallationDate: Installed on 2021-04-07 (0 days ago)
InstallationMedia: Ubuntu 20.04.2.0 LTS "Focal Fossa" - Release amd64 
(20210209.1)
Python3Details: /usr/bin/python3.8, Python 3.8.5, python3-minimal, 
3.8.2-0ubuntu2
PythonDetails: N/A
RelatedPackageVersions:
 dpkg 1.19.7ubuntu3
 apt  2.0.4
SourcePackage: python3.8
Title: package python3.8-minimal 3.8.5-1~20.04.2 failed to install/upgrade: 
installed python3.8-minimal package post-installation script subprocess 
returned error exit status 139
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: python3.8 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1922916

Title:
  package python3.8-minimal 3.8.5-1~20.04.2 failed to install/upgrade:
  installed python3.8-minimal package post-installation script
  subprocess returned error exit status 139

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python3.8/+bug/1922916/+subscriptions

[Bug 1853613] Re: VMs don't get ip from dhcp after compute restart

2021-04-07 Thread Edward Hope-Morley
All SRU verification completed and performed in
https://bugs.launchpad.net/neutron/+bug/1869808 so please refer to that
LP for the results.

** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853613

Title:
  VMs don't get ip from dhcp after compute restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1853613/+subscriptions
