[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
Verification is done for Bionic

# dpkg -l | grep netplan.io
ii  netplan.io  0.99-0ubuntu3~18.04.4  amd64  YAML network configuration abstraction for various backends

Test steps:

1. Deploy a Bionic VM.
2. Set the netplan configuration as described in the bug description.
3. Run 'netplan apply' and confirm the error is reproduced.
4. Upgrade the package from the -proposed repository.
5. Run 'netplan apply' again.

No error.

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

Status in netplan.io package in Ubuntu:
  Fix Released
Status in netplan.io source package in Bionic:
  Fix Committed
Status in netplan.io source package in Focal:
  Fix Released

Bug description:
  [Impact]

  The primary slave fails to be set in a netplan bonding configuration.

  
  [Test Case]

  0. Created a VM with 3 NICs (ens33, ens38, ens39).
  1. Set up netplan as below:
  - https://pastebin.ubuntu.com/p/JGqhYXYY6r/
  - ens38 and ens39 are virtual NICs; dummy2 is not.
  2. Run 'netplan apply'.
  3. An error is shown.
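  For reference in case the pastebin link rots: a minimal bond configuration of the shape used in this test might look like the sketch below. The exact pastebin contents are not reproduced here, so treat this as illustrative only, reusing the NIC names from the steps above and an assumed primary choice:

```yaml
network:
  version: 2
  ethernets:
    ens38: {}
    ens39: {}
  bonds:
    bond0:
      interfaces: [ens38, ens39]
      parameters:
        mode: active-backup
        mii-monitor-interval: 100
        primary: ens38
```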

  [Where problems could occur]
  As this patch touches bond handling, any regression would most likely
  show up in bond configurations.

  
  [Others]

  original description

  
  The primary slave fails to get set in netplan bonding configuration:

  network:
    version: 2
    ethernets:
      e1p1:
        addresses:
          - x.x.x.x/x
        gateway4: x.x.x.x
        match:
          macaddress: xyz
        mtu: 9000
        nameservers:
          addresses:
            - x.x.x.x
        set-name: e1p1
      p1p1:
        match:
          macaddress: xx
        mtu: 1500
        set-name: p1p1
      p1p2:
        match:
          macaddress: xx
        mtu: 1500
        set-name: p1p2

    bonds:
      bond0:
        mtu: 9000
        interfaces: [p1p1, p1p2]
        parameters:
          mode: active-backup
          mii-monitor-interval: 100
          primary: p1p2

  ~$ sudo netplan --debug apply
  ** (generate:7353): DEBUG: 13:22:31.480: Processing input file /etc/netplan/50-cloud-init.yaml..
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  ** (generate:7353): DEBUG: 13:22:31.480: Processing input file /etc/netplan/60-puppet-netplan.yaml..
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  Error in network definition /etc/netplan/60-puppet-netplan.yaml line 68 column 17: bond0: bond already has a primary slave: p1p2
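  The debug output suggests the definition is parsed in multiple passes, and the primary-slave check rejects the second pass even though nothing changed. A rough illustrative sketch of that failure mode (this is not netplan's actual code; the function and class names here are invented for illustration):

```python
# Sketch of the multi-pass parsing failure: each pass re-runs the bond
# handlers, and a strict "already set" check rejects the second pass
# even when it re-asserts the identical value.

class Bond:
    def __init__(self):
        self.primary = None

def set_primary_buggy(bond, iface):
    # Fails on any repeated assignment, including an identical one.
    if bond.primary is not None:
        raise ValueError("bond already has a primary slave: %s" % bond.primary)
    bond.primary = iface

def set_primary_fixed(bond, iface):
    # Tolerates re-parsing passes that re-assert the same primary.
    if bond.primary is not None and bond.primary != iface:
        raise ValueError("bond already has a primary slave: %s" % bond.primary)
    bond.primary = iface

bond = Bond()
for _ in range(2):          # two processing passes over the same YAML
    set_primary_fixed(bond, "p1p2")
print(bond.primary)         # p1p2
```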

  What's wrong here??

  # apt-cache policy netplan.io
  netplan.io:
    Installed: 0.40.1~18.04.4
    Candidate: 0.40.1~18.04.4
    Version table:
   *** 0.40.1~18.04.4 500
          500 http://mirrors.rc.nectar.org.au/ubuntu bionic-security/main amd64 Packages
          500 http://mirrors.rc.nectar.org.au/ubuntu bionic-updates/main amd64 Packages
          100 /var/lib/dpkg/status
       0.36.1 500
          500 http://mirrors.rc.nectar.org.au/ubuntu bionic/main amd64 Packages

  #cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

  regards,

  Shahaan

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1898129] Re: Cannot configure 'cryptsetup luksFormat' at install time

2021-01-19 Thread Launchpad Bug Tracker
This bug was fixed in the package partman-crypto - 101ubuntu4.1

---
partman-crypto (101ubuntu4.1) focal; urgency=medium

  * Add preseed option 'partman-crypto/luksformat_options' to
provide more options for 'cryptsetup luksFormat' (LP: #1898129)
- d/partman-crypto.templates: add preseed option.
- lib/crypto-base.sh: check for, log, and use it.

 -- Mauricio Faria de Oliveira   Thu, 07 Jan 2021 16:51:37 -0300

** Changed in: partman-crypto (Ubuntu Focal)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1898129

Title:
  Cannot configure 'cryptsetup luksFormat' at install time

Status in partman-crypto package in Ubuntu:
  Invalid
Status in ubiquity package in Ubuntu:
  Fix Released
Status in partman-crypto source package in Focal:
  Fix Released
Status in ubiquity source package in Focal:
  Fix Committed
Status in partman-crypto source package in Groovy:
  Invalid
Status in ubiquity source package in Groovy:
  Won't Fix
Status in partman-crypto source package in Hirsute:
  Invalid
Status in ubiquity source package in Hirsute:
  Fix Released
Status in partman-crypto package in Debian:
  Unknown

Bug description:
  [Impact]

   * Users cannot specify options for the 'cryptsetup luksFormat'
 invocation used by the installer.

   * Some deployments need the installed disks in LUKS1 format
 for backward compatibility with older releases that don't
 support LUKS2, for backup/audit/management purposes.

   * However, on Focal and later, cryptsetup defaults to LUKS2,
 which broke that functionality.
 
   * Currently it's not possible to request the LUKS format in
 the installer, so this patch allows for that w/ a preseed
 option ('partman-crypto/luksformat_options') for the user.

  [Test Case]

   * Default behavior: LUKS2
   
 - Install Ubuntu (Focal/later); check LUKS header version:
 
   $ sudo cryptsetup luksDump /dev/vda4
   LUKS header information
   Version: 2
   ...
   
   * Opt-in behavior: LUKS1 (for example; can use other options)
   
 - Install Ubuntu (Focal/later) with preseed file/option:

   ubiquity partman-crypto/luksformat_options string \
 --type luks1

 - Check LUKS header version:
 
   $ sudo cryptsetup luksDump /dev/vda4
   LUKS header information for /dev/vda4
   Version: 1
   ...

 - Check install logs for confirmation:
 
   $ grep luksFormat /var/log/partman
   /usr/bin/autopartition-crypto: Additional options for luksFormat: '--type luks1'
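 
   A rough sketch of how such a preseed value could be folded into the luksFormat invocation (illustrative shell only, not the actual lib/crypto-base.sh code; the variable name is an assumption, and the log format is modeled on the log line above):

```shell
#!/bin/sh
# Illustrative: append user-supplied options to 'cryptsetup luksFormat'.
luksformat_options="--type luks1"   # would come from the debconf question

cmd="cryptsetup luksFormat"
if [ -n "$luksformat_options" ]; then
    # Log the extra options, similar to what appears in /var/log/partman.
    echo "Additional options for luksFormat: '$luksformat_options'"
    cmd="$cmd $luksformat_options"
fi
echo "$cmd /dev/vda4"
```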
 
  [Where problems could occur]

   * The changes are contained within the partman-crypto functionality,
 so only installs with encrypted disks should be affected by issues.

   * Any additional options specified to 'cryptsetup luksFormat' are
 opt-in _and_ specified by the user via the preseed option, thus
 errors are probably tied to the particular options (mis)used.

   * If the preseed option is not specified, original behavior remains.

  [Other Info]
   
   * This patch is applied in Hirsute.
   * This patch is not needed in Groovy (rationale in comment #15.)
   * This patch is targeted at Focal (cryptsetup defaulted to LUKS2.)
   * This patch is not needed in Bionic/earlier (^defaults to LUKS1.)

  [Original Description]
  Most users should be fine with the options to
  'cryptsetup luksFormat' used by the installer.

  However, some users may have reasons to use
  other options, and that is not possible now.

  Let's provide a new preseed option for that:
  'partman-crypto/luksformat_options'
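
  In a file-based preseed, the new question could be set like this (illustrative fragment; the 'd-i' owner prefix is the usual convention, and the option value is just an example):

```
# preseed fragment (illustrative)
d-i partman-crypto/luksformat_options string --type luks1
```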

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/partman-crypto/+bug/1898129/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1898129] Update Released

2021-01-19 Thread Chris Halse Rogers
The verification of the Stable Release Update for partman-crypto has
completed successfully and the package is now being released to
-updates.  Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.



[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_s390x_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454756/+files/bionic_s390x_artifacts.tar.gz



[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_ppc64el_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454755/+files/bionic_ppc64el_artifacts.tar.gz



[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_i386_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454754/+files/bionic_i386_artifacts.tar.gz



[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_armhf_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454753/+files/bionic_armhf_artifacts.tar.gz



[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_arm64_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454748/+files/bionic_arm64_artifacts.tar.gz



[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_amd64_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454747/+files/bionic_amd64_artifacts.tar.gz

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

Status in netplan.io package in Ubuntu:
  Fix Released
Status in netplan.io source package in Bionic:
  Fix Committed
Status in netplan.io source package in Focal:
  Fix Released

Bug description:
  [Impact]

  primary slave fails to get set in netplan bonding configuration

  
  [Test Case]

  0. created vm with 3 nics ( ens33, ens38, ens39 )
  1. setup netplan as below
  - https://pastebin.ubuntu.com/p/JGqhYXYY6r/
  - ens38, ens39 is virtual nic, and dummy2 is not.
  2. netplan apply
  3. shows error

  [Where problems could occur]
  As this patch is related to bond, bond may have issue if there is problem.

  
  [Others]

  original description

  
  The primary slave fails to get set in netplan bonding configuration:

  network:
  version: 2
  ethernets:
  e1p1:
  addresses:
  - x.x.x.x/x
  gateway4: x.x.x.x
  match:
  macaddress: xyz
  mtu: 9000
  nameservers:
  addresses:
  - x.x.x.x
  set-name: e1p1
  p1p1:
  match:
  macaddress: xx
  mtu: 1500
  set-name: p1p1
  p1p2:
  match:
  macaddress: xx
  mtu: 1500
  set-name: p1p2

  bonds:
  bond0:
    mtu: 9000
    interfaces: [p1p1, p1p2]
    parameters:
  mode: active-backup
  mii-monitor-interval: 100
  primary: p1p2

  ~$ sudo netplan --debug apply
  sudo netplan --debug apply
  ** (generate:7353): DEBUG: 13:22:31.480: Processing input file /etc/netplan/50-cloud-init.yaml..
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  ** (generate:7353): DEBUG: 13:22:31.480: Processing input file /etc/netplan/60-puppet-netplan.yaml..
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  Error in network definition /etc/netplan/60-puppet-netplan.yaml line 68 column 17: bond0: bond already has a primary slave: p1p2

  What's wrong here??

  #apt-cache policy netplan.io
  netplan.io:
    Installed: 0.40.1~18.04.4
    Candidate: 0.40.1~18.04.4
    Version table:
   *** 0.40.1~18.04.4 500
  500 http://mirrors.rc.nectar.org.au/ubuntu bionic-security/main amd64 Packages
  500 http://mirrors.rc.nectar.org.au/ubuntu bionic-updates/main amd64 Packages
  100 /var/lib/dpkg/status
   0.36.1 500
  500 http://mirrors.rc.nectar.org.au/ubuntu bionic/main amd64 Packages

  #cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

  regards,

  Shahaan

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1910432] Re: dirmngr doesn't work with kernel parameter ipv6.disable=1

2021-01-19 Thread Dan Streetman
uploaded to f/g/h, thanks @halves!

One more minor comment for b added in the MR

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1910432

Title:
  dirmngr doesn't work with kernel parameter ipv6.disable=1

Status in gnupg2 package in Ubuntu:
  In Progress
Status in gnupg2 source package in Bionic:
  In Progress
Status in gnupg2 source package in Focal:
  In Progress
Status in gnupg2 source package in Groovy:
  In Progress
Status in gnupg2 source package in Hirsute:
  In Progress

Bug description:
  [Impact]
  apt-key fails to fetch keys with "Address family not supported by protocol"

  [Description]
  We've had users report issues about apt-key being unable to fetch keys when
IPv6 is disabled. As the mentioned kernel command line parameter disables IPv6
socket support, servers that resolve to or respond over IPv6 will cause
connect_server() to fail with EAFNOSUPPORT.

  As this error is not handled in some versions of dirmngr, it will
  simply fail the connection, which can cause other processes to fail
  as well. In the test scenario below, it's easy to demonstrate this
  behaviour through apt-key.

  This has been reported upstream, and has been fixed with the following commit:
  - dirmngr: Handle EAFNOSUPPORT at connect_server. (109d16e8f644)

  The fix has been present upstream starting with GnuPG 2.22, so it's
  not currently available in any Ubuntu releases.

  [Test Case]
  1. Spin up Focal VM

  2. Disable IPv6:
  $ sudo vi /etc/default/grub
  (...)
  GRUB_CMDLINE_LINUX="ipv6.disable=1"
  $ sudo update-grub

  3. Reboot the VM

  4. Try to fetch a key:
  sudo apt-key adv --fetch-keys https://www.postgresql.org/media/keys/ACCC4CF8.asc

  You'll get the following error:
  gpg: WARNING: unable to fetch URI https://www.postgresql.org/media/keys/ACCC4CF8.asc: Address family not supported by protocol

  [Regression Potential]
  The patch introduces additional error handling when connecting to servers, to 
properly mark remote hosts as having valid IPv4 and/or IPv6 connectivity. We 
should look out for potential regressions when connecting to servers with 
exclusive IPv4 or IPv6 connectivity, to make sure the server is not getting 
marked as 'dead' due to missing one of the versions.
  This commit has also been tested in the corresponding Ubuntu series, and has 
been deemed safe for backporting to stable branches of upstream GnuPG. The 
overall regression potential for this change should be fairly low, and breakage 
should be easily spotted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnupg2/+bug/1910432/+subscriptions



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-bionic into ubuntu/+source/gnupg2:ubuntu/bionic-devel

2021-01-19 Thread Dan Streetman
Review: Needs Fixing

one more minor comment inline below

Diff comments:

> diff --git 
> a/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch 
> b/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch
> new file mode 100644
> index 000..d926add
> --- /dev/null
> +++ b/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch
> @@ -0,0 +1,61 @@
> +From ca937cf390662b830d4fc5d295e69b24b1778050 Mon Sep 17 00:00:00 2001
> +From: NIIBE Yutaka 
> +Date: Mon, 13 Jul 2020 10:00:58 +0900
> +Subject: [PATCH] dirmngr: Handle EAFNOSUPPORT at connect_server.
> +
> +* dirmngr/http.c (connect_server): Skip server with EAFNOSUPPORT.
> +
> +--
> +
> +GnuPG-bug-id: 4977
> +Signed-off-by: NIIBE Yutaka 
> +
> +Origin: backport, 
> https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=commit;h=109d16e8f644
> +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1910432
> +---
> + dirmngr/http.c | 9 +
> + 1 file changed, 9 insertions(+)
> +
> +Index: gnupg2/dirmngr/http.c
> +===================================================================
> +--- gnupg2.orig/dirmngr/http.c
> ++++ gnupg2/dirmngr/http.c
> +@@ -2843,7 +2843,7 @@ connect_server (const char *server, unsi
> +   unsigned int srvcount = 0;
> +   int hostfound = 0;
> +   int anyhostaddr = 0;
> +-  int srv, connected;
> ++  int srv, connected, v4_valid, v6_valid;
> +   gpg_error_t last_err = 0;
> +   struct srventry *serverlist = NULL;
> +
> +@@ -2930,9 +2930,11 @@ connect_server (const char *server, unsi
> +
> +   for (ai = aibuf; ai && !connected; ai = ai->next)
> + {
> +-  if (ai->family == AF_INET && (flags & HTTP_FLAG_IGNORE_IPv4))
> ++  if (ai->family == AF_INET
> ++  && ((flags & HTTP_FLAG_IGNORE_IPv4) || !v4_valid))

I think this is checking v4_valid before it's initialized; it seems it was 
added with the upstream commit 12def3a84e, but that looks way bigger than is 
needed to backport. Maybe you could just initialize v4_valid = (flags & 
HTTP_FLAG_IGNORE_IPv4) and then you only need to check v4_valid in this if 
statement? And do the same for v6_valid of course

> + continue;
> +-  if (ai->family == AF_INET6 && (flags & HTTP_FLAG_IGNORE_IPv6))
> ++  if (ai->family == AF_INET6
> ++  && ((flags & HTTP_FLAG_IGNORE_IPv6) || !v6_valid))
> + continue;
> +
> +   if (sock != ASSUAN_INVALID_FD)
> +@@ -2940,6 +2942,15 @@ connect_server (const char *server, unsi
> +   sock = my_sock_new_for_addr (ai->addr, ai->socktype, 
> ai->protocol);
> +   if (sock == ASSUAN_INVALID_FD)
> + {
> ++  if (errno == EAFNOSUPPORT)
> ++{
> ++  if (ai->family == AF_INET)
> ++v4_valid = 0;
> ++  if (ai->family == AF_INET6)
> ++v6_valid = 0;
> ++  continue;
> ++}
> ++
> +   err = gpg_err_make (default_errsource,
> +   gpg_err_code_from_syserror ());
> +   log_error ("error creating socket: %s\n", gpg_strerror (err));


-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396408
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-bionic into 
ubuntu/+source/gnupg2:ubuntu/bionic-devel.



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-focal into ubuntu/+source/gnupg2:ubuntu/focal-devel

2021-01-19 Thread Dan Streetman
Review: Approve

LGTM, uploaded to focal, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396406
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-focal into 
ubuntu/+source/gnupg2:ubuntu/focal-devel.



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-groovy into ubuntu/+source/gnupg2:ubuntu/groovy-devel

2021-01-19 Thread Dan Streetman
Review: Approve

LGTM, uploaded to g, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396531
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-groovy into 
ubuntu/+source/gnupg2:ubuntu/groovy-devel.



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-devel into ubuntu/+source/gnupg2:ubuntu/devel

2021-01-19 Thread Dan Streetman
Review: Approve

LGTM, uploaded to h, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396407
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-devel into 
ubuntu/+source/gnupg2:ubuntu/devel.



Re: [Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-19 Thread Heather Lemon
Did you also remove the 0002 from the d/p/ at the top of the changelog?

+  * d/p/0002-lp1906720-Make-disable_ssl_certificate_validation-work-
wit.patch


On Tue, Jan 19, 2021 at 3:31 PM Dan Streetman <1906...@bugs.launchpad.net>
wrote:

> uploaded to bionic, thanks @hypothetical-lemon
>
> --
> You received this bug notification because you are a bug assignee.
> https://bugs.launchpad.net/bugs/1906720
>
[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-19 Thread Heather Lemon
Did you also remove the 0002 from the changelog?

+  * d/p/0002-lp1906720-Make-disable_ssl_certificate_validation-work-
wit.patch

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720

Title:
  Fix the disable_ssl_certificate_validation option

Status in python-httplib2 package in Ubuntu:
  Fix Released
Status in python-httplib2 source package in Bionic:
  In Progress
Status in python-httplib2 source package in Focal:
  Fix Released
Status in python-httplib2 source package in Groovy:
  Fix Released
Status in python-httplib2 source package in Hirsute:
  Fix Released

Bug description:
  [Environment]

  Bionic
  python3-httplib2 | 0.9.2+dfsg-1ubuntu0.2

  [Description]

  The MAAS CLI fails to work with APIs over HTTPS with self-signed
  certificates, due to the lack of the disable_ssl_certificate_validation
  option with Python 3.5.

  [Distribution/Release, Package versions, Platform]
  cat /etc/lsb-release; dpkg -l | grep maas
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
  ii maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all "Metal as a Service" is a 
physical cloud and IPAM
  ii maas-cli 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS client and 
command-line interface
  ii maas-common 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server common 
files
  ii maas-dhcp 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS DHCP server
  ii maas-proxy 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS Caching Proxy
  ii maas-rack-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Rack 
Controller for MAAS
  ii maas-region-api 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
controller API service for MAAS
  ii maas-region-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
Controller for MAAS
  ii python3-django-maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
server Django web framework (Python 3)
  ii python3-maas-client 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
python API client (Python 3)
  ii python3-maas-provisioningserver 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 
all MAAS server provisioning libraries (Python 3)

  [Steps to Reproduce]

  - prepare a MAAS server (installed from packages, both for me and the customer); it
doesn't have to be HA to reproduce
  - prepare a set of certificate, key and ca-bundle
  - place a new conf[2] in /etc/nginx/sites-enabled and `sudo systemctl restart 
nginx`
  - add the ca certificates to the host
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
  - login with a new profile over https url
  - when not added the ca-bundle to the trusted ca cert store, it fails to 
login and '--insecure' flag also doesn't work[3]

  [Known Workarounds]
  None

  [Test]
  # Note: even though this change only affects Python3,
  # I tested it with Python2 with no issues and was able to connect.
  Also please note the 2 packages: one is for Python2, the other for Python3.

  Python2 ===> python-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  Python3 ===>  python3-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb

  helpful urls:
  https://maas.io/docs/deb/2.8/cli/installation
  https://maas.io/docs/deb/2.8/cli/configuration-journey
  https://maas.io/docs/deb/2.8/ui/configuration-journey

  # create bionic VM/lxc container
  lxc launch ubuntu:bionic lp1820083

  # get source code from repo
  pull-lp-source  python-httplib2 bionic

  # install maas-cli
  apt-get install maas-cli

  # install maas server
  apt-get install maas

  # init maas
  sudo maas init

  # answer questions

  # generate self signed cert and key
  openssl req -newkey rsa:4096 -x509 -sha256 -days 60 -nodes -out localhost.crt -keyout localhost.key

  # add certs
  sudo cp -v test.crt /usr/share/ca-certificates/extra/

  # add new cert to list
  sudo dpkg-reconfigure ca-certificates

  # select yes with spacebar
  # save

  # create api key files
  touch api_key
  touch api-key-file

  # remove any packages with this
  # or this python3-httplib2
  apt-cache search python-httplib2
  apt-get remove python-httplib2
  apt-get remove python3-httplib2

  # create 2 admin users
  sudo maas createadmin testadmin
  sudo maas createadmin secureadmin

  # generate maas api keys
  sudo maas apikey --username=testadmin > api_key
  sudo maas apikey --username=secureadmin > api-key-file

  # make sure you can login to maas-cli without TLS
  # by running this script
  # this is for the non-tls user
  # this goes into a script called maas-login.sh
  touch maas-login.sh
  sudo chmod +rwx maas-login.sh
  
  #!/bin/sh
  PROFILE=testadmin
  API_KEY_FILE=/home/ubuntu/api_key
  API_SERVER=127.0.0.1:5240

  MAAS_URL=http://$API_SERVER/MAAS

  maas login $PROFILE $MAAS_URL - < $API_KEY_FILE
  
  sudo chmod +rwx https-maas.sh
  # another script called https-maas.sh
  # for the tls 

[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-19 Thread Dan Streetman
attached updated debdiff with just minor adjustments:

- added tag "LP: #1906720" to changelog entry
- ran 'quilt refresh' on patch to fix offsets
- added DEP3 fields to patch (https://dep-team.pages.debian.net/deps/dep3/)
  (in general, at least Origin: and Bug-Ubuntu: fields should be added)
- renamed patch to remove leading '0002-' (just personal preference for patch 
naming)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720


[Sts-sponsors] [Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Brian Murray
@seyeongkim - nothing will appear at autopkgtest until the package has
been accepted into the archive i.e. -proposed in this case.

** Changed in: netplan.io (Ubuntu Bionic)
   Status: In Progress => Fix Committed

** Tags added: verification-needed verification-needed-bionic

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1817651




[Sts-sponsors] [Bug 1817651] Please test proposed package

2021-01-19 Thread Brian Murray
Hello Shahaan, or anyone else affected,

Accepted netplan.io into bionic-proposed. The package will build now and
be available at
https://launchpad.net/ubuntu/+source/netplan.io/0.99-0ubuntu3~18.04.4 in
a few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, what testing has been
performed on the package and change the tag from verification-needed-
bionic to verification-done-bionic. If it does not fix the bug for you,
please add a comment stating that, and change the tag to verification-
failed-bionic. In either case, without details of your testing we will
not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1817651


-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1910432] Re: dirmngr doesn't work with kernel parameter ipv6.disable=1

2021-01-19 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396531

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1910432

Title:
  dirmngr doesn't work with kernel parameter ipv6.disable=1

Status in gnupg2 package in Ubuntu:
  In Progress
Status in gnupg2 source package in Bionic:
  In Progress
Status in gnupg2 source package in Focal:
  In Progress
Status in gnupg2 source package in Groovy:
  In Progress
Status in gnupg2 source package in Hirsute:
  In Progress

Bug description:
  [Impact]
  apt-key fails to fetch keys with "Address family not supported by protocol"

  [Description]
  We've had users report issues about apt-key being unable to fetch keys when 
IPv6 is disabled. As the mentioned kernel command line parameter disables IPV6 
socket support, servers that allow/respond with IPv6 will cause 
connect_server() to fail with EAFNOSUPPORT.

  As this error is not handled in some versions of dirmngr, it will
  simply fail the connection and can cause other processes to fail as
  well. In the test scenario below, this behaviour is easy to
  demonstrate through apt-key.

  This has been reported upstream, and has been fixed with the following commit:
  - dirmngr: Handle EAFNOSUPPORT at connect_server. (109d16e8f644)

  The fix has been present upstream starting with GnuPG 2.22, so it's
  not currently available in any Ubuntu releases.
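
  The shape of the upstream fix can be sketched in Python (a loose
  analogue of dirmngr's C connect_server(); the function name and
  structure here are invented): when socket() fails with EAFNOSUPPORT
  for one address family, rule out that family and try the next
  resolved address instead of failing the whole connection.

```python
import errno
import socket

def connect_any(host, port):
    """Connect to the first usable address, skipping unsupported families.

    Analogue of the dirmngr fix: an EAFNOSUPPORT from socket() (e.g. an
    AF_INET6 address on a kernel booted with ipv6.disable=1) just rules
    out that family rather than aborting the connection attempt.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
        except OSError as e:
            if e.errno == errno.EAFNOSUPPORT:
                continue  # address family disabled on this kernel; skip it
            raise
        try:
            sock.connect(addr)
            return sock
        except OSError as e:
            last_err = e
            sock.close()
    raise last_err or OSError("no usable address family")
```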

  [Test Case]
  1. Spin up Focal VM

  2. Disable IPv6:
  $ sudo vi /etc/default/grub
  (...)
  GRUB_CMDLINE_LINUX="ipv6.disable=1"
  $ sudo update-grub

  3. Reboot the VM

  4. Try to fetch a key:
  sudo apt-key adv --fetch-keys 
https://www.postgreSQL.org/media/keys/ACCC4CF8.asc

  You'll get the following error:
  gpg: WARNING: unable to fetch URI 
https://www.postgresql.org/media/keys/ACCC4CF8.asc: Address family not 
supported by protocol

  [Regression Potential]
  The patch introduces additional error handling when connecting to servers, to 
properly mark remote hosts as having valid IPv4 and/or IPv6 connectivity. We 
should look out for potential regressions when connecting to servers with 
exclusive IPv4 or IPv6 connectivity, to make sure the server is not getting 
marked as 'dead' due to missing one of the versions.
  This commit has also been tested in the corresponding Ubuntu series, and has 
been deemed safe for backporting to stable branches of upstream GnuPG. The 
overall regression potential for this change should be fairly low, and breakage 
should be easily spotted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnupg2/+bug/1910432/+subscriptions



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-focal into ubuntu/+source/gnupg2:ubuntu/focal-devel

2021-01-19 Thread Heitor Alves de Siqueira
Pesky version numbers :)
Please let me know if anything else needs to be changed!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396406
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-focal into 
ubuntu/+source/gnupg2:ubuntu/focal-devel.



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-devel into ubuntu/+source/gnupg2:ubuntu/devel

2021-01-19 Thread Heitor Alves de Siqueira
Done! Please let me know if anything else needs to be changed
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396407
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-devel into 
ubuntu/+source/gnupg2:ubuntu/devel.



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-bionic into ubuntu/+source/gnupg2:ubuntu/bionic-devel

2021-01-19 Thread Heitor Alves de Siqueira
Sigh, that's what you get when patches for all series have the same name...
Apologies for the confusion, Dan; it should be the right one now.
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396408
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-bionic into 
ubuntu/+source/gnupg2:ubuntu/bionic-devel.



[Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-groovy into ubuntu/+source/gnupg2:ubuntu/groovy-devel

2021-01-19 Thread Heitor Alves de Siqueira
Heitor Alves de Siqueira has proposed merging 
~halves/ubuntu/+source/gnupg2:lp1910432-groovy into 
ubuntu/+source/gnupg2:ubuntu/groovy-devel.

Requested reviews:
  Dan Streetman (ddstreet)
  STS Sponsors (sts-sponsors)
Related bugs:
  Bug #1910432 in gnupg2 (Ubuntu): "dirmngr doesn't work with kernel parameter 
ipv6.disable=1"
  https://bugs.launchpad.net/ubuntu/+source/gnupg2/+bug/1910432

For more details, see:
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396531
-- 
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-groovy into 
ubuntu/+source/gnupg2:ubuntu/groovy-devel.
diff --git a/debian/changelog b/debian/changelog
index 974065a..f8d261a 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,10 @@
+gnupg2 (2.2.20-1ubuntu1.1) groovy; urgency=medium
+
+  * d/p/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch:
+- Fix IPv6 connectivity for dirmngr (LP: #1910432)
+
+ -- Heitor Alves de Siqueira   Sat, 16 Jan 2021 14:53:14 +
+
 gnupg2 (2.2.20-1ubuntu1) groovy; urgency=low
 
   * Merge from Debian unstable.  Remaining changes:
diff --git a/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch b/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch
new file mode 100644
index 000..15a96a6
--- /dev/null
+++ b/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch
@@ -0,0 +1,38 @@
+From ca937cf390662b830d4fc5d295e69b24b1778050 Mon Sep 17 00:00:00 2001
+From: NIIBE Yutaka 
+Date: Mon, 13 Jul 2020 10:00:58 +0900
+Subject: [PATCH] dirmngr: Handle EAFNOSUPPORT at connect_server.
+
+* dirmngr/http.c (connect_server): Skip server with EAFNOSUPPORT.
+
+--
+
+GnuPG-bug-id: 4977
+Signed-off-by: NIIBE Yutaka 
+
+Origin: backport, https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=commit;h=109d16e8f644
+Bug-Ubuntu: https://bugs.launchpad.net/bugs/1910432
+---
+ dirmngr/http.c | 9 +
+ 1 file changed, 9 insertions(+)
+
+Index: gnupg2/dirmngr/http.c
+===================================================================
+--- gnupg2.orig/dirmngr/http.c
++++ gnupg2/dirmngr/http.c
+@@ -3005,6 +3005,15 @@ connect_server (const char *server, unsi
+   sock = my_sock_new_for_addr (ai->addr, ai->socktype, ai->protocol);
+   if (sock == ASSUAN_INVALID_FD)
+ {
++  if (errno == EAFNOSUPPORT)
++{
++  if (ai->family == AF_INET)
++v4_valid = 0;
++  if (ai->family == AF_INET6)
++v6_valid = 0;
++  continue;
++}
++
+   err = gpg_err_make (default_errsource,
+   gpg_err_code_from_syserror ());
+   log_error ("error creating socket: %s\n", gpg_strerror (err));
diff --git a/debian/patches/series b/debian/patches/series
index a0f1baf..af77b88 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -20,3 +20,4 @@ Use-hkps-keys.openpgp.org-as-the-default-keyserver.patch
 Make-gpg-zip-use-tar-from-PATH.patch
 gpg-drop-import-clean-from-default-keyserver-import-optio.patch
 dirmngr-honor-http-proxy.patch
+dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch


[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-19 Thread Dan Streetman
** Tags added: sts sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720

Title:
  Fix the disable_ssl_certificate_validation option

Status in python-httplib2 package in Ubuntu:
  Fix Released
Status in python-httplib2 source package in Bionic:
  In Progress
Status in python-httplib2 source package in Focal:
  Fix Released
Status in python-httplib2 source package in Groovy:
  Fix Released
Status in python-httplib2 source package in Hirsute:
  Fix Released

Bug description:
  [Environment]

  Bionic
  python3-httplib2 | 0.9.2+dfsg-1ubuntu0.2

  [Description]

  The MAAS CLI fails to work with APIs over HTTPS with self-signed
certificates, due to the lack of the disable_ssl_certificate_validation
option with Python 3.5.

  [Distribution/Release, Package versions, Platform]
  cat /etc/lsb-release; dpkg -l | grep maas
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
  ii maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all "Metal as a Service" is a 
physical cloud and IPAM
  ii maas-cli 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS client and 
command-line interface
  ii maas-common 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server common 
files
  ii maas-dhcp 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS DHCP server
  ii maas-proxy 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS Caching Proxy
  ii maas-rack-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Rack 
Controller for MAAS
  ii maas-region-api 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
controller API service for MAAS
  ii maas-region-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
Controller for MAAS
  ii python3-django-maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
server Django web framework (Python 3)
  ii python3-maas-client 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
python API client (Python 3)
  ii python3-maas-provisioningserver 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 
all MAAS server provisioning libraries (Python 3)

  [Steps to Reproduce]

  - prepare a MAAS server (installed by packages for me and the customer); it
doesn't have to be HA to reproduce
  - prepare a set of certificate, key and ca-bundle
  - place a new conf[2] in /etc/nginx/sites-enabled and `sudo systemctl restart 
nginx`
  - add the ca certificates to the host
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
  - login with a new profile over https url
  - when the ca-bundle has not been added to the trusted CA cert store, login
fails and the '--insecure' flag also doesn't work[3]

  [Known Workarounds]
  None

  [Test]
  # Note even though this change only affects Python3
  # I tested it with Python2 with no issues and was able to connect.
  Also please make note of the 2 packages: one is for Python2, the other for Python3.

  Python2 ===> python-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  Python3 ===>  python3-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb

  helpful urls:
  https://maas.io/docs/deb/2.8/cli/installation
  https://maas.io/docs/deb/2.8/cli/configuration-journey
  https://maas.io/docs/deb/2.8/ui/configuration-journey

  # create bionic VM/lxc container
  lxc launch ubuntu:bionic lp1820083

  # get source code from repo
  pull-lp-source  python-httplib2 bionic

  # install maas-cli
  apt-get install maas-cli

  # install maas server
  apt-get install maas

  # init maas
  sudo maas init

  # answer questions

  # generate self signed cert and key
  openssl req -newkey rsa:4096 -x509 -sha256 -days 60 -nodes -out localhost.crt 
-keyout localhost.key

  # add certs
  sudo cp -v test.crt /usr/share/ca-certificates/extra/

  # add new cert to list
  sudo dpkg-reconfigure ca-certificates

  # select yes with spacebar
  # save

  # create api key files
  touch api_key
  touch api-key-file

  # remove any packages with this
  # or this python3-httplib2
  apt-cache search python-httplib2
  apt-get remove python-httplib2
  apt-get remove python3-httplib2

  # create 2 admin users
  sudo maas createadmin testadmin
  sudo maas createadmin secureadmin

  # generate maas api keys
  sudo maas apikey --username=testadmin > api_key
  sudo maas apikey --username=secureadmin > api-key-file

  # make sure you can login to maas-cli without TLS
  # by running this script
  # this is for the non-tls user
  # this goes into a script called maas-login.sh
  touch maas-login.sh
  sudo chmod +rwx maas-login.sh
  
  #!/bin/sh
  PROFILE=testadmin
  API_KEY_FILE=/home/ubuntu/api_key
  API_SERVER=127.0.0.1:5240

  MAAS_URL=http://$API_SERVER/MAAS

  maas login $PROFILE $MAAS_URL - < $API_KEY_FILE
  
  sudo chmod +rwx https-maas.sh
  # another script called https-maas.sh
  # for the tls user
  
  #!/bin/sh
  PROFILE=secureadmin
  API_KEY_FILE=/home/ubuntu/api-key-file
  

[Sts-sponsors] [Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-19 Thread Mauricio Faria de Oliveira
Now the two tests failed again.

I have to EOD, and this might benefit from some delay, in case the transient 
condition just goes away.
Also, let's not overload the riscv64 builder with this so other builds can move 
on.

We have 7 days in -proposed to look at this in more detail (or the bad
condition to go away), and get it built.

** Attachment added: 
"buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.4"
   
https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1908473/+attachment/5454592/+files/buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.4

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1908473

Title:
  rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which
  leads to file descriptor leak

Status in librelp package in Ubuntu:
  Fix Released
Status in rsyslog package in Ubuntu:
  Fix Released
Status in librelp source package in Focal:
  Fix Committed
Status in rsyslog source package in Focal:
  Won't Fix
Status in librelp source package in Groovy:
  Fix Committed
Status in rsyslog source package in Groovy:
  Fix Released
Status in librelp source package in Hirsute:
  Fix Released
Status in rsyslog source package in Hirsute:
  Fix Released

Bug description:
  [Impact]

  In recent versions of rsyslog and librelp, the imrelp module leaks
  file descriptors due to a bug where it does not correctly close
  sockets, and instead, leaves them in the CLOSE_WAIT state.

  This causes rsyslogd on busy servers to eventually hit the limit of
  maximum open files allowed, which locks rsyslogd up until it is
  restarted.

  A workaround is to restart rsyslogd every month or so to manually
  close all of the open sockets.

  Only users of the imrelp module are affected, and not rsyslog users in
  general.
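
  The bug class is easy to picture: a socket sits in CLOSE_WAIT when the
  peer has sent its FIN but the local process never calls close(), so the
  file descriptor stays allocated forever. A minimal Python sketch of the
  correct handling (hypothetical illustration; librelp itself is C):

```python
import socket

def serve_once(conn):
    """Drain a connection and release it on peer EOF.

    recv() returning b'' means the peer closed its end; if the
    application never calls close() at that point, the socket lingers
    in CLOSE_WAIT and its file descriptor leaks -- the failure mode
    described in this bug.
    """
    try:
        while True:
            data = conn.recv(4096)
            if not data:          # peer sent FIN -> we must close too
                break
            # ... process `data` here ...
    finally:
        conn.close()              # the fix: always release the fd
```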

  [Testcase]

  Install the rsyslog-relp module like so:

  $ sudo apt install rsyslog rsyslog-relp

  Next, generate a working directory, and make a config file that loads
  the relp module.

  $ sudo mkdir /workdir
  $ cat << EOF >> ./spool.conf
  \$LocalHostName spool
  \$AbortOnUncleanConfig on
  \$PreserveFQDN on

  global(
  workDirectory="/workdir"
  maxMessageSize="256k"
  )

  main_queue(queue.type="Direct")
  module(load="imrelp")
  input(
  type="imrelp"
  name="imrelp"
  port="601"
  ruleset="spool"
  MaxDataSize="256k"
  )

  ruleset(name="spool" queue.type="direct") {
  }

  # Just so rsyslog doesn't whine that we do not have outputs
  ruleset(name="noop" queue.type="direct") {
  action(
  type="omfile"
  name="omfile"
  file="/workdir/spool.log"
  )
  }
  EOF

  Verify that the config is valid, then start a rsyslog server.

  $ sudo rsyslogd -f ./spool.conf -N9
  $ sudo rsyslogd -f ./spool.conf -i /workdir/rsyslogd.pid

  Fetch the rsyslogd PID and check for open files.

  $ RLOGPID=$(cat /workdir/rsyslogd.pid)
  $ sudo ls -l /proc/$RLOGPID/fd
  total 0
  lr-x------ 1 root root 64 Dec 17 01:22 0 -> /dev/urandom
  lrwx------ 1 root root 64 Dec 17 01:22 1 -> 'socket:[41228]'
  lrwx------ 1 root root 64 Dec 17 01:22 3 -> 'socket:[41222]'
  lrwx------ 1 root root 64 Dec 17 01:22 4 -> 'socket:[41223]'
  lrwx------ 1 root root 64 Dec 17 01:22 7 -> 'anon_inode:[eventpoll]'

  We have 3 sockets open by default. Next, use netcat to open 100
  connections:

  $ for i in {1..100} ; do nc -z 127.0.0.1 601 ; done

  Now check for open file descriptors, and there will be an extra 100 sockets
  in the list:

  $ sudo ls -l /proc/$RLOGPID/fd

  https://paste.ubuntu.com/p/f6NQVNbZcR/

  We can check the state of these sockets with:

  $ ss -t

  https://paste.ubuntu.com/p/7Ts2FbxJrg/

  The listening sockets will be in CLOSE-WAIT, and the netcat sockets
  will be in FIN-WAIT-2.

  $ ss -t | grep CLOSE-WAIT | wc -l
  100

  If you install the test package available in the following ppa:

  https://launchpad.net/~mruffell/+archive/ubuntu/sf299578-test

  When you open connections with netcat, these will be closed properly,
  and the file descriptor leak will be fixed.

  [Where problems could occur]

  If a regression were to occur, it would be limited to users of the
  imrelp module, which is a part of the rsyslogd-relp package, and
  depends on librelp.

  rsyslog-relp is not part of a default installation of rsyslog, and is
  opt in by changing a configuration file to enable imrelp.

  The changes to rsyslog implement a testcase which exercises the problematic 
code to ensure things are working as expected; this
  can be enabled manually on build, and has been verified to pass (#7).

  [Other]

  Upstream bug list:

  https://github.com/rsyslog/rsyslog/issues/4350
  https://github.com/rsyslog/rsyslog/issues/4005
  https://github.com/rsyslog/librelp/issues/188
  https://github.com/rsyslog/librelp/pull/193

  The following commits fix the problem:

  rsyslogd
  

  commit 

[Sts-sponsors] [Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-19 Thread Mauricio Faria de Oliveira
Same one failure, re-building again.

** Attachment added: 
"buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.3"
   
https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1908473/+attachment/5454591/+files/buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.3

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1908473

  The following commits fix the problem:

  rsyslogd
  

  commit baee0bd5420649329793746f0daf87c4f59fe6a6
  Author: Andre lorbach 
  Date:   Thu Apr 9 13:00:35 2020 +0200
  Subject: testbench: Add test for imrelp to check broken session handling.
  Link: 
https://github.com/rsyslog/rsyslog/commit/baee0bd5420649329793746f0daf87c4f59fe6a6

  librelp
  ===

  

[Sts-sponsors] [Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-19 Thread Mauricio Faria de Oliveira
And now it had only 1 failure in the test suite,
thus indeed transient/arch-specific flakiness.

Re-building once again.

$ zgrep '^FAIL: .*sh$' buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.*
buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.1:FAIL: basic-realistic.sh
buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.1:FAIL: tls-basic-realistic.sh
buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.2:FAIL: tls-basic-realistic.sh

** Attachment added: 
"buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.2"
   
https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1908473/+attachment/5454590/+files/buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz.2

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1908473


[Sts-sponsors] [Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-19 Thread Mauricio Faria de Oliveira
focal/riscv64 had 2 failures in the test suite.

I've requested a re-build so it runs again, as it seems more
related to the arch or something transient than to the patchset,
as only this arch and release hit it.
(groovy: all archs ok; focal: other archs ok)

Attaching the build log for reference (it's lost on retry).

** Attachment added: 
"buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz"
   
https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1908473/+attachment/5454568/+files/buildlog_ubuntu-focal-riscv64.librelp_1.5.0-1ubuntu2.20.04.1_BUILDING.txt.gz

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1908473

Title:
  rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which
  leads to file descriptor leak

Status in librelp package in Ubuntu:
  Fix Released
Status in rsyslog package in Ubuntu:
  Fix Released
Status in librelp source package in Focal:
  Fix Committed
Status in rsyslog source package in Focal:
  Won't Fix
Status in librelp source package in Groovy:
  Fix Committed
Status in rsyslog source package in Groovy:
  Fix Released
Status in librelp source package in Hirsute:
  Fix Released
Status in rsyslog source package in Hirsute:
  Fix Released

Bug description:
  [Impact]

  In recent versions of rsyslog and librelp, the imrelp module leaks
  file descriptors due to a bug where it does not correctly close
  sockets and instead leaves them in the CLOSE_WAIT state.

  This causes rsyslogd on busy servers to eventually hit the limit of
  maximum open files allowed, which locks rsyslogd up until it is
  restarted.

  A workaround is to restart rsyslogd every month or so to manually
  close all of the open sockets.
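
  The restart workaround can be automated until the fixed librelp lands.
  A hypothetical cron entry for it (assuming rsyslog runs under the
  systemd rsyslog.service unit; the schedule is only an example) might
  look like:

  ```shell
  # /etc/cron.d/rsyslog-restart (hypothetical workaround entry):
  # restart rsyslog at 04:00 on the 1st of each month to release the
  # file descriptors leaked by imrelp.
  0 4 1 * * root systemctl restart rsyslog
  ```

  Pick a cadence that matches how quickly your server approaches its
  open-file limit; a monthly restart matched the reporter's workload.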

  Only users of the imrelp module are affected, and not rsyslog users in
  general.
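
  The bug class described above can be illustrated outside of rsyslog.
  This is a minimal Python sketch (not librelp code; all names here are
  invented for illustration) of a server that accept()s connections but
  never close()s them, so each disconnected peer leaves a server-side
  socket lingering in CLOSE_WAIT and its file descriptor open:

  ```python
  import socket
  import threading

  leaked = []  # accepted sockets we "forget" to close, as the buggy module did

  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.bind(("127.0.0.1", 0))  # ephemeral port, standing in for imrelp's 601
  srv.listen(100)
  port = srv.getsockname()[1]

  def serve(n):
      # Accept n connections; the bug is the missing conn.close().
      for _ in range(n):
          conn, _addr = srv.accept()
          leaked.append(conn)  # BUG: never closed -> fd leak / CLOSE_WAIT

  t = threading.Thread(target=serve, args=(10,))
  t.start()

  # Simulate `nc -z 127.0.0.1 601`: connect and immediately close, 10 times.
  for _ in range(10):
      c = socket.create_connection(("127.0.0.1", port))
      c.close()

  t.join()
  print(len(leaked))  # 10 server-side sockets still open
  # Each leaked socket still holds a valid file descriptor:
  print(all(s.fileno() >= 0 for s in leaked))
  ```

  On Linux, `ss -t` run while such a process lives shows the leaked
  sockets in CLOSE-WAIT, matching the symptom in the [Testcase] below.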

  [Testcase]

  Install the rsyslog-relp module like so:

  $ sudo apt install rsyslog rsyslog-relp

  Next, create a working directory, and write a config file that loads
  the relp module.

  $ sudo mkdir /workdir
  $ cat << EOF >> ./spool.conf
  \$LocalHostName spool
  \$AbortOnUncleanConfig on
  \$PreserveFQDN on

  global(
  workDirectory="/workdir"
  maxMessageSize="256k"
  )

  main_queue(queue.type="Direct")
  module(load="imrelp")
  input(
  type="imrelp"
  name="imrelp"
  port="601"
  ruleset="spool"
  MaxDataSize="256k"
  )

  ruleset(name="spool" queue.type="direct") {
  }

  # Just so rsyslog doesn't whine that we do not have outputs
  ruleset(name="noop" queue.type="direct") {
  action(
  type="omfile"
  name="omfile"
  file="/workdir/spool.log"
  )
  }
  EOF

  Verify that the config is valid, then start an rsyslog server.

  $ sudo rsyslogd -f ./spool.conf -N9
  $ sudo rsyslogd -f ./spool.conf -i /workdir/rsyslogd.pid

  Fetch the rsyslogd PID and check for open files.

  $ RLOGPID=$(cat /workdir/rsyslogd.pid)
  $ sudo ls -l /proc/$RLOGPID/fd
  total 0
  lr-x------ 1 root root 64 Dec 17 01:22 0 -> /dev/urandom
  lrwx------ 1 root root 64 Dec 17 01:22 1 -> 'socket:[41228]'
  lrwx------ 1 root root 64 Dec 17 01:22 3 -> 'socket:[41222]'
  lrwx------ 1 root root 64 Dec 17 01:22 4 -> 'socket:[41223]'
  lrwx------ 1 root root 64 Dec 17 01:22 7 -> 'anon_inode:[eventpoll]'

  We have 3 sockets open by default. Next, use netcat to open 100
  connections:

  $ for i in {1..100} ; do nc -z 127.0.0.1 601 ; done

  Now check for open file descriptors, and there will be an extra 100 sockets
  in the list:

  $ sudo ls -l /proc/$RLOGPID/fd

  https://paste.ubuntu.com/p/f6NQVNbZcR/

  We can check the state of these sockets with:

  $ ss -t

  https://paste.ubuntu.com/p/7Ts2FbxJrg/

  The listening sockets will be in CLOSE-WAIT, and the netcat sockets
  will be in FIN-WAIT-2.

  $ ss -t | grep CLOSE-WAIT | wc -l
  100

  If you install the test package available in the following ppa:

  https://launchpad.net/~mruffell/+archive/ubuntu/sf299578-test

  then the connections opened with netcat are closed properly, and the
  file descriptor leak is fixed.

  [Where problems could occur]

  If a regression were to occur, it would be limited to users of the
  imrelp module, which is part of the rsyslog-relp package, and
  depends on librelp.

  rsyslog-relp is not part of a default installation of rsyslog; it is
  opt-in, enabled by editing a configuration file to load imrelp.

  The changes to rsyslog implement a testcase which exercises the
  problematic code to ensure things are working as expected; it can be
  enabled manually at build time, and has been verified to pass (#7).

  [Other]

  Upstream bug list:

  https://github.com/rsyslog/rsyslog/issues/4350
  https://github.com/rsyslog/rsyslog/issues/4005
  https://github.com/rsyslog/librelp/issues/188
  https://github.com/rsyslog/librelp/pull/193

  The following commits fix the problem:

  rsyslogd
  

  commit 

[Sts-sponsors] [Bug 1908473] Please test proposed package

2021-01-19 Thread Brian Murray
Hello Matthew, or anyone else affected,

Accepted librelp into focal-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/librelp/1.5.0-1ubuntu2.20.04.1 in a
few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, what testing has been
performed on the package and change the tag from verification-needed-
focal to verification-done-focal. If it does not fix the bug for you,
please add a comment stating that, and change the tag to verification-
failed-focal. In either case, without details of your testing we will
not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.


[Sts-sponsors] [Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-19 Thread Brian Murray
Hello Matthew, or anyone else affected,

Accepted librelp into groovy-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/librelp/1.5.0-1ubuntu2.20.10.1 in a
few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, what testing has been
performed on the package and change the tag from verification-needed-
groovy to verification-done-groovy. If it does not fix the bug for you,
please add a comment stating that, and change the tag to verification-
failed-groovy. In either case, without details of your testing we will
not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.

** Changed in: librelp (Ubuntu Groovy)
   Status: In Progress => Fix Committed

** Tags added: verification-needed verification-needed-groovy

** Changed in: librelp (Ubuntu Focal)
   Status: In Progress => Fix Committed

** Tags added: verification-needed-focal


Re: [Sts-sponsors] Please review and sponsor LP#1912122 for rsyslog

2021-01-19 Thread Mauricio Oliveira
Hi Dan, Eric,

Matthew got an ack from the security team in email and in the LP bug
(thanks, Matthew!)

Could you please review LP#1912122 and sponsor it for Hirsute?
I know Eric is on a 'sprint' this week, so I guess this is more on you,
Dan, sorry x) but the LP bug is not a high priority -- the case has
already been closed.

I can take care of Groovy when I get back next week, or maybe Dariusz,
or you if you wanna tackle both at once :) Just let me know.

cheers,

On Mon, Jan 18, 2021 at 11:36 PM Matthew Ruffell
 wrote:
>
> Hi Mauricio,
>
> The security team have acked the bug, see the email, and the comment in the 
> bug.
>
> I uploaded new debdiffs with the extra step Steve suggested, I built
> it and tested it,
> and it works great.
>
> Please sponsor.
>
> Thanks,
> Matthew
>
> On Tue, Jan 19, 2021 at 2:54 AM Mauricio Oliveira
>  wrote:
> >
> > Hey Matthew,
> >
> > This seems straightforward and the 'right thing to do', and maybe even
> > Groovy isn't an issue since the kernel config option is already there.
> >
> > However, we probably need a review/ack from the security team on it,
> > since it's more on their court than ours.
> >
> > Could you please subscribe the security team to the bug, and ask for
> > their review/ack in a new comment?
> > I've seen good responsiveness from them in a case like this in the
> > past; but if they don't you may also ping them in their channel.
> >
> > Thanks,
> > Mauricio
> >
> > On Mon, Jan 18, 2021 at 12:32 AM Matthew Ruffell
> >  wrote:
> > >
> > > Hi Eric, Mauricio,
> > >
> > > Could you please review and consider sponsoring the rsyslog debdiffs on LP
> > > 1912122?
> > >
> > > https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1912122
> > >
> > > VISA opened a case and mentioned that /var/log/dmesg is 0644, and I 
> > > should have
> > > really caught this and changed it to 0640 during my campaign to get
> > > DMESG_RESTRICT enabled.
> > >
> > > So, fixing it up here.
> > >
> > > I am aware that this might not be possible to SRU to Groovy, or would get
> > > stuck in block-proposed, but its the upload to hirsute that really 
> > > matters.
> > >
> > > Please review and sponsor. You can put timecards under:
> > > SF301554 - VISA - /var/log/dmesg file permission
> > >
> > > Thanks,
> > > Matthew
> > >
> > > --
> > > Mailing list: https://launchpad.net/~sts-sponsors
> > > Post to : sts-sponsors@lists.launchpad.net
> > > Unsubscribe : https://launchpad.net/~sts-sponsors
> > > More help   : https://help.launchpad.net/ListHelp
> >
> >
> >
> > --
> > Mauricio Faria de Oliveira



-- 
Mauricio Faria de Oliveira

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp