[Sts-sponsors] ec2-hibinit-agent

2022-05-19 Thread Dan Streetman
uploaded to kinetic!

And *ahem* maybe one of you should think about applying for coredev
soon, you know, just in case I get hit by a bus or something :-)

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2022-05-10 Thread Dan Streetman
** Changed in: openssl (Ubuntu)
 Assignee: Nicolas Bock (nicolasbock) => (unassigned)

** Changed in: openssl (Ubuntu Bionic)
 Assignee: Nicolas Bock (nicolasbock) => Bruce Elrick (virtuous-sloth)

** Changed in: openssl (Ubuntu Bionic)
   Status: Fix Committed => In Progress

** Tags removed: verification-needed verification-needed-bionic

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1940141

Title:
  OpenSSL servers can send a non-empty status_request in a
  CertificateRequest

Status in openssl package in Ubuntu:
  Fix Released
Status in openssl source package in Bionic:
  In Progress

Bug description:
  [Impact]

  openssl does not conform to RFC 8446, Sec. 4.4.2.1: it sends a
  CertificateRequest message to the client with a non-empty
  status_request extension.

  This issue was fixed in openssl-1.1.1d and is included in Focal
  onward.

  Upstream issue is tracked at https://github.com/openssl/openssl/issues/9767
  Upstream patch review at https://github.com/openssl/openssl/pull/9780

  The issue leads to various client failures with TLS 1.3 as described
  in, e.g.

  https://github.com/golang/go/issues/35722
  https://github.com/golang/go/issues/34040

  [Test Plan]

  The issue can be reproduced by building with `enable-ssl-trace`
  and then running `s_server` like this:

  ```
  openssl s_server -key key.pem -cert cert.pem -status_file test/recipes/ocsp-response.der -Verify 5
  ```

  And running `s_client` like this:

  ```
  openssl s_client -status -trace -cert cert.pem -key key.pem
  ```

  The output shows a `status_request` extension in the
  `CertificateRequest` as follows:

  Received Record
  Header:
    Version = TLS 1.2 (0x303)
    Content Type = ApplicationData (23)
    Length = 1591
    Inner Content Type = Handshake (22)
  CertificateRequest, Length=1570
    request_context (len=0):
    extensions, length = 1567
      extension_type=status_request(5), length=1521
        0000 - 01 00 05 ed 30 82 05 e9-0a 01 00 a0 82 05 e2
        000f - 30 82 05 de 06 09 2b 06-01 05 05 07 30 01 01
        001e - 04 82 05 cf 30 82 05 cb-30 82 01 1a a1 81 86
        002d - 30 81 83 31 0b 30 09 06-03 55 04 06 13 02 47
  ...more lines omitted...

  If the `status_request` extension is present in a
  `CertificateRequest`, it must be empty according to RFC 8446,
  Sec. 4.4.2.1.
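  The RFC 8446 rule is mechanical and easy to check. The sketch below is
  illustrative Python, not OpenSSL code (the helper name and sample bytes
  are invented): it walks a TLS extensions block (2-byte type, 2-byte
  length, then the payload) and reports each extension's payload length.
  A conformant CertificateRequest must report 0 for status_request.

```python
import struct

STATUS_REQUEST = 5  # extension_type value per RFC 8446

def extension_lengths(ext_block: bytes) -> dict:
    """Hypothetical helper: map extension_type -> payload length for a
    TLS extensions block (big-endian u16 type, u16 length, then data)."""
    lengths = {}
    off = 0
    while off + 4 <= len(ext_block):
        ext_type, ext_len = struct.unpack_from(">HH", ext_block, off)
        lengths[ext_type] = ext_len
        off += 4 + ext_len
    return lengths

# An empty status_request (conformant) vs. one carrying a 3-byte
# payload (the non-conformant behavior described in this bug).
good = struct.pack(">HH", STATUS_REQUEST, 0)
bad = struct.pack(">HH", STATUS_REQUEST, 3) + b"\x01\x02\x03"

assert extension_lengths(good)[STATUS_REQUEST] == 0
assert extension_lengths(bad)[STATUS_REQUEST] == 3
```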

  [Where problems could occur]

  The patch disables the `status_request` extension inside a
  `CertificateRequest`. Applications expecting the incorrect,
  non-empty reply for the `status_request` extension will break
  with this patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1940141/+subscriptions




[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2022-03-23 Thread Dan Streetman
uploaded to bionic queue, thanks!



[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2022-03-23 Thread Dan Streetman
If you'd rather remove the opt-in part, that's fine with me; I can
sponsor the debdiff then with the opt-in parts left out, if that works
for you Bruce and Nicolas.



[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2022-03-23 Thread Dan Streetman
@ubuntu-security since this is openssl, could you give the debdiff a
review? I can sponsor it as a normal SRU if you have no objections (and
the changes look ok to you), as it doesn't really seem like it would
specifically need to go to the -security pocket.



[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2022-03-23 Thread Dan Streetman
> It's not needed for actual functionality of the backport but that
assumes that any future backports or fixes don't break this backport

Yes, I get that; my comment is about whether or not the patch changes any
code *outside* of the self-tests, e.g. the TLSProxy perl code changes.
If that's *only* used for self-tests, then including it in the backport
shouldn't cause any regression.

Remember that people reviewing/sponsoring patches may not have deep
experience with the code, so it's good to explain clearly anything that
isn't necessarily obvious, such as the 2nd patch only affecting test code
(if that is indeed the case), since at first glance that isn't what it
looks like.



[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2022-03-23 Thread Dan Streetman
The 2nd upstream patch appears to add new functionality, for actually
parsing a certificate request; is that actually needed (outside of the
self-tests)? If not, it shouldn't be included in the backport.



[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-12-14 Thread Dan Streetman
> I've changed it for testing and MAAS didn't work properly.

Did you investigate why? Did you also adjust the code as I mentioned in
comment 22?

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1921658

Title:
  Can't compose kvm host with lvm storage on maas 2.8.4

Status in MAAS:
  Triaged
Status in readline package in Ubuntu:
  Fix Released
Status in readline source package in Bionic:
  Incomplete

Bug description:
  [Impact]

  I can't compose a KVM host on MAAS 2.8.4 (Bionic).

  I upgraded twisted and related components with pip, but the symptom
  is the same.

  MAAS 2.9.x on Focal works fine.

  In 2.8.x, `virsh vol-path` run via pexpect should return [2] but
  returns [3]:

  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)

  Sometimes it fails in

      def get_volume_path(self, pool, volume):
          """Return the path to the file from `pool` and `volume`."""
          output = self.run(["vol-path", volume, "--pool", pool])
          return output.strip()

  and sometimes it fails in

      def get_machine_xml(self, machine):
          # Check if we have a cached version of the XML.
          # This is a short-lived object, so we don't need to worry about
          # expiring objects in the cache.
          if machine in self.xml:
              return self.xml[machine]

          # Grab the XML from virsh if we don't have it already.
          output = self.run(["dumpxml", machine]).strip()
          if output.startswith("error:"):
              maaslog.error("%s: Failed to get XML for machine", machine)
              return None

          # Cache the XML, since we'll need it later to reconfigure the VM.
          self.xml[machine] = output
          return output

  I assume the run() function has an issue.

  Running `virsh vol-path` from the command line and a simple pexpect
  Python script both work fine.

  Any advice for this issue?

  Thanks.

  [Test Plan]

  0) deploy Bionic and MAAS 2.8

  1) Create file to be used as loopback device

  sudo dd if=/dev/zero of=lvm bs=16000 count=1M

  2) sudo losetup /dev/loop39 lvm

  3) sudo pvcreate /dev/loop39

  4) sudo vgcreate maas_data_vg /dev/loop39

  5) Save the below XML as maas_guest_lvm_vg.xml:

  <pool type="logical">
    <name>maas_guest_lvm_vg</name>
    <source>
      <name>maas_data_vg</name>
      <format type="lvm2"/>
    </source>
    <target>
      <path>/dev/maas_data_vg</path>
    </target>
  </pool>
  6) virsh pool-create maas_guest_lvm_vg.xml

  7) Add KVM host in MaaS

  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg

  9) GUI will fail with:

  Pod unable to compose machine: Unable to compose machine because:
  Failed talking to pod: Start tag expected, '<' not found, line 1,
  column 1 (, line 1)

  [Where problems could occur]

  This patch is a small piece of a huge commit.
  I tested by compiling a test package with this patch, but since this is
  an underlying library (libreadline), it could affect any application
  using libreadline, e.g. applications that run commands programmatically.

  
  [Other Info]

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1921658/+subscriptions




[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-12-13 Thread Dan Streetman
@seyeongkim did you have a chance to look at using echo=False in the
VirshSSH() class?



[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2021-11-03 Thread Dan Streetman
> the additional information contains valid data.

then I think SRU'ing this would cause a behavior change that could
possibly break someone, which isn't something we should do.

I'd suggest putting the fix behind some opt-in mechanism, so anyone who
is affected can opt in to the fixed behavior, but there's no change by
default.
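To illustrate the kind of opt-in gate meant here (this is a standalone
sketch, not openssl code; the environment variable name and function are
invented, and the real mechanism would be decided during review), the
default path preserves the old behavior and the fix only activates when
the user explicitly opts in:

```python
import os

def status_request_payload(legacy_payload: bytes) -> bytes:
    # Hypothetical gate: keep the old (non-conformant) non-empty
    # status_request by default; send an empty one only on opt-in.
    if os.environ.get("OPENSSL_FIX_LP1940141") == "1":
        return b""
    return legacy_payload

# Default: behavior unchanged, nothing breaks.
assert status_request_payload(b"\x01\x02") == b"\x01\x02"
# Opted in: conformant empty extension.
os.environ["OPENSSL_FIX_LP1940141"] = "1"
assert status_request_payload(b"\x01\x02") == b""
```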



[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-10-20 Thread Dan Streetman
> Can you try changing the VirshSSH() class constructor so that it
passes echo=False to its superclass constructor, and see if that helps?

Also, it seems like the class is trying to work around the fact that it's
not disabling echo by eliding the first line of the reply in the run()
method; that will most likely need to be changed to simply use the
entire 'result' instead of just 'result[1:]'. In fact, the splitlines()
followed by '\n'.join() should probably be removed so that the entire
decoded string is returned.
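To illustrate (this is a standalone sketch, not the actual MAAS VirshSSH
code; the function names and sample strings are invented): with echo left
on, the first line of pexpect's reply is the echoed command, so dropping
line 0 works by accident; once echo=False is set, that same elision
throws away real output.

```python
def run_with_elision(raw_reply: str) -> str:
    # Mimics the workaround described above: drop line 0, assumed to be
    # the echoed command.
    return "\n".join(raw_reply.splitlines()[1:]).strip()

def run_without_elision(raw_reply: str) -> str:
    # With echo=False there is no echoed command to strip.
    return raw_reply.strip()

echoed = "vol-path vol --pool pool\n/dev/maas_data_vg/vol\n"
clean = "/dev/maas_data_vg/vol\n"

assert run_with_elision(echoed) == "/dev/maas_data_vg/vol"
assert run_without_elision(clean) == "/dev/maas_data_vg/vol"
# Failure mode: elision applied to a non-echoed reply loses the output.
assert run_with_elision(clean) == ""
```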

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1921658

Title:
  Can't compose kvm host with lvm storage on maas 2.8.4

Status in MAAS:
  Triaged
Status in readline package in Ubuntu:
  Fix Released
Status in readline source package in Bionic:
  Incomplete

Bug description:
  [Impact]

  I can't compose kvm host on maas 2.8.4 ( bionic)

  I upgraded twisted and related component with pip but the symptom is
  the same.

  MaaS 2.9.x in Focal works fine.

  in 2.8.x, pexpect virsh vol-path should return [2] but returns [3]

  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)

  sometimes it fails in

  def get_volume_path(self, pool, volume):
  """Return the path to the file from `pool` and `volume`."""
  output = self.run(["vol-path", volume, "--pool", pool])
  return output.strip()

  sometimes failes in

  def get_machine_xml(self, machine):
  # Check if we have a cached version of the XML.
  # This is a short-lived object, so we don't need to worry about
  # expiring objects in the cache.
  if machine in self.xml:
  return self.xml[machine]

  # Grab the XML from virsh if we don't have it already.
  output = self.run(["dumpxml", machine]).strip()
  if output.startswith("error:"):
  maaslog.error("%s: Failed to get XML for machine", machine)
  return None

  # Cache the XML, since we'll need it later to reconfigure the VM.
  self.xml[machine] = output
  return output

  I assume that run function has issue.

  Command line virsh vol-path and simple pepect python code works fine.

  Any advice for this issue?

  Thanks.

  [Test Plan]

  0) deploy Bionic and MAAS 2.8

  1) Create file to be used as loopback device

  sudo dd if=/dev/zero of=lvm bs=16000 count=1M

  2) sudo losetup /dev/loop39 lvm

  3) sudo pvcreate /dev/loop39

  4) sudo vgcreate maas_data_vg /dev/loop39

  5) Save below xml:
  
  maas_guest_lvm_vg
  
  maas_data_vg
  
  
  
  /dev/maas_data_vg
  
  

  6) virsh pool-create maas_guest_lvm_vg.xml

  7) Add KVM host in MaaS

  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg

  9) GUI will fail with:

  Pod unable to compose machine: Unable to compose machine because:
  Failed talking to pod: Start tag expected, '<' not found, line 1,
  column 1 (, line 1)

  [Where problems could occur]

  This patch is a small piece of a larger commit.
  I tested it by building a test package with the patch applied. However, since
  the change is in an underlying library (libreadline), it could affect any
  application that uses libreadline; for example, applications that run
  commands programmatically could be affected.

  
  [Other Info]

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1921658/+subscriptions


-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-10-20 Thread Dan Streetman
@seyeongkim, after reviewing this bug and looking at some of the data as
well as maas and pexpect code, it seems to me like this isn't related to
readline, I think the problem is maas is using pexpect without disabling
echo.

Can you try changing the VirshSSH() class constructor so that it passes
echo=False to its superclass constructor, and see if that helps?

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1921658

Title:
  Can't compose kvm host with lvm storage on maas 2.8.4

Status in MAAS:
  Triaged
Status in readline package in Ubuntu:
  Fix Released
Status in readline source package in Bionic:
  Incomplete

Bug description:
  [Impact]

  I can't compose a KVM host on MAAS 2.8.4 (Bionic).

  I upgraded Twisted and related components with pip, but the symptom is
  the same.

  MaaS 2.9.x in Focal works fine.

  in 2.8.x, pexpect virsh vol-path should return [2] but returns [3]

  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)

  sometimes it fails in

  def get_volume_path(self, pool, volume):
  """Return the path to the file from `pool` and `volume`."""
  output = self.run(["vol-path", volume, "--pool", pool])
  return output.strip()

  sometimes fails in

  def get_machine_xml(self, machine):
  # Check if we have a cached version of the XML.
  # This is a short-lived object, so we don't need to worry about
  # expiring objects in the cache.
  if machine in self.xml:
  return self.xml[machine]

  # Grab the XML from virsh if we don't have it already.
  output = self.run(["dumpxml", machine]).strip()
  if output.startswith("error:"):
  maaslog.error("%s: Failed to get XML for machine", machine)
  return None

  # Cache the XML, since we'll need it later to reconfigure the VM.
  self.xml[machine] = output
  return output

  I assume the run function has an issue.

  Running virsh vol-path from the command line, or equivalent simple
  pexpect Python code, works fine.

  Any advice on this issue?

  Thanks.

  [Test Plan]

  0) deploy Bionic and MAAS 2.8

  1) Create file to be used as loopback device

  sudo dd if=/dev/zero of=lvm bs=16000 count=1M

  2) sudo losetup /dev/loop39 lvm

  3) sudo pvcreate /dev/loop39

  4) sudo vgcreate maas_data_vg /dev/loop39

  5) Save the below XML as maas_guest_lvm_vg.xml:

  <pool type='logical'>
    <name>maas_guest_lvm_vg</name>
    <source>
      <name>maas_data_vg</name>
    </source>
    <target>
      <path>/dev/maas_data_vg</path>
    </target>
  </pool>

  6) virsh pool-create maas_guest_lvm_vg.xml

  7) Add KVM host in MaaS

  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg

  9) GUI will fail with:

  Pod unable to compose machine: Unable to compose machine because:
  Failed talking to pod: Start tag expected, '<' not found, line 1,
  column 1 (, line 1)

  [Where problems could occur]

  This patch is a small piece of a larger commit.
  I tested it by building a test package with the patch applied. However, since
  the change is in an underlying library (libreadline), it could affect any
  application that uses libreadline; for example, applications that run
  commands programmatically could be affected.

  
  [Other Info]

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1921658/+subscriptions




[Sts-sponsors] [Bug 1771740] Re: Expose link offload options

2021-10-02 Thread Dan Streetman
@nicolasbock before sponsoring, can you cover a few of the steps in the
SRU process:

1) The SRU template in the description isn't actually filled in, I think
you simply copy-n-pasted the actual template text in...it does need to
be actually filled out

2) This is marked as affecting f/h/i, but you only included a debdiff
for focal; are h and i already patched or not?

3) You mentioned in chat that the netplan maintainer suggested you just
SRU the changes as the next scheduled release for netplan is not for a
while; I assume it was @slyon you talked to? I added him in this bug as
a fyi; @slyon if you have any concerns about SRUing this before the next
regular netplan release please let us know.

Thanks!

** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1771740

Title:
  Expose link offload options

Status in netplan:
  Fix Committed
Status in netplan.io package in Ubuntu:
  New
Status in netplan.io source package in Focal:
  New
Status in netplan.io source package in Hirsute:
  New
Status in netplan.io source package in Impish:
  New

Bug description:
  [Impact]

   * An explanation of the effects of the bug on users and

   * justification for backporting the fix to the stable release.

   * In addition, it is helpful, but not required, to include an
 explanation of how the upload fixes this bug.

  [Test Plan]

   * detailed instructions how to reproduce the bug

   * these should allow someone who is not familiar with the affected
 package to reproduce the bug and verify that the updated package fixes
 the problem.

   * if other testing is appropriate to perform before landing this update,
 this should also be described here.

  [Where problems could occur]

   * Think about what the upload changes in the software. Imagine the change is
 wrong or breaks something else: how would this show up?

   * It is assumed that any SRU candidate patch is well-tested before
 upload and has a low overall risk of regression, but it's important
 to make the effort to think about what ''could'' happen in the
 event of a regression.

   * This must '''never''' be "None" or "Low", or entirely an argument as to why
 your upload is low risk.

   * This both shows the SRU team that the risks have been considered,
 and provides guidance to testers in regression-testing the SRU.

  [Other Info]
   
   * Anything else you think is useful to include
   * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
   * and address these questions in advance

  [Original Description]

  https://www.freedesktop.org/software/systemd/man/systemd.link.html has
  a number of [Link] options which I need to use for a flaky network
  card (TCPSegmentationOffload, TCP6SegmentationOffload,
  GenericSegmentationOffload, GenericReceiveOffload,
  LargeReceiveOffload) which are not exposed via netplan.

To manage notifications about this bug go to:
https://bugs.launchpad.net/netplan/+bug/1771740/+subscriptions




[Sts-sponsors] [Bug 1940141] Re: OpenSSL servers can send a non-empty status_request in a CertificateRequest

2021-10-01 Thread Dan Streetman
for later reference, i'd discussed this with nick and asked him to check
if the 'status_request' reply contained any kind of valid data in the
specific cases where this patch will disable it; my concern is if there
is valid data in it, it's possible there are applications out there that
might currently expect and/or use it, even if it's against the RFC,
which might result in a regression after this patch. However, if the
reply is empty or just has garbage, it's unlikely that any application
is using it for anything currently, so there would be less chance of
causing a regression.

** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1940141

Title:
  OpenSSL servers can send a non-empty status_request in a
  CertificateRequest

Status in openssl package in Ubuntu:
  Fix Released
Status in openssl source package in Bionic:
  New

Bug description:
  [Impact]

  openssl does not conform to RFC8446, Sec. 4.4.2.1., by sending a
  CertificateRequest message to the client with a non-empty
  status_request extension.

  This issue was fixed in openssl-1.1.1d and is included in Focal
  onward.

  Upstream issue is tracked at https://github.com/openssl/openssl/issues/9767
  Upstream patch review at https://github.com/openssl/openssl/pull/9780

  The issue leads to various client failures with TLS 1.3 as described
  in, e.g.

  https://github.com/golang/go/issues/35722
  https://github.com/golang/go/issues/34040

  [Test Plan]

  The issue can be reproduced by building with `enable-ssl-trace`
  and then running `s_server` like this:

  ```
  openssl s_server -key key.pem -cert cert.pem -status_file 
test/recipes/ocsp-response.der -Verify 5
  ```

  And running `s_client` like this:

  ```
  openssl s_client -status -trace -cert cert.pem -key key.pem
  ```

  The output shows a `status_request` extension in the
  `CertificateRequest` as follows:

  Received Record
  Header:
    Version = TLS 1.2 (0x303)
    Content Type = ApplicationData (23)
    Length = 1591
    Inner Content Type = Handshake (22)
  CertificateRequest, Length=1570
    request_context (len=0):
    extensions, length = 1567
  extension_type=status_request(5), length=1521
     - 01 00 05 ed 30 82 05 e9-0a 01 00 a0 82 05 e2   
0..
    000f - 30 82 05 de 06 09 2b 06-01 05 05 07 30 01 01   
0.+.0..
    001e - 04 82 05 cf 30 82 05 cb-30 82 01 1a a1 81 86   
0...0..
    002d - 30 81 83 31 0b 30 09 06-03 55 04 06 13 02 47   
0..1.0...UG
  ...more lines omitted...

  If the `status_request` extension is present in a
  `CertificateRequest` then it must be empty according to RFC8446,
  Sec. 4.4.2.1.

  [Where problems could occur]

  The patch disables the `status_request` extension inside a
  `CertificateRequest`. Applications expecting the incorrect,
  non-empty reply for the `status_request` extension will break
  with this patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1940141/+subscriptions




[Sts-sponsors] [Bug 1928508] Re: Performance regression on memcpy() calls for AMD Zen

2021-10-01 Thread Dan Streetman
I'm unsubscribing sts-sponsors for now, please feel free to resubscribe
once this is ready for review/sponsoring to focal.

Note that my understanding on this is that due to the added complexity
of the release and then revert of glibc in focal, due to bug 1914044,
any further upgrades to glibc (like this one) might have to take
additional care, though I have not looked into the details of that bug.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1928508

Title:
  Performance regression on memcpy() calls for AMD Zen

Status in glibc package in Ubuntu:
  Fix Released
Status in glibc source package in Focal:
  In Progress
Status in glibc source package in Groovy:
  Won't Fix

Bug description:
  [Impact]
  On AMD Zen systems, memcpy() calls see a heavy performance regression in 
Focal and Groovy, due to the way __x86_non_temporal_threshold is calculated.

  Before 'glibc-2.33~455', cache values were calculated taking into
  consideration the number of hardware threads in the CPU. On AMD Ryzen
  and EPYC systems, this can be counter-productive if the number of
  threads is high enough for the last-level caches to "overrun" each
  other and cause cache line flushes. The solution is to reduce the
  allocated size for these non_temporal stores, removing the number of
  threads from the equation.

  [Test Plan]
  Attached to this bug is a short C program that exercises memcpy() calls in 
buffers of variable length. This has been obtained from a similar bug report 
for Red Hat, and is publicly available at [0].
  This test program was compiled with gcc 10.2.0, using the following flags:
  $ gcc -mtune=generic -march=x86-64 -g -O3 test_memcpy.c -o test_memcpy64

  Tests were performed with the following criteria:
  - use 32Mb buffers ("./test_memcpy64 32")
  - benchmark with the hyperfine tool [1], as it calculates relevant statistics 
automatically
  - benchmark with at least 10 runs in the same environment, to minimize 
variance
  - measure on AMD Zen (3700X) and on Intel Xeon (E5-2683), to ensure we don't 
penalize one x86 vendor in favor of the other

  Below is a comparison between two Focal containers, leveraging LXD to
  make use of different libc versions on the same host:

  $ hyperfine -n libc-2.31-0ubuntu9.2 'lxc exec focal ./test_memcpy64 32' -n 
libc-patched 'lxc exec focal-patched ./test_memcpy64 32'
  Benchmark #1: libc-2.31-0ubuntu9.2
    Time (mean ± σ):  2.723 s ±  0.013 s[User: 4.7 ms, System: 5.1 ms]
    Range (min … max):2.693 s …  2.735 s10 runs

  Benchmark #2: libc-patched
    Time (mean ± σ):  1.522 s ±  0.004 s[User: 3.9 ms, System: 5.6 ms]
    Range (min … max):1.515 s …  1.528 s10 runs

  Summary
    'libc-patched' ran
  1.79 ± 0.01 times faster than 'libc-2.31-0ubuntu9.2'
  $ head -n5 /proc/cpuinfo
  processor   : 0
  vendor_id   : AuthenticAMD
  cpu family  : 23
  model   : 113
  model name  : AMD Ryzen 7 3700X 8-Core Processor

  [0] https://bugzilla.redhat.com/show_bug.cgi?id=1880670
  [1] https://github.com/sharkdp/hyperfine/

  [Where problems could occur]
  Since we're messing with the cacheinfo for x86 in general, we need to be 
careful not to introduce further performance regressions on memory-heavy 
workloads. Even though initial results might reveal improvement on AMD Ryzen 
and EPYC hardware, we should also validate different configurations (e.g. 
Intel, different buffer sizes, etc) to make sure we won't hurt performance in 
other non-AMD environments.

  [Other Info]
  This issue has been fixed by the following upstream commit:
  - d3c57027470b (Reversing calculation of __x86_shared_non_temporal_threshold)

  $ git describe --contains d3c57027470b
  glibc-2.33~455
  $ rmadison glibc -s focal,focal-updates,groovy,groovy-proposed,hirsute
   glibc | 2.31-0ubuntu9   | focal   | source
   glibc | 2.31-0ubuntu9.2 | focal-updates   | source
   glibc | 2.32-0ubuntu3   | groovy  | source
   glibc | 2.32-0ubuntu3.2 | groovy-proposed | source
   glibc | 2.33-0ubuntu5   | hirsute | source

  Affected releases include Ubuntu Focal and Groovy. Bionic is not
  affected, and releases starting with Hirsute already ship the upstream
  patch to fix this regression.

  glibc exports this specific variable as a tunable, so we could also tweak it 
with the GLIBC_TUNABLES env var:
  $ hyperfine -n clean-env 'lxc exec focal env ./test_memcpy64 32' -n tunables 
'lxc exec focal env 
GLIBC_TUNABLES=glibc.cpu.x86_non_temporal_threshold=1024*1024*3*4 
./test_memcpy64 32'
  Benchmark #1: clean-env
Time (mean ± σ):  2.529 s ±  0.061 s[User: 6.0 ms, System: 4.7 ms]
Range (min … max):2.457 s …  2.615 s10 runs

  Benchmark #2: tunables
Time (mean ± σ):  1.427 s ±  0.030 s[User: 6.5 ms, System: 3.8 ms]
Range (min … max):1.402 s …  1.482 s10 runs

  Summary
'tunables' ran
  

[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-09-27 Thread Dan Streetman
** Tags removed: sts-sponsor-ddstreet
** Tags added: sts-sponsor-dgadomski

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1921658

Title:
  Can't compose kvm host with lvm storage on maas 2.8.4

Status in MAAS:
  Triaged
Status in readline package in Ubuntu:
  Fix Released
Status in readline source package in Bionic:
  Incomplete

Bug description:
  [Impact]

  I can't compose a KVM host on MAAS 2.8.4 (Bionic).

  I upgraded Twisted and related components with pip, but the symptom is
  the same.

  MaaS 2.9.x in Focal works fine.

  in 2.8.x, pexpect virsh vol-path should return [2] but returns [3]

  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)

  sometimes it fails in

  def get_volume_path(self, pool, volume):
  """Return the path to the file from `pool` and `volume`."""
  output = self.run(["vol-path", volume, "--pool", pool])
  return output.strip()

  sometimes fails in

  def get_machine_xml(self, machine):
  # Check if we have a cached version of the XML.
  # This is a short-lived object, so we don't need to worry about
  # expiring objects in the cache.
  if machine in self.xml:
  return self.xml[machine]

  # Grab the XML from virsh if we don't have it already.
  output = self.run(["dumpxml", machine]).strip()
  if output.startswith("error:"):
  maaslog.error("%s: Failed to get XML for machine", machine)
  return None

  # Cache the XML, since we'll need it later to reconfigure the VM.
  self.xml[machine] = output
  return output

  I assume the run function has an issue.

  Running virsh vol-path from the command line, or equivalent simple
  pexpect Python code, works fine.

  Any advice on this issue?

  Thanks.

  [Test Plan]

  0) deploy Bionic and MAAS 2.8

  1) Create file to be used as loopback device

  sudo dd if=/dev/zero of=lvm bs=16000 count=1M

  2) sudo losetup /dev/loop39 lvm

  3) sudo pvcreate /dev/loop39

  4) sudo vgcreate maas_data_vg /dev/loop39

  5) Save the below XML as maas_guest_lvm_vg.xml:

  <pool type='logical'>
    <name>maas_guest_lvm_vg</name>
    <source>
      <name>maas_data_vg</name>
    </source>
    <target>
      <path>/dev/maas_data_vg</path>
    </target>
  </pool>

  6) virsh pool-create maas_guest_lvm_vg.xml

  7) Add KVM host in MaaS

  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg

  9) GUI will fail with:

  Pod unable to compose machine: Unable to compose machine because:
  Failed talking to pod: Start tag expected, '<' not found, line 1,
  column 1 (, line 1)

  [Where problems could occur]

  This patch is a small piece of a larger commit.
  I tested it by building a test package with the patch applied. However, since
  the change is in an underlying library (libreadline), it could affect any
  application that uses libreadline; for example, applications that run
  commands programmatically could be affected.

  
  [Other Info]

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1921658/+subscriptions




[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-09-09 Thread Dan Streetman
sorry, @dgadomski has been out on vacation, i'll pick up the sponsoring
work for this while he's out.

@seyeongkim, can you please clean up the sru template information? @racb
is correct that the current info in the sru template doesn't explain
anything about what the actual readline bug is or how it fixes anything.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1921658

Title:
  Can't compose kvm host with lvm storage on maas 2.8.4

Status in MAAS:
  Triaged
Status in readline package in Ubuntu:
  Fix Released
Status in readline source package in Bionic:
  Incomplete

Bug description:
  [Impact]

  I can't compose a KVM host on MAAS 2.8.4 (Bionic).

  I upgraded Twisted and related components with pip, but the symptom is
  the same.

  MaaS 2.9.x in Focal works fine.

  in 2.8.x, pexpect virsh vol-path should return [2] but returns [3]

  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)

  sometimes it fails in

  def get_volume_path(self, pool, volume):
  """Return the path to the file from `pool` and `volume`."""
  output = self.run(["vol-path", volume, "--pool", pool])
  return output.strip()

  sometimes fails in

  def get_machine_xml(self, machine):
  # Check if we have a cached version of the XML.
  # This is a short-lived object, so we don't need to worry about
  # expiring objects in the cache.
  if machine in self.xml:
  return self.xml[machine]

  # Grab the XML from virsh if we don't have it already.
  output = self.run(["dumpxml", machine]).strip()
  if output.startswith("error:"):
  maaslog.error("%s: Failed to get XML for machine", machine)
  return None

  # Cache the XML, since we'll need it later to reconfigure the VM.
  self.xml[machine] = output
  return output

  I assume the run function has an issue.

  Running virsh vol-path from the command line, or equivalent simple
  pexpect Python code, works fine.

  Any advice on this issue?

  Thanks.

  [Test Plan]

  0) deploy Bionic and MAAS 2.8

  1) Create file to be used as loopback device

  sudo dd if=/dev/zero of=lvm bs=16000 count=1M

  2) sudo losetup /dev/loop39 lvm

  3) sudo pvcreate /dev/loop39

  4) sudo vgcreate maas_data_vg /dev/loop39

  5) Save the below XML as maas_guest_lvm_vg.xml:

  <pool type='logical'>
    <name>maas_guest_lvm_vg</name>
    <source>
      <name>maas_data_vg</name>
    </source>
    <target>
      <path>/dev/maas_data_vg</path>
    </target>
  </pool>

  6) virsh pool-create maas_guest_lvm_vg.xml

  7) Add KVM host in MaaS

  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg

  9) GUI will fail with:

  Pod unable to compose machine: Unable to compose machine because:
  Failed talking to pod: Start tag expected, '<' not found, line 1,
  column 1 (, line 1)

  [Where problems could occur]

  This patch is a small piece of a larger commit.
  I tested it by building a test package with the patch applied. However, since
  the change is in an underlying library (libreadline), it could affect any
  application that uses libreadline; for example, applications that run
  commands programmatically could be affected.

  
  [Other Info]

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1921658/+subscriptions




[Sts-sponsors] [Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-08-19 Thread Dan Streetman
@seyeongkim could you add the SRU template to the bug description?

https://wiki.ubuntu.com/StableReleaseUpdates#SRU_Bug_Template

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1921658

Title:
  Can't compose kvm host with lvm storage on maas 2.8.4

Status in MAAS:
  Triaged
Status in readline package in Ubuntu:
  Fix Released
Status in readline source package in Bionic:
  In Progress

Bug description:
  I can't compose a KVM host on MAAS 2.8.4 (Bionic).

  I upgraded Twisted and related components with pip, but the symptom is
  the same.

  MaaS 2.9.x in Focal works fine.

  in 2.8.x, pexpect virsh vol-path should return [2] but returns [3]

  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)

  sometimes it fails in

  def get_volume_path(self, pool, volume):
  """Return the path to the file from `pool` and `volume`."""
  output = self.run(["vol-path", volume, "--pool", pool])
  return output.strip()

  sometimes fails in

  def get_machine_xml(self, machine):
  # Check if we have a cached version of the XML.
  # This is a short-lived object, so we don't need to worry about
  # expiring objects in the cache.
  if machine in self.xml:
  return self.xml[machine]

  # Grab the XML from virsh if we don't have it already.
  output = self.run(["dumpxml", machine]).strip()
  if output.startswith("error:"):
  maaslog.error("%s: Failed to get XML for machine", machine)
  return None

  # Cache the XML, since we'll need it later to reconfigure the VM.
  self.xml[machine] = output
  return output

  I assume the run function has an issue.

  Running virsh vol-path from the command line, or equivalent simple
  pexpect Python code, works fine.

  Any advice on this issue?

  Thanks.

  Reproducer is below.[1]

  [1]

  1) Create file to be used as loopback device

  sudo dd if=/dev/zero of=lvm bs=16000 count=1M

  2) sudo losetup /dev/loop39 lvm

  3) sudo pvcreate /dev/loop39

  4) sudo vgcreate maas_data_vg /dev/loop39

  5) Save the below XML as maas_guest_lvm_vg.xml:

  <pool type='logical'>
    <name>maas_guest_lvm_vg</name>
    <source>
      <name>maas_data_vg</name>
    </source>
    <target>
      <path>/dev/maas_data_vg</path>
    </target>
  </pool>

  6) virsh pool-create maas_guest_lvm_vg.xml

  7) Add KVM host in MaaS

  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg

  9) GUI will fail with:

  Pod unable to compose machine: Unable to compose machine because:
  Failed talking to pod: Start tag expected, '<' not found, line 1,
  column 1 (, line 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1921658/+subscriptions




[Sts-sponsors] [Bug 1939853] Re: mysqli: Using a cursor with get_result() and prepared statements causes a segmentation fault

2021-08-18 Thread Dan Streetman
uploaded to b/f queues, thanks!

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1939853

Title:
  mysqli: Using a cursor with get_result() and prepared statements
  causes a segmentation fault

Status in php7.2 package in Ubuntu:
  Fix Released
Status in php7.4 package in Ubuntu:
  Fix Released
Status in php7.2 source package in Bionic:
  In Progress
Status in php7.4 source package in Focal:
  In Progress

Bug description:
  [Impact]

  If you attempt to use a prepared statement with the mysqli database
  driver to create a cursor, then execute() it and fetch rows with
  get_result() and fetch_assoc(), PHP will hit a segmentation fault
  on every query and terminate.

  This is because cursors aren't actually implemented for prepared
  statements in the mysqli driver in PHP 7.2 and 7.4, the versions in
  Bionic and Focal. When we try to use a cursor, we segfault on a
  type mismatch when the cursor calls fetch_row().

  The fix comes in two forms. The first commit fixes the segfault and
  makes php return an error to the user, and the second commit
  implements support for cursors on prepared statements. When combined,
  these commits fix the issue.

  A workaround is to not use prepared statements, and instead use
  query() directly.

  [Test case]

  Install PHP and mysql-client:

  Focal:
  $ sudo apt install php7.4 php7.4-mysql mysql-client

  Bionic:
  $ sudo apt install php7.2 php7.2-mysql mysql-client

  Next, install and configure mysql 5.7:

  $ sudo apt install libncurses5 libaio1 libmecab2
  $ wget 
https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.35-linux-glibc2.12-x86_64.tar.gz

  $ groupadd mysql
  $ useradd -r -g mysql -s /bin/false mysql
  $ cd /usr/local
  $ tar zxvf /home/ubuntu/mysql-5.7.35-linux-glibc2.12-x86_64.tar.gz
  $ ln -s mysql-5.7.35-linux-glibc2.12-x86_64 mysql
  $ cd mysql
  $ mkdir mysql-files
  $ chown mysql:mysql mysql-files
  $ chmod 750 mysql-files
  $ bin/mysqld --initialize --user=mysql
  $ bin/mysql_ssl_rsa_setup
  $ bin/mysqld_safe --user=mysql &

  # [Note] A temporary password is generated for root@localhost:
  *rjfy#_w(8kM

  Access the DBMS and add users, databases and tables, along with some
  data:

  $ mysql -h 127.0.0.1 -P 3306 -u root -p

  ALTER USER 'root'@'localhost' IDENTIFIED BY 'ubuntu';

  CREATE DATABASE ubuntu_releases;
  use ubuntu_releases;
  CREATE TABLE ubuntu_releases (year INT, month INT, name VARCHAR(20)) ;
  INSERT INTO ubuntu_releases VALUES (21, 04, 'hirsute');
  INSERT INTO ubuntu_releases VALUES (20, 10, 'groovy');
  INSERT INTO ubuntu_releases VALUES (20, 04, 'focal');
  CREATE USER 'ubuntu' IDENTIFIED BY 'ubuntu';
  GRANT ALL PRIVILEGES ON ubuntu_releases.* TO 'ubuntu';
  exit

  Save the following script to testcase.php:

  <?php
   $dbConn = new mysqli("127.0.0.1", "ubuntu", "ubuntu", "ubuntu_releases", 3306);
   $SQL = "SELECT * FROM ubuntu_releases";
   $stmt = $dbConn->prepare($SQL);
   $stmt->attr_set(MYSQLI_STMT_ATTR_CURSOR_TYPE, MYSQLI_CURSOR_TYPE_READ_ONLY);
   $stmt->execute();
   if ($stmt) {
    $res = $stmt->get_result();
    while($row = $res->fetch_assoc()) {
     echo json_encode($row) . "\n";
    }
   }
   $dbConn->close();
  ?>

  Run the php script:

  $ php testcase.php
  Segmentation fault (core dumped)

  A ppa with test packages is available below:

  https://launchpad.net/~mruffell/+archive/ubuntu/sf315485-test

  When you install the test packages, you should see:

  $ php testcase.php
  {"year":21,"month":4,"name":"hirsute"}
  {"year":20,"month":10,"name":"groovy"}
  {"year":20,"month":4,"name":"focal"}

  [Where problems can occur]

  We are changing the behaviour of how cursors work in the mysqli
  database driver backend. Luckily, we are only changing how they work
  for prepared statements in mysqli, and it doesn't affect any other
  database driver backend, or regular queries with mysqli.

  Since attempting to use a cursor with prepared statements on mysqli
  backend results in a segfault due to it not being implemented, there
  won't be any users that are using cursors with prepared statements,
  since their applications would crash. Adding support likely won't
  break any existing users, as attempting to use such features before
  would result in a hard crash, versus making existing code start
  working or behave differently.

  There is still risk these changes could introduce a regression, and it
  would be restricted to users using the mysqli database driver, which I
  imagine are a significant amount of the userbase, due to the
  popularity of mysql and mariadb. If a regression were to occur, users
  might need to change their database driver interface code, or in worst
  case, change to direct queries from prepared statements while a fix is
  created.

  [Other Info]

  The following commit fixes the segmentation fault and changes it to an
  not implemented error:

  commit b5481defe64c991d0e4307372d69c0ea3cd83378
  Author: Dharman 
  Date:   Thu Sep 17 12:35:26 2020 +0100
  Subject: Fix bug #72413: Segfault with 

[Sts-sponsors] [Bug 1939853] Re: mysqli: Using a cursor with get_result() and prepared statements causes a segmentation fault

2021-08-18 Thread Dan Streetman
** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1939853

Title:
  mysqli: Using a cursor with get_result() and prepared statements
  causes a segmentation fault

Status in php7.2 package in Ubuntu:
  Fix Released
Status in php7.4 package in Ubuntu:
  Fix Released
Status in php7.2 source package in Bionic:
  In Progress
Status in php7.4 source package in Focal:
  In Progress

Bug description:
  [Impact]

  If you attempt to use a prepared statement with the mysqli database
  driver to create a cursor, which you then execute() and fetch rows
  with get_result() and then fetch_assoc(), php will hit a segmentation
  fault on every query and terminate.

  This is because cursors aren't actually implemented for prepared
  statements in the mysqli driver in php 7.2 and 7.4, the versions in
  Bionic and Focal. When we try to use a cursor, we segfault on a type
  mismatch when the cursor calls fetch_row().

  The fix comes in two forms. The first commit fixes the segfault and
  makes php return an error to the user, and the second commit
  implements support for cursors on prepared statements. When combined,
  these commits fix the issue.

  A workaround is to not use prepared statements, and instead use
  query() directly.

  [Test case]

  Install PHP and mysql-client:

  Focal:
  $ sudo apt install php7.4 php7.4-mysql mysql-client

  Bionic:
  $ sudo apt install php7.2 php7.2-mysql mysql-client

  Next, install and configure mysql 5.7:

  $ sudo apt install libncurses5 libaio1 libmecab2
  $ wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.35-linux-glibc2.12-x86_64.tar.gz

  $ groupadd mysql
  $ useradd -r -g mysql -s /bin/false mysql
  $ cd /usr/local
  $ tar zxvf /home/ubuntu/mysql-5.7.35-linux-glibc2.12-x86_64.tar.gz
  $ ln -s mysql-5.7.35-linux-glibc2.12-x86_64 mysql
  $ cd mysql
  $ mkdir mysql-files
  $ chown mysql:mysql mysql-files
  $ chmod 750 mysql-files
  $ bin/mysqld --initialize --user=mysql
  $ bin/mysql_ssl_rsa_setup
  $ bin/mysqld_safe --user=mysql &

  # [Note] A temporary password is generated for root@localhost:
  *rjfy#_w(8kM

  Access the DBMS and add users, databases and tables, along with some
  data:

  $ mysql -h 127.0.0.1 -P 3306 -u root -p

  ALTER USER 'root'@'localhost' IDENTIFIED BY 'ubuntu';

  CREATE DATABASE ubuntu_releases;
  use ubuntu_releases;
  CREATE TABLE ubuntu_releases (year INT, month INT, name VARCHAR(20)) ;
  INSERT INTO ubuntu_releases VALUES (21, 04, 'hirsute');
  INSERT INTO ubuntu_releases VALUES (20, 10, 'groovy');
  INSERT INTO ubuntu_releases VALUES (20, 04, 'focal');
  CREATE USER 'ubuntu' IDENTIFIED BY 'ubuntu';
  GRANT ALL PRIVILEGES ON ubuntu_releases.* TO 'ubuntu';
  exit

  Save the following script to testcase.php:

  <?php
   // NOTE: the opening lines of this script were mangled in the archive;
   // the connection setup and query are reconstructed from the database,
   // user and table created in the steps above.
   $dbConn = new mysqli('127.0.0.1', 'ubuntu', 'ubuntu', 'ubuntu_releases');
   $SQL = "SELECT * FROM ubuntu_releases";
   $stmt = $dbConn->prepare($SQL);
   $stmt->attr_set(MYSQLI_STMT_ATTR_CURSOR_TYPE, MYSQLI_CURSOR_TYPE_READ_ONLY);
   $stmt->execute();
   if ($stmt) {
    $res = $stmt->get_result();
    while ($row = $res->fetch_assoc()) {
     echo json_encode($row) . "\n";
    }
   }
   $dbConn->close();
  ?>

  Run the php script:

  $ php testcase.php
  Segmentation fault (core dumped)

  A ppa with test packages is available below:

  https://launchpad.net/~mruffell/+archive/ubuntu/sf315485-test

  When you install the test packages, you should see:

  $ php testcase.php
  {"year":21,"month":4,"name":"hirsute"}
  {"year":20,"month":10,"name":"groovy"}
  {"year":20,"month":4,"name":"focal"}
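  As a quick sanity check (a hypothetical helper, not part of the SRU),
  the fixed output above can be parsed and compared against the rows
  inserted into the ubuntu_releases table earlier:

  ```python
  # Hypothetical check: parse the three JSON lines printed by the fixed
  # testcase.php and compare them to the rows inserted above.
  import json

  fixed_output = """\
  {"year":21,"month":4,"name":"hirsute"}
  {"year":20,"month":10,"name":"groovy"}
  {"year":20,"month":4,"name":"focal"}"""

  expected = [
      {"year": 21, "month": 4, "name": "hirsute"},
      {"year": 20, "month": 10, "name": "groovy"},
      {"year": 20, "month": 4, "name": "focal"},
  ]

  rows = [json.loads(line) for line in fixed_output.splitlines()]
  assert rows == expected
  print("output matches inserted rows")
  ```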

  [Where problems can occur]

  We are changing how cursors work in the mysqli database driver
  backend. Luckily, we are only changing how they work for prepared
  statements in mysqli, and the change doesn't affect any other
  database driver backend, or regular queries with mysqli.

  Since attempting to use a cursor with prepared statements on the
  mysqli backend results in a segfault, because the feature is not
  implemented, there won't be any users currently using cursors with
  prepared statements: their applications would crash. Adding support
  is unlikely to break any existing users, since attempting to use the
  feature before would result in a hard crash, rather than existing
  code starting to work or behaving differently.

  There is still a risk that these changes could introduce a
  regression, and it would be restricted to users of the mysqli
  database driver, which I imagine is a significant portion of the
  user base, given the popularity of mysql and mariadb. If a
  regression were to occur, users might need to change their database
  driver interface code, or in the worst case, switch from prepared
  statements back to direct queries while a fix is created.

  [Other Info]

  The following commit fixes the segmentation fault and changes it to a
  'not implemented' error:

  commit b5481defe64c991d0e4307372d69c0ea3cd83378
  Author: Dharman 
  Date:   Thu Sep 17 12:35:26 2020 +0100
  Subject: Fix bug #72413: Segfault with 

Re: [Sts-sponsors] Please review and sponsor LP#1930359 for glib2.0

2021-07-12 Thread Dan Streetman
On Mon, Jul 12, 2021 at 7:49 AM Dariusz Gadomski
 wrote:
>
> Hey Matthew,
> Despite not being called by name, I've decided to cut in, since I've been 
> following the case anyway.
>
> I'll review and sponsor it for you.

Thanks @dgadomski!

Also @mruffell FYI you can just subscribe 'sts-sponsors' to the bug,
we'll get an email when that is done, and for any changes to the bug.
Note that subscribing us is different than adding the sts-sponsor tag.

>
> Cheers,
> Dariusz
>
> On Mon, 12 Jul 2021 at 13:04, Matthew Ruffell  
> wrote:
>>
>> Hi Dan, Eric, Mauricio,
>>
>> Could you please review and sponsor LP #1930359 which contains a 
>> straightforward
>> fix for glib2.0.
>>
>> https://bugs.launchpad.net/ubuntu/+source/glib2.0/+bug/1930359
>>
>> The issue is straightforward to reproduce with valgrind. If you want
>> to reproduce
>> the customer issue of corrupted gschema.compiled causing gdm and gnome-shell 
>> to
>> fail to start, you can use the attached schema.tar file for the set of 
>> customer
>> schemas. Note - their gschema.compiled is already corrupted, just mv it and
>> reboot to see the damage.
>>
>> Thanks,
>> Matthew
>> --
>> Mailing list: https://launchpad.net/~sts-sponsors
>> Post to : sts-sponsors@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~sts-sponsors
>> More help   : https://help.launchpad.net/ListHelp
>
> --
> Mailing list: https://launchpad.net/~sts-sponsors
> Post to : sts-sponsors@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~sts-sponsors
> More help   : https://help.launchpad.net/ListHelp

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1933378] Re: Unable to build from source mongodb-server-core - focal

2021-07-08 Thread Dan Streetman
** No longer affects: requests (Ubuntu)

** No longer affects: requests (Ubuntu Impish)

** No longer affects: requests (Ubuntu Hirsute)

** No longer affects: requests (Ubuntu Groovy)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1933378

Title:
  Unable to build from source mongodb-server-core - focal

Status in mongodb package in Ubuntu:
  Confirmed
Status in mongodb source package in Focal:
  New
Status in mongodb source package in Groovy:
  New
Status in mongodb source package in Hirsute:
  New
Status in mongodb source package in Impish:
  Confirmed

Bug description:
  root@focal:~/mongodb-3.6.9+really3.6.8+90~g8e540c0b6d# apt build-dep .
  Note, using directory '.' to get the build dependencies
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Some packages could not be installed. This may mean that you have
  requested an impossible situation or if you are using the unstable
  distribution that some required packages have not yet been created
  or been moved out of Incoming.
  The following information may help to resolve the situation:

  The following packages have unmet dependencies:
   builddeps:. : Depends: python-requests but it is not installable
  E: Unable to correct problems, you have held broken packages.

  root@focal:~/mongodb-3.6.9+really3.6.8+90~g8e540c0b6d# apt-get install 
python-requests
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  E: Unable to locate package python-requests

  --
  Need to drop support for Python2 in mongodb-server-core focal [0] 
  [0] https://launchpad.net/ubuntu/+source/requests/2.22.0-2ubuntu1

  installed version: 1:3.6.9+really3.6.8+90~g8e540c0b6d-0ubuntu5
  series release: focal 
  container: LXD

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1933378/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-04 Thread Dan Streetman
** Description changed:

  [Impact]
  
- somehow port is unbounded, then neutron-openvswitch-agent raise 
+ somehow port is unbounded, then neutron-openvswitch-agent raise
  OVSFWTagNotFound, then creating new instance will be failed.
  
  [Test Plan]
  1. deploy bionic openstack env
  2. launch one instance
  3. modify neutron-openvswitch-agent code inside nova-compute
  - https://pastebin.ubuntu.com/p/nBRKkXmjx8/
  4. restart neutron-openvswitch-agent
  5. check if there are a lot of cannot get tag for port ..
  6. launch another instance.
  7. It fails after vif_plugging_timeout, with "virtual interface creation 
failed"
  
  [Where problems could occur]
- You need to restart service. and as patch, Basically it will be ok as it adds 
only exceptions. but getting or creating vif port part can have issue.
+ issues could occur getting or creating vif port
  
  [Others]
  
  Original description.
  
  neutron-openvswitch-agent meets unbound port:
  
  2019-10-17 11:32:21.868 135 WARNING
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-
  aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Device
  ef34215f-e099-4fd0-935f-c9a42951d166 not defined on plugin or binding
  failed
  
  Later when applying firewall rules:
  
  2019-10-17 11:32:21.901 135 INFO neutron.agent.securitygroups_rpc 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Preparing filters for 
devices {'ef34215f-e099-4fd0-935f-c9a42951d166', 
'e9c97cf0-1a5e-4d77-b57b-0ba474d12e29', 'fff1bb24-6423-4486-87c4-1fe17c552cca', 
'2e20f9ee-bcb5-445c-b31f-d70d276d45c9', '03a60047-cb07-42a4-8b49-619d5982a9bd', 
'a452cea2-deaf-4411-bbae-ce83870cbad4', '79b03e5c-9be0-4808-9784-cb4878c3dbd5', 
'9b971e75-3c1b-463d-88cf-3f298105fa6e'}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Error while processing VIF 
ports: neutron.agent.linux.openvswitch_firewall.exceptions.OVSFWTagNotFound: 
Cannot get tag for port o-hm0 from its other_config: {}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 530, in get_or_create_ofport
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent of_port = 
self.sg_port_map.ports[port_id]
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 
'ef34215f-e099-4fd0-935f-c9a42951d166'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 81, in get_tag_from_other_config
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return 
int(other_config['tag'])
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 'tag'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2280, in rpc_loop
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
provisioning_needed)
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/osprofiler/profiler.py", line 
160, in wrapper
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
f(*args, **kwargs)
  2019-10-17 
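The failure chain in the traceback can be sketched in isolation (the
class and function names are taken from the traceback above; the
surrounding logic is a simplified assumption, not the actual neutron
code):

```python
# Simplified sketch of the crash path: an unbound port has an empty
# other_config dict, so the 'tag' lookup raises KeyError, which is
# surfaced as OVSFWTagNotFound.
class OVSFWTagNotFound(Exception):
    pass

def get_tag_from_other_config(other_config, port_name):
    try:
        return int(other_config['tag'])
    except (KeyError, TypeError, ValueError):
        raise OVSFWTagNotFound(
            'Cannot get tag for port %s from its other_config: %r'
            % (port_name, other_config))

# An unbound port like o-hm0 reproduces the error:
try:
    get_tag_from_other_config({}, 'o-hm0')
except OVSFWTagNotFound as e:
    print(e)
```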

Re: [Sts-sponsors] Please Review LP#1926254 openssl x509 Certificate Validation SRU

2021-04-30 Thread Dan Streetman
On Thu, Apr 29, 2021 at 8:13 PM Matthew Ruffell
 wrote:
>
> Hi Security Team,
>
> VISA opened a case, SF308725 - "openssl unable to process the certificate on
> Ubuntu 20.0" [1], about a minor regression in openssl 1.1.1f that affects
> both Focal and Groovy.
>
> [1] 
> https://canonical.lightning.force.com/lightning/r/Case/5004K05pGePQAU/view
>
> A commit was merged in 1.1.1f which disallows certificates which set
> "basicConstraints=CA:FALSE,pathlen:0" as it violates the RFC for ssl certs, 
> but
> this is a common configuration in certificates in the wild, particularly self
> signed certificates.
>
> This was reported upstream and fixed in 1.1.1g, to relax this particular
> scenario only, to allow it to be accepted as a valid certificate.
>
> More information and a full reproducer is available on the Launchpad bug,
> LP #1926254 - "x509 Certificate verification fails when
> basicConstraints=CA:FALSE,pathlen:0 on self-signed leaf certs" [2].
>
> [2] https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1926254
>
> Due to the nature of the package, can you please review the launchpad bug and
> debdiffs I have attached to the launchpad bug, and if everything is okay, can
> you write an acknowledgement and approval to a comment on the launchpad bug.
>
> After that I will seek sponsorship to get this submitted for SRU.
>
> I am thinking -updates is okay, no need for -security.

I added ubuntu-security to the bug also, and I'm happy to upload if
there are no objections from security team

>
> Thanks,
> Matthew
>
> --
> Mailing list: https://launchpad.net/~sts-sponsors
> Post to : sts-sponsors@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~sts-sponsors
> More help   : https://help.launchpad.net/ListHelp

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1820083] Re: TLS params not set for session

2021-03-09 Thread Dan Streetman
** Description changed:

  [Impact]
  
  A connection session is opened, but the TLS parameters (timeout, ca,
  cert and key) are not actually set for the session.  This prevents use
  of TLS for the etcd3gw package.
  
  [Test Plan]
  
- # Create self signed certs
+ # Create self signed certs, using the default for all prompts
  
- openssl req -x509 -out localhost.crt -keyout localhost.key -newkey rsa:4096 
-nodes -sha256 -out localhost.csr
- *make sure the key has an empty password
+ $ openssl req -addext "subjectAltName = DNS:localhost" -x509 -keyout
+ localhost.key -newkey rsa:4096 -nodes -sha256 -out localhost.crt
  
- #download binaries & launch etcd locally with TLS enabled
+ # install 'etcd' package, stop the default server, and spin up ectd
+ server
  
- wget https://github.com/etcd-
- io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
+ $ sudo apt install etcd
+ $ sudo systemctl stop etcd
  
- tar -zxvf etcd-v3.3.14-linux-amd64.tar.gz
+ $ etcd --name test --data-dir test --cert-file=localhost.crt --key-
+ file=localhost.key --advertise-client-urls=https://localhost:2379
+ --listen-client-urls=https://localhost:2379
  
- cd etcd-v3.3.14-linux-amd64/
- sudo cp etcd etcdctl /usr/bin/
+ # run test script
  
- # spin up ectd server
- etcd --name infra0 --data-dir infra0 --cert-file=localhost.crt 
--key-file=localhost.key --advertise-client-urls=https://127.0.0.1:2379 
--listen-client-urls=https://127.0.0.1:2379
- *note I named my directory infra0
+ $ cat test.py
+ #!/usr/bin/python3
  
- #test connection with health endpoint:
+ from etcd3gw import Etcd3Client
  
- curl --cacert localhost.crt --key localhost.key --cert localhost.crt
- https://127.0.0.1:2379/health
+ c = Etcd3Client(host="localhost", protocol="https", cert_key="localhost.key", 
cert_cert="localhost.crt", ca_cert="localhost.crt", timeout=10)
+ c.put('test', 'success!')
+ resp = c.get('test')
+ print(b''.join(resp).decode())
  
- #if successful, the etcd server is configured with https
- {"health": "true"}
- 
- Modify ~/python-etcd3gw-0.2.1/etcd3gw/tests/test_client.py
- to add this unit test.
- 
- def test_client_tls(self):
- client = Etcd3Client(host="127.0.0.1", protocol="https", 
ca_cert="/root/etcdserver.crt",
-  cert_key="/root/etcdserver.key",
-  cert_cert="/root/etcdserver.crt",
-  timeout=10)
- client.create("foo", value="bar")
- client.put("foo", "bar")
- resp = client.get("foo")
- print(resp)
- 
- # Run the newly added unit test
- 
- python3 -m unittest test_client.TestEtcd3Gateway.test_client_tls
- 
- We get an error in both the unit test and an error from the etcd server unit 
test error we are looking for:
- # error in etcd
- OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 
'certificate verify failed')] related etcd error: I | embed: rejected 
connection from "127.0.0.1:44244" (error "remote error: tls: bad certificate", 
ServerName "")
- 
- error in unit test
- 
- python3 -m unittest test_client.TestEtcd3Gateway.test_client_tls
- 
- E
- ==
- ERROR: test_client_tls (test_client.TestEtcd3Gateway)
- test_client.TestEtcd3Gateway.test_client_tls
- --
- testtools.testresult.real._StringException: Traceback (most recent call last):
-   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, 
in urlopen
- chunked=chunked)
-   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, 
in _make_request
- self._validate_conn(conn)
-   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, 
in _validate_conn
- conn.connect()
-   File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 340, in 
connect
- ssl_context=context)
-   File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 332, in 
ssl_wrap_socket
- return context.wrap_socket(sock, server_hostname=server_hostname)
-   File "/usr/lib/python3.6/ssl.py", line 407, in wrap_socket
- _context=self, _session=session)
-   File "/usr/lib/python3.6/ssl.py", line 817, in __init__
- self.do_handshake()
-   File "/usr/lib/python3.6/ssl.py", line 1077, in do_handshake
- self._sslobj.do_handshake()
-   File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
- self._sslobj.do_handshake()
- ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
(_ssl.c:852)
- 
- During handling of the above exception, another exception occurred:
- 
- Traceback (most recent call last):
-   File "/usr/lib/python3/dist-packages/requests/adapters.py", line 440, in 
send
- timeout=timeout
-   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 639, 
in urlopen
- _stacktrace=sys.exc_info()[2])
-   File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in 
increment
- raise MaxRetryError(_pool, url, error or 

[Sts-sponsors] [Bug 1911187] Re: scheduled reboot reboots immediately if dbus or logind is not available

2021-02-23 Thread Dan Streetman
** Description changed:

  [IMPACT]
  
  When, for whatever reason, logind or dbus is not available scheduled reboot 
reboots the machine immediately.
  From the sources it seems that this is intended :
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L318
  However, I report this as a bug since this is against the logic of a 
scheduled reboot; if someone schedules a reboot they want the system to reboot 
at the specified time not immediately.
  
  There has been a discussion upstream ( 
https://github.com/systemd/systemd/issues/17575 ) and
  a PR ( https://github.com/systemd/systemd/pull/18010 ).
  
  Upstream community is not willing to accept the patch but debian is.
  I open this bug to pull the patch into Ubuntu once it lands in debian.
  
  [TEST PLAN]
  
  The simpler reproducer is to disable dbus to imitate the real world
  case.
  
  # systemctl stop dbus.service
  # systemctl stop dbus.socket
  # shutdown +1140 -r "REBOOT!"
  Failed to set wall message, ignoring: Failed to activate service 
'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
  Failed to call ScheduleShutdown in logind, proceeding with immediate 
shutdown: Connection timed out
  Connection to groovy closed by remote host.
  Connection to groovy closed.
  
  [WHERE PROBLEM COULD OCCUR]
  
  This patch changes the behaviour of scheduled reboot in case logind or dbus 
has failed.
  Originally, if logind is not available (call to logind bus fails
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L319)
  it proceeds with immediate shutdown.
  This patch changes this behaviour and instead of shutting down it does 
nothing.
- The actual regression potential is a user asking for a reboot and not getting 
it.
- Other than that the changes in the code are very small and simple and 
unlikely to break anything.
+ The actual regression potential is a user asking for a reboot and not getting 
it, so the largest regression potential is any existing users (human or 
programmatic) that are requesting a scheduled shutdown but not checking the 
return value for error.
+ Any other regression would likely result in the system incorrectly not 
rebooted, or incorrectly scheduled for reboot.
  
  [OTHER]
  
  This is now fixed in H, currently affects B,G,F.
  
  Debian bug reports :
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=931235
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960042
  
  Upstream issue : https://github.com/systemd/systemd/issues/17575
  PR : https://github.com/systemd/systemd/pull/18010
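A minimal sketch (a hypothetical helper, not code from the patch) of the
calling pattern the updated regression note implies: with the patched
systemd, a logind/dbus failure makes the schedule request fail instead of
rebooting immediately, so callers must check the result.

```python
import subprocess

def schedule_reboot(minutes, shutdown_cmd='shutdown'):
    """Return True if the reboot was scheduled, False otherwise."""
    result = subprocess.run(
        [shutdown_cmd, '-r', '+%d' % minutes],
        capture_output=True, text=True)
    if result.returncode != 0:
        # With the patched systemd, a logind/dbus failure ends up here
        # instead of the machine rebooting immediately.
        print('reboot NOT scheduled:', result.stderr.strip())
        return False
    return True

# Demo with 'false' standing in for a failing shutdown command, so
# nothing is actually scheduled when running this sketch:
schedule_reboot(60, shutdown_cmd='false')
```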

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1911187

Title:
  scheduled reboot reboots immediately if dbus or logind is not
  available

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Focal:
  In Progress
Status in systemd source package in Groovy:
  In Progress

Bug description:
  [IMPACT]

  When, for whatever reason, logind or dbus is not available scheduled reboot 
reboots the machine immediately.
  From the sources it seems that this is intended :
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L318
  However, I report this as a bug since this is against the logic of a 
scheduled reboot; if someone schedules a reboot they want the system to reboot 
at the specified time not immediately.

  There has been a discussion upstream ( 
https://github.com/systemd/systemd/issues/17575 ) and
  a PR ( https://github.com/systemd/systemd/pull/18010 ).

  Upstream community is not willing to accept the patch but debian is.
  I open this bug to pull the patch into Ubuntu once it lands in debian.

  [TEST PLAN]

  The simpler reproducer is to disable dbus to imitate the real world
  case.

  # systemctl stop dbus.service
  # systemctl stop dbus.socket
  # shutdown +1140 -r "REBOOT!"
  Failed to set wall message, ignoring: Failed to activate service 
'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
  Failed to call ScheduleShutdown in logind, proceeding with immediate 
shutdown: Connection timed out
  Connection to groovy closed by remote host.
  Connection to groovy closed.

  [WHERE PROBLEM COULD OCCUR]

  This patch changes the behaviour of scheduled reboot in case logind or dbus 
has failed.
  Originally, if logind is not available (call to logind bus fails
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L319)
  it proceeds with immediate shutdown.
  This patch changes this behaviour and instead of shutting down it does 
nothing.
  The actual regression potential is a user asking for a reboot and not getting 
it, so the largest regression potential is any existing users (human or 
programmatic) that are requesting a scheduled shutdown but not checking the 
return 

[Sts-sponsors] [Bug 1911187] Re: scheduled reboot reboots immediately if dbus or logind is not available

2021-02-23 Thread Dan Streetman
minor comment, for systemd (and really all packages) I like to name the
patches with the lp bug number, so i changed your patch name to add the
lp1911187- prefix.

also another minor comment, as we'd discussed before I made a slight
change to the comment in the patch for clarification:

         if (arg_when > 0)
                 return logind_schedule_shutdown();

-        /* no delay, or logind is not at all available */
+        /* no delay */
         if (geteuid() != 0) {
                 if (arg_dry_run || arg_force > 0) {
                         (void) must_be_root();

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1911187

Title:
  scheduled reboot reboots immediately if dbus or logind is not
  available

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Focal:
  In Progress
Status in systemd source package in Groovy:
  In Progress

Bug description:
  [IMPACT]

  When, for whatever reason, logind or dbus is not available scheduled reboot 
reboots the machine immediately.
  From the sources it seems that this is intended :
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L318
  However, I report this as a bug since this is against the logic of a 
scheduled reboot; if someone schedules a reboot they want the system to reboot 
at the specified time not immediately.

  There has been a discussion upstream ( 
https://github.com/systemd/systemd/issues/17575 ) and
  a PR ( https://github.com/systemd/systemd/pull/18010 ).

  Upstream community is not willing to accept the patch but debian is.
  I open this bug to pull the patch into Ubuntu once it lands in debian.

  [TEST PLAN]

  The simpler reproducer is to disable dbus to imitate the real world
  case.

  # systemctl stop dbus.service
  # systemctl stop dbus.socket
  # shutdown +1140 -r "REBOOT!"
  Failed to set wall message, ignoring: Failed to activate service 
'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
  Failed to call ScheduleShutdown in logind, proceeding with immediate 
shutdown: Connection timed out
  Connection to groovy closed by remote host.
  Connection to groovy closed.

  [WHERE PROBLEM COULD OCCUR]

  This patch changes the behaviour of scheduled reboot in case logind or dbus 
has failed.
  Originally, if logind is not available (call to logind bus fails
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L319)
  it proceeds with immediate shutdown.
  This patch changes this behaviour and instead of shutting down it does 
nothing.
  The actual regression potential is a user asking for a reboot and not getting 
it.
  Other than that the changes in the code are very small and simple and 
unlikely to break anything.

  [OTHER]

  This is now fixed in H, currently affects B,G,F.

  Debian bug reports :
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=931235
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960042

  Upstream issue : https://github.com/systemd/systemd/issues/17575
  PR : https://github.com/systemd/systemd/pull/18010

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1911187/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1911187] Re: scheduled reboot reboots immediately if dbus or logind is not available

2021-02-23 Thread Dan Streetman
@joalif, one thing I noticed, that isn't important for this SRU, is that
the !ENABLE_LOGIND case still has a log message indicating shutdown will
happen immediately, i.e.:


int logind_schedule_shutdown(void) {
#if ENABLE_LOGIND
        ...stuff...
#else
        return log_error_errno(SYNTHETIC_ERRNO(ENOSYS),
                               "Cannot schedule shutdown without logind support, proceeding with immediate shutdown.");
#endif
}

however, since the caller has been changed to return an error instead of
rebooting immediately, maybe that message should be changed as well. I'd
actually suggest that both messages in this function are in the wrong
place, since they state what will happen next but rely on the caller to
actually do what the log states; the *calling* function should log an
appropriate message about what it's doing next instead.

Doesn't matter for this though since we do define ENABLE_LOGIND for our
builds, just a suggestion if you want to send a patch to debian :)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1911187

Title:
  scheduled reboot reboots immediately if dbus or logind is not
  available

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Focal:
  In Progress
Status in systemd source package in Groovy:
  In Progress

Bug description:
  [IMPACT]

  When, for whatever reason, logind or dbus is not available scheduled reboot 
reboots the machine immediately.
  From the sources it seems that this is intended:
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L318
  However, I report this as a bug since it goes against the logic of a scheduled 
reboot; if someone schedules a reboot, they want the system to reboot at the 
specified time, not immediately.

  There has been a discussion upstream ( 
https://github.com/systemd/systemd/issues/17575 ) and
  a PR ( https://github.com/systemd/systemd/pull/18010 ).

  The upstream community is not willing to accept the patch, but Debian is.
  I open this bug to pull the patch into Ubuntu once it lands in Debian.

  [TEST PLAN]

  The simplest reproducer is to disable dbus to imitate the real world
  case.

  # systemctl stop dbus.service
  # systemctl stop dbus.socket
  # shutdown +1140 -r "REBOOT!"
  Failed to set wall message, ignoring: Failed to activate service 
'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
  Failed to call ScheduleShutdown in logind, proceeding with immediate 
shutdown: Connection timed out
  Connection to groovy closed by remote host.
  Connection to groovy closed.

  [WHERE PROBLEM COULD OCCUR]

  This patch changes the behaviour of scheduled reboot in case logind or dbus 
has failed.
  Originally, if logind is not available (call to logind bus fails
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L319)
  it proceeds with immediate shutdown.
  This patch changes this behaviour and instead of shutting down it does 
nothing.
  The actual regression potential is a user asking for a reboot and not getting 
it.
  Other than that the changes in the code are very small and simple and 
unlikely to break anything.

[Sts-sponsors] [Bug 1911187] Re: scheduled reboot reboots immediately if dbus or logind is not available

2021-02-23 Thread Dan Streetman
** Changed in: systemd (Ubuntu Focal)
 Assignee: Dan Streetman (ddstreet) => Ioanna Alifieraki (joalif)

** Changed in: systemd (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1911187

Title:
  scheduled reboot reboots immediately if dbus or logind is not
  available

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Focal:
  In Progress
Status in systemd source package in Groovy:
  In Progress

Bug description:
  [IMPACT]

  When, for whatever reason, logind or dbus is not available scheduled reboot 
reboots the machine immediately.
  From the sources it seems that this is intended:
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L318
  However, I report this as a bug since it goes against the logic of a scheduled 
reboot; if someone schedules a reboot, they want the system to reboot at the 
specified time, not immediately.

  There has been a discussion upstream ( 
https://github.com/systemd/systemd/issues/17575 ) and
  a PR ( https://github.com/systemd/systemd/pull/18010 ).

  The upstream community is not willing to accept the patch, but Debian is.
  I open this bug to pull the patch into Ubuntu once it lands in Debian.

  [TEST CASE]

  The simplest reproducer is to disable dbus to imitate the real world
  case.

  # systemctl stop dbus.service
  # systemctl stop dbus.socket
  # shutdown +1140 -r "REBOOT!"
  Failed to set wall message, ignoring: Failed to activate service 
'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
  Failed to call ScheduleShutdown in logind, proceeding with immediate 
shutdown: Connection timed out
  Connection to groovy closed by remote host.
  Connection to groovy closed.

  [REGRESSION POTENTIAL]

  This patch changes the behaviour of scheduled reboot in case logind or dbus 
has failed.
  Originally, if logind is not available (call to logind bus fails
  
https://github.com/systemd/systemd/blob/master/src/systemctl/systemctl-logind.c#L319)
  it proceeds with immediate shutdown.
  This patch changes this behaviour and instead of shutting down it does 
nothing.
  The actual regression potential is a user asking for a reboot and not getting 
it.
  Other than that the changes in the code are very small and simple and 
unlikely to break anything.

  [SCOPE]

  This is already in H, and needs backporting to B, G, F.

  Ubuntu-hirsute commits :

  https://git.launchpad.net/~ubuntu-core-dev/ubuntu/+source/systemd/commit/?h=ubuntu-hirsute&id=ce31df6711a8e112cff929ed3bbdcd194f876270

  https://git.launchpad.net/~ubuntu-core-dev/ubuntu/+source/systemd/commit/?h=ubuntu-hirsute&id=ec1130fece7ca66273773119775e51045a74122c

  Debian commits :

  https://salsa.debian.org/systemd-team/systemd/-/commit/ce31df6711a8e112cff929ed3bbdcd194f876270

  https://salsa.debian.org/systemd-team/systemd/-/commit/ec1130fece7ca66273773119775e51045a74122c

  [OTHER]

  Debian bug reports :
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=931235
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960042

  Upstream issue : https://github.com/systemd/systemd/issues/17575
  PR : https://github.com/systemd/systemd/pull/18010

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1911187/+subscriptions



[Sts-sponsors] [Bug 1911187] Re: scheduled reboot reboots immediately if dbus or logind is not available

2021-02-23 Thread Dan Streetman
** Changed in: systemd (Ubuntu Focal)
 Assignee: Ioanna Alifieraki (joalif) => Dan Streetman (ddstreet)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1911187

Title:
  scheduled reboot reboots immediately if dbus or logind is not
  available

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Focal:
  In Progress
Status in systemd source package in Groovy:
  In Progress




[Sts-sponsors] [Bug 1820083] Re: TLS params not set for session

2021-02-17 Thread Dan Streetman
** Tags removed: sts-sponsor-ddstreet
** Tags added: sts-sponsor-slashd

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1820083

Title:
  TLS params not set for session

Status in python-etcd3gw package in Ubuntu:
  Fix Released
Status in python-etcd3gw source package in Bionic:
  In Progress
Status in python-etcd3gw source package in Cosmic:
  Won't Fix
Status in python-etcd3gw source package in Disco:
  Won't Fix
Status in python-etcd3gw source package in Eoan:
  Won't Fix
Status in python-etcd3gw source package in Focal:
  In Progress
Status in python-etcd3gw source package in Groovy:
  In Progress
Status in python-etcd3gw source package in Hirsute:
  Fix Released

Bug description:
  [Impact]

  A connection session is opened, but the TLS parameters (timeout, ca,
  cert and key) are not actually set for the session.  This prevents use
  of TLS.

  [Test Case]

  We will be backporting this as part of the python-etcd3gw update from the 
upstream Debian maintainers, who bumped the version from 0.2.1-3 to 0.2.5-1.
  Running the additional unit tests provided with that update is enough to 
exercise the previously raised exception.

  [Where Problems Could Occur]

  This adds TLS parameters (if provided) to the session, so regressions
  would involve failed connections, possibly those without TLS that had
  TLS params incorrectly provided before.
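The fix amounts to something like the following sketch (illustrative, not the actual etcd3gw code), where the TLS parameters are attached to the `requests` session object itself so that later calls actually use them:

```python
import requests


def make_session(ca_cert=None, cert_cert=None, cert_key=None):
    """Sketch: attach TLS parameters to the session, not just to a
    single request. Parameter names are illustrative."""
    session = requests.Session()
    if ca_cert:
        session.verify = ca_cert              # CA bundle for server verification
    if cert_cert and cert_key:
        session.cert = (cert_cert, cert_key)  # client certificate and key
    return session
```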

  [Other Info]

  The upstream bug is https://github.com/dims/etcd3-gateway/issues/20,
  fixed upstream with pull request https://github.com/dims/etcd3-gateway/pull/21
  via commit 90b7a19cdc4daa1230d7f15c10b113abdefdc8c0.

  That commit is contained in version 0.2.2, which is not yet pulled
  into Debian, so this patch is needed in Debian, as well as Bionic and
  Focal.  This package was not included in Xenial.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-etcd3gw/+bug/1820083/+subscriptions



[Sts-sponsors] [Bug 1820083] Re: TLS params not set for session

2021-02-17 Thread Dan Streetman
** Tags removed: sts-sponsor-volunteer
** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1820083

Title:
  TLS params not set for session

Status in python-etcd3gw package in Ubuntu:
  Fix Released
Status in python-etcd3gw source package in Bionic:
  In Progress
Status in python-etcd3gw source package in Cosmic:
  Won't Fix
Status in python-etcd3gw source package in Disco:
  Won't Fix
Status in python-etcd3gw source package in Eoan:
  Won't Fix
Status in python-etcd3gw source package in Focal:
  In Progress
Status in python-etcd3gw source package in Groovy:
  In Progress
Status in python-etcd3gw source package in Hirsute:
  Fix Released




[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-02-11 Thread Dan Streetman
the python-oslo.vmware failures are almost certainly the same as bug
1912792

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720

Title:
  Fix the disable_ssl_certificate_validation option

Status in python-httplib2 package in Ubuntu:
  Fix Released
Status in python-httplib2 source package in Bionic:
  Fix Committed
Status in python-httplib2 source package in Focal:
  Fix Released
Status in python-httplib2 source package in Groovy:
  Fix Released
Status in python-httplib2 source package in Hirsute:
  Fix Released

Bug description:
  [Impact]

   * On Bionic, MAAS CLI fails to work with apis over https with self-signed
     certificates due to broken disable_ssl_certificate_validation option
     with python 3.5 and later.

  [Steps to Reproduce]

   1. prepare a maas server (it doesn't have to be HA to reproduce)
   2. prepare a set of certificate, key and ca-bundle
   3. place a new conf in /etc/nginx/sites-enabled and `sudo systemctl
  restart nginx`
   4. add the ca certificates to the host
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
   5. login with a new profile over https url
   6. if the certificate is not trusted by the root store, it fails to login
   7. adding the '--insecure' flag should disable the certificate check
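Under the hood, what '--insecure' / disable_ssl_certificate_validation amounts to can be sketched with the Python standard library (this is not MAAS or httplib2 code):

```python
import ssl


def insecure_context():
    """Build an SSL context that skips hostname checks and certificate
    verification -- what disabling certificate validation amounts to."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE  # accept self-signed certificates
    return ctx
```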

  [Where Problems Could Occur]

   * Potential issues could happen if we disable certificate validation for
     all TLS interactions, any connection https related.

   * Should not break existing python3 versions.

   * Should not affect previously working python2 versions.

  [Other Info]

  This change should fix the issue with python3, and you should be able
  to connect with python2 as before.

  python2 => python-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  python3 =>  python3-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  *both are build from the same source package

  helpful urls:
  https://maas.io/docs/deb/2.8/cli/installation
  https://maas.io/docs/deb/2.8/cli/configuration-journey
  https://maas.io/docs/deb/2.8/ui/configuration-journey

  [Test Case]

  # create bionic VM/lxc container
  lxc launch ubuntu:bionic lp1906720

  # get source code from repo
  pull-lp-source  python-httplib2 bionic

  # install maas-cli
  apt-get install maas-cli

  # install maas server
  apt-get install maas

  # init maas
  sudo maas init

  # answer questions

  # generate self signed cert and key
  openssl req -newkey rsa:4096 -x509 -sha256 -days 60 -nodes -out localhost.crt -keyout localhost.key

  # add certs
  sudo cp -v localhost.crt /usr/share/ca-certificates/extra/

  # add new cert to list
  sudo dpkg-reconfigure ca-certificates
  [1]

  # select yes with spacebar
  # save and it will reload with 1 new certificate

  # create api key files
  touch api_key
  touch api-key-file

  # remove any packages with this
  # or this python3-httplib2
  apt-cache search python-httplib2
  apt-get remove python-httplib2
  apt-get remove python3-httplib2

  # create 2 admin users
  sudo maas createadmin testadmin
  sudo maas createadmin secureadmin

  # generate maas api keys
  sudo maas apikey --username=testadmin > api_key
  sudo maas apikey --username=secureadmin > api-key-file

  # setup nginx proxy
  sudo apt update
  sudo apt install nginx
  touch /etc/nginx/sites-available/maas-https-default
  # contents of maas-https-default
  server {
   listen 443 ssl http2;

   server_name _;
   ssl_certificate /home/ubuntu/localhost.crt;
   ssl_certificate_key /home/ubuntu/localhost.key;

   location / {
    proxy_pass http://localhost:5240;
    include /etc/nginx/proxy_params;
   }

   location /MAAS/ws {
    proxy_pass http://127.0.0.1:5240/MAAS/ws;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
   }
  }

  sudo service nginx restart

  # make sure you can login to maas-cli without TLS
  # by running this script
  # this is for the non-tls user
  # this goes into a script called maas-login.sh
  touch maas-login.sh
  sudo chmod +rwx maas-login.sh
  
  #!/bin/sh
  PROFILE=testadmin
  API_KEY_FILE=/home/ubuntu/api_key
  API_SERVER=127.0.0.1:5240

  MAAS_URL=http://$API_SERVER/MAAS

  maas login $PROFILE $MAAS_URL - < $API_KEY_FILE
  

  sudo chmod +rwx https-maas.sh
  # another script called https-maas.sh
  # for the tls user
  
  #!/bin/sh
  PROFILE=secureadmin
  API_KEY_FILE=/home/ubuntu/api-key-file
  API_SERVER=127.0.0.1

  MAAS_URL=https://$API_SERVER/MAAS

  maas login $PROFILE $MAAS_URL - < $API_KEY_FILE
  

  # try to login
  ./maas-login.sh

  cd /etc/nginx/sites-enabled
  sudo touch maas-https-default
  #example nginx config for maas https
  server {
   listen 443 ssl http2;

   server_name _;
   ssl_certificate /home/ubuntu/localhost.crt;
   ssl_certificate_key 

Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-bionic into ubuntu/+source/gnupg2:ubuntu/bionic-devel

2021-01-27 Thread Dan Streetman
Review: Approve

LGTM, uploaded, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396408
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-bionic into 
ubuntu/+source/gnupg2:ubuntu/bionic-devel.



[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-25 Thread Dan Streetman
** Tags removed: sts-sponsor-ddstreet
** Tags added: sts-sponsor-slashd

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720

Title:
  Fix the disable_ssl_certificate_validation option

Status in python-httplib2 package in Ubuntu:
  Fix Released
Status in python-httplib2 source package in Bionic:
  In Progress
Status in python-httplib2 source package in Focal:
  Fix Released
Status in python-httplib2 source package in Groovy:
  Fix Released
Status in python-httplib2 source package in Hirsute:
  Fix Released

Bug description:
  [Impact]

   * On Bionic, MAAS CLI fails to work with apis over https with self-signed
     certificates due to broken disable_ssl_certificate_validation option
     with python 3.5 and later.

  [Steps to Reproduce]

   1. prepare a maas server (it doesn't have to be HA to reproduce)
   2. prepare a set of certificate, key and ca-bundle
   3. place a new conf in /etc/nginx/sites-enabled and `sudo systemctl
  restart nginx`
   4. add the ca certificates to the host
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
   5. login with a new profile over https url
   6. if the certificate is not trusted by the root store, it fails to login
   7. adding the '--insecure' flag should disable the certificate check

  [Where problems could occur]

   * Potential issues could happen if we disable certificate validation for
     all TLS interactions, any connection https related.

   * Should not break existing python3 versions.

   * Should not affect previously working python2 versions.

  [Other Info]

  This change should fix the issue with python3, and you should be able
  to connect with python2 as before.

  python2 => python-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  python3 =>  python3-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb

  helpful urls:
  https://maas.io/docs/deb/2.8/cli/installation
  https://maas.io/docs/deb/2.8/cli/configuration-journey
  https://maas.io/docs/deb/2.8/ui/configuration-journey

  # create bionic VM/lxc container
  lxc launch ubuntu:bionic lp1906720

  # get source code from repo
  pull-lp-source  python-httplib2 bionic

  # install maas-cli
  apt-get install maas-cli

  # install maas server
  apt-get install maas

  # init maas
  sudo maas init

  # answer questions

  # generate self signed cert and key
  openssl req -newkey rsa:4096 -x509 -sha256 -days 60 -nodes -out localhost.crt -keyout localhost.key

  # add certs
  sudo cp -v localhost.crt /usr/share/ca-certificates/extra/

  # add new cert to list
  sudo dpkg-reconfigure ca-certificates

  # select yes with spacebar
  # save

  # create api key files
  touch api_key
  touch api-key-file

  # remove any packages with this
  # or this python3-httplib2
  apt-cache search python-httplib2
  apt-get remove python-httplib2
  apt-get remove python3-httplib2

  # create 2 admin users
  sudo maas createadmin testadmin
  sudo maas createadmin secureadmin

  # generate maas api keys
  sudo maas apikey --username=testadmin > api_key
  sudo maas apikey --username=secureadmin > api-key-file

  # make sure you can login to maas-cli without TLS
  # by running this script
  # this is for the non-tls user
  # this goes into a script called maas-login.sh
  touch maas-login.sh
  sudo chmod +rwx maas-login.sh
  
  #!/bin/sh
  PROFILE=testadmin
  API_KEY_FILE=/home/ubuntu/api_key
  API_SERVER=127.0.0.1:5240

  MAAS_URL=http://$API_SERVER/MAAS

  maas login $PROFILE $MAAS_URL - < $API_KEY_FILE
  

  sudo chmod +rwx https-maas.sh
  # another script called https-maas.sh
  # for the tls user
  
  #!/bin/sh
  PROFILE=secureadmin
  API_KEY_FILE=/home/ubuntu/api-key-file
  API_SERVER=127.0.0.1

  MAAS_URL=https://$API_SERVER/MAAS

  maas login --insecure $PROFILE $MAAS_URL - < $API_KEY_FILE
  

  # try to login
  ./maas-login.sh

  cd /etc/nginx/sites-enabled
  sudo touch maas-https-default
  #example nginx config for maas https
  server {
   listen 443 ssl http2;

   server_name _;
   ssl_certificate /home/ubuntu/localhost.crt;
   ssl_certificate_key /home/ubuntu/localhost.key;

   location / {
    proxy_pass http://localhost:5240;
    include /etc/nginx/proxy_params;
   }

   location /MAAS/ws {
    proxy_pass http://127.0.0.1:5240/MAAS/ws;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
   }
  }

  # create link
  sudo ln -s /etc/nginx/sites-available/maas-https-default 
/etc/nginx/sites-enabled

  # look at errors
  cat /var/log/maas/regiond.log
  cat regiond.log | grep "Python-http"
  *i didn't see any 404's though

  2020-12-15 13:24:48 regiond: [info] 127.0.0.1 GET 
/MAAS/api/2.0/users/?op=whoami HTTP/1.1 --> 200 OK (referrer: -; agent: 
Python-httplib2/0.9.2 (gzip))
  2020-12-15 13:24:48 regiond: [info] 127.0.0.1 

[Sts-sponsors] [Bug 1910432] Re: dirmngr doesn't work with kernel parameter ipv6.disable=1

2021-01-21 Thread Dan Streetman
** Changed in: gnupg2 (Ubuntu Groovy)
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1910432

Title:
  dirmngr doesn't work with kernel parameter ipv6.disable=1

Status in gnupg2 package in Ubuntu:
  Fix Released
Status in gnupg2 source package in Bionic:
  In Progress
Status in gnupg2 source package in Focal:
  Fix Committed
Status in gnupg2 source package in Groovy:
  In Progress
Status in gnupg2 source package in Hirsute:
  Fix Released

Bug description:
  [Impact]
  apt-key fails to fetch keys with "Address family not supported by protocol"

  [Description]
  We've had users report issues about apt-key being unable to fetch keys when 
IPv6 is disabled. As the mentioned kernel command line parameter disables IPv6 
socket support, servers that allow/respond with IPv6 will cause 
connect_server() to fail with EAFNOSUPPORT.

  As this error is not being handled in some versions of dirmngr, it'll
  simply fail the connection and could cause other processes to fail as
  well. In the test scenario below, it's easy to demonstrate this
  behaviour through apt-key.

  This has been reported upstream, and has been fixed with the following commit:
  - dirmngr: Handle EAFNOSUPPORT at connect_server. (109d16e8f644)

  The fix has been present upstream starting with GnuPG 2.2.22, so it's
  not currently available in any Ubuntu releases.
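The idea of the upstream fix, sketched in Python rather than dirmngr's C: treat EAFNOSUPPORT from socket creation as "skip this address family" instead of a fatal error.

```python
import errno
import socket


def family_usable(family):
    """Return False if the kernel refuses the address family
    (e.g. under ipv6.disable=1), instead of treating it as fatal."""
    try:
        s = socket.socket(family, socket.SOCK_STREAM)
    except OSError as e:
        if e.errno == errno.EAFNOSUPPORT:
            return False  # skip this family, try the next address
        raise
    s.close()
    return True
```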

  [Test Case]
  1. Spin up Focal VM

  2. Disable IPv6:
  $ sudo vi /etc/default/grub
  (...)
  GRUB_CMDLINE_LINUX="ipv6.disable=1"
  $ sudo update-grub

  3. Reboot the VM

  4. Try to fetch a key:
  sudo apt-key adv --fetch-keys 
https://www.postgreSQL.org/media/keys/ACCC4CF8.asc

  You'll get the following error:
  gpg: WARNING: unable to fetch URI 
https://www.postgresql.org/media/keys/ACCC4CF8.asc: Address family not 
supported by protocol

  [Regression Potential]
  The patch introduces additional error handling when connecting to servers, to 
properly mark remote hosts as having valid IPv4 and/or IPv6 connectivity. We 
should look out for potential regressions when connecting to servers with 
exclusive IPv4 or IPv6 connectivity, to make sure the server is not getting 
marked as 'dead' due to missing one of the versions.
  This commit has also been tested in the corresponding Ubuntu series, and has 
been deemed safe for backporting to stable branches of upstream GnuPG. The 
overall regression potential for this change should be fairly low, and breakage 
should be easily spotted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnupg2/+bug/1910432/+subscriptions



[Sts-sponsors] [Bug 1910432] Re: dirmngr doesn't work with kernel parameter ipv6.disable=1

2021-01-19 Thread Dan Streetman
uploaded to f/g/h, thanks @halves!

One more minor comment for b added in the MR

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1910432

Title:
  dirmngr doesn't work with kernel parameter ipv6.disable=1

Status in gnupg2 package in Ubuntu:
  In Progress
Status in gnupg2 source package in Bionic:
  In Progress
Status in gnupg2 source package in Focal:
  In Progress
Status in gnupg2 source package in Groovy:
  In Progress
Status in gnupg2 source package in Hirsute:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnupg2/+bug/1910432/+subscriptions



Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-bionic into ubuntu/+source/gnupg2:ubuntu/bionic-devel

2021-01-19 Thread Dan Streetman
Review: Needs Fixing

one more minor comment inline below

Diff comments:

> diff --git 
> a/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch 
> b/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch
> new file mode 100644
> index 000..d926add
> --- /dev/null
> +++ b/debian/patches/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch
> @@ -0,0 +1,61 @@
> +From ca937cf390662b830d4fc5d295e69b24b1778050 Mon Sep 17 00:00:00 2001
> +From: NIIBE Yutaka 
> +Date: Mon, 13 Jul 2020 10:00:58 +0900
> +Subject: [PATCH] dirmngr: Handle EAFNOSUPPORT at connect_server.
> +
> +* dirmngr/http.c (connect_server): Skip server with EAFNOSUPPORT.
> +
> +--
> +
> +GnuPG-bug-id: 4977
> +Signed-off-by: NIIBE Yutaka 
> +
> +Origin: backport, 
> https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=commit;h=109d16e8f644
> +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1910432
> +---
> + dirmngr/http.c | 9 +
> + 1 file changed, 9 insertions(+)
> +
> +Index: gnupg2/dirmngr/http.c
> +===
> +--- gnupg2.orig/dirmngr/http.c
>  gnupg2/dirmngr/http.c
> +@@ -2843,7 +2843,7 @@ connect_server (const char *server, unsi
> +   unsigned int srvcount = 0;
> +   int hostfound = 0;
> +   int anyhostaddr = 0;
> +-  int srv, connected;
> ++  int srv, connected, v4_valid, v6_valid;
> +   gpg_error_t last_err = 0;
> +   struct srventry *serverlist = NULL;
> +
> +@@ -2930,9 +2930,11 @@ connect_server (const char *server, unsi
> +
> +   for (ai = aibuf; ai && !connected; ai = ai->next)
> + {
> +-  if (ai->family == AF_INET && (flags & HTTP_FLAG_IGNORE_IPv4))
> ++  if (ai->family == AF_INET
> ++  && ((flags & HTTP_FLAG_IGNORE_IPv4) || !v4_valid))

I think this is checking v4_valid before it's initialized; it seems it was 
added with the upstream commit 12def3a84e, but that looks way bigger than is 
needed to backport. Maybe you could just initialize v4_valid = !(flags & 
HTTP_FLAG_IGNORE_IPv4) and then you only need to check !v4_valid in this if 
statement? And do the same for v6_valid, of course.

> + continue;
> +-  if (ai->family == AF_INET6 && (flags & HTTP_FLAG_IGNORE_IPv6))
> ++  if (ai->family == AF_INET6
> ++  && ((flags & HTTP_FLAG_IGNORE_IPv6) || !v6_valid))
> + continue;
> +
> +   if (sock != ASSUAN_INVALID_FD)
> +@@ -2940,6 +2942,15 @@ connect_server (const char *server, unsi
> +   sock = my_sock_new_for_addr (ai->addr, ai->socktype, 
> ai->protocol);
> +   if (sock == ASSUAN_INVALID_FD)
> + {
> ++  if (errno == EAFNOSUPPORT)
> ++{
> ++  if (ai->family == AF_INET)
> ++v4_valid = 0;
> ++  if (ai->family == AF_INET6)
> ++v6_valid = 0;
> ++  continue;
> ++}
> ++
> +   err = gpg_err_make (default_errsource,
> +   gpg_err_code_from_syserror ());
> +   log_error ("error creating socket: %s\n", gpg_strerror (err));


-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396408
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-bionic into 
ubuntu/+source/gnupg2:ubuntu/bionic-devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-focal into ubuntu/+source/gnupg2:ubuntu/focal-devel

2021-01-19 Thread Dan Streetman
Review: Approve

LGTM, uploaded to focal, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396406
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-focal into 
ubuntu/+source/gnupg2:ubuntu/focal-devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-groovy into ubuntu/+source/gnupg2:ubuntu/groovy-devel

2021-01-19 Thread Dan Streetman
Review: Approve

LGTM, uploaded to groovy, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396531
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-groovy into 
ubuntu/+source/gnupg2:ubuntu/groovy-devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-devel into ubuntu/+source/gnupg2:ubuntu/devel

2021-01-19 Thread Dan Streetman
Review: Approve

LGTM, uploaded to hirsute, thanks!
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396407
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-devel into 
ubuntu/+source/gnupg2:ubuntu/devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-19 Thread Dan Streetman
attached updated debdiff with just minor adjustments:

- added tag "LP: #1906720" to changelog entry
- ran 'quilt refresh' on patch to fix offsets
- added DEP3 fields to patch (https://dep-team.pages.debian.net/deps/dep3/)
  (in general, at least Origin: and Bug-Ubuntu: fields should be added)
- renamed patch to remove leading '0002-' (just personal preference for patch 
naming)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720

Title:
  Fix the disable_ssl_certificate_validation option

Status in python-httplib2 package in Ubuntu:
  Fix Released
Status in python-httplib2 source package in Bionic:
  In Progress
Status in python-httplib2 source package in Focal:
  Fix Released
Status in python-httplib2 source package in Groovy:
  Fix Released
Status in python-httplib2 source package in Hirsute:
  Fix Released

Bug description:
  [Environment]

  Bionic
  python3-httplib2 | 0.9.2+dfsg-1ubuntu0.2

  [Description]

  The MAAS CLI fails to work with APIs over HTTPS with self-signed
  certificates, due to the lack of the disable_ssl_certificate_validation
  option with Python 3.5.

  [Distribution/Release, Package versions, Platform]
  cat /etc/lsb-release; dpkg -l | grep maas
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
  ii maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all "Metal as a Service" is a 
physical cloud and IPAM
  ii maas-cli 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS client and 
command-line interface
  ii maas-common 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server common 
files
  ii maas-dhcp 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS DHCP server
  ii maas-proxy 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS Caching Proxy
  ii maas-rack-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Rack 
Controller for MAAS
  ii maas-region-api 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
controller API service for MAAS
  ii maas-region-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
Controller for MAAS
  ii python3-django-maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
server Django web framework (Python 3)
  ii python3-maas-client 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
python API client (Python 3)
  ii python3-maas-provisioningserver 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 
all MAAS server provisioning libraries (Python 3)

  [Steps to Reproduce]

  - prepare a maas server (installed from packages for me and the customer);
  it doesn't have to be HA to reproduce
  - prepare a set of certificate, key and ca-bundle
  - place a new conf[2] in /etc/nginx/sites-enabled and `sudo systemctl
  restart nginx`
  - add the ca certificates to the host
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
  - login with a new profile over an https url
  - if the ca-bundle is not added to the trusted CA cert store, login fails
  and the '--insecure' flag also doesn't work[3]

  [Known Workarounds]
  None

  [Test]
  # Note even though this change only affects Python3
  # I tested it with Python2 with no issues and was able to connect.
  Also please make note of the 2 packages. One is for Python2 the other Python3 

  Python2 ===> python-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  Python3 ===>  python3-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb

  helpful urls:
  https://maas.io/docs/deb/2.8/cli/installation
  https://maas.io/docs/deb/2.8/cli/configuration-journey
  https://maas.io/docs/deb/2.8/ui/configuration-journey

  # create bionic VM/lxc container
  lxc launch ubuntu:bionic lp1820083

  # get source code from repo
  pull-lp-source  python-httplib2 bionic

  # install maas-cli
  apt-get install maas-cli

  # install maas server
  apt-get install maas

  # init maas
  sudo maas init

  # answer questions

  # generate self signed cert and key
  openssl req -newkey rsa:4096 -x509 -sha256 -days 60 -nodes -out localhost.crt 
-keyout localhost.key

  # add certs
  sudo cp -v test.crt /usr/share/ca-certificates/extra/

  # add new cert to list
  sudo dpkg-reconfigure ca-certificates

  # select yes with spacebar
  # save

  # create api key files
  touch api_key
  touch api-key-file

  # remove any packages with this
  # or this python3-httplib2
  apt-cache search python-httplib2
  apt-get remove python-httplib2
  apt-get remove python3-httplib2

  # create 2 admin users
  sudo maas createadmin testadmin
  sudo maas createadmin secureadmin

  # generate maas api keys
  sudo maas apikey --username=testadmin > api_key
  sudo maas apikey --username=secureadmin > api-key-file

  # make sure you can login to maas-cli without TLS
  # by running this script
  # this is for the non-tls user
  # this goes into a script called maas-login.sh
  touch maas-login.sh
  sudo chmod +rwx maas-login.sh
  
  #!/bin/sh
  

[Sts-sponsors] [Bug 1906720] Re: Fix the disable_ssl_certificate_validation option

2021-01-19 Thread Dan Streetman
** Tags added: sts sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1906720

Title:
  Fix the disable_ssl_certificate_validation option

Status in python-httplib2 package in Ubuntu:
  Fix Released
Status in python-httplib2 source package in Bionic:
  In Progress
Status in python-httplib2 source package in Focal:
  Fix Released
Status in python-httplib2 source package in Groovy:
  Fix Released
Status in python-httplib2 source package in Hirsute:
  Fix Released

Bug description:
  [Environment]

  Bionic
  python3-httplib2 | 0.9.2+dfsg-1ubuntu0.2

  [Description]

  The MAAS CLI fails to work with APIs over HTTPS with self-signed
  certificates, due to the lack of the disable_ssl_certificate_validation
  option with Python 3.5.

  [Distribution/Release, Package versions, Platform]
  cat /etc/lsb-release; dpkg -l | grep maas
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
  ii maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all "Metal as a Service" is a 
physical cloud and IPAM
  ii maas-cli 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS client and 
command-line interface
  ii maas-common 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server common 
files
  ii maas-dhcp 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS DHCP server
  ii maas-proxy 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS Caching Proxy
  ii maas-rack-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Rack 
Controller for MAAS
  ii maas-region-api 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
controller API service for MAAS
  ii maas-region-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region 
Controller for MAAS
  ii python3-django-maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
server Django web framework (Python 3)
  ii python3-maas-client 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS 
python API client (Python 3)
  ii python3-maas-provisioningserver 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 
all MAAS server provisioning libraries (Python 3)

  [Steps to Reproduce]

  - prepare a maas server (installed from packages for me and the customer);
  it doesn't have to be HA to reproduce
  - prepare a set of certificate, key and ca-bundle
  - place a new conf[2] in /etc/nginx/sites-enabled and `sudo systemctl
  restart nginx`
  - add the ca certificates to the host
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
  - login with a new profile over an https url
  - if the ca-bundle is not added to the trusted CA cert store, login fails
  and the '--insecure' flag also doesn't work[3]

  [Known Workarounds]
  None

  [Test]
  # Note even though this change only affects Python3
  # I tested it with Python2 with no issues and was able to connect.
  Also please make note of the 2 packages. One is for Python2 the other Python3 

  Python2 ===> python-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb
  Python3 ===>  python3-httplib2_0.9.2+dfsg-1ubuntu0.3_all.deb

  helpful urls:
  https://maas.io/docs/deb/2.8/cli/installation
  https://maas.io/docs/deb/2.8/cli/configuration-journey
  https://maas.io/docs/deb/2.8/ui/configuration-journey

  # create bionic VM/lxc container
  lxc launch ubuntu:bionic lp1820083

  # get source code from repo
  pull-lp-source  python-httplib2 bionic

  # install maas-cli
  apt-get install maas-cli

  # install maas server
  apt-get install maas

  # init maas
  sudo maas init

  # answer questions

  # generate self signed cert and key
  openssl req -newkey rsa:4096 -x509 -sha256 -days 60 -nodes -out localhost.crt 
-keyout localhost.key

  # add certs
  sudo cp -v test.crt /usr/share/ca-certificates/extra/

  # add new cert to list
  sudo dpkg-reconfigure ca-certificates

  # select yes with spacebar
  # save

  # create api key files
  touch api_key
  touch api-key-file

  # remove any packages with this
  # or this python3-httplib2
  apt-cache search python-httplib2
  apt-get remove python-httplib2
  apt-get remove python3-httplib2

  # create 2 admin users
  sudo maas createadmin testadmin
  sudo maas createadmin secureadmin

  # generate maas api keys
  sudo maas apikey --username=testadmin > api_key
  sudo maas apikey --username=secureadmin > api-key-file

  # make sure you can login to maas-cli without TLS
  # by running this script
  # this is for the non-tls user
  # this goes into a script called maas-login.sh
  touch maas-login.sh
  sudo chmod +rwx maas-login.sh
  
  #!/bin/sh
  PROFILE=testadmin
  API_KEY_FILE=/home/ubuntu/api_key
  API_SERVER=127.0.0.1:5240

  MAAS_URL=http://$API_SERVER/MAAS

  maas login $PROFILE $MAAS_URL - < $API_KEY_FILE
  
  sudo chmod +rwx https-maas.sh
  # another script called https-maas.sh
  # for the tls user
  
  #!/bin/sh
  PROFILE=secureadmin
  API_KEY_FILE=/home/ubuntu/api-key-file
  

Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-focal into ubuntu/+source/gnupg2:ubuntu/focal-devel

2021-01-17 Thread Dan Streetman
sorry, I didn't click the checkbox on my inline comment last time

Diff comments:

> diff --git a/debian/changelog b/debian/changelog
> index fe05c72..6ada1d4 100644
> --- a/debian/changelog
> +++ b/debian/changelog
> @@ -1,3 +1,10 @@
> +gnupg2 (2.2.19-3ubuntu3) focal; urgency=medium

as focal is an sru release, version number should be 2.2.19-3ubuntu2.1

> +
> +  * d/p/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch:
> +- Fix IPv6 connectivity for dirmngr (LP: #1910432)
> +
> + -- Heitor Alves de Siqueira   Wed, 06 Jan 2021 
> 18:10:35 +
> +
>  gnupg2 (2.2.19-3ubuntu2) focal; urgency=medium
>  
>* Don't declare diffutils as a test dependency, this package is essential


-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396406
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-focal into 
ubuntu/+source/gnupg2:ubuntu/focal-devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-focal into ubuntu/+source/gnupg2:ubuntu/focal-devel

2021-01-17 Thread Dan Streetman
Review: Needs Fixing

looks good, minor issue with version number bump
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396406
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-focal into 
ubuntu/+source/gnupg2:ubuntu/focal-devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-devel into ubuntu/+source/gnupg2:ubuntu/devel

2021-01-17 Thread Dan Streetman
Review: Needs Fixing

looks good, minor issue with needing separate MR for groovy and hirsute.

Diff comments:

> diff --git a/debian/changelog b/debian/changelog
> index 974065a..493cd72 100644
> --- a/debian/changelog
> +++ b/debian/changelog
> @@ -1,3 +1,10 @@
> +gnupg2 (2.2.20-1ubuntu2) groovy; urgency=medium

Can you change the release to hirsute please? That's the current devel release.

Also can you open a separate MR for groovy? The version number in groovy should 
be 2.2.20-1ubuntu1.1

> +
> +  * d/p/dirmngr-handle-EAFNOSUPPORT-at-connect_server.patch:
> +- Fix IPv6 connectivity for dirmngr (LP: #1910432)
> +
> + -- Heitor Alves de Siqueira   Sat, 16 Jan 2021 
> 14:53:14 +
> +
>  gnupg2 (2.2.20-1ubuntu1) groovy; urgency=low
>  
>* Merge from Debian unstable.  Remaining changes:


-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396407
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-devel into 
ubuntu/+source/gnupg2:ubuntu/devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] [Merge] ~halves/ubuntu/+source/gnupg2:lp1910432-bionic into ubuntu/+source/gnupg2:ubuntu/bionic-devel

2021-01-17 Thread Dan Streetman
Review: Needs Fixing

Looks like this doesn't compile; the v4_valid and v6_valid variables aren't 
present in the bionic code. However, it looks fairly simple to adjust for the 
older code; can you take a look at it?
-- 
https://code.launchpad.net/~halves/ubuntu/+source/gnupg2/+git/gnupg2/+merge/396408
Your team STS Sponsors is requested to review the proposed merge of 
~halves/ubuntu/+source/gnupg2:lp1910432-bionic into 
ubuntu/+source/gnupg2:ubuntu/bionic-devel.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1830746] Re: memlock setting in systemd (pid 1) too low for containers (bionic)

2021-01-15 Thread Dan Streetman
Oh and also openvswitch, bug 1906280

To summarize, here are all the applications (found so far) that thought
they needed to lock all their current and future memory:

slick-greeter (bug 1902879)
lightdm-gtk-greeter (bug 1890394)
corosync (bug 1911904)
openvswitch (bug 1906280)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1830746

Title:
  memlock setting in systemd (pid 1) too low for containers (bionic)

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  Fix Released
Status in systemd source package in Cosmic:
  Won't Fix
Status in systemd source package in Disco:
  Won't Fix
Status in systemd source package in Eoan:
  Fix Released
Status in systemd source package in Focal:
  Fix Released

Bug description:
  [Impact]
  * Since systemd commit fb3ae275cb ("main: bump RLIMIT_NOFILE for the root 
user substantially") [https://github.com/systemd/systemd/commit/fb3ae275cb], 
which is present in Bionic, the memlock ulimit value was bumped to 16M. It's an 
adjustable limit, but the default (in previous Ubuntu releases/systemd 
versions) was really small.
  * Although bumping this value was a good thing, 16M is not enough and we can 
see failures on mlock'ed allocations on Bionic, like the one hereby reported by 
Kees or the recent introduced cryptsetup build failures (due to PPA builder 
updates to Bionic) - see https://bugs.launchpad.net/bugs/1891473.

  * It's especially harmful in containers to have such "small" limit, so
  we are hereby SRUing a more recent bump from upstream systemd, in the
  form of commit 91cfdd8d29 ("core: bump mlock ulimit to 64Mb")
  [https://github.com/systemd/systemd/commit/91cfdd8d29]. Latest Ubuntu
  releases, like Focal and subsequent ones, already include this patch
  so effectively we're putting Bionic on-par with newer releases.

  * A discussion about this topic (leading to this SRU) is present in
  ubuntu-devel ML: https://lists.ubuntu.com/archives/ubuntu-
  devel/2020-September/041159.html.

  [Test Case]
  * The straightforward test is to just look "ulimit -l" and "ulimit -Hl" in a 
current Bionic system, and then install an updated version with the hereby 
proposed SRU to see such limit bump from 16M to 64M (after a reboot) - a 
version containing this fix is available at my PPA as of 2020-09-10 [0] (likely 
to be deleted in next month or so).

  * A more interesting test is to run a Focal container in a current
  Bionic system and try to build the cryptsetup package - it'll fail in
  some tests. After updating the host (Bionic) systemd to include the
  mlock bump patch, the build succeeds in the Focal container.

  [Regression Potential]
  * Since it's a simple bump and it makes Bionic behave like Focal, I don't 
foresee regressions. One potential issue would be if some users rely on the 
lower default limit (16M) and this value is bumped by a package update, but 
that could be circumvented by setting a lower limit in limits.conf. The 
benefits for such bump are likely much bigger than any "regression" caused for 
users relying on such default limit.

  
  [0] https://launchpad.net/~gpiccoli/+archive/ubuntu/test1830746

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1830746/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1830746] Re: memlock setting in systemd (pid 1) too low for containers (bionic)

2021-01-15 Thread Dan Streetman
found another 'special' application that thinks it needs all its memory
locked: corosync.

opened bug 1911904

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1830746

Title:
  memlock setting in systemd (pid 1) too low for containers (bionic)

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  Fix Released
Status in systemd source package in Cosmic:
  Won't Fix
Status in systemd source package in Disco:
  Won't Fix
Status in systemd source package in Eoan:
  Fix Released
Status in systemd source package in Focal:
  Fix Released

Bug description:
  [Impact]
  * Since systemd commit fb3ae275cb ("main: bump RLIMIT_NOFILE for the root 
user substantially") [https://github.com/systemd/systemd/commit/fb3ae275cb], 
which is present in Bionic, the memlock ulimit value was bumped to 16M. It's an 
adjustable limit, but the default (in previous Ubuntu releases/systemd 
versions) was really small.
  * Although bumping this value was a good thing, 16M is not enough and we can 
see failures on mlock'ed allocations on Bionic, like the one hereby reported by 
Kees or the recent introduced cryptsetup build failures (due to PPA builder 
updates to Bionic) - see https://bugs.launchpad.net/bugs/1891473.

  * It's especially harmful in containers to have such "small" limit, so
  we are hereby SRUing a more recent bump from upstream systemd, in the
  form of commit 91cfdd8d29 ("core: bump mlock ulimit to 64Mb")
  [https://github.com/systemd/systemd/commit/91cfdd8d29]. Latest Ubuntu
  releases, like Focal and subsequent ones, already include this patch
  so effectively we're putting Bionic on-par with newer releases.

  * A discussion about this topic (leading to this SRU) is present in
  ubuntu-devel ML: https://lists.ubuntu.com/archives/ubuntu-
  devel/2020-September/041159.html.

  [Test Case]
  * The straightforward test is to just look "ulimit -l" and "ulimit -Hl" in a 
current Bionic system, and then install an updated version with the hereby 
proposed SRU to see such limit bump from 16M to 64M (after a reboot) - a 
version containing this fix is available at my PPA as of 2020-09-10 [0] (likely 
to be deleted in next month or so).

  * A more interesting test is to run a Focal container in a current
  Bionic system and try to build the cryptsetup package - it'll fail in
  some tests. After updating the host (Bionic) systemd to include the
  mlock bump patch, the build succeeds in the Focal container.

  [Regression Potential]
  * Since it's a simple bump and it makes Bionic behave like Focal, I don't 
foresee regressions. One potential issue would be if some users rely on the 
lower default limit (16M) and this value is bumped by a package update, but 
that could be circumvented by setting a lower limit in limits.conf. The 
benefits for such bump are likely much bigger than any "regression" caused for 
users relying on such default limit.

  
  [0] https://launchpad.net/~gpiccoli/+archive/ubuntu/test1830746

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1830746/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1900617] Re: gateway error detail is not passed along in raised exception

2021-01-13 Thread Dan Streetman
** Tags added: sts sts-sponsor-slashd

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1900617

Title:
  gateway error detail is not passed along in raised exception

Status in python-etcd3gw package in Ubuntu:
  Fix Released
Status in python-etcd3gw source package in Bionic:
  In Progress
Status in python-etcd3gw source package in Focal:
  In Progress
Status in python-etcd3gw source package in Groovy:
  In Progress
Status in python-etcd3gw source package in Hirsute:
  Fix Released

Bug description:
  [impact]

  when the gateway reports an error, it is not passed along in the
  exception raised by python-etcd3gw

  [test case]

  This is somewhat difficult to test.
  Running the additional unit tests provided for this would be enough to 
trigger the raised exception.

  [regression potential]

  any regression would likely occur in handling errors sent from the
  gateway to python-etcd3gw, or in handling or later processing the
  exception(s) generated from the gateway error(s).

  [scope]

  this is needed for b/f/g which all have the same version of the
  package.

  this is fixed upstream with PR:
  https://github.com/dims/etcd3-gateway/pull/31
  https://github.com/dims/etcd3-gateway/pull/32
  https://github.com/dims/etcd3-gateway/pull/34

  which are included starting in v0.2.6. Ubuntu and Debian carry an old
  version, v0.2.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-etcd3gw/+bug/1900617/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1903733] Re: Out of memory issue for websocket client

2021-01-13 Thread Dan Streetman
** Tags added: sts-sponsor-slashd

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1903733

Title:
  Out of memory issue for websocket client

Status in python-tornado package in Ubuntu:
  Fix Released
Status in python-tornado source package in Xenial:
  In Progress
Status in python-tornado source package in Bionic:
  In Progress
Status in python-tornado source package in Focal:
  Fix Released
Status in python-tornado source package in Groovy:
  Fix Released
Status in python-tornado source package in Hirsute:
  Fix Released

Bug description:
  [Impact]
  Applications using package python-tornado v5.1.1 or earlier are susceptible 
to an out of memory error related to websockets.

  [Other Info]

  Upstream commit(s):
  
https://github.com/tornadoweb/tornado/pull/2351/commits/20becca336caae61cd24f7afba0e177c0a210c70

  $ git remote -v
  origin  https://github.com/tornadoweb/tornado.git (fetch)
  origin  https://github.com/tornadoweb/tornado.git (push)

  $ git describe --contains 20becca3
  v5.1.0b1~28^2~1

  $ rmadison python3-tornado
   => python3-tornado | 4.2.1-1ubuntu3      | xenial
      python3-tornado | 4.5.3-1             | bionic/universe
   => python3-tornado | 4.5.3-1ubuntu0.1    | bionic-updates/universe
      python3-tornado | 6.0.3+really5.1.1-3 | focal/universe
      python3-tornado | 6.0.4-2             | groovy/universe
      python3-tornado | 6.0.4-3             | hirsute/universe
      python3-tornado | 6.1.0-1             | hirsute-proposed/universe

  [Original Description]

  Tornado has no 'flow control' (in the TCP sense, see [8]) for websockets.
  A websocket will receive data as fast as it can, and store the data in a
  deque. If that data is not consumed as fast as it is written, that deque
  will grow in size indefinitely, ultimately leading to a memory error and
  killing the process.
  The fix is to use a Queue, and to read and get messages from the queue on
  the client side.

  Patch file [0]
  Commit history [1]
  GitHub [2]
  Issue [3]

  [0] 
https://patch-diff.githubusercontent.com/raw/tornadoweb/tornado/pull/2351.patch
  [1] https://github.com/tornadoweb/tornado/pull/2351/commits
  [2] https://github.com/tornadoweb/tornado
  [3] https://github.com/tornadoweb/tornado/issues/2341

  [Test Case]

  I will be attaching two python test files.
  client.py
  server.py

  # create lxc container & limits on memory and turn off swap
  $ sudo apt-get install lxc lxd
  $ lxd init
  $ lxc launch ubuntu:18.04 bionic-python-tornado

  # shrink server size
  lxc config set server limits.cpu 2
  # changes ram setting
  lxc config set server limits.memory 150MB
  # severely limits amount of swap used [4]
  lxc config set bionic-py-tornado limits.memory.swap false

  # install dev tools and download source code
  $ lxc exec bionic-python-tornado bash
  $ apt-get update
  $ apt install ubuntu-dev-tools -y
  $ pull-lp-source python-tornado bionic
  $ sudo apt build-dep .

  # copy client.py and server.py to
  # $ ~/python-tornado-4.5.3/demos
  $ scp or touch client.py and server.py

  # build code
  $ python3 setup.py build
  $ python3 setup.py install

  # I have 3 terminals open
  2 for executing python, one for the client and one for server
  and another one using top to view memory constraints

  # run server.py, client.py, and top in separate terminals
  $ python3 demos/client.py
  $ python3 demos/server.py
  $ top

  What gets printed out in client.py is the length of the
  collections.deque.

  server.py prints out messages like:
  message: keep alive

  * press ctrl+E for showing memory in MB in the terminal with top
  top - shows that swap is off/ running very low and our memory is only 150MB

  Although I never hit the oom exception that is expected to be thrown,
  you can check dmesg
  $ sudo dmesg | grep -i python

  looks similar to this:
  [ 3250.067833] 
oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=lxc.payload.iptest,mems_allowed=0,oom_memcg=/lxc.payload.iptest,task_memcg=/lxc.payload.iptest,task=python3,pid=44889,uid=100
  [ 3250.067838] Memory cgroup out of memory: Killed process 44889 (python3) 
total-vm:304616kB, anon-rss:235152kB, file-rss:0kB, shmem-rss:0kB, UID:100 
pgtables:628kB oom_score_adj:0
  [ 3250.075096] oom_reaper: reaped process 44889 (python3), now anon-rss:0kB, 
file-rss:0kB, shmem-rss:0k

  After either applying the patch or running Focal or later versions
  (pull-lp-source python-tornado focal)

  We can run the exact same setup again, and this time it shows that the
  new queue object only ever has a length of 1.

  We have shown that before the patch, what was used to store messages
  in the queue was an unbounded collections.deque: "If maxlen is not
  specified or is None, deques may grow to an arbitrary length over
  time."[6] Afterwards, upstream switched to a blocking queue of size 1
  (Queue(1)), where there is only ever 1 item in the queue.
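The bounded-vs-unbounded difference described above can be sketched in plain Python. Note this uses collections.deque(maxlen=1) purely as an illustration of the bound; tornado's actual fix uses its own blocking Queue(1):

```python
from collections import deque

# Pre-patch behaviour: a deque with no maxlen is unbounded --
# "deques may grow to an arbitrary length over time."
unbounded = deque()
for i in range(1000):
    unbounded.append("keep alive")

# Post-patch behaviour (sketch): a bounded queue holds at most one
# message, so a slow consumer can no longer make the backlog grow
# until the cgroup OOM killer fires.
bounded = deque(maxlen=1)
for i in range(1000):
    bounded.append("keep alive")
```

With the bounded variant, memory use stays constant no matter how far the producer runs ahead of the consumer.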

[Sts-sponsors] [Bug 1830746] Re: memlock setting in systemd (pid 1) too low for containers (bionic)

2020-11-09 Thread Dan Streetman
To clarify, the regression appears to be the same problem that the
rlimit increase is fixing, but the applications failing now are simply
bigger. In general, any application that calls mlockall() with
MCL_FUTURE, but doesn't adjust its rlimit (or change its systemd service
file to adjust LimitMEMLOCK) is very likely destined to crash later in
its life.
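A minimal sketch of the failure mode, in Python via ctypes. The MCL_* constants are the Linux values; raising the rlimit normally requires privilege, so failures here are only printed, not fatal:

```python
import ctypes
import resource

# Inspect the current RLIMIT_MEMLOCK (the limit systemd bumped from 16M
# to 64M for pid 1's children).
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
print("RLIMIT_MEMLOCK soft limit:", soft)

# mlockall(MCL_CURRENT | MCL_FUTURE) locks all current *and future*
# allocations against the rlimit: a process can start fine and only
# hit the limit (and crash) later in its life as it grows.
MCL_CURRENT, MCL_FUTURE = 1, 2  # Linux values
libc = ctypes.CDLL(None, use_errno=True)
rc = libc.mlockall(MCL_CURRENT | MCL_FUTURE)
if rc != 0:
    print("mlockall failed, errno", ctypes.get_errno())
else:
    # Undo, mirroring cryptsetup's crypt_memory_lock(NULL, 0).
    libc.munlockall()
```

An application that wants MCL_FUTURE to be safe would raise RLIMIT_MEMLOCK (via setrlimit, or LimitMEMLOCK= in its service file) before calling mlockall.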

I believe the only lp bugs for this regression are bug 1890394 and bug
1902879, which are both fix-committed and verified, so this bug should
be ok to release (again) after those are released. Also I will note both
of those applications (slick-greeter and lightdm-gtk-greeter) were fixed
by commenting out their calls to mlockall.

There is also bug 1902871 and bug 1903199 but I believe those are both
dups of bug 1900394.

Also finally to reflect on cryptsetup's use of mlockall(), since it's
the origin for this bug; cryptsetup is maybe "better" about its use of
mlockall() since it keeps the mlock only for the duration of an
'action':

if (action->required_memlock)
        crypt_memory_lock(NULL, 1);

set_int_handler(0);
r = action->handler();

if (action->required_memlock)
        crypt_memory_lock(NULL, 0);

however, as this bug shows, that action handler function can still
attempt to allocate enough memory to reach the rlimit and cause
allocation failures. Personally, I think cryptsetup should be fixed
upstream to call setrlimit() to increase its RLIMIT_MEMLOCK to infinity,
at least while the mlock is in effect.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1830746

Title:
  memlock setting in systemd (pid 1) too low for containers (bionic)

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  Fix Released
Status in systemd source package in Cosmic:
  Won't Fix
Status in systemd source package in Disco:
  Won't Fix
Status in systemd source package in Eoan:
  Fix Released
Status in systemd source package in Focal:
  Fix Released

Bug description:
  [Impact]
  * Since systemd commit fb3ae275cb ("main: bump RLIMIT_NOFILE for the root 
user substantially") [https://github.com/systemd/systemd/commit/fb3ae275cb], 
which is present in Bionic, the memlock ulimit value was bumped to 16M. It's an 
adjustable limit, but the default (in previous Ubuntu releases/systemd 
versions) was really small.
  * Although bumping this value was a good thing, 16M is not enough and we can 
see failures on mlock'ed allocations on Bionic, like the one hereby reported by 
Kees or the recent introduced cryptsetup build failures (due to PPA builder 
updates to Bionic) - see https://bugs.launchpad.net/bugs//1891473.

  * It's especially harmful in containers to have such "small" limit, so
  we are hereby SRUing a more recent bump from upstream systemd, in the
  form of commit 91cfdd8d29 ("core: bump mlock ulimit to 64Mb")
  

Re: [Sts-sponsors] [URGENT] Please review and sponsor LP1772556. This needs to make 18.04.5 LTS.

2020-08-06 Thread Dan Streetman
 > > > >
> > > > > That dependency comes from ${shlibs:Depends}, handled by 
> > > > > dh_shlibdeps(1):
> > > > >
> > > > > https://www.debian.org/doc/manuals/maint-guide/dreq.en.html
> > > > > """
> > > > > dh_shlibdeps(1) calculates shared library dependencies for binary 
> > > > > packages.
> > > > > It generates a list of ELF executables and shared libraries it has
> > > > > found for each binary package.
> > > > > This list is used for substituting ${shlibs:Depends}.
> > > > > """
> > > > >
> > > > > So,
> > > > >
> > > > > Since d-i needs both udev-udeb and libkmod2-udeb (for udev,
> > > > > load/remove modules, etc)
> > > > > fixing the udev-udeb dependency would require *at least* a
> > > > > change/rebuild to systemd
> > > > > to update the Depends: field of udev-udeb.
> > > > >
> > > > > Say, even if we came up with a hack/solution that doesn't need a
> > > > > change/fix to libkmod2.
> > > > >
> > > > > This is where I think the SRU/release team might prefer to hold things
> > > > > for release.
> > > > > (systemd.)
> > > > >
> > > > > At this point I think that *if* the case/matter is only about the
> > > > > network installer,
> > > > > not the ISO (which is indeed critical with 18.04.5), then it can be
> > > > > fixed via SRU,
> > > > > without any concerns.
> > > > >
> > > > > Would you please evaluate this w/ customer/Lukasz on Monday?
> > > > >
> > > > >
> > > > > A bit on libkmod2 change:
> > > > >
> > > > > The change/regression of --add-udeb to libkmod2 is not incorrect on
> > > > > its own, since
> > > > > libkmod2-udeb doesn't ship a shared library.
> > > > >
> > > > > I discussed this a bit with Rafael Tinoco (which introduced the
> > > > > change), and perhaps
> > > > > the change can be reverted (I'd have to check with a rebuild of kmod
> > > > > w/ it reverted.)
> > > > >
> > > > > (then rebuild kmod, then systemd, then d-i -- after debootstrap w/
> > > > > your fix, of course.
> > > > > See it's a chain that looks big for this time before the release. :)
> > > > >
> > > > >
> > > > > Some tech details:
> > > > >
> > > > > udev-udeb depends on libkmod2 shared library because of udevadm, for
> > > > > the kmod built-in.
> > > > >
> > > > > You may notice that even before the --add-udeb flag removal,
> > > > > libkmod2-udeb *did not*
> > > > > ship that shared library!
> > > > >
> > > > > I wondered how the installer worked. It DOES have that library in
> > > > > /lib/libkmod.so.2.
> > > > > And I have no idea where it pulled it from.
> > > > >
> > > > > I checked MANIFEST files, d-i build log, even download all udebs in
> > > > > bionic/bionic-updates,
> > > > > extracted, but nothing shipped it.
> > > > >
> > > > > This might be some d-i build magic, which I didn't look/find yet.
> > > > >
> > > > >
> > > > > Finally, I'm a bit tired to think now, so I'll give you the ball for 
> > > > > now to play
> > > > > a bit on considerations of 1) should this really be fixed before
> > > > > release for ISO,
> > > > > or just network installer is OK?   and more fun in 2) how to properly
> > > > > fix this :)
> > > > >
> > > > > Thanks and have a great weekend!
> > > > >
> > > > > P.S.: I'm attaching my hacky notes about this today, in case it might 
> > > > > help.
> > > > >
> > > > > On Fri, Jul 24, 2020 at 7:27 PM Mauricio Oliveira
> > > > >  wrote:
> > > > > >
> > > > > > Matthew,
> > > > > >
> > > > > > By the way, per the SF case, the customer uses the network install 
> > > > > > method.
> > > > > > This can be fixed post-release, as SRUs to debian-ins

Re: [Sts-sponsors] [URGENT] Please review and sponsor LP1772556. This needs to make 18.04.5 LTS.

2020-07-24 Thread Dan Streetman
Hi Matthew,

As @mfo has done work in this (installer image) area before (LP:
#1807023) I asked him to take a look; I'm cc'ing our sts sponsors
email list (which includes Mauricio) also.

I think he's handling the sponsoring, so hopefully it'll make it for
the point release!

Thanks!

On Fri, Jul 24, 2020 at 1:14 AM Matthew Ruffell
 wrote:
>
> Hi Dan and Eric,
>
> [URGENT] Please review and sponsor LP1772556. This needs to make 18.04.5 LTS.
>
> https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1772556
>
> I have written to Lukasz Zemczak and made him aware of this bug, and I have 
> also
> asked him to review and sponsor the debdiffs, since he was the last to modify
> debootstrap in Bionic, and is also in charge of producing 18.04.5 LTS images.
>
> I haven't heard back from Lukasz though. I emailed him on my Wednesday, and
> on my Thursday.
>
> Could you please maybe ping Lukasz during your day to make sure he got my 
> email?
>
> The debdiffs in the LP bug are kinda weird. debootstrap uses "3.0 (native)"
> instead of "3.0 (quilt)", and the debdiff seems to take diffs of all the 
> ubuntu
> releases, even though they are all symlinks to a single file that got changed,
> which is "gutsy".
>
> Anyway, please nudge Lukasz, or sponsor my debdiffs. Konica Minolta filed a
> L2 about this in SF289200, and it needs to make 18.04.5 LTS or it will never 
> be
> fixed.
>
> Thanks!
> Matthew

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1885562] Re: [fips] freebl_fipsSoftwareIntegrityTest fails in FIPS mode

2020-07-16 Thread Dan Streetman
** Also affects: nss (Ubuntu Groovy)
   Importance: Medium
 Assignee: Dariusz Gadomski (dgadomski)
   Status: In Progress

** Also affects: nss (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: nss (Ubuntu Focal)
 Assignee: (unassigned) => Dariusz Gadomski (dgadomski)

** Changed in: nss (Ubuntu Focal)
   Importance: Undecided => Medium

** Changed in: nss (Ubuntu Focal)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1885562

Title:
  [fips] freebl_fipsSoftwareIntegrityTest fails in FIPS mode

Status in nss package in Ubuntu:
  In Progress
Status in nss source package in Bionic:
  In Progress
Status in nss source package in Focal:
  In Progress
Status in nss source package in Groovy:
  In Progress

Bug description:
  In FIPS mode there are some additional checks performed.

  They lead to verifying binaries signatures. Those signatures are
  shipped in the libnss3 package as *.chk files installed in
  /usr/lib/$(DEB_HOST_MULTIARCH)/nss. Along with those files are the
  libraries themselves (libfreebl3.so  libfreeblpriv3.so  libnssckbi.so
  libnssdbm3.so  libsoftokn3.so).

  Those libraries are symlinked to be present in /usr/lib/$(DEB_HOST_MULTIARCH):
  ls -l /usr/lib/x86_64-linux-gnu/libfreeblpriv3.so
  lrwxrwxrwx 1 root root 21 Jun 10 18:54 
/usr/lib/x86_64-linux-gnu/libfreeblpriv3.so -> nss/libfreeblpriv3.so

  The client binaries are linked against the symlinks, so when the
  verification happens (lib/freebl/shvfy.c), the mkCheckFileName function
  takes the path to the symlink to the shlib and replaces the .so
  extension with .chk.
  Then it tries to open that file. Obviously it fails, because the actual
  file is in /usr/lib/$(DEB_HOST_MULTIARCH)/nss.

  [Test case]
  sudo apt install chrony
  sudo chronyd -d
  chronyd: util.c:373 UTI_IPToRefid: Assertion `MD5_hash >= 0' failed.

  Potential solutions:
  Solution A:
  Drop the /usr/lib/$(DEB_HOST_MULTIARCH)/nss directory and put all signatures 
and libs in /usr/lib/$(DEB_HOST_MULTIARCH).

  Solution B:
  Create symlinks to *.chk files in /usr/lib/$(DEB_HOST_MULTIARCH) (like it is 
done for *.so).

  Solution C:
  Implement and upstream NSS feature of resolving symlinks and looking for 
*.chk where the symlinks lead to.
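The path mismatch, and the symlink-resolving idea behind Solution C, can be reproduced with a hypothetical layout in a temp directory:

```python
import os
import tempfile

# Recreate the package layout: the real .so and its .chk signature live
# in an 'nss' subdirectory; only a symlink to the .so sits one level up.
root = tempfile.mkdtemp()
nss = os.path.join(root, "nss")
os.makedirs(nss)
for name in ("libfreeblpriv3.so", "libfreeblpriv3.chk"):
    open(os.path.join(nss, name), "w").close()
link = os.path.join(root, "libfreeblpriv3.so")
os.symlink(os.path.join("nss", "libfreeblpriv3.so"), link)

# What mkCheckFileName effectively does: swap .so for .chk on the path
# it was given -- the symlink's path, where no .chk file exists.
naive_chk = link[:-len(".so")] + ".chk"

# Solution C in spirit: resolve the symlink first, then look for the
# .chk next to the real library.
resolved_chk = os.path.realpath(link)[:-len(".so")] + ".chk"
```

Here naive_chk points at a nonexistent file while resolved_chk points at the shipped signature, which is exactly the failure freebl hits in FIPS mode.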

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nss/+bug/1885562/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-07-07 Thread Dan Streetman
marking verification-done-focal as i mentioned in comment 71

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  Fix Released
Status in rabbitmq-server source package in Xenial:
  Fix Committed
Status in rabbitmq-server source package in Bionic:
  Fix Committed
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  Fix Committed
Status in rabbitmq-server source package in Groovy:
  Fix Released
Status in rabbitmq-server package in Debian:
  New

Bug description:
  The startup timeouts were recently adjusted and synchronized between
  the SysV and systemd startup files.

  https://github.com/rabbitmq/rabbitmq-server-release/pull/129

  The new startup files should be included in this package.

  [Impact]

  After starting the RabbitMQ server process, the startup script will
  wait for the server to start by calling `rabbitmqctl wait` and will
  time out after 10 s.

  The startup time of the server depends on how quickly the Mnesia
  database becomes available and the server will time out after
  `mnesia_table_loading_retry_timeout` ms times
  `mnesia_table_loading_retry_limit` retries. By default this wait is
  30,000 ms times 10 retries, i.e. 300 s.

  The mismatch between these two timeout values might lead to the
  startup script failing prematurely while the server is still waiting
  for the Mnesia tables.

  This change introduces variable `RABBITMQ_STARTUP_TIMEOUT` and the
  `--timeout` option into the startup script. The default value for this
  timeout is set to 10 minutes (600 seconds).

  This change also updates the systemd service file to match the timeout
  values between the two service management methods.

  [Scope]

  Upstream patch: https://github.com/rabbitmq/rabbitmq-server-
  release/pull/129

  * Fix is not included in the Debian package
  * Fix is not included in any Ubuntu series

  * Groovy and Focal can apply the upstream patch as is
  * Bionic and Xenial need an additional fix in the systemd service file
to set the `RABBITMQ_STARTUP_TIMEOUT` variable for the
`rabbitmq-server-wait` helper script.

  [Test Case]

  In a clustered setup with two nodes, A and B.

  1. create queue on A
  2. shut down B
  3. shut down A
  4. boot B

  The broker on B will wait for A. The systemd service will wait for 10
  seconds and then fail. Boot A and the rabbitmq-server process on B
  will complete startup.

  [Regression Potential]

  This change alters the behavior of the startup scripts when the Mnesia
  database takes long to become available. This might lead to failures
  further down the service dependency chain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-30 Thread Dan Streetman
Thanks @nicolasbock @mruffell!  I think the latest changes make sense
and I think the shorter wait time in Xenial is ok, as systemd should
continue to restart it until it's successful.

I rebased @mruffell's debdiffs on the -proposed versions, and added a
short changelog entry, and uploaded to x/b.

As the version in focal-proposed has been tested as working correctly,
and we now understand what was missing from x/b, I'm going to re-mark
this as verification-done-focal.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075


-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1861177] Re: seccomp_rule_add is very slow

2020-06-30 Thread Dan Streetman
uploaded to x/b/e/f/g, thanks!

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1861177

Title:
  seccomp_rule_add is very slow

Status in snapd:
  Invalid
Status in libseccomp package in Ubuntu:
  In Progress
Status in libseccomp source package in Xenial:
  In Progress
Status in libseccomp source package in Bionic:
  In Progress
Status in libseccomp source package in Eoan:
  In Progress
Status in libseccomp source package in Focal:
  In Progress
Status in libseccomp source package in Groovy:
  In Progress

Bug description:
  [IMPACT]
  There is a known and patched issue with version 2.4 of libseccomp where 
certain operations have a large performance regression. This is causing some 
packages that use libseccomp such as container orchestration systems to 
occasionally time out or otherwise fail under certain workloads.

  Please consider porting the patch into the various Ubuntu versions
  that have version 2.4 of libseccomp and into the backports. The
  performance patch from version 2.5 (yet to be released) applies
  cleanly on top of the 2.4 branch of libseccomp.

  For more information, and for a copy of the patch (which can also be
  cherry picked from the upstream libseccomp repos) see the similar
  Debian issue: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=943913

  Upstream issue : https://github.com/seccomp/libseccomp/issues/153
  Upstream fix : https://github.com/seccomp/libseccomp/pull/180/

  [Test Case]

  For this test case we use Docker on Ubuntu Groovy (20.10) :

  --> Current libseccomp version
  # dpkg -l | grep libseccomp
  ii  libseccomp2:amd64  2.4.3-1ubuntu3  amd64  high level interface to Linux seccomp filter

  ## pull ubuntu image
  # docker pull ubuntu
  ## create a container
  # docker run --name test_seccomp -it 74435f89ab78 /bin/bash

  ## run test case
  # for i in `seq 1 40`; do (time sudo docker exec test_seccomp true &); done
  ...
  MAX TIME :
  real  0m10,319s
  user  0m0,018s
  sys   0m0,033s

  
  --> Patched libseccomp version

  # dpkg -l | grep libseccomp
  ii  libseccomp2:amd64  2.4.3-1ubuntu4  amd64  high level interface to Linux seccomp filter

  # docker start test_seccomp
  ## run test case
  # for i in `seq 1 40`; do (time sudo docker exec test_seccomp true &); done
  ...
  MAX TIME :
  real  0m3,650s
  user  0m0,025s
  sys   0m0,028s

  [Regression Potential]

  The first of the 2 patches cleans up the code that adds rules to a
  single filter without changing the logic of the code. The second patch
  introduces the idea of shadow transactions. On a successful
  transaction commit the old transaction checkpoint is preserved and is
  brought up to date with the current filter. The next time a new
  transaction starts, it checks whether a shadow transaction exists and
  if so the shadow is used instead of creating a new checkpoint from
  scratch [1]. This is the patch that mitigates the performance
  regression. Any potential regression will involve the parts of the
  code that add rules to filters and/or the code that creates and checks
  the shadow transactions.
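The checkpoint/shadow idea can be modeled in a few lines. This is purely an illustrative toy, not libseccomp's real data structures or API:

```python
import copy

class ToyFilter:
    """Toy model of the shadow-transaction optimization."""
    def __init__(self):
        self.rules = []
        self._shadow = None  # checkpoint preserved by the last commit

    def start_transaction(self):
        # Old behaviour: always deep-copy the filter as a rollback
        # checkpoint (expensive as the filter grows).
        # Patched behaviour: reuse the up-to-date shadow checkpoint
        # from the previous commit when one exists.
        if self._shadow is not None:
            self._checkpoint, self._shadow = self._shadow, None
        else:
            self._checkpoint = copy.deepcopy(self.rules)

    def add_rule(self, rule):
        self.rules.append(rule)

    def commit(self):
        # Keep the old checkpoint, bring it up to date with the current
        # filter, and stash it as the shadow for the next transaction.
        self._checkpoint.clear()
        self._checkpoint.extend(self.rules)
        self._shadow = self._checkpoint

f = ToyFilter()
for i in range(3):
    f.start_transaction()
    f.add_rule(i)
    f.commit()
```

In this model, only the very first transaction pays for a full copy; every later one reuses the shadow, which is the essence of the mitigation.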

  
  [Other]

  Affected releases : Groovy, Focal, Eoan, Bionic, Xenial.

  [1]
  
https://github.com/seccomp/libseccomp/pull/180/commits/bc3a6c0453b0350ee43e4925482f705a2fbf5a4d

To manage notifications about this bug go to:
https://bugs.launchpad.net/snapd/+bug/1861177/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-27 Thread Dan Streetman
> In addition I am uncertain why the service doesn't use
> `Type=notify` as in the later versions. Rabbitmq-server-3.6.10
> understands what `sd_notify` is and I would have thought that this
> implies that we should be able to use `Type=notify` on Bionic

our goal is to make the version in Bionic work; not to make the version
in Bionic identical to later versions.  Unless there is a specific
*need* to change the service type to notify, we don't *need* to do that.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075


-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-26 Thread Dan Streetman
> In the context of the issues we are seeing with charms the 10 minute
> timeout should be sufficient.

right, but this isn't just changing rabbitmq-server used by charms, this
is changing the behavior for *all* Ubuntu users of rabbitmq-server, as
well as upstream. Since upstream did accept it, my *assumption* is yes,
10 minutes is a good default, but since mismatched timeouts is
essentially the cause of this entire problem, I thought it was worth
just re-checking again, to make sure we all thought about it carefully
with *all users* in mind, before leaving it at that.

To poke the thought button further, note that since the upstream (and
f/g) service files also have 'Restart=on-failure' set, and will go 10
minutes (as configured with the TimeoutStartSec=600 param), the service
is *effectively* set to never, ever timeout, since it will just restart
itself each time it times out; as the StartLimitIntervalSec= and
StartLimitBurst= will never be exceeded (since they default to 10s and
5, respectively).

So, I suppose since the effective result is that in F and later
(including upstream), the service will wait forever, with restart-on-
failure happening every 10 minutes, until it successfully is able to
start.  With that in mind, I don't think the actual TimeoutStartSec=
setting makes any difference at all (as long as it's long enough to
avoid reaching the restart StartLimit settings), besides controlling how
often the service logs a failure and then restart.

I guess this all means that 1) the version in focal-proposed is correct,
and 2) the xenial and bionic versions need the addition of
TimeoutStartSec=600 and Restart=on-failure to their service file, right?
Is that all that's needed?
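Assuming that's all that's needed, the Xenial/Bionic change could be as small as a fragment like this (a hypothetical drop-in sketch; the directive names are standard systemd, and the values mirror the focal service file as discussed above):

```ini
# hypothetical drop-in, e.g.
# /etc/systemd/system/rabbitmq-server.service.d/timeouts.conf
[Service]
# allow up to 10 minutes for ExecStart/ExecStartPost before failing
TimeoutStartSec=600
# restart on failure; with the default StartLimitIntervalSec=10s and
# StartLimitBurst=5 never exceeded, this effectively retries forever
Restart=on-failure
RestartSec=10
```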

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075


-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-24 Thread Dan Streetman
@mruffell thanks!  Only a few comments below:

> Note because of this bug, groovy and upstream has now been changed to
10 min timeout, down from 1hr.

We should decide if this is really what we want to do.  And if it should
revert to the longer 1hr timeout, propose that upstream.

I don't really know, is either default timeout better, 10 minutes or 1
hour? @nicolasbock did you have specific reasoning for the upstream
reduction to 10 minutes?

If 10 minutes is what we want, then we should be ok upstream and in
Groovy, and in Focal with the current code in -proposed.

> On Eoan: Assuming same behaviour as focal due to systemd service file.
Untested.

yeah, it FTBFS in Eoan unfortunately; there is bug 1843761, and also I
detailed why it fails in the description for bug 1773324. As Eoan is
almost EOL, my opinion is it's safer to simply leave it untouched there.

> If this ExecStartPost script times out (which it does after 90 seconds
it seems, even though documentation suggests infinite timeout)

yep, systemd has DefaultTimeoutStartSec set to 90s (man
systemd-system.conf for more details), so if TimeoutStartSec isn't
specified for a service unit, it will default to 90 seconds (and I
believe the timeout period includes the ExecStartPre, ExecStart, and
ExecStartPost actions, but I'd have to specifically check the code to
verify that).

> we need to add a dependency to the package, socat

well, this is usually a problem for SRU releases.  Unfortunately, adding
new deps for SRU releases causes 'sudo apt-get upgrade' to *not* upgrade
any package that pulls in new (not currently installed) deps.  While
'sudo apt upgrade' *does* pull in new deps, the ~ubuntu-sru team
typically rejects adding new runtime deps to any SRU, without a very
strong reason.

Instead of pulling the entire service file back into Bionic, I think it
might be enough to only add 'TimeoutStartSec=600', which should cover
the timeout for the ExecStart= and ExecStartPost= actions.  It may be
also worth adding the Restart=on-failure and RestartSec=10 params.
Could you test with the TimeoutStartSec param in bionic to see if that's
enough to SRU?

If pulling back only the TimeoutStartSec=600 param to Bionic works, that
will hopefully be enough for Xenial, too.
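For local testing before preparing the debdiff, those parameters could be
tried as a systemd drop-in rather than editing the packaged unit (a
sketch; the drop-in file name is an assumption):

```
# /etc/systemd/system/rabbitmq-server.service.d/override.conf (hypothetical)
[Service]
TimeoutStartSec=600
Restart=on-failure
RestartSec=10
```

After creating the drop-in, `systemctl daemon-reload` is needed for
systemd to pick up the override.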

Thanks!

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  Fix Released
Status in rabbitmq-server source package in Xenial:
  Fix Committed
Status in rabbitmq-server source package in Bionic:
  Fix Committed
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  Fix Committed
Status in rabbitmq-server source package in Groovy:
  Fix Released
Status in rabbitmq-server package in Debian:
  New

Bug description:
  The startup timeouts were recently adjusted and synchronized between
  the SysV and systemd startup files.

  https://github.com/rabbitmq/rabbitmq-server-release/pull/129

  The new startup files should be included in this package.

  [Impact]

  After starting the RabbitMQ server process, the startup script will
  wait for the server to start by calling `rabbitmqctl wait` and will
  time out after 10 s.

  The startup time of the server depends on how quickly the Mnesia
  database becomes available and the server will time out after
  `mnesia_table_loading_retry_timeout` ms times
  `mnesia_table_loading_retry_limit` retries. By default this wait is
  30,000 ms times 10 retries, i.e. 300 s.
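  A quick back-of-the-envelope check of the mismatch (a sketch; the values
  are the defaults quoted above):

```shell
# Default Mnesia table-loading wait vs. the old startup-script timeout.
retry_timeout_ms=30000   # mnesia_table_loading_retry_timeout
retry_limit=10           # mnesia_table_loading_retry_limit
mnesia_wait_s=$(( retry_timeout_ms * retry_limit / 1000 ))
echo "Mnesia may wait up to ${mnesia_wait_s}s; 'rabbitmqctl wait' gave up after 10s"
```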

  The mismatch between these two timeout values might lead to the
  startup script failing prematurely while the server is still waiting
  for the Mnesia tables.

  This change introduces variable `RABBITMQ_STARTUP_TIMEOUT` and the
  `--timeout` option into the startup script. The default value for this
  timeout is set to 10 minutes (600 seconds).

  This change also updates the systemd service file to match the timeout
  values between the two service management methods.
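  In shell terms, the wait performed by the startup script would look
  roughly like this (a sketch; the variable name comes from the patch, but
  the pid-file argument is an assumption, not the exact packaged script):

```shell
# Fall back to the 600 s default when RABBITMQ_STARTUP_TIMEOUT is unset.
RABBITMQ_STARTUP_TIMEOUT="${RABBITMQ_STARTUP_TIMEOUT:-600}"
echo "would run: rabbitmqctl wait --timeout ${RABBITMQ_STARTUP_TIMEOUT} <pidfile>"
```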

  [Scope]

  Upstream patch: https://github.com/rabbitmq/rabbitmq-server-release/pull/129

  * Fix is not included in the Debian package
  * Fix is not included in any Ubuntu series

  * Groovy and Focal can apply the upstream patch as is
  * Bionic and Xenial need an additional fix in the systemd service file
to set the `RABBITMQ_STARTUP_TIMEOUT` variable for the
`rabbitmq-server-wait` helper script.

  [Test Case]

  In a clustered setup with two nodes, A and B.

  1. create queue on A
  2. shut down B
  3. shut down A
  4. boot B

  The broker on B will wait for A. The systemd service will wait for 10
  seconds and then fail. Boot A and the rabbitmq-server process on B
  will complete startup.

  [Regression Potential]

  This change alters the behavior of the startup scripts when the Mnesia
  database takes long to become available. This might lead to failures
  further down the service dependency chain.

[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-18 Thread Dan Streetman
@sil2100, last time I talked to @nicolasbock he was unclear on why it
was failing in bionic and passing in focal.  I think probably the best
thing to do at this point is reject the versions in -proposed for x/b/f,
or just leave them in -proposed indefinitely, and he can take more time
to come up with the proper patches and then re-request sponsorship.
Sorry for the confusion here.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  Fix Released
Status in rabbitmq-server source package in Xenial:
  Fix Committed
Status in rabbitmq-server source package in Bionic:
  Fix Committed
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  Fix Committed
Status in rabbitmq-server source package in Groovy:
  Fix Released
Status in rabbitmq-server package in Debian:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions



[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-09 Thread Dan Streetman
@nicolasbock as we talked about, you can't just throw new changes into
Bionic without working on Focal/Groovy (and upstream) first.

Additionally, to address the actual change, I'm concerned you are adding
a config file to get the mnesia and rabbitmq timeouts to match - that
doesn't help anyone upstream, or using any other distro.  The "correct"
change would be to adjust the actual default instead of Ubuntu carrying
a config file, right?  Please have a talk with upstream to get the
config defaults correctly matching, and then the additional fix should
go to F/G before updating Bionic.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  Fix Released
Status in rabbitmq-server source package in Xenial:
  Fix Committed
Status in rabbitmq-server source package in Bionic:
  Fix Committed
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  Fix Committed
Status in rabbitmq-server source package in Groovy:
  Fix Released
Status in rabbitmq-server package in Debian:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions



[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-06-08 Thread Dan Streetman
Hi @nicolasbock, it looks like the latest debdiff adds a new config
file, and that change doesn't appear to be upstream, or even in focal or
groovy. Can you clarify why that config file is needed, and whether
you'll work to get it upstream before it goes into bionic?

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  Fix Released
Status in rabbitmq-server source package in Xenial:
  Fix Committed
Status in rabbitmq-server source package in Bionic:
  Fix Committed
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  Fix Committed
Status in rabbitmq-server source package in Groovy:
  Fix Released
Status in rabbitmq-server package in Debian:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions



[Sts-sponsors] [Bug 1871214] Re: [SRU] nfsd doesn't start if exports depend on mount

2020-05-28 Thread Dan Streetman
** Changed in: nfs-utils (Ubuntu Groovy)
   Status: Fix Committed => In Progress

** Changed in: nfs-utils (Ubuntu Eoan)
   Status: New => In Progress

** Changed in: nfs-utils (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: nfs-utils (Ubuntu Focal)
 Assignee: (unassigned) => Rodrigo Barbieri (rodrigo-barbieri2010)

** Changed in: nfs-utils (Ubuntu Eoan)
 Assignee: (unassigned) => Rodrigo Barbieri (rodrigo-barbieri2010)

** Changed in: nfs-utils (Ubuntu Bionic)
 Assignee: (unassigned) => Rodrigo Barbieri (rodrigo-barbieri2010)

** Changed in: nfs-utils (Ubuntu Bionic)
   Importance: Undecided => Medium

** Changed in: nfs-utils (Ubuntu Eoan)
   Importance: Undecided => Medium

** Changed in: nfs-utils (Ubuntu Focal)
   Importance: Undecided => Medium

** Changed in: nfs-utils (Ubuntu Groovy)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1871214

Title:
  [SRU] nfsd doesn't start if exports depend on mount

Status in nfs-utils package in Ubuntu:
  In Progress
Status in nfs-utils source package in Bionic:
  In Progress
Status in nfs-utils source package in Eoan:
  In Progress
Status in nfs-utils source package in Focal:
  In Progress
Status in nfs-utils source package in Groovy:
  In Progress

Bug description:
  Reproduced in Bionic and Focal, packages 1:1.3.4-2.1ubuntu5.2 and
  1:1.3.4-2.5ubuntu3 respectively.

  Steps to reproduce:

  1) Set up an iSCSI client to a 1GB+ volume, mount it in /data, and set
 fstab to mount it at boot
  2) Create a folder in /data like /data/dir1 and set up /etc/exports to
 export it
  3) Reboot
  4) Notice nfs-server does not start. Check journalctl and see it was because 
of "exportfs -r" returning -1 because /data/dir1 is not available.

  In Xenial (1:1.2.8-9ubuntu12.2), exportfs always returns 0, so this
  bug is not present there.

  This can be workaroundable in two ways:

  1) Editing nfs-server.service and adding "-" in
  "ExecStartPre=/usr/sbin/exportfs -r" to be
  "ExecStartPre=-/usr/sbin/exportfs -r". This will retain xenial
  behavior.

  2) Editing nfs-server.service and removing "Before=remote-fs-
  pre.target" and adding "RequiresMountsFor=/data". This will cause the
  systemd service load ordering to change, and nfs-server will wait for
  /data to be available.
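  The RequiresMountsFor half of workaround #2 can also be tried as a
  drop-in instead of editing the packaged unit (a sketch; /data is the
  mount point from the reproducer and the drop-in file name is an
  assumption):

```
# /etc/systemd/system/nfs-server.service.d/local-mounts.conf (hypothetical)
[Unit]
RequiresMountsFor=/data
```

  `systemctl daemon-reload` is needed afterwards for systemd to apply the
  drop-in.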

  #2 is the upstream approach: commit [0] identifies mount dependencies
  and automatically sets up RequiresMountsFor.

  [0] http://git.linux-nfs.org/?p=steved/nfs-utils.git;a=commitdiff;h=4776bd0599420f9d073c9e2601ed438062dccd19

  ===

  [Impact]

  Users attempting to export folders from iSCSI or any remote mounted
  filesystem will experience their exports not being available at system
  start up, requiring workarounds or manual intervention.

  [Test case]

  1. Reproducing the bug:

  1a. Set up an iSCSI client to a 1GB+ volume
  1b. Format /dev/ using mkfs.xfs
  1c. Mount it in /data and set fstab as follows to mount at boot

  UUID="" /data xfs defaults,auto,_netdev 0 0

  1d. Create a folder in /data like /data/dir1 and set permissions as
  follows

  chmod 777 /data/dir1
  chown nobody:nogroup /data/dir1

  1e. Set up /etc/exports as follows to export it

  /data/dir1 *(rw,async,root_squash,subtree_check)

  1f. Reboot
  1g. Notice nfs-server does not start. Running "showmount -e" displays error.

  2. No cleanup necessary

  3. Install the updated package that contains the fix

  4. Confirming the fix:

  4a. Reboot
  4b. Notice nfs-server starts successfully and "showmount -e" displays
 the exports.

  
  [Regression Potential]

  Regression potential is minimal. The dependency commit only moves code
  around and the actual fix only introduces an external systemd generator
  without changing actual pre-existing code.

  I tested and confirmed that the fix introduced [0] also covers the fix
  removed [1], so there should not be any regression on this particular
  code change as well.

  [1] http://git.linux-nfs.org/?p=steved/nfs-utils.git;a=commitdiff;h=1e41488f428cd36b200b48b84d31446e38dfdc50

  
  [Other Info]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1871214/+subscriptions



[Sts-sponsors] [Bug 1878049] Re: Upgrade rabbitmq-server to v3.8.3 from upstream

2020-05-13 Thread Dan Streetman
with the minor d/watch tweak we talked about in irc, LGTM thanks
@nicolasbock!  Uploaded to groovy.

** Changed in: rabbitmq-server (Ubuntu)
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1878049

Title:
  Upgrade rabbitmq-server to v3.8.3 from upstream

Status in rabbitmq-server package in Ubuntu:
  Fix Committed

Bug description:
  rabbitmq-server FTBFS in Groovy, and needs to be upgraded to latest
  from upstream.

  Also the current debian/watch file downloads from an outdated
  location. Update the watch file so it gets the sources directly from
  GitHub.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1878049/+subscriptions



[Sts-sponsors] [Bug 1878049] Re: Upgrade rabbitmq-server to v3.8.3 from upstream

2020-05-13 Thread Dan Streetman
** Tags removed: sts-sponsor-slashd
** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1878049

Title:
  Upgrade rabbitmq-server to v3.8.3 from upstream

Status in rabbitmq-server package in Ubuntu:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1878049/+subscriptions



[Sts-sponsors] [Bug 1878049] Re: Upgrade rabbitmq-server to v3.8.3 from upstream

2020-05-13 Thread Dan Streetman
** Summary changed:

- Update debian/watch file to download from GitHub
+ Upgrade rabbitmq-server to v3.8.3 from upstream

** Description changed:

- The current debian/watch file downloads from an outdated location.
+ rabbitmq-server FTBFS in Groovy, and needs to be upgraded to latest from
+ upstream.
+ 
+ Also the current debian/watch file downloads from an outdated location.
  Update the watch file so it gets the sources directly from GitHub.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1878049

Title:
  Upgrade rabbitmq-server to v3.8.3 from upstream

Status in rabbitmq-server package in Ubuntu:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1878049/+subscriptions



[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-05-13 Thread Dan Streetman
for groovy, the upstream merge is being worked in bug 1878049

** Changed in: rabbitmq-server (Ubuntu Groovy)
 Assignee: (unassigned) => Nicolas Bock (nicolasbock)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  In Progress
Status in rabbitmq-server source package in Xenial:
  In Progress
Status in rabbitmq-server source package in Bionic:
  In Progress
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  In Progress
Status in rabbitmq-server source package in Groovy:
  In Progress
Status in rabbitmq-server package in Debian:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions



[Sts-sponsors] [Bug 1876230] Re: liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address performance problems with MEMBARRIER_CMD_SHARED

2020-05-11 Thread Dan Streetman
> - apart from feedback given by @mruffell, to also check if any of
librcu consumers are depending on a full membarrier - driven by kernel -
for ** shared pages among different processes **

this is a good point, although I don't think liburcu makes guarantees
like that for memory barriers outside of the current process; for
example, the -qsbr, -mb, and -signal flavors don't use sys_membarrier at
all.
@mruffell have you looked into that aspect of the change?

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1876230

Title:
  liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address
  performance problems with MEMBARRIER_CMD_SHARED

Status in liburcu package in Ubuntu:
  Fix Released
Status in liburcu source package in Bionic:
  In Progress

Bug description:
  [Impact]

  In Linux 4.3, a new syscall was defined, called "membarrier". This
  system call was defined specifically for use in userspace-rcu (liburcu)
  to speed up the fast path / reader side of the library. The original
  implementation in Linux 4.3 only supported the MEMBARRIER_CMD_SHARED
  subcommand of the membarrier syscall.

  MEMBARRIER_CMD_SHARED executes a memory barrier on all threads from
  all processes running on the system. When it exits, the userspace
  thread which called it is guaranteed that all running threads share
  the same world view in regards to userspace addresses which are
  consumed by readers and writers.

  The problem with MEMBARRIER_CMD_SHARED is system calls made in this
  fashion can block, since it deploys a barrier across all threads in a
  system, and some other threads can be waiting on blocking operations,
  and take time to reach the barrier.

  In Linux 4.14, this was addressed by adding the
  MEMBARRIER_CMD_PRIVATE_EXPEDITED command to the membarrier syscall. It
  only targets threads which share the same mm as the thread calling the
  membarrier syscall, aka, threads in the current process, and not all
  threads / processes in the system.

  Calls to membarrier with the MEMBARRIER_CMD_PRIVATE_EXPEDITED command
  are guaranteed non-blocking, due to using inter-processor interrupts
  to implement memory barriers.

  Because of this, membarrier calls that use
  MEMBARRIER_CMD_PRIVATE_EXPEDITED are much faster than those that use
  MEMBARRIER_CMD_SHARED.

  Since Bionic uses a 4.15 kernel, all kernel requirements are met, and
  this SRU is to enable support for MEMBARRIER_CMD_PRIVATE_EXPEDITED in
  the liburcu package.

  This brings the performance of the liburcu library back in line to
  where it was in Trusty, as this particular user has performance
  problems upon upgrading from Trusty to Bionic.

  [Test]

  Testing performance is heavily dependent on the application which
  links against liburcu, and the workload which it executes.

  A test package is available in the following ppa:
  https://launchpad.net/~mruffell/+archive/ubuntu/sf276198-test

  For the sake of testing, we can use the benchmarks provided in the
  liburcu source code. Download a copy of the source code for liburcu
  either from the repos or from github:

  $ pull-lp-source liburcu bionic
  # OR
  $ git clone https://github.com/urcu/userspace-rcu.git
  $ git checkout v0.10.1 # version in bionic

  Build the code:

  $ ./bootstrap
  $ ./configure
  $ make

  Go into the tests/benchmark directory

  $ cd tests/benchmark

  From there, you can run benchmarks for the four main usages of
  liburcu: urcu, urcu-bp, urcu-signal and urcu-mb.

  On a 8 core machine, 6 threads for readers and 2 threads for writers,
  with a 10 second runtime, execute:

  $ ./test_urcu 6 2 10
  $ ./test_urcu_bp 6 2 10
  $ ./test_urcu_signal 6 2 10
  $ ./test_urcu_mb 6 2 10

  Results:

  ./test_urcu 6 2 10
  0.10.1-1: 17612527667 reads, 268 writes, 17612527935 ops
  0.10.1-1ubuntu1: 14988437247 reads, 810069 writes, 14989247316 ops

  $ ./test_urcu_bp 6 2 10
  0.10.1-1: 1177891079 reads, 1699523 writes, 1179590602 ops
  0.10.1-1ubuntu1: 13230354737 reads, 575314 writes, 13230930051 ops

  $ ./test_urcu_signal 6 2 10
  0.10.1-1: 20128392417 reads, 6859 writes, 20128399276 ops
  0.10.1-1ubuntu1: 20501430707 reads, 6890 writes, 20501437597 ops

  $ ./test_urcu_mb 6 2 10
  0.10.1-1: 627996563 reads, 5409563 writes, 633406126 ops
  0.10.1-1ubuntu1: 653194752 reads, 4590020 writes, 657784772 ops

  The SRU only changes behaviour for urcu and urcu-bp, since they are
  the only "flavours" of liburcu which the patches change. From a pure
  ops standpoint:

  $ ./test_urcu 6 2 10
  17612527935 ops
  14989247316 ops

  $ ./test_urcu_bp 6 2 10
  1179590602 ops
  13230930051 ops

  We see that in this particular benchmark workload, test_urcu sees
  extra performance overhead with MEMBARRIER_CMD_PRIVATE_EXPEDITED, which
  is explained by the extra impact it has on the slow path, and by the
  larger number of writes it did during my benchmark.

  The real winner in this 

[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-05-08 Thread Dan Streetman
@james-page, it looks like elixir-lang in groovy has exceeded the limit
that rabbitmq-server wants to build with; since our rabbitmq-server is
newer than Debian, I'm not sure what your merge process from upstream
is.  Can you do another upstream merge please so rabbitmq-server is
buildable in groovy?  That should also pick up the fix for this bug, as
well.

Thanks!

** Changed in: rabbitmq-server (Ubuntu Groovy)
 Assignee: Nicolas Bock (nicolasbock) => James Page (james-page)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  In Progress
Status in rabbitmq-server source package in Xenial:
  In Progress
Status in rabbitmq-server source package in Bionic:
  In Progress
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  In Progress
Status in rabbitmq-server source package in Groovy:
  In Progress
Status in rabbitmq-server package in Debian:
  New

Bug description:
  The startup timeouts were recently adjusted and synchronized between
  the SysV and systemd startup files.

  https://github.com/rabbitmq/rabbitmq-server-release/pull/129

  The new startup files should be included in this package.

  [Impact]

  After starting the RabbitMQ server process, the startup script will
  wait for the server to start by calling `rabbitmqctl wait` and will
  time out after 10 s.

  The startup time of the server depends on how quickly the Mnesia
  database becomes available and the server will time out after
  `mnesia_table_loading_retry_timeout` ms times
  `mnesia_table_loading_retry_limit` retries. By default this wait is
  30,000 ms times 10 retries, i.e. 300 s.

  The mismatch between these two timeout values might lead to the
  startup script failing prematurely while the server is still waiting
  for the Mnesia tables.

  This change introduces variable `RABBITMQ_STARTUP_TIMEOUT` and the
  `--timeout` option into the startup script. The default value for this
  timeout is set to 10 minutes (600 seconds).

  This change also updates the systemd service file to match the timeout
  values between the two service management methods.
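
  For illustration only (a hypothetical local workaround, not the
  packaged fix), the same alignment could be approximated on an affected
  system with a systemd drop-in that sets both timeouts to 600 seconds:

  ```ini
  # /etc/systemd/system/rabbitmq-server.service.d/timeout.conf (hypothetical)
  [Service]
  # Give `rabbitmqctl wait` the same 10-minute budget as the patched script
  Environment=RABBITMQ_STARTUP_TIMEOUT=600
  TimeoutStartSec=600
  ```

  After `systemctl daemon-reload`, systemd would then wait as long as the
  startup script itself before declaring the unit failed.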

  [Scope]

  Upstream patch: https://github.com/rabbitmq/rabbitmq-server-
  release/pull/129

  * Fix is not included in the Debian package
  * Fix is not included in any Ubuntu series

  * Groovy and Focal can apply the upstream patch as is
  * Bionic and Xenial need an additional fix in the systemd service file
to set the `RABBITMQ_STARTUP_TIMEOUT` variable for the
`rabbitmq-server-wait` helper script.

  [Test Case]

  In a clustered setup with two nodes, A and B.

  1. create queue on A
  2. shut down B
  3. shut down A
  4. boot B

  The broker on B will wait for A. The systemd service will wait for 10
  seconds and then fail. Boot A and the rabbitmq-server process on B
  will complete startup.

  [Regression Potential]

  This change alters the behavior of the startup scripts when the Mnesia
  database takes a long time to become available. This might lead to failures
  further down the service dependency chain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1876230] Re: liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address performance problems with MEMBARRIER_CMD_SHARED

2020-05-06 Thread Dan Streetman
excellent analysis, thanks @mruffell!

uploaded to the bionic queue.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1876230

Title:
  liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address
  performance problems with MEMBARRIER_CMD_SHARED

Status in liburcu package in Ubuntu:
  Fix Released
Status in liburcu source package in Bionic:
  In Progress

Bug description:
  [Impact]

  In Linux 4.3, a new syscall was defined, called "membarrier". This
  system call was defined specifically for use in userspace-rcu (liburcu)
  to speed up the fast path / reader side of the library. The original
  implementation in Linux 4.3 only supported the MEMBARRIER_CMD_SHARED
  subcommand of the membarrier syscall.

  MEMBARRIER_CMD_SHARED executes a memory barrier on all threads from
  all processes running on the system. When it exits, the userspace
  thread which called it is guaranteed that all running threads share
  the same world view in regards to userspace addresses which are
  consumed by readers and writers.

  The problem with MEMBARRIER_CMD_SHARED is system calls made in this
  fashion can block, since it deploys a barrier across all threads in a
  system, and some other threads can be waiting on blocking operations,
  and take time to reach the barrier.

  In Linux 4.14, this was addressed by adding the
  MEMBARRIER_CMD_PRIVATE_EXPEDITED command to the membarrier syscall. It
  only targets threads which share the same mm as the thread calling the
  membarrier syscall, aka, threads in the current process, and not all
  threads / processes in the system.

  Calls to membarrier with the MEMBARRIER_CMD_PRIVATE_EXPEDITED command
  are guaranteed non-blocking, due to using inter-processor interrupts
  to implement memory barriers.

  Because of this, membarrier calls that use
  MEMBARRIER_CMD_PRIVATE_EXPEDITED are much faster than those that use
  MEMBARRIER_CMD_SHARED.
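
  As a rough sketch of what this looks like from userspace (assumptions:
  x86-64 Linux, where the membarrier syscall number is 324; the command
  constants are copied from the kernel's linux/membarrier.h):

  ```python
  # Probe and exercise membarrier(2). Hypothetical illustration only;
  # liburcu itself does this in C, not Python.
  import ctypes, platform

  NR_MEMBARRIER = 324                     # x86-64 syscall number (assumption)
  MEMBARRIER_CMD_QUERY = 0
  MEMBARRIER_CMD_SHARED = 1 << 0          # system-wide, may block (>= 4.3)
  MEMBARRIER_CMD_PRIVATE_EXPEDITED = 1 << 3           # per-process (>= 4.14)
  MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = 1 << 4  # required before use

  libc = ctypes.CDLL(None, use_errno=True)

  def membarrier(cmd, flags=0):
      """Thin wrapper over the raw membarrier(2) system call."""
      return libc.syscall(NR_MEMBARRIER, cmd, flags)

  if platform.machine() == "x86_64":
      supported = membarrier(MEMBARRIER_CMD_QUERY)
      if supported < 0:
          print("membarrier(2) unavailable (kernel < 4.3?)")
      elif supported & MEMBARRIER_CMD_PRIVATE_EXPEDITED:
          # A process must register once before the expedited command works.
          membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED)
          membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED)
          print("issued a private expedited barrier")
      else:
          print("only MEMBARRIER_CMD_SHARED available")
  ```

  The registration step is the key 4.14 semantic: the expedited command
  fails with EPERM unless the process opted in first.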

  Since Bionic uses a 4.15 kernel, all kernel requirements are met, and
  this SRU is to enable support for MEMBARRIER_CMD_PRIVATE_EXPEDITED in
  the liburcu package.

  This brings the performance of the liburcu library back in line to
  where it was in Trusty, as this particular user has performance
  problems upon upgrading from Trusty to Bionic.

  [Test]

  Testing performance is heavily dependent on the application which
  links against liburcu, and the workload which it executes.

  A test package is available in the following ppa:
  https://launchpad.net/~mruffell/+archive/ubuntu/sf276198-test

  For the sake of testing, we can use the benchmarks provided in the
  liburcu source code. Download a copy of the source code for liburcu
  either from the repos or from github:

  $ pull-lp-source liburcu bionic
  # OR
  $ git clone https://github.com/urcu/userspace-rcu.git
  $ git checkout v0.10.1 # version in bionic

  Build the code:

  $ ./bootstrap
  $ ./configure
  $ make

  Go into the tests/benchmark directory

  $ cd tests/benchmark

  From there, you can run benchmarks for the four main usages of
  liburcu: urcu, urcu-bp, urcu-signal and urcu-mb.

  On an 8-core machine, with 6 reader threads and 2 writer threads,
  with a 10 second runtime, execute:

  $ ./test_urcu 6 2 10
  $ ./test_urcu_bp 6 2 10
  $ ./test_urcu_signal 6 2 10
  $ ./test_urcu_mb 6 2 10

  Results:

  ./test_urcu 6 2 10
  0.10.1-1: 17612527667 reads, 268 writes, 17612527935 ops
  0.10.1-1ubuntu1: 14988437247 reads, 810069 writes, 14989247316 ops

  $ ./test_urcu_bp 6 2 10
  0.10.1-1: 1177891079 reads, 1699523 writes, 1179590602 ops
  0.10.1-1ubuntu1: 13230354737 reads, 575314 writes, 13230930051 ops

  $ ./test_urcu_signal 6 2 10
  0.10.1-1: 20128392417 reads, 6859 writes, 20128399276 ops
  0.10.1-1ubuntu1: 20501430707 reads, 6890 writes, 20501437597 ops

  $ ./test_urcu_mb 6 2 10
  0.10.1-1: 627996563 reads, 5409563 writes, 633406126 ops
  0.10.1-1ubuntu1: 653194752 reads, 4590020 writes, 657784772 ops

  The SRU only changes behaviour for urcu and urcu-bp, since they are
  the only "flavours" of liburcu which the patches change. From a pure
  ops standpoint:

  $ ./test_urcu 6 2 10
  17612527935 ops
  14989247316 ops

  $ ./test_urcu_bp 6 2 10
  1179590602 ops
  13230930051 ops

  We see that in this particular benchmark workload, test_urcu sees
  extra performance overhead with MEMBARRIER_CMD_PRIVATE_EXPEDITED, which
  is explained by the extra impact it has on the slow path, and by the
  larger number of writes it did during my benchmark.

  The real winner in this benchmark workload is test_urcu_bp, which sees
  a 10x performance increase with MEMBARRIER_CMD_PRIVATE_EXPEDITED. Some
  of this may be down to the 3x fewer writes it did during my benchmark.
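
  The 10x figure can be checked directly from the totals quoted above (a
  trivial sanity check on the quoted numbers, not a new measurement):

  ```python
  # urcu-bp totals from the results section above (0.10.1-1 vs 0.10.1-1ubuntu1).
  bp_ops_old, bp_ops_new = 1_179_590_602, 13_230_930_051
  bp_writes_old, bp_writes_new = 1_699_523, 575_314

  ops_speedup = bp_ops_new / bp_ops_old        # total ops increase
  write_ratio = bp_writes_old / bp_writes_new  # how many times fewer writes

  print(f"urcu-bp: {ops_speedup:.1f}x ops, {write_ratio:.1f}x fewer writes")
  ```

  This works out to roughly an 11x increase in total ops alongside about
  3x fewer writes, matching the claims in the text.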

  Again, these benchmarks are indicative only and are very "random".
  Performance is really dependent on the application which links against
  liburcu and its workload.

  [Regression Potential]

  This SRU 

[Sts-sponsors] [Bug 1876230] Re: liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address performance problems with MEMBARRIER_CMD_SHARED

2020-05-05 Thread Dan Streetman
Hi @mruffell,

two questions for this sru:

1) it looks like static libs are built/provided by this package:

$ pull-lp-debs liburcu bionic ; for p in *.deb ; do echo "$p:" ; dpkg-deb -c $p | grep -E '*\.a' ; done
Found liburcu 0.10.1-1 in bionic
Using existing file liburcu-dev_0.10.1-1_amd64.deb
Using existing file liburcu6_0.10.1-1_amd64.deb
liburcu6_0.10.1-1_amd64.deb:
liburcu-dev_0.10.1-1_amd64.deb:
-rw-r--r-- root/root 47956 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu-bp.a
-rw-r--r-- root/root 69844 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu-cds.a
-rw-r--r-- root/root 23912 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu-common.a
-rw-r--r-- root/root 43750 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu-mb.a
-rw-r--r-- root/root 45642 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu-qsbr.a
-rw-r--r-- root/root 45716 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu-signal.a
-rw-r--r-- root/root 45148 2018-01-23 15:46 ./usr/lib/x86_64-linux-gnu/liburcu.a

and several other pkgs have build-dep for that:

$ reverse-depends -b -r bionic liburcu-dev
Reverse-Build-Depends
* gdnsd
* glusterfs
* knot
* ltt-control
* multipath-tools
* netsniff-ng
* sheepdog
* ust

Can you check those packages to see if any use static linking (and thus
should be recompiled with the updated static liburcu libs)?


2) In your testing results comparison:

> ./test_urcu 6 2 10
> 0.10.1-1: 17612527667 reads, 268 writes, 17612527935 ops
> 0.10.1-1ubuntu1: 14988437247 reads, 810069 writes, 14989247316 ops

The number of writes is obviously much, much better; however the number
of reads actually goes down with the patched code.

> $ ./test_urcu_bp 6 2 10
> 0.10.1-1: 1177891079 reads, 1699523 writes, 1179590602 ops
> 0.10.1-1ubuntu1: 13230354737 reads, 575314 writes, 13230930051 ops

Similarly, while the number of reads increases significantly, the number
of writes goes down.

I may be misreading the results, but it seems like this change is not an
across-the-board improvement, but more of a performance trade-off.  If
that's the case, I think it will be hard to make the case this should be
included as an SRU.  Can you clarify the results comparison in more
detail please?

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1876230

Title:
  liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address
  performance problems with MEMBARRIER_CMD_SHARED

Status in liburcu package in Ubuntu:
  Fix Released
Status in liburcu source package in Bionic:
  In Progress

Bug description:
  [Impact]

  In Linux 4.3, a new syscall was defined, called "membarrier". This
  system call was defined specifically for use in userspace-rcu (liburcu)
  to speed up the fast path / reader side of the library. The original
  implementation in Linux 4.3 only supported the MEMBARRIER_CMD_SHARED
  subcommand of the membarrier syscall.

  MEMBARRIER_CMD_SHARED executes a memory barrier on all threads from
  all processes running on the system. When it exits, the userspace
  thread which called it is guaranteed that all running threads share
  the same world view in regards to userspace addresses which are
  consumed by readers and writers.

  The problem with MEMBARRIER_CMD_SHARED is system calls made in this
  fashion can block, since it deploys a barrier across all threads in a
  system, and some other threads can be waiting on blocking operations,
  and take time to reach the barrier.

  In Linux 4.14, this was addressed by adding the
  MEMBARRIER_CMD_PRIVATE_EXPEDITED command to the membarrier syscall. It
  only targets threads which share the same mm as the thread calling the
  membarrier syscall, aka, threads in the current process, and not all
  threads / processes in the system.

  Calls to membarrier with the MEMBARRIER_CMD_PRIVATE_EXPEDITED command
  are guaranteed non-blocking, due to using inter-processor interrupts
  to implement memory barriers.

  Because of this, membarrier calls that use
  MEMBARRIER_CMD_PRIVATE_EXPEDITED are much faster than those that use
  MEMBARRIER_CMD_SHARED.

  Since Bionic uses a 4.15 kernel, all kernel requirements are met, and
  this SRU is to enable support for MEMBARRIER_CMD_PRIVATE_EXPEDITED in
  the liburcu package.

  This brings the performance of the liburcu library back in line to
  where it was in Trusty, as this particular user has performance
  problems upon upgrading from Trusty to Bionic.

  [Test]

  Testing performance is heavily dependent on the application which
  links against liburcu, and the workload which it executes.

  A test package is available in the following ppa:
  https://launchpad.net/~mruffell/+archive/ubuntu/sf276198-test

  For the sake of testing, we can use the benchmarks provided in the
  liburcu source code. Download a copy of the source code for liburcu
  either from the repos or from github:

  $ pull-lp-source liburcu 

[Sts-sponsors] [Bug 1876230] Re: liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address performance problems with MEMBARRIER_CMD_SHARED

2020-05-05 Thread Dan Streetman
** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1876230

Title:
  liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address
  performance problems with MEMBARRIER_CMD_SHARED

Status in liburcu package in Ubuntu:
  Fix Released
Status in liburcu source package in Bionic:
  In Progress

Bug description:
  [Impact]

  In Linux 4.3, a new syscall was defined, called "membarrier". This
  system call was defined specifically for use in userspace-rcu (liburcu)
  to speed up the fast path / reader side of the library. The original
  implementation in Linux 4.3 only supported the MEMBARRIER_CMD_SHARED
  subcommand of the membarrier syscall.

  MEMBARRIER_CMD_SHARED executes a memory barrier on all threads from
  all processes running on the system. When it exits, the userspace
  thread which called it is guaranteed that all running threads share
  the same world view in regards to userspace addresses which are
  consumed by readers and writers.

  The problem with MEMBARRIER_CMD_SHARED is system calls made in this
  fashion can block, since it deploys a barrier across all threads in a
  system, and some other threads can be waiting on blocking operations,
  and take time to reach the barrier.

  In Linux 4.14, this was addressed by adding the
  MEMBARRIER_CMD_PRIVATE_EXPEDITED command to the membarrier syscall. It
  only targets threads which share the same mm as the thread calling the
  membarrier syscall, aka, threads in the current process, and not all
  threads / processes in the system.

  Calls to membarrier with the MEMBARRIER_CMD_PRIVATE_EXPEDITED command
  are guaranteed non-blocking, due to using inter-processor interrupts
  to implement memory barriers.

  Because of this, membarrier calls that use
  MEMBARRIER_CMD_PRIVATE_EXPEDITED are much faster than those that use
  MEMBARRIER_CMD_SHARED.

  Since Bionic uses a 4.15 kernel, all kernel requirements are met, and
  this SRU is to enable support for MEMBARRIER_CMD_PRIVATE_EXPEDITED in
  the liburcu package.

  This brings the performance of the liburcu library back in line to
  where it was in Trusty, as this particular user has performance
  problems upon upgrading from Trusty to Bionic.

  [Test]

  Testing performance is heavily dependent on the application which
  links against liburcu, and the workload which it executes.

  A test package is available in the following ppa:
  https://launchpad.net/~mruffell/+archive/ubuntu/sf276198-test

  For the sake of testing, we can use the benchmarks provided in the
  liburcu source code. Download a copy of the source code for liburcu
  either from the repos or from github:

  $ pull-lp-source liburcu bionic
  # OR
  $ git clone https://github.com/urcu/userspace-rcu.git
  $ git checkout v0.10.1 # version in bionic

  Build the code:

  $ ./bootstrap
  $ ./configure
  $ make

  Go into the tests/benchmark directory

  $ cd tests/benchmark

  From there, you can run benchmarks for the four main usages of
  liburcu: urcu, urcu-bp, urcu-signal and urcu-mb.

  On an 8-core machine, with 6 reader threads and 2 writer threads,
  with a 10 second runtime, execute:

  $ ./test_urcu 6 2 10
  $ ./test_urcu_bp 6 2 10
  $ ./test_urcu_signal 6 2 10
  $ ./test_urcu_mb 6 2 10

  Results:

  ./test_urcu 6 2 10
  0.10.1-1: 17612527667 reads, 268 writes, 17612527935 ops
  0.10.1-1ubuntu1: 14988437247 reads, 810069 writes, 14989247316 ops

  $ ./test_urcu_bp 6 2 10
  0.10.1-1: 1177891079 reads, 1699523 writes, 1179590602 ops
  0.10.1-1ubuntu1: 13230354737 reads, 575314 writes, 13230930051 ops

  $ ./test_urcu_signal 6 2 10
  0.10.1-1: 20128392417 reads, 6859 writes, 20128399276 ops
  0.10.1-1ubuntu1: 20501430707 reads, 6890 writes, 20501437597 ops

  $ ./test_urcu_mb 6 2 10
  0.10.1-1: 627996563 reads, 5409563 writes, 633406126 ops
  0.10.1-1ubuntu1: 653194752 reads, 4590020 writes, 657784772 ops

  The SRU only changes behaviour for urcu and urcu-bp, since they are
  the only "flavours" of liburcu which the patches change. From a pure
  ops standpoint:

  $ ./test_urcu 6 2 10
  17612527935 ops
  14989247316 ops

  $ ./test_urcu_bp 6 2 10
  1179590602 ops
  13230930051 ops

  We see that in this particular benchmark workload, test_urcu sees
  extra performance overhead with MEMBARRIER_CMD_PRIVATE_EXPEDITED, which
  is explained by the extra impact it has on the slow path, and by the
  larger number of writes it did during my benchmark.

  The real winner in this benchmark workload is test_urcu_bp, which sees
  a 10x performance increase with MEMBARRIER_CMD_PRIVATE_EXPEDITED. Some
  of this may be down to the 3x fewer writes it did during my benchmark.

  Again, these benchmarks are indicative only and are very "random".
  Performance is really dependent on the application which links against
  liburcu and its workload.

  [Regression Potential]

  This SRU changes the behaviour of the following 

[Sts-sponsors] [Bug 1876230] Re: liburcu: Enable MEMBARRIER_CMD_PRIVATE_EXPEDITED to address performance problems with MEMBARRIER_CMD_SHARED

2020-05-05 Thread Dan Streetman
** Description changed:

  [Impact]
  
  In Linux 4.3, a new syscall was defined, called "membarrier". This
  system call was defined specifically for use in userspace-rcu (liburcu)
  to speed up the fast path / reader side of the library. The original
  implementation in Linux 4.3 only supported the MEMBARRIER_CMD_SHARED
  subcommand of the membarrier syscall.
  
  MEMBARRIER_CMD_SHARED executes a memory barrier on all threads from all
  processes running on the system. When it exits, the userspace thread
  which called it is guaranteed that all running threads share the same
  world view in regards to userspace addresses which are consumed by
  readers and writers.
  
  The problem with MEMBARRIER_CMD_SHARED is system calls made in this
  fashion can block, since it deploys a barrier across all threads in a
  system, and some other threads can be waiting on blocking operations,
  and take time to reach the barrier.
  
  In Linux 4.14, this was addressed by adding the
  MEMBARRIER_CMD_PRIVATE_EXPEDITED command to the membarrier syscall. It
  only targets threads which share the same mm as the thread calling the
  membarrier syscall, aka, threads in the current process, and not all
  threads / processes in the system.
  
  Calls to membarrier with the MEMBARRIER_CMD_PRIVATE_EXPEDITED command
  are guaranteed non-blocking, due to using inter-processor interrupts to
  implement memory barriers.
  
  Because of this, membarrier calls that use
  MEMBARRIER_CMD_PRIVATE_EXPEDITED are much faster than those that use
  MEMBARRIER_CMD_SHARED.
  
  Since Bionic uses a 4.15 kernel, all kernel requirements are met, and
  this SRU is to enable support for MEMBARRIER_CMD_PRIVATE_EXPEDITED in
  the liburcu package.
  
  This brings the performance of the liburcu library back in line to where
  it was in Trusty, as this particular user has performance problems upon
  upgrading from Trusty to Bionic.
  
  [Test]
  
  Testing performance is heavily dependent on the application which links
  against liburcu, and the workload which it executes.
  
  A test package is available in the following ppa:
  https://launchpad.net/~mruffell/+archive/ubuntu/sf276198-test
  
  For the sake of testing, we can use the benchmarks provided in the
  liburcu source code. Download a copy of the source code for liburcu
  either from the repos or from github:
  
  $ pull-lp-source liburcu bionic
  # OR
  $ git clone https://github.com/urcu/userspace-rcu.git
  $ git checkout v0.10.1 # version in bionic
  
  Build the code:
  
  $ ./bootstrap
  $ ./configure
  $ make
  
  Go into the tests/benchmark directory
  
  $ cd tests/benchmark
  
  From there, you can run benchmarks for the four main usages of liburcu:
  urcu, urcu-bp, urcu-signal and urcu-mb.
  
  On an 8-core machine, with 6 reader threads and 2 writer threads,
  with a 10 second runtime, execute:
  
  $ ./test_urcu 6 2 10
  $ ./test_urcu_bp 6 2 10
  $ ./test_urcu_signal 6 2 10
  $ ./test_urcu_mb 6 2 10
  
  Results:
  
  ./test_urcu 6 2 10
  0.10.1-1: 17612527667 reads, 268 writes, 17612527935 ops
  0.10.1-1ubuntu1: 14988437247 reads, 810069 writes, 14989247316 ops
  
  $ ./test_urcu_bp 6 2 10
  0.10.1-1: 1177891079 reads, 1699523 writes, 1179590602 ops
  0.10.1-1ubuntu1: 13230354737 reads, 575314 writes, 13230930051 ops
  
  $ ./test_urcu_signal 6 2 10
  0.10.1-1: 20128392417 reads, 6859 writes, 20128399276 ops
  0.10.1-1ubuntu1: 20501430707 reads, 6890 writes, 20501437597 ops
  
  $ ./test_urcu_mb 6 2 10
  0.10.1-1: 627996563 reads, 5409563 writes, 633406126 ops
  0.10.1-1ubuntu1: 653194752 reads, 4590020 writes, 657784772 ops
  
  The SRU only changes behaviour for urcu and urcu-bp, since they are the
  only "flavours" of liburcu which the patches change. From a pure ops
  standpoint:
  
  $ ./test_urcu 6 2 10
  17612527935 ops
  14989247316 ops
  
  $ ./test_urcu_bp 6 2 10
  1179590602 ops
  13230930051 ops
  
  We see that in this particular benchmark workload, test_urcu sees extra
  performance overhead with MEMBARRIER_CMD_PRIVATE_EXPEDITED, which is
  explained by the extra impact it has on the slow path, and by the larger
  number of writes it did during my benchmark.
  
  The real winner in this benchmark workload is test_urcu_bp, which sees a
  10x performance increase with MEMBARRIER_CMD_PRIVATE_EXPEDITED. Some of
  this may be down to the 3x fewer writes it did during my benchmark.
  
  Again, these benchmarks are indicative only and are very "random".
  Performance is really dependent on the application which links against
  liburcu and its workload.
  
  [Regression Potential]
  
  This SRU changes the behaviour of the following libraries which
  applications link against: -lurcu and -lurcu-bp. Behaviour is not
  changed in the rest: -lurcu-qsbr, -lucru-signal and -lucru-mb.
  
  On Bionic, liburcu will call the membarrier syscall in urcu and urcu-bp.
  This does not change. What is changing is the semantics of that syscall,
  from MEMBARRIER_CMD_SHARED to 

[Sts-sponsors] [Bug 1874075] Re: rabbitmq-server startup timeouts differ between SysV and systemd

2020-04-27 Thread Dan Streetman
** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1874075

Title:
  rabbitmq-server startup timeouts differ between SysV and systemd

Status in rabbitmq-server package in Ubuntu:
  In Progress
Status in rabbitmq-server source package in Xenial:
  In Progress
Status in rabbitmq-server source package in Bionic:
  In Progress
Status in rabbitmq-server source package in Eoan:
  Won't Fix
Status in rabbitmq-server source package in Focal:
  In Progress
Status in rabbitmq-server source package in Groovy:
  In Progress

Bug description:
  The startup timeouts were recently adjusted and synchronized between
  the SysV and systemd startup files.

  https://github.com/rabbitmq/rabbitmq-server-release/pull/129

  The new startup files should be included in this package.

  [Impact]

  After starting the RabbitMQ server process, the startup script will
  wait for the server to start by calling `rabbitmqctl wait` and will
  time out after 10 s.

  The startup time of the server depends on how quickly the Mnesia
  database becomes available and the server will time out after
  `mnesia_table_loading_retry_timeout` ms times
  `mnesia_table_loading_retry_limit` retries. By default this wait is
  30,000 ms times 10 retries, i.e. 300 s.

  The mismatch between these two timeout values might lead to the
  startup script failing prematurely while the server is still waiting
  for the Mnesia tables.

  This change introduces variable `RABBITMQ_STARTUP_TIMEOUT` and the
  `--timeout` option into the startup script. The default value for this
  timeout is set to 10 minutes (600 seconds).

  This change also updates the systemd service file to match the timeout
  values between the two service management methods.

  [Test Case]

  In a clustered setup with two nodes, A and B.

  1. create queue on A
  2. shut down B
  3. shut down A
  4. boot B

  The broker on B will wait for A. The systemd service will wait for 10
  seconds and then fail. Boot A and the rabbitmq-server process on B
  will complete startup.

  [Regression Potential]

  This change alters the behavior of the startup scripts when the Mnesia
  database takes a long time to become available. This might lead to failures
  further down the service dependency chain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1874075/+subscriptions



[Sts-sponsors] [Bug 1867676] Re: Fetching by secret container doesn't raises 404 exception

2020-04-13 Thread Dan Streetman
since this is in the Bionic unapproved upload queue already, i'm
removing ubuntu-sponsors.  I'm leaving sts-sponsors subscribed to help
nudge the upload through until it reaches bionic-updates.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1867676

Title:
  Fetching by secret container doesn't raises 404 exception

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in python-barbicanclient package in Ubuntu:
  Fix Released
Status in python-barbicanclient source package in Bionic:
  Triaged
Status in python-barbicanclient source package in Disco:
  Fix Released
Status in python-barbicanclient source package in Eoan:
  Fix Released
Status in python-barbicanclient source package in Focal:
  Fix Released

Bug description:
  [Impact]

  Users of Ubuntu bionic running openstack clouds >= rocky can't create
  octavia load balancer listeners anymore since the backport of the
  following patch:

  
https://opendev.org/openstack/octavia/commit/a501714a76e04b33dfb24c4ead9956ed4696d1df

  This change was introduced as part of the following backports and
  their posterior syncs into the current Bionic version.

  The fix being SRUed here is contained in 4.8.1-0ubuntu1 (disco onwards)
  but not in the Bionic version 4.6.0-0ubuntu1.

  The issue gets exposed with the following octavia
  packages from UCA + python-barbicanclient 4.6.0ubuntu1.

  Please note that likely this python-barbicanclient dependency should
  be part of UCA and not of main/universe.

   octavia-api | 3.0.0-0ubuntu3~cloud0   | rocky  | all
   octavia-api | 4.0.0-0ubuntu1.1~cloud0 | stein  | all
   octavia-api | 4.0.0-0ubuntu1~cloud0   | train  | all

  This change added a new exception handler in the code that manages the
  decoding of the given PKCS12 certificate bundle when the listener is
  created. This handler now captures the PKCS12 decoding error and raises
  it, preventing the listener creation from happening (when it is invoked
  with e.g.
  --default-tls-container="https://10.5.0.4:9312/v1/containers/68154f38-fccf-4990-b88c-86eb3cc7fe1a").
  This was originally being hidden under the legacy code handler, as can
  be seen here:

  
https://opendev.org/openstack/octavia/commit/a501714a76e04b33dfb24c4ead9956ed4696d1df

  This exception is raised because the barbicanclient doesn't know how to
  distinguish between a given secret and a container. Therefore, when the
  user specifies a container UUID, the client tries to fetch a secret
  with that uuid (including the /containers/UUID path) and an error 400
  (not the expected 404 HTTP error) is returned.

  The change proposed on the SRU makes the client aware of container and
  secret UUID(s) and is able to split the path to distinguish a non-
  secret (such as a container), in that way if a container is passed, it
  fails to pass the parsing validation and the right return code (404)
  is returned by the client.
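
  The split-and-validate idea can be sketched roughly as follows (a
  hypothetical helper for illustration, not the actual
  python-barbicanclient implementation):

  ```python
  from urllib.parse import urlparse

  def validate_secret_ref(ref):
      """Accept only hrefs whose path ends in .../secrets/<uuid>.

      A container href such as .../v1/containers/<uuid> fails validation,
      so the caller can map it to a 404-style "no such secret" error
      instead of fetching the wrong resource and getting an HTTP 400.
      """
      parts = [p for p in urlparse(ref).path.split("/") if p]
      if len(parts) < 2 or parts[-2] != "secrets":
          raise ValueError("not a secret reference: %s" % ref)
      return parts[-1]  # the secret UUID
  ```

  With validation like this, passing a container href raises a clean
  error on the client side, which the legacy octavia driver path can then
  handle as "not a secret" rather than surfacing the misleading 400.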

  If an error 404 is returned, then the except Exception block gets
  executed and the legacy driver code for decoding the PKCS12 certificate
  in octavia is invoked. This legacy driver is able to decode the
  container payloads, and the decoding of the PKCS12 certificate
  succeeds.

  This differentiation was implemented here:

  https://github.com/openstack/python-
  barbicanclient/commit/6651c8ffce48ce7ff08f5563a8e6212677ea0468

  As an example (this worked before the latest bionic version was
  pushed)

  openstack loadbalancer listener create --protocol-port 443 --protocol "TERMINATED_HTTPS" --name "test-listener" --default-tls-container="https://10.5.0.4:9312/v1/containers/68154f38-fccf-4990-b88c-86eb3cc7fe1a" -- lb1

  With the newest package upgrade this creation will fail with the
  following exception:

  The PKCS12 bundle is unreadable. Please check the PKCS12 bundle
  validity. In addition, make sure it does not require a pass phrase.
  Error: [('asn1 encoding routines', 'asn1_d2i_read_bio', 'not enough
  data')] (HTTP 400) (Request-ID: req-8e48d0b5-3f5b-
  4d26-9920-72b03343596a)

  Further rationale on this can be found on
  https://storyboard.openstack.org/#!/story/2007371

  [Test Case]

  1) Deploy this bundle or similar
  (http://paste.ubuntu.com/p/cgbwKNZHbW/)

  2) Create self-signed certificate, key and ca
  (http://paste.ubuntu.com/p/xyyxHZGDFR/)

  3) Create the 3 certs at barbican

  $ openstack secret store --name "test-pk-1" --secret-type "private"
  --payload-content-type "text/plain" --payload="$(cat
  ./keys/controller_key.pem)"

  $ openstack secret store --name "test-ca-1" --secret-type
  "certificate" --payload-content-type "text/plain" --payload="$(cat
  ./keys/controller_ca.pem)"

  $ openstack secret store --name "test-pub-1" --secret-type
  "certificate" --payload-content-type "text/plain" --payload="$(cat
  ./keys/controller_cert.pem)"

  4) Create a loadbalancer
  $ 

[Sts-sponsors] [Bug 1870619] Re: rabbitmq-server startup does not wait long enough

2020-04-07 Thread Dan Streetman
** Changed in: rabbitmq-server (Ubuntu Disco)
   Status: Confirmed => Won't Fix

** Changed in: rabbitmq-server (Ubuntu Focal)
 Assignee: (unassigned) => Nicolas Bock (nicolasbock)

** Changed in: rabbitmq-server (Ubuntu Eoan)
 Assignee: (unassigned) => Nicolas Bock (nicolasbock)

** Changed in: rabbitmq-server (Ubuntu Bionic)
 Assignee: (unassigned) => Nicolas Bock (nicolasbock)

** Changed in: rabbitmq-server (Ubuntu Bionic)
   Importance: Undecided => Medium

** Changed in: rabbitmq-server (Ubuntu Eoan)
   Importance: Undecided => Medium

** Changed in: rabbitmq-server (Ubuntu Focal)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1870619

Title:
  rabbitmq-server startup does not wait long enough

Status in OpenStack rabbitmq-server charm:
  New
Status in rabbitmq-server package in Ubuntu:
  Confirmed
Status in rabbitmq-server source package in Bionic:
  Confirmed
Status in rabbitmq-server source package in Disco:
  Won't Fix
Status in rabbitmq-server source package in Eoan:
  Confirmed
Status in rabbitmq-server source package in Focal:
  Confirmed

Bug description:
  [Impact]

   * Rabbitmq-server has 2 configuration settings that affect how long it
     will wait for the mnesia database to become available
   * The default is 30 seconds x 10 retries = 300 seconds
   * The startup wrapper rabbitmq-server-wait will wait only 10 seconds
   * If the database does not come online within 10 seconds, the startup
     script will fail despite the fact that rabbitmq-server is still
     waiting for another 290 seconds
   * This behavior leads to falsely identified failures, for example in
     OpenStack when a RabbitMQ cluster is restarted out of order
     (LP: #1828988)
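One way to express the intended behavior is a systemd drop-in that aligns the unit's start timeout with rabbitmq-server's own retry window (a hypothetical sketch, not the actual SRU fix; TimeoutStartSec is a standard systemd option, but the drop-in path and value are assumptions):

```ini
# /etc/systemd/system/rabbitmq-server.service.d/override.conf
# Hypothetical drop-in: let the unit keep "starting" for the full
# ~300 s (30 s x 10 retries) that rabbitmq-server itself waits for
# mnesia, instead of failing after the wrapper's 10 s.
[Service]
TimeoutStartSec=300
```

After adding the drop-in, `systemctl daemon-reload` picks it up.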

  [Test Case]

   * Create Rabbitmq cluster and create a queue with "ha-mode: all" policy
   * Shut down nodes one by one
   * Restart the node that was shut down first
   * This node will fail to start because it was not the master of the queue
   * Note that the startup script (SysV or systemd) will fail after 10
     seconds while the rabbitmq-server process is still waiting for the
     database to come online

  [Regression Potential]

   * I am not aware of any potential regressions

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1870619/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


Re: [Sts-sponsors] easy bugs for seyeongkim to take

2020-03-30 Thread Dan Streetman
Sounds great!  I added the tag to a few bugs :)

On Mon, Mar 30, 2020 at 9:36 AM Mauricio Oliveira
 wrote:
>
> Thought?
>
> Yes, that sounds like a great Friday night!
>
> On Mon, Mar 30, 2020 at 10:34 AM Eric Desrochers
>  wrote:
> >
> > I like "sts-sponsor-volunteer"
> >
> > Mauricio, your concern about skill levels make a lot of sense.
> > Note to myself, don't pick tag name on Friday night without thinking too 
> > much about it while watching Star Wars with the kids. ;)
> >
> > Thought ?
> >
> > Eric
> >
> > On Mon, Mar 30, 2020 at 7:08 AM Mauricio Oliveira 
> >  wrote:
> >>
> >> I do like the tag idea too.
> >>
> >> However, I think we should not use wording associated with skill
> >> levels (no matter how great and cool padawan is :-) to avoid the
> >> impression a bug requires less skill from the assignee to handle it
> >> (even though it may be the case, technically.)
> >>
> >> If that makes sense, perhaps tags like "sts-sponsor-volunteer" or
> >> "sts-sponsor-help" indicate a more proactive attitude from the person
> >> willing to take it.
> >>
> >> Then a search link for the tag, with a banner like "We need you for
> >> SRUs!" (lol, just kidding) prompting people to volunteer for
> >> fixes/SRUs to help with their own review/sponsoring practice, would
> >> help! :-)
> >>
> >> cheers,
> >>
> >> On Fri, Mar 27, 2020 at 10:50 PM Eric Desrochers
> >>  wrote:
> >> >
> >> > I like the tag idea. What about "sts-sponsor-padawan" ?
> >> >
> >> > On Fri, Mar 27, 2020 at 5:19 PM Dan Streetman 
> >> >  wrote:
> >> >>
> >> >> going thru my old watched bugs, here are some bugs that should be easy
> >> >> to handle.  Maybe we should figure out a LP bug tag to use for bugs
> >> >> that we find that are good for potential sponsors, like seyeongkim, to
> >> >> take?
> >> >>
> >> >> https://bugs.launchpad.net/ubuntu/bionic/+source/nvme-cli/+bug/1800544
> >> >> -super easy bug
> >> >>
> >> >> https://bugs.launchpad.net/ubuntu/bionic/+source/python-etcd3gw/+bug/1820083
> >> >> -the actual patch is trivial, but this needs fixing in debian as well
> >> >> and setting up a reproducer to verify might be difficult.  This
> >> >> originally came from a case from Bloomberg, so setuid might be able to
> >> >> help with reproducer and/or verification.
> >> >>
> >> >> https://bugs.launchpad.net/ubuntu/xenial/+source/drbd-utils/+bug/1673255
> >> >> -i have not actually looked at this one in a long time, so i'm not
> >> >> sure if it is still needed, but should be easy enough to check if it's
> >> >> still needed, and if so then it should be easy to patch
> >> >>
> >>
> >>
> >>
> >> --
> >> Mauricio Faria de Oliveira
>
>
>
> --
> Mauricio Faria de Oliveira



[Sts-sponsors] easy bugs for seyeongkim to take

2020-03-27 Thread Dan Streetman
going thru my old watched bugs, here are some bugs that should be easy
to handle.  Maybe we should figure out a LP bug tag to use for bugs
that we find that are good for potential sponsors, like seyeongkim, to
take?

https://bugs.launchpad.net/ubuntu/bionic/+source/nvme-cli/+bug/1800544
-super easy bug

https://bugs.launchpad.net/ubuntu/bionic/+source/python-etcd3gw/+bug/1820083
-the actual patch is trivial, but this needs fixing in debian as well
and setting up a reproducer to verify might be difficult.  This
originally came from a case from Bloomberg, so setuid might be able to
help with reproducer and/or verification.

https://bugs.launchpad.net/ubuntu/xenial/+source/drbd-utils/+bug/1673255
-i have not actually looked at this one in a long time, so i'm not
sure if it is still needed, but should be easy enough to check if it's
still needed, and if so then it should be easy to patch



[Sts-sponsors] [Bug 1860548] Re: systemd crashes when logging long message

2020-01-22 Thread Dan Streetman
** Description changed:

- [Description]
+ [Impact]
  
  Systemd crashes when logging very long messages. This regression was 
introduced with
  upstream commit d054f0a4d451 [1] due to xsprintf.
- Commits e68eedbbdc98 [2] and 574432f889ce [3] replace some uses of xsprintf 
with 
+ Commits e68eedbbdc98 [2] and 574432f889ce [3] replace some uses of xsprintf 
with
  snprintf and fix it.
  
  [Test Case]
  
  # systemd-run --scope apt-get -q -y -o DPkg::Options::=--force-confold
  -o DPkg::Options::=--force-confdef --allow-unauthenticated install acl
  adduser amd64-microcode apt base-files base-passwd bash bash-completion
  bind9-host binfmt-support binutils-common binutils-x86-64-linux-gnu
  bsdmainutils bsdutils busybox-initramfs busybox-static bzip2 ca-
  certificates console-setup console-setup-linux coreutils cpio cpp cpp-7
  crda cron curl dash dbus dctrl-tools debconf debconf-i18n debianutils
  dictionaries-common diffutils dirmngr distro-info-data dmeventd dmsetup
  dnsmasq-base dnsutils dpkg e2fslibs e2fsprogs ed eject fakeroot fdisk
  file findutils friendly-recovery gawk gcc-7-base gcc-8-base gettext-base
  gir1.2-glib-2.0 gnupg gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-
  client gpg-wks-server gpgconf gpgsm gpgv grep groff-base grub-common
  grub-pc grub-pc-bin grub2-common gzip hostname info init init-system-
  helpers initramfs-tools initramfs-tools-bin initramfs-tools-core
  install-info intel-microcode iproute2 iptables iputils-ping iputils-
  tracepath irqbalance isc-dhcp-client isc-dhcp-common iso-codes iw
  keyboard-configuration keyutils klibc-utils kmod krb5-locales krb5-user
  language-pack-en language-pack-en-base language-pack-gnome-en language-
  pack-gnome-en-base less libaccountsservice0 libacl1 libapparmor1
  libargon2-0 libasan4 libasn1-8-heimdal libassuan0 libatm1 libatomic1
  libattr1 libaudit-common libaudit1  libbinutils libblkid1 libbsd0
  libbz2-1.0 libc-bin libc-dev-bin libc6 libc6-dev libcap-ng0 libcap2
  libcap2-bin libcc1-0 libcilkrts5 libcom-err2 libcryptsetup12
  libcurl3-gnutls libcurl4 libdb5.3 libdbus-1-3 libdebconfclient0
  libdevmapper-event1.02.1 libdevmapper1.02.1 libdpkg-perl libdrm-common
  libdrm2 libdumbnet1 libedit2 libelf1 libestr0 libevent-2.1-6 libexpat1
  libexpat1-dev libext2fs2 libfakeroot libfastjson4 libfdisk1 libffi6
  libfreetype6 libfribidi0 libfuse2 libgc1c2 libgcc-7-dev libgcc1
  libgcrypt20 libgdbm-compat4 libgeoip1 libgirepository-1.0-1 libglib2.0-0
  libglib2.0-data libgmp10 libgnutls30 libgomp1 libgpg-error0 libgpm2
  libgssapi-krb5-2 libgssapi3-heimdal libgssrpc4 libhcrypto4-heimdal
  libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
  libitm1 libjs-jquery libjs-sphinxdoc libjs-underscore libk5crypto3
  libkadm5clnt-mit11 libkadm5srv-mit11 libkdb5-9 libkeyutils1 libkmod2
  libkrb5-26-heimdal libkrb5-3 libkrb5support0 libksba8 libldap-2.4-2
  libldap-common liblocale-gettext-perl liblsan0 liblz4-1 liblzma5
  libmagic-mgc libmagic1 libmnl0 libmount1 libmpc3 libmpdec2 libmpfr6
  libmpx2 libmspack0 libncurses5 libncursesw5 libnetfilter-conntrack3
  libnettle6 libnewt0.52 libnfnetlink0 libnfsidmap2 libnghttp2-14
  libnl-3-200 libnl-genl-3-200 libnorm1 libnpth0 libnss-systemd libnuma1
  libp11-kit0 libpam-cap libpam-krb5 libpam-modules libpam-modules-bin
  libpam-runtime libpam-systemd libpam0g libparted2 libpcap0.8 libpci3
  libpcre3 libpgm-5.2-0 libpipeline1 libplymouth4 libpng16-16 libpolkit-
  gobject-1-0 libpopt0 libpsl5 libpython-all-dev libpython-dev libpython-
  stdlib libpython2.7 libpython2.7-dev libpython2.7-minimal
  
- 
  # tail -f /var/log/syslog
  ...
  Jan 22 12:50:33 bionic-kernel systemd[1]: Assertion 'xsprintf: buf[] must be 
big enough' failed at ../src/core/job.c:803, function job_log_status_message(). 
Aborting.
  
  Broadcast message from systemd-journald@bionic-kernel (Wed 2020-01-22
  12:50:33 UTC):
  
  systemd[1]: Caught <ABRT>, dumped core as pid 14620.
  
- 
- Broadcast message from systemd-journald@bionic-kernel (Wed 2020-01-22 
12:50:33 UTC):
+ Broadcast message from systemd-journald@bionic-kernel (Wed 2020-01-22
+ 12:50:33 UTC):
  
  systemd[1]: Freezing execution.
  
  Jan 22 12:50:33 bionic-kernel systemd[1]: Caught <ABRT>, dumped core as pid 14620.
  Jan 22 12:50:33 bionic-kernel systemd[1]: Freezing execution.
  
- 
  [Regression Potential]
  
  The patches replace xsprintf with snprintf and the regression potential
  is small.
+ 
+ Any regression would likely involve additional systemd crashes and/or
+ truncated log/output messages.
  
  [Other]
  
  Only Bionic is affected.
  
  [1] https://github.com/systemd/systemd/issues/4534
- [2] 
https://github.com/systemd/systemd/commit/e68eedbbdc98fa13449756b7fee3bed689d76493
 
+ [2] 
https://github.com/systemd/systemd/commit/e68eedbbdc98fa13449756b7fee3bed689d76493
  [3] 
https://github.com/systemd/systemd/commit/574432f889ce3de126bbc6736bcbd22ee170ff82


[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2020-01-22 Thread Dan Streetman
** Tags removed: sts-sponsor-ddstreet

** Tags removed: sts-sponsor

https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  Fix Released
Status in mdadm source package in Bionic:
  Fix Committed
Status in mdadm source package in Disco:
  Won't Fix
Status in mdadm source package in Eoan:
  Fix Committed
Status in mdadm source package in Focal:
  Fix Released
Status in mdadm package in Debian:
  New

Bug description:
  [Impact]

  * Currently, mounted raid0/md-linear arrays have no indication/warning
  when one or more members are removed or suffer from some non-
  recoverable error condition. The mdadm tool shows "clean" state
  regardless of whether a member was removed.

  * The patch proposed in this SRU addresses this issue by introducing a
  new state "broken", which is analogous to "clean" but indicates that
  the array is not in a good/correct state. The commit, available upstream
  as 43ebc910 ("mdadm: Introduce new array state 'broken' for
  raid0/linear") [0], was extensively discussed and received a good
  amount of reviews/analysis by both the current mdadm maintainer as
  well as an old maintainer.

  * One important note here is that this patch requires a counter-part in the 
kernel to be fully functional, which was SRUed in LP: #1847773.
  It works fine/transparently without this kernel counter-part though.

  * We had reports of users testing a scenario of failed raid0 arrays,
  and getting 'clean' from mdadm proved to cause confusion and doesn't
  help in noticing that something went wrong with the arrays.

  * The potential situation this patch (with its kernel counter-part)
  addresses is: a user has a raid0/linear array, and it's mounted. If one
  member fails and gets removed (either physically, like a power or
  firmware issue, or in software, like a driver-induced removal due to
  detected failure), _without_ this patch (and its kernel counter-part)
  there's nothing to let user know it failed, except filesystem errors
  in dmesg. Also, non-direct writes to the filesystem will succeed, due
  to how page-cache/writeback work; even a 'sync' command run will
  succeed.

  * The case described in above bullet was tested and the writes to
  failed devices succeeded - after a reboot, the files written were
  present in the array, but corrupted. A user wouldn't notice that
  unless the writes were direct or some checksum was performed on
  the files. With this patch (and its kernel counter-part), the writes
  to such failed raid0/linear array are fast-failed and the filesystem
  goes read-only quickly.

  [Test case]

  * To test this patch, create a raid0 or linear md array on Linux using
  mdadm, like: "mdadm --create md0 --level=0 --raid-devices=2
  /dev/nvme0n1 /dev/nvme1n1";

  * Format the array using a FS of your choice (for example ext4) and
  mount the array;

  * Remove one member of the array, for example using sysfs interface
  (for nvme: echo 1 > /sys/block/nvme0n1/device/device/remove, for scsi:
  echo 1 > /sys/block/sdX/device/delete);

  * Without this patch, the array state shown by "mdadm --detail" is
  "clean", even when a member is missing/failed.

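As a rough companion to the steps above (illustrative only, not part of the SRU; the output excerpts below are hypothetical), the state field of `mdadm --detail` can be checked programmatically:

```python
def array_state(detail_output):
    # Pull the "State : ..." field out of `mdadm --detail` output.
    for line in detail_output.splitlines():
        key, _, value = line.partition(':')
        if key.strip() == 'State':
            return value.strip()
    return None

# Hypothetical excerpts of `mdadm --detail /dev/md0`:
before_patch = "     Raid Level : raid0\n          State : clean\n"
after_patch  = "     Raid Level : raid0\n          State : broken\n"

# Without the patch a failed raid0 member still reads 'clean'; with it
# (plus the kernel counter-part) the array reports 'broken'.
assert array_state(before_patch) == 'clean'
assert array_state(after_patch) == 'broken'
```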
  [Regression potential]

  * There are mainly two potential regressions here; the first is user-
  visible changes introduced by this mdadm patch. The second is if the
  patch itself has some unnoticed bug.

  * For the first type of potential regression: this patch introduces a
  change in how the array state is displayed in "mdadm --detail "
  output for raid0/linear arrays *only*. Currently, the tool shows just
  2 states, "clean" or "active". In the patch being SRUed here, this
  changes for raid0/linear arrays to read the sysfs array state instead.
  So for example, we could read "readonly" state here for raid0/linear
  if the user (or some tool) changes the array to such state. This only
  affects raid0/linear, the output for other levels didn't change at
  all.

  * Regarding potential unnoticed issues in the code, we changed mainly
  structs and the "detail" command. Structs were incremented with the
  new "broken" state and the detail output was changed for raid0/linear
  as discussed in the previous bullet.

  * Note that we *proactively* skipped Xenial SRU here, in order to
  prevent potential regressions - Xenial mdadm tool lacks code
  infrastructure used by this patch, so the decision was for
  safety/stability, by only SRUing Bionic / Disco / Eoan mdadm versions.

  [0]
  https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=43ebc910

  [other info]

  The last mdadm upload for bug 1850540 added changes that depend on as-
  yet unreleased kernel changes, and thus blocks any further release of
  mdadm until the next Bionic point release; see bug 1850540 comment 11.
  So, this bug (and all future mdadm bugs for Bionic, until the 

[Sts-sponsors] [Bug 1860548] Re: systemd crashes when logging long message

2020-01-22 Thread Dan Streetman
** Tags added: ddstreet-next

https://bugs.launchpad.net/bugs/1860548

Title:
  systemd crashes when logging long message

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Bionic:
  Confirmed
Status in systemd source package in Disco:
  Fix Released
Status in systemd source package in Eoan:
  Fix Released

Bug description:
  [Description]

  Systemd crashes when logging very long messages. This regression was
  introduced with upstream commit d054f0a4d451 [1] due to xsprintf.
  Commits e68eedbbdc98 [2] and 574432f889ce [3] replace some uses of
  xsprintf with snprintf and fix it.
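The behavioral difference can be mimicked in Python (purely illustrative; the real helpers are C functions/macros inside systemd):

```python
BUF_SIZE = 64  # stand-in for the fixed on-stack buffer in job.c

def xsprintf(formatted):
    # Mimics systemd's xsprintf: asserts the formatted string fits the
    # buffer -- in PID 1 a failed assertion means abort + core dump.
    assert len(formatted) < BUF_SIZE, "xsprintf: buf[] must be big enough"
    return formatted

def snprintf(formatted):
    # Mimics snprintf: silently truncates to the buffer size instead.
    return formatted[:BUF_SIZE - 1]

long_status = "Starting " + "x" * 300 + ".scope..."
truncated = snprintf(long_status)  # safe: message is just cut short
try:
    xsprintf(long_status)          # aborts -> the crash reported here
except AssertionError:
    pass
```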

  [Test Case]

  # systemd-run --scope apt-get -q -y -o DPkg::Options::=--force-confold
  -o DPkg::Options::=--force-confdef --allow-unauthenticated install acl
  adduser amd64-microcode apt base-files base-passwd bash bash-
  completion bind9-host binfmt-support binutils-common binutils-x86-64
  -linux-gnu bsdmainutils bsdutils busybox-initramfs busybox-static
  bzip2 ca-certificates console-setup console-setup-linux coreutils cpio
  cpp cpp-7 crda cron curl dash dbus dctrl-tools debconf debconf-i18n
  debianutils dictionaries-common diffutils dirmngr distro-info-data
  dmeventd dmsetup dnsmasq-base dnsutils dpkg e2fslibs e2fsprogs ed
  eject fakeroot fdisk file findutils friendly-recovery gawk gcc-7-base
  gcc-8-base gettext-base gir1.2-glib-2.0 gnupg gnupg-l10n gnupg-utils
  gpg gpg-agent gpg-wks-client gpg-wks-server gpgconf gpgsm gpgv grep
  groff-base grub-common grub-pc grub-pc-bin grub2-common gzip hostname
  info init init-system-helpers initramfs-tools initramfs-tools-bin
  initramfs-tools-core install-info intel-microcode iproute2 iptables
  iputils-ping iputils-tracepath irqbalance isc-dhcp-client isc-dhcp-
  common iso-codes iw keyboard-configuration keyutils klibc-utils kmod
  krb5-locales krb5-user language-pack-en language-pack-en-base
  language-pack-gnome-en language-pack-gnome-en-base less
  libaccountsservice0 libacl1 libapparmor1 libargon2-0 libasan4
  libasn1-8-heimdal libassuan0 libatm1 libatomic1 libattr1 libaudit-
  common libaudit1  libbinutils libblkid1 libbsd0 libbz2-1.0 libc-bin
  libc-dev-bin libc6 libc6-dev libcap-ng0 libcap2 libcap2-bin libcc1-0
  libcilkrts5 libcom-err2 libcryptsetup12 libcurl3-gnutls libcurl4
  libdb5.3 libdbus-1-3 libdebconfclient0 libdevmapper-event1.02.1
  libdevmapper1.02.1 libdpkg-perl libdrm-common libdrm2 libdumbnet1
  libedit2 libelf1 libestr0 libevent-2.1-6 libexpat1 libexpat1-dev
  libext2fs2 libfakeroot libfastjson4 libfdisk1 libffi6 libfreetype6
  libfribidi0 libfuse2 libgc1c2 libgcc-7-dev libgcc1 libgcrypt20
  libgdbm-compat4 libgeoip1 libgirepository-1.0-1 libglib2.0-0
  libglib2.0-data libgmp10 libgnutls30 libgomp1 libgpg-error0 libgpm2
  libgssapi-krb5-2 libgssapi3-heimdal libgssrpc4 libhcrypto4-heimdal
  libheimbase1-heimdal libheimntlm0-heimdal libhogweed4
  libhx509-5-heimdal libitm1 libjs-jquery libjs-sphinxdoc libjs-
  underscore libk5crypto3 libkadm5clnt-mit11 libkadm5srv-mit11 libkdb5-9
  libkeyutils1 libkmod2 libkrb5-26-heimdal libkrb5-3 libkrb5support0
  libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblsan0
  liblz4-1 liblzma5 libmagic-mgc libmagic1 libmnl0 libmount1 libmpc3
  libmpdec2 libmpfr6 libmpx2 libmspack0 libncurses5 libncursesw5
  libnetfilter-conntrack3 libnettle6 libnewt0.52 libnfnetlink0
  libnfsidmap2 libnghttp2-14  libnl-3-200 libnl-genl-3-200 libnorm1
  libnpth0 libnss-systemd libnuma1 libp11-kit0 libpam-cap libpam-krb5
  libpam-modules libpam-modules-bin libpam-runtime libpam-systemd
  libpam0g libparted2 libpcap0.8 libpci3 libpcre3 libpgm-5.2-0
  libpipeline1 libplymouth4 libpng16-16 libpolkit-gobject-1-0 libpopt0
  libpsl5 libpython-all-dev libpython-dev libpython-stdlib libpython2.7
  libpython2.7-dev libpython2.7-minimal

  
  # tail -f /var/log/syslog
  ...
  Jan 22 12:50:33 bionic-kernel systemd[1]: Assertion 'xsprintf: buf[] must be big enough' failed at ../src/core/job.c:803, function job_log_status_message(). Aborting.

  Broadcast message from systemd-journald@bionic-kernel (Wed 2020-01-22
  12:50:33 UTC):

  systemd[1]: Caught <ABRT>, dumped core as pid 14620.

  
  Broadcast message from systemd-journald@bionic-kernel (Wed 2020-01-22 
12:50:33 UTC):

  systemd[1]: Freezing execution.

  Jan 22 12:50:33 bionic-kernel systemd[1]: Caught <ABRT>, dumped core as pid 14620.
  Jan 22 12:50:33 bionic-kernel systemd[1]: Freezing execution.


  [Regression Potential]

  The patches replace xsprintf with snprintf and the regression
  potential is small.

  [Other]

  Only Bionic is affected.

  [1] https://github.com/systemd/systemd/issues/4534
  [2] https://github.com/systemd/systemd/commit/e68eedbbdc98fa13449756b7fee3bed689d76493
  [3] https://github.com/systemd/systemd/commit/574432f889ce3de126bbc6736bcbd22ee170ff82

To manage notifications about this bug go to:

[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2020-01-16 Thread Dan Streetman
re-uploaded the V3 to Bionic and Eoan, @rbasak @sil2100 does this change
address your concerns?

As this removes (comments out) the patches from bug 1850540, it no
longer needs to be blocked until a new kernel is released, as @gpiccoli
explains in comment 21.

https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  Fix Released
Status in mdadm source package in Bionic:
  In Progress
Status in mdadm source package in Disco:
  Won't Fix
Status in mdadm source package in Eoan:
  In Progress
Status in mdadm source package in Focal:
  Fix Released
Status in mdadm package in Debian:
  New

Bug description:
  [Impact]

  * Currently, mounted raid0/md-linear arrays have no indication/warning
  when one or more members are removed or suffer from some non-
  recoverable error condition. The mdadm tool shows "clean" state
  regardless of whether a member was removed.

  * The patch proposed in this SRU addresses this issue by introducing a
  new state "broken", which is analogous to "clean" but indicates that
  the array is not in a good/correct state. The commit, available upstream
  as 43ebc910 ("mdadm: Introduce new array state 'broken' for
  raid0/linear") [0], was extensively discussed and received a good
  amount of reviews/analysis by both the current mdadm maintainer as
  well as an old maintainer.

  * One important note here is that this patch requires a counter-part in the 
kernel to be fully functional, which was SRUed in LP: #1847773.
  It works fine/transparently without this kernel counter-part though.

  * We had reports of users testing a scenario of failed raid0 arrays,
  and getting 'clean' from mdadm proved to cause confusion and doesn't
  help in noticing that something went wrong with the arrays.

  * The potential situation this patch (with its kernel counter-part)
  addresses is: a user has a raid0/linear array, and it's mounted. If one
  member fails and gets removed (either physically, like a power or
  firmware issue, or in software, like a driver-induced removal due to
  detected failure), _without_ this patch (and its kernel counter-part)
  there's nothing to let user know it failed, except filesystem errors
  in dmesg. Also, non-direct writes to the filesystem will succeed, due
  to how page-cache/writeback work; even a 'sync' command run will
  succeed.

  * The case described in above bullet was tested and the writes to
  failed devices succeeded - after a reboot, the files written were
  present in the array, but corrupted. A user wouldn't notice that
  unless the writes were direct or some checksum was performed on
  the files. With this patch (and its kernel counter-part), the writes
  to such failed raid0/linear array are fast-failed and the filesystem
  goes read-only quickly.

  [Test case]

  * To test this patch, create a raid0 or linear md array on Linux using
  mdadm, like: "mdadm --create md0 --level=0 --raid-devices=2
  /dev/nvme0n1 /dev/nvme1n1";

  * Format the array using a FS of your choice (for example ext4) and
  mount the array;

  * Remove one member of the array, for example using sysfs interface
  (for nvme: echo 1 > /sys/block/nvme0n1/device/device/remove, for scsi:
  echo 1 > /sys/block/sdX/device/delete);

  * Without this patch, the array state shown by "mdadm --detail" is
  "clean", even when a member is missing/failed.

  [Regression potential]

  * There are mainly two potential regressions here; the first is user-
  visible changes introduced by this mdadm patch. The second is if the
  patch itself has some unnoticed bug.

  * For the first type of potential regression: this patch introduces a
  change in how the array state is displayed in "mdadm --detail "
  output for raid0/linear arrays *only*. Currently, the tool shows just
  2 states, "clean" or "active". In the patch being SRUed here, this
  changes for raid0/linear arrays to read the sysfs array state instead.
  So for example, we could read "readonly" state here for raid0/linear
  if the user (or some tool) changes the array to such state. This only
  affects raid0/linear, the output for other levels didn't change at
  all.

  * Regarding potential unnoticed issues in the code, we changed mainly
  structs and the "detail" command. Structs were incremented with the
  new "broken" state and the detail output was changed for raid0/linear
  as discussed in the previous bullet.

  * Note that we *proactively* skipped Xenial SRU here, in order to
  prevent potential regressions - Xenial mdadm tool lacks code
  infrastructure used by this patch, so the decision was for
  safety/stability, by only SRUing Bionic / Disco / Eoan mdadm versions.

  [0]
  https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=43ebc910

  [other info]

  The last mdadm upload for bug 1850540 added changes that depend on as-
  yet unreleased 

[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2019-12-21 Thread Dan Streetman
Uploaded mdadm with V2 patches to B/D/E, however please note as I just
updated this bug description to explain, all future updates to mdadm are
now temporarily blocked due to the mdadm changes from bug 1850540
requiring corresponding kernel patches that are not yet released.  I've
added the block-proposed-* tags to this bug to prevent release to
-updates.  Please see bug 1850540 comment 11 for details.

** Description changed:

  [Impact]
  
  * Currently, mounted raid0/md-linear arrays have no indication/warning
  when one or more members are removed or suffer from some non-recoverable
  error condition. The mdadm tool shows "clean" state regardless if a
  member was removed.
  
  * The patch proposed in this SRU addresses this issue by introducing a
  new state "broken", which is analog to "clean" but indicates that array
  is not in a good/correct state. The commit, available upstream as
  43ebc910 ("mdadm: Introduce new array state 'broken' for raid0/linear")
  [0], was extensively discussed and received a good amount of
  reviews/analysis by both the current mdadm maintainer as well as an old
  maintainer.
  
  * One important note here is that this patch requires a counter-part in the 
kernel to be fully functional, which was SRUed in LP: #1847773.
  It works fine/transparently without this kernel counter-part though.
  
  * We had reports of users testing a scenario of failed raid0 arrays, and
  getting 'clean' in mdadm proved to cause confusion and doesn't help on
  noticing something went wrong with the arrays.
  
  * The potential situation this patch (with its kernel counter-part)
  addresses is: an user has raid0/linear array, and it's mounted. If one
  member fails and gets removed (either physically, like a power or
  firmware issue, or in software, like a driver-induced removal due to
  detected failure), _without_ this patch (and its kernel counter-part)
  there's nothing to let user know it failed, except filesystem errors in
  dmesg. Also, non-direct writes to the filesystem will succeed, due to
  how page-cache/writeback work; even a 'sync' command run will succeed.
  
  * The case described in above bullet was tested and the writes to failed
  devices succeeded - after a reboot, the files written were present in
  the array, but corrupted. An user wouldn't noticed that unless if the
  writes were directed or some checksum was performed in the files. With
  this patch (and its kernel counter-part), the writes to such failed
  raid0/linear array are fast-failed and the filesystem goes read-only
  quickly.
  
  [Test case]
  
  * To test this patch, create a raid0 or linear md array on Linux using
  mdadm, like: "mdadm --create md0 --level=0 --raid-devices=2 /dev/nvme0n1
  /dev/nvme1n1";
  
  * Format the array using a FS of your choice (for example ext4) and
  mount the array;
  
  * Remove one member of the array, for example using sysfs interface (for
  nvme: echo 1 > /sys/block/nvme0n1/device/device/remove, for scsi: echo 1
  > /sys/block/sdX/device/delete);
  
  * Without this patch, the array state shown by "mdadm --detail" is
  "clean", regardless a member is missing/failed.
  
  [Regression potential]
  
  * There are mainly two potential regressions here; the first is user-
  visible changes introduced by this mdadm patch. The second is if the
  patch itself has some unnoticed bug.
  
  * For the first type of potential regression: this patch introduces a
  change in how the array state is displayed in "mdadm --detail <device>"
  output for raid0/linear arrays *only*. Currently, the tool shows just
  two states, "clean" or "active". With the patch being SRUed here, this
  changes for raid0/linear arrays to read the sysfs array state instead.
  So, for example, we could read a "readonly" state here for raid0/linear
  if the user (or some tool) changes the array to such a state. This only
  affects raid0/linear; the output for other levels didn't change at all.
  
  * Regarding potential unnoticed issues in the code, we changed mainly
  structs and the "detail" command. The structs were extended with the new
  "broken" state, and the detail output was changed for raid0/linear as
  discussed in the previous bullet.
  
  * Note that we *proactively* skipped a Xenial SRU here, in order to
  prevent potential regressions - the Xenial mdadm tool lacks code
  infrastructure used by this patch, so the decision was made for
  safety/stability, by only SRUing the Bionic / Disco / Eoan mdadm
  versions.
  
  [0]
  https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=43ebc910
  
  [other info]
  
- As mdadm for focal (20.04) hasn't been merged yet, this will need to be
- added there during or after merge.
+ The last mdadm upload for bug 1850540 added changes that depend on as-
+ yet unreleased kernel changes, and thus blocks any further release of
+ mdadm until the next Bionic point release; see bug 1850540 comment 11.
+ So, this bug (and all future mdadm bugs for Bionic, until the next point
+ release) must include 

[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2019-10-25 Thread Dan Streetman
uploaded to b/d/e, and i'll look at merging f next week and including
this patch.

** Tags added: sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  In Progress
Status in mdadm source package in Bionic:
  In Progress
Status in mdadm source package in Disco:
  In Progress
Status in mdadm source package in Eoan:
  In Progress
Status in mdadm source package in Focal:
  In Progress
Status in mdadm package in Debian:
  Unknown

Bug description:
  [Impact]

  * Currently, mounted raid0/md-linear arrays have no indication/warning
  when one or more members are removed or suffer from some non-recoverable
  error condition. The mdadm tool shows the "clean" state regardless of
  whether a member was removed.

  * The patch proposed in this SRU addresses this issue by introducing a
  new state "broken", which is analogous to "clean" but indicates that the
  array is not in a good/correct state. The commit, available upstream
  as 43ebc910 ("mdadm: Introduce new array state 'broken' for
  raid0/linear") [0], was extensively discussed and received a good
  amount of reviews/analysis by both the current mdadm maintainer and a
  former maintainer.

  * One important note here is that this patch requires a counterpart in
  the kernel to be fully functional, which was SRUed in LP: #1847773. It
  works fine/transparently without this kernel counterpart, though.

  [Test case]

  * To test this patch, create a raid0 or linear md array on Linux using
  mdadm, like: "mdadm --create md0 --level=0 --raid-devices=2
  /dev/nvme0n1 /dev/nvme1n1";

  * Format the array using a FS of your choice (for example ext4) and
  mount the array;

  * Remove one member of the array, for example using sysfs interface
  (for nvme: echo 1 > /sys/block/nvme0n1/device/device/remove, for scsi:
  echo 1 > /sys/block/sdX/device/delete);

  * Without this patch, the array state shown by "mdadm --detail" is
  "clean", regardless of whether a member is missing/failed.
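
  The detection the patch adds essentially boils down to reading the array
  state the kernel exposes via sysfs. A rough sketch of that logic follows
  (a simplified illustration only, not the actual mdadm source; the
  SYSFS_ROOT override is an assumption added purely so the sketch can be
  exercised on a scratch directory):

```shell
# Simplified illustration of the new raid0/linear display logic (not the
# actual mdadm C source): read the kernel's array_state from sysfs
# instead of always reporting "clean"/"active".
# SYSFS_ROOT is a hypothetical override, used here only for testing.
array_display_state() {
    md="$1"                                   # e.g. md0
    state_file="${SYSFS_ROOT:-/sys}/block/$md/md/array_state"
    if [ -r "$state_file" ]; then
        cat "$state_file"   # may now read "broken", "readonly", etc.
    else
        echo "clean"        # conservative fallback if sysfs is missing
    fi
}
```

  With the kernel counterpart from LP: #1847773 in place, the sysfs node
  reads "broken" once a raid0/linear member disappears, which is what the
  patched "mdadm --detail" then reports.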

  [Regression potential]

  * There's not much potential regression here; we just exhibit the
  arrays' state as "broken" if they have one or more missing/failed
  members. We believe the most common "issue" that could be reported from
  this patch is that a userspace tool relying on the array status always
  being "clean", even for broken devices, may behave differently with
  this patch.

  * Note that we *proactively* skipped a Xenial SRU here, in order to
  prevent potential regressions - the Xenial mdadm tool lacks code
  infrastructure used by this patch, so the decision was made for
  safety/stability, by only SRUing the Bionic / Disco / Eoan mdadm
  versions.

  [0]
  https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=43ebc910

  [other info]

  As mdadm for focal hasn't been merged yet, this will need to be added
  there during or after the merge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1847924/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2019-10-25 Thread Dan Streetman
** Description changed:

  [Impact]
  
  * Currently, mounted raid0/md-linear arrays have no indication/warning
  when one or more members are removed or suffer from some non-recoverable
  error condition. The mdadm tool shows the "clean" state regardless of
  whether a member was removed.
  
  * The patch proposed in this SRU addresses this issue by introducing a
  new state "broken", which is analogous to "clean" but indicates that the
  array is not in a good/correct state. The commit, available upstream as
  43ebc910 ("mdadm: Introduce new array state 'broken' for raid0/linear")
  [0], was extensively discussed and received a good amount of
  reviews/analysis by both the current mdadm maintainer as well as an old
  maintainer.
  
  * One important note here is that this patch requires a counterpart in
  the kernel to be fully functional, which was SRUed in LP: #1847773. It
  works fine/transparently without this kernel counterpart, though.
  
  [Test case]
  
  * To test this patch, create a raid0 or linear md array on Linux using
  mdadm, like: "mdadm --create md0 --level=0 --raid-devices=2 /dev/nvme0n1
  /dev/nvme1n1";
  
  * Format the array using a FS of your choice (for example ext4) and
  mount the array;
  
  * Remove one member of the array, for example using sysfs interface (for
  nvme: echo 1 > /sys/block/nvme0n1/device/device/remove, for scsi: echo 1
  > /sys/block/sdX/device/delete);
  
  * Without this patch, the array state shown by "mdadm --detail" is
  "clean", regardless of whether a member is missing/failed.
  
  [Regression potential]
  
  * There's not much potential regression here; we just exhibit the
  arrays' state as "broken" if they have one or more missing/failed
  members. We believe the most common "issue" that could be reported from
  this patch is that a userspace tool relying on the array status always
  being "clean", even for broken devices, may behave differently with this
  patch.
  
  * Note that we *proactively* skipped Xenial SRU here, in order to
  prevent potential regressions - Xenial mdadm tool lacks code
  infrastructure used by this patch, so the decision was for
  safety/stability, by only SRUing Bionic / Disco / Eoan mdadm versions.
  
  [0]
  https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=43ebc910
+ 
+ [other info]
+ 
+ As mdadm for focal hasn't been merged yet, this will need to be added
+ there during or after merge.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  In Progress
Status in mdadm source package in Bionic:
  In Progress
Status in mdadm source package in Disco:
  In Progress
Status in mdadm source package in Eoan:
  In Progress
Status in mdadm source package in Focal:
  In Progress
Status in mdadm package in Debian:
  Unknown


[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2019-10-25 Thread Dan Streetman
** Bug watch added: Debian Bug tracker #943520
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=943520

** Also affects: mdadm (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=943520
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  In Progress
Status in mdadm source package in Bionic:
  In Progress
Status in mdadm source package in Disco:
  In Progress
Status in mdadm source package in Eoan:
  In Progress
Status in mdadm source package in Focal:
  In Progress
Status in mdadm package in Debian:
  Unknown




[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2019-10-21 Thread Dan Streetman
** Changed in: mdadm (Ubuntu Focal)
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  In Progress
Status in mdadm source package in Bionic:
  In Progress
Status in mdadm source package in Disco:
  In Progress
Status in mdadm source package in Eoan:
  In Progress
Status in mdadm source package in Focal:
  In Progress




[Sts-sponsors] [Bug 1847924] Re: Introduce broken state parsing to mdadm

2019-10-14 Thread Dan Streetman
Thanks @gpiccoli; I can't sponsor this now, since Eoan is in final
freeze, but once it's released I'll be happy to sponsor.

** Changed in: mdadm (Ubuntu Eoan)
   Status: Confirmed => In Progress

** Changed in: mdadm (Ubuntu Disco)
   Status: Confirmed => In Progress

** Changed in: mdadm (Ubuntu Bionic)
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1847924

Title:
  Introduce broken state parsing to mdadm

Status in mdadm package in Ubuntu:
  In Progress
Status in mdadm source package in Bionic:
  In Progress
Status in mdadm source package in Disco:
  In Progress
Status in mdadm source package in Eoan:
  In Progress




[Sts-sponsors] [Bug 1846787] Re: systemd-logind leaves leftover sessions and scope files

2019-10-10 Thread Dan Streetman
uploaded dbus and systemd to xenial queue, thanks!

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1846787

Title:
  systemd-logind leaves leftover sessions and scope files

Status in dbus package in Ubuntu:
  Fix Released
Status in systemd package in Ubuntu:
  Fix Released
Status in dbus source package in Xenial:
  In Progress
Status in systemd source package in Xenial:
  In Progress

Bug description:
  [Impact]
  Scope file leakage can cause SSH delays and reduce performance in systemd

  [Description]
  The current systemd-logind version present in Xenial can leave abandoned SSH
  sessions and scope files in cases where the host sees a lot of concurrent SSH
  connections. These leftover sessions can greatly slow down systemd
  performance, and can impact sshd's handling of a large number of concurrent
  connections.

  To fix this issue, patches are needed in both dbus and systemd. These
  improve the performance of the communication between dbus and systemd, so
  that they can handle a higher volume of events (e.g. SSH logins). All of
  those patches are already present from Bionic onwards, so we only need
  these fixes for Xenial.

  == Systemd ==
  Upstream patches:
  - core: use an AF_UNIX/SOCK_DGRAM socket for cgroup agent notification (d8fdc62037b5)

  $ git describe --contains d8fdc62037b5
  v230~71^2~2

  $ rmadison systemd
   systemd | 229-4ubuntu4 | xenial  | source, ...
   systemd | 229-4ubuntu21.21 | xenial-security | source, ...
   systemd | 229-4ubuntu21.22 | xenial-updates  | source, ... <
   systemd | 237-3ubuntu10| bionic  | source, ...
   systemd | 237-3ubuntu10.29 | bionic-security | source, ...
   systemd | 237-3ubuntu10.29 | bionic-updates  | source, ...
   systemd | 237-3ubuntu10.31 | bionic-proposed | source, ...

  == DBus ==
  Upstream patches:
  - Only read one message at a time if there are fds pending (892f084eeda0)
  - bus: Fix timeout restarts  (529600397bca)
  - DBusMainLoop: ensure all required timeouts are restarted (446b0d9ac75a)

  $ git describe --contains 892f084eeda0 529600397bca 446b0d9ac75a
  dbus-1.11.10~44
  dbus-1.11.10~45
  dbus-1.11.16~2

  $ rmadison dbus
   dbus | 1.10.6-1ubuntu3| xenial   | source, ...
   dbus | 1.10.6-1ubuntu3.4  | xenial-security  | source, ...
   dbus | 1.10.6-1ubuntu3.4  | xenial-updates   | source, ... <
   dbus | 1.12.2-1ubuntu1| bionic   | source, ...
   dbus | 1.12.2-1ubuntu1.1  | bionic-security  | source, ...
   dbus | 1.12.2-1ubuntu1.1  | bionic-updates   | source, ...

  [Test Case]
  1) Simulate a lot of concurrent SSH connections with e.g. a for loop:
  multipass@xenial-logind:~$ for i in {1..1000}; do sleep 0.1; ssh localhost sleep 1 & done

  2) Check for leaked sessions in /run/systemd/system/:
  multipass@xenial-logind:~$ ls -ld /run/systemd/system/session-*.scope*
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-103.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-104.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-105.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-106.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-110.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-111.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-112.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-113.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-114.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-115.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-116.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-117.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-118.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-119.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-120.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-121.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-122.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-123.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-126.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-131.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-134.scope.d
  ...
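
  The check in step 2 can be reduced to counting the leftover per-session
  scope drop-in directories. A small helper along those lines (a hedged
  sketch; the directory argument defaults to the real /run path but is
  parameterized here only so the sketch can be tried on a scratch
  directory):

```shell
# Count leftover logind session scope drop-in directories, matching the
# layout shown in the listing above (/run/systemd/system/session-N.scope.d).
# The optional argument exists so the helper can run against a test dir.
count_leaked_scopes() {
    run_root="${1:-/run/systemd/system}"
    n=0
    for d in "$run_root"/session-*.scope.d; do
        # An unmatched glob stays literal, so the -d test filters it out.
        if [ -d "$d" ]; then
            n=$((n + 1))
        fi
    done
    echo "$n"
}
```

  On a healthy host the count drops back to zero shortly after the SSH
  sessions end; with the unpatched Xenial systemd it keeps growing.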

  [Regression Potential]
  As the patches change the communication socket between dbus and systemd,
  possible regressions could cause systemd to not be notified of dbus events
  and vice-versa. We could see units not getting started properly, and
  communication between different services breaking down (e.g. between 

[Sts-sponsors] [Bug 1846787] Re: systemd-logind leaves leftover sessions and scope files

2019-10-10 Thread Dan Streetman
The patches are on the medium-to-large side, but I have reviewed them
and as far as I can tell they appear correct.  The systemd patch is
needed to keep the cgroup-agent from overrunning the dbus socket
connection queue, and the dbus patches are needed to prevent a highly
loaded dbus message queue from timing out a long queue of incoming
messages.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1846787

Title:
  systemd-logind leaves leftover sessions and scope files

Status in dbus package in Ubuntu:
  Fix Released
Status in systemd package in Ubuntu:
  Fix Released
Status in dbus source package in Xenial:
  In Progress
Status in systemd source package in Xenial:
  In Progress


[Sts-sponsors] [Bug 1846787] Re: systemd-logind leaves leftover sessions and scope files

2019-10-07 Thread Dan Streetman
** Changed in: systemd (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: systemd (Ubuntu)
   Importance: Undecided => Medium

** Changed in: dbus (Ubuntu)
   Importance: Undecided => Medium

** Changed in: dbus (Ubuntu Xenial)
   Importance: Undecided => Medium

** Tags added: ddstreet sts-sponsor-ddstreet systemd xenial

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1846787

Title:
  systemd-logind leaves leftover sessions and scope files

Status in dbus package in Ubuntu:
  Fix Released
Status in systemd package in Ubuntu:
  Fix Released
Status in dbus source package in Xenial:
  In Progress
Status in systemd source package in Xenial:
  In Progress

Bug description:
  [Impact]
  Scope file leakage can cause SSH delays and reduce performance in systemd

  [Description]
  The current systemd-logind version present in Xenial can leave abandoned SSH
  sessions and scope files in cases where the host sees a lot of concurrent SSH
  connections. These leftover sessions can greatly slow down systemd
  performance, and can impact sshd's handling of a large number of concurrent
  connections.

  To fix this issue, patches are needed in both dbus and systemd. These
  improve the performance of the communication between dbus and systemd, so
  that they can handle a higher volume of events (e.g. SSH logins). All of
  those patches are already present from Bionic onwards, so we only need
  these fixes for Xenial.

  == Systemd ==
  Upstream patches:
  - core: use an AF_UNIX/SOCK_DGRAM socket for cgroup agent notification (d8fdc62037b5)

  $ git describe --contains d8fdc62037b5
  v230~71^2~2

  $ rmadison systemd
   systemd | 229-4ubuntu4 | xenial  | source, ...
   systemd | 229-4ubuntu21.21 | xenial-security | source, ...
   systemd | 229-4ubuntu21.22 | xenial-updates  | source, ... <
   systemd | 237-3ubuntu10| bionic  | source, ...
   systemd | 237-3ubuntu10.29 | bionic-security | source, ...
   systemd | 237-3ubuntu10.29 | bionic-updates  | source, ...
   systemd | 237-3ubuntu10.31 | bionic-proposed | source, ...

  == DBus ==
  Upstream patches:
  - Only read one message at a time if there are fds pending (892f084eeda0)
  - bus: Fix timeout restarts  (529600397bca)
  - DBusMainLoop: ensure all required timeouts are restarted (446b0d9ac75a)

  $ git describe --contains 892f084eeda0 529600397bca 446b0d9ac75a
  dbus-1.11.10~44
  dbus-1.11.10~45
  dbus-1.11.16~2

  $ rmadison dbus
   dbus | 1.10.6-1ubuntu3| xenial   | source, ...
   dbus | 1.10.6-1ubuntu3.4  | xenial-security  | source, ...
   dbus | 1.10.6-1ubuntu3.4  | xenial-updates   | source, ... <
   dbus | 1.12.2-1ubuntu1| bionic   | source, ...
   dbus | 1.12.2-1ubuntu1.1  | bionic-security  | source, ...
   dbus | 1.12.2-1ubuntu1.1  | bionic-updates   | source, ...

  [Test Case]
  1) Simulate a lot of concurrent SSH connections with e.g. a for loop:
  multipass@xenial-logind:~$ for i in {1..1000}; do sleep 0.1; ssh localhost sleep 1 & done

  2) Check for leaked sessions in /run/systemd/system/:
  multipass@xenial-logind:~$ ls -ld /run/systemd/system/session-*.scope*
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-103.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-104.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-105.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-106.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-110.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-111.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-112.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-113.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-114.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-115.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-116.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-117.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-118.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-119.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-120.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-121.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-122.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-123.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-126.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-131.scope.d
  drwxr-xr-x 2 root root 160 Oct 4 15:34 /run/systemd/system/session-134.scope.d
  ...
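
  The leaked scopes shown above can be counted mechanically, which makes the leak
  easy to check before and after applying the fix. A minimal sketch (hypothetical
  helper; path and naming taken from the `ls -ld` output above):

```shell
#!/bin/sh
# Count leftover session scope directories under /run/systemd/system.
# Hypothetical helper for the test case above; each leaked session
# leaves a session-<N>.scope.d directory behind.
count_leaked_scopes() {
    dir="${1:-/run/systemd/system}"
    # Expand the glob into the function's positional parameters.
    set -- "$dir"/session-*.scope.d
    if [ -e "$1" ]; then
        echo "$#"
    else
        # Glob matched nothing: no leaked scopes.
        echo 0
    fi
}
count_leaked_scopes "$@"
```

  On an affected host this count keeps growing under the SSH loop from step 1;
  on a fixed host it returns to 0 once the sessions close.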

  [Regression Potential]
 

[Sts-sponsors] [Bug 1834340] Re: Regression for GMail after libssl upgrade with TLSv1.3

2019-08-29 Thread Dan Streetman
sponsored asterisk, prayer, and mailsync for e/d/b

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1834340

Title:
  Regression for GMail after libssl upgrade with TLSv1.3

Status in asterisk package in Ubuntu:
  New
Status in mailsync package in Ubuntu:
  New
Status in php-imap package in Ubuntu:
  Invalid
Status in prayer package in Ubuntu:
  New
Status in uw-imap package in Ubuntu:
  Fix Released
Status in asterisk source package in Bionic:
  New
Status in mailsync source package in Bionic:
  New
Status in php-imap source package in Bionic:
  Invalid
Status in prayer source package in Bionic:
  New
Status in uw-imap source package in Bionic:
  Fix Released
Status in asterisk source package in Disco:
  New
Status in mailsync source package in Disco:
  New
Status in php-imap source package in Disco:
  Invalid
Status in prayer source package in Disco:
  New
Status in uw-imap source package in Disco:
  Fix Released
Status in asterisk source package in Eoan:
  New
Status in mailsync source package in Eoan:
  New
Status in php-imap source package in Eoan:
  Invalid
Status in prayer source package in Eoan:
  New
Status in uw-imap source package in Eoan:
  Fix Released
Status in uw-imap package in Debian:
  Unknown

Bug description:
  [Impact]

   * Users of libc-client2007e (e.g., php7.x-imap) can no longer
     connect to GMail on Bionic and later, after introduction of
     TLSv1.3 with OpenSSL 1.1.1 (normal upgrade path in Bionic).

   * GMail requires Server Name Indication (SNI) to be set when
     TLSv1.3 is used, otherwise the server provided certificate
     fails verification in the client and connection is aborted.

   * The fix is to set SNI to the hostname that the client will
     perform verification on. The change is only enabled if the
     client is built with OpenSSL 1.1.1 or later (i.e., with
     TLSv1.3 support), so as not to affect pre-TLSv1.3 behavior.

   * The fix remains functional if the client is built with
     OpenSSL 1.1.1 or later but an earlier TLS version (e.g.,
     TLSv1.2) ends up being used due to handshake/negotiation or
     server TLS support; this shouldn't be a problem per the
     testing below.

   * Regression testing was performed with a crawled list of 167
     IMAP/POP SSL servers, and no regressions were observed. In
     fact, one more email provider/server was fixed as well.

   * OpenSSL-only demonstration with -(no)servername:

     $ echo QUIT \
       | openssl s_client \
           -connect imap.gmail.com:993 \
           -verify_hostname imap.gmail.com \
           -noservername `# or -servername imap.gmail.com` \
           -tls1_3 -brief 2>&1 \
       | grep -i ^verif

    Output with '-noservername':

    verify error:num=18:self signed certificate
    verify error:num=62:Hostname mismatch
    Verification error: Hostname mismatch

    Output with '-servername imap.gmail.com'

    Verification: OK
    Verified peername: imap.gmail.com
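
    Automating the same -servername/-noservername comparison across many
    servers (as was done for the 167-server regression run) reduces to
    classifying the `-brief` output. A minimal sketch, assuming only the
    output lines shown above:

```shell
# Classify captured `openssl s_client -brief` output, fed on stdin,
# as a successful or failed certificate verification.
# Hypothetical helper for scripting the SNI comparison above.
classify_verify() {
    # A successful handshake prints "Verification: OK";
    # a failure prints "Verification error: ..." instead.
    if grep -qi '^Verification: OK'; then
        echo ok
    else
        echo fail
    fi
}
# Usage (network required):
#   echo QUIT | openssl s_client -connect imap.gmail.com:993 \
#       -servername imap.gmail.com -brief 2>&1 | classify_verify
```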

  [Test Case]

   * Commands:

     $ sudo apt install uw-mailutils
     $ mailutil check "{imap.googlemail.com:993/imap/ssl}INBOX"

     $ sudo apt install php7.2-cli php7.2-imap
     $ php -r 'imap_open("{imap.gmail.com:993/imap/ssl}INBOX", "username", "password");'

   * Before:

     $ mailutil check "{imap.googlemail.com:993/imap/ssl}INBOX"
     Certificate failure for imap.googlemail.com: self signed certificate: /OU=No SNI provided; please fix your client./CN=invalid2.invalid
     Certificate failure for imap.googlemail.com: self signed certificate: /OU=No SNI provided; please fix your client./CN=invalid2.invalid

     $ php -r 'imap_open("{imap.gmail.com:993/imap/ssl}INBOX", "username", "password");'
     PHP Warning:  imap_open(): Couldn't open stream {imap.gmail.com:993/imap/ssl}INBOX in Command line code on line 1
     PHP Notice:  Unknown: Certificate failure for imap.gmail.com: self signed certificate: /OU=No SNI provided; please fix your client./CN=invalid2.invalid (errflg=2) in Unknown on line 0

   * After:

     $ mailutil check "{imap.googlemail.com:993/imap/ssl}INBOX"
     {ce-in-f16.1e100.net/imap} username:
     ^C

     $ php -r 'imap_open("{imap.gmail.com:993/imap/ssl}INBOX", "username", "password");'
     PHP Warning:  imap_open(): Couldn't open stream {imap.gmail.com:993/imap/ssl}INBOX in Command line code on line 1
     PHP Notice:  Unknown: Retrying PLAIN authentication after [ALERT] Invalid credentials (Failure) (errflg=1) in Unknown on line 0
     PHP Notice:  Unknown: Retrying PLAIN authentication after [ALERT] Invalid credentials (Failure) (errflg=1) in Unknown on line 0
     PHP Notice:  Unknown: Can not authenticate to IMAP server: [ALERT] Invalid credentials (Failure) (errflg=2) in Unknown on line 0

   * Regression testing scripts/results are provided in attachments/comments.

  [Regression Potential]

   * Theoretically possible, but not observed in hundred+ of (167)
     

[Sts-sponsors] [Bug 1835818] Re: snmpd causes autofs mount points to be mounted on service start/restart

2019-08-21 Thread Dan Streetman
sponsored for e,d,b,x

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1835818

Title:
  snmpd causes autofs mount points to be mounted on service
  start/restart

Status in net-snmp package in Ubuntu:
  In Progress
Status in net-snmp source package in Xenial:
  In Progress
Status in net-snmp source package in Bionic:
  In Progress
Status in net-snmp source package in Cosmic:
  Won't Fix
Status in net-snmp source package in Disco:
  In Progress
Status in net-snmp source package in Eoan:
  In Progress
Status in net-snmp package in Debian:
  Unknown

Bug description:
  [Impact]

  Autofs direct map triggers are visible in /etc/mtab.
  On boot, when snmpd starts, it iterates over the entries in /etc/mtab and performs statfs() on them.
  This triggers automount to mount autofs mounts even if the user does not explicitly access them.

  However, this happens only if the autofs service is started before snmpd.
  If snmpd starts first, /etc/mtab is not yet populated with autofs mounts, and therefore they are not mounted.

  When there are only a few autofs mount points, the impact is insignificant.
  However, when there are thousands of them, this causes unnecessary overhead on operations such as df.
  This also delays system shutdown, since everything needs to be unmounted.
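
  The set of paths snmpd's /etc/mtab walk would touch can be previewed by
  filtering for autofs entries. A minimal sketch (assuming the standard mtab
  field layout: device, mount point, fstype, options):

```shell
# List the mount points of autofs entries in an mtab-format file
# (default: /etc/mtab). These are the trigger paths that snmpd's
# statfs() iteration would cause automount to mount.
autofs_mounts() {
    # Field 2 is the mount point, field 3 the filesystem type.
    awk '$3 == "autofs" { print $2 }' "${1:-/etc/mtab}"
}
```

  For the two-entry direct map in the test case below, this would print
  /home/test1 and /home/test2 once autofs has registered its triggers.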

  [Test Case]

  *** Test Case 1 - During boot:

  The user that brought this issue to our attention observed all autofs mounts
  being mounted at boot, because in their environment autofs starts first.

  In my environment snmpd starts first, so to reproduce I had to add a small
  delay in the snmpd init script.

  In /etc/init.d/snmp :
  @@ -36,6 +36,8 @@ cd /

   case "$1" in
     start)
  +# Delay snmp start
  +sleep 2
   log_daemon_msg "Starting SNMP services:"
   # remove old symlink with previous version
   if [ -L /var/run/agentx ]; then

  $cat /etc/auto.master
  /- /etc/auto.nfs --timeout=30

  $cat /etc/auto.nfs
  /home/test1 -fstype=nfs,hard,intr,nosuid,no-subtree-check,tcp :/srv/export/test1
  /home/test2 -fstype=nfs,hard,intr,nosuid,no-subtree-check,tcp :/srv/export/test2

  Reboot vm, syslog entries :

  # Autofs starts
  Jul 11 11:04:16 xenial-vm3 autofs[1295]:  * Starting automount...
  Jul 11 11:04:16 xenial-vm3 automount[1357]: Starting automounter version 5.1.1, master map /etc/auto.master
  Jul 11 11:04:16 xenial-vm3 automount[1357]: using kernel protocol version 5.02
  # Mount triggers, now visible in mtab
  Jul 11 11:04:16 xenial-vm3 automount[1357]: mounted direct on /home/test1 with timeout 300, freq 75 seconds
  Jul 11 11:04:16 xenial-vm3 automount[1357]: mounted direct on /home/test2 with timeout 300, freq 75 seconds
  Jul 11 11:04:16 xenial-vm3 autofs[1295]:...done.
  ...
  # SNMP starts
  Jul 11 11:04:18 xenial-vm3 snmpd[1294]:  * Starting SNMP services:
  Jul 11 11:04:18 xenial-vm3 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 1394 (snmpd)
  Jul 11 11:04:18 xenial-vm3 systemd[1]: Mounting Arbitrary Executable File Formats File System...
  Jul 11 11:04:18 xenial-vm3 systemd[1]: Mounted Arbitrary Executable File Formats File System.
  Jul 11 11:04:18 xenial-vm3 automount[1357]: attempting to mount entry /home/test1 <==
  Jul 11 11:04:18 xenial-vm3 kernel: [8.880685] FS-Cache: Loaded
  Jul 11 11:04:18 xenial-vm3 kernel: [8.889318] FS-Cache: Netfs 'nfs' registered for caching
  Jul 11 11:04:18 xenial-vm3 kernel: [8.902672] NFS: Registering the id_resolver key type
  Jul 11 11:04:18 xenial-vm3 kernel: [8.902680] Key type id_resolver registered
  Jul 11 11:04:18 xenial-vm3 kernel: [8.902682] Key type id_legacy registered
  Jul 11 11:04:18 xenial-vm3 automount[1357]: mounted /home/test1  <==
  Jul 11 11:04:18 xenial-vm3 automount[1357]: attempting to mount entry /home/test2  <==
  Jul 11 11:04:18 xenial-vm3 kernel: [9.163011] random: nonblocking pool is initialized
  Jul 11 11:04:18 xenial-vm3 automount[1357]: mounted /home/test2  <===

  *** Test Case 2 - Restart snmpd :

  To reproduce this case, autofs mounts should not be mounted to begin with.
  (restart autofs or let it expire)

  #systemctl restart snmpd.service

  Syslog entries :

  Jul 11 11:15:40 xenial-vm3 systemd[1]: Stopping LSB: SNMP agents...
  Jul 11 11:15:40 xenial-vm3 snmpd[1668]:  * Stopping SNMP services:
  Jul 11 11:15:40 xenial-vm3 snmpd[1434]: Received TERM or STOP signal...  shutting down...
  Jul 11 11:15:40 xenial-vm3 systemd[1]: Stopped LSB: SNMP agents.
  Jul 11 11:15:40 xenial-vm3 systemd[1]: Starting LSB: SNMP agents...
  Jul 11 11:15:42 xenial-vm3 snmpd[1677]:  * Starting SNMP services:
  Jul 11 11:15:42 xenial-vm3 automount[1357]: attempting to mount entry /home/test1 <===
  Jul 11 11:15:42 xenial-vm3 automount[1357]: mounted /home/test1 <===
  Jul 11 11:15:42 

[Sts-sponsors] [Bug 1835818] Re: snmpd causes autofs mount points to be mounted on service start/restart

2019-08-21 Thread Dan Streetman
** Tags added: sts-sponsor sts-sponsor-ddstreet

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1835818

Title:
  snmpd causes autofs mount points to be mounted on service
  start/restart

Status in net-snmp package in Ubuntu:
  In Progress
Status in net-snmp source package in Xenial:
  In Progress
Status in net-snmp source package in Bionic:
  In Progress
Status in net-snmp source package in Cosmic:
  Won't Fix
Status in net-snmp source package in Disco:
  In Progress
Status in net-snmp source package in Eoan:
  In Progress
Status in net-snmp package in Debian:
  Unknown


[Sts-sponsors] [Bug 1668771] Re: [SRU] systemd-resolved negative caching for extended period of time

2019-08-21 Thread Dan Streetman
The systemd in eoan-proposed was version 243-rc1, which contained this,
but that has been reverted and the current version is back to 240, which
doesn't contain this.  Discussion in #ubuntu-devel indicates eoan should
eventually have at least version 241, so I'm going to wait for that, and
then upload this fix if eoan doesn't already contain it.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1668771

Title:
  [SRU] systemd-resolved negative caching for extended period of time

Status in systemd:
  New
Status in systemd package in Ubuntu:
  In Progress
Status in systemd source package in Bionic:
  Fix Released
Status in systemd source package in Disco:
  Fix Released
Status in systemd source package in Eoan:
  In Progress

Bug description:
  [Impact]

   * If a DNS lookup returns SERVFAIL, systemd-resolved seems to cache the
  result for a very long time (infinity?). I have to restart systemd-resolved
  to have the negative cache purged.

  * After the SERVFAIL DNS server issue has been resolved, chromium/firefox
  still return a DNS error even though the host can correctly resolve the name.

  [Test Case]

  * If a lookup returns SERVFAIL, systemd-resolved will cache the result for
  30s (see 201d995); however, there are several use cases for which this is not
  acceptable (see the #5552 comments), and the only workaround is to disable
  the cache entirely or flush it, which isn't optimal.

  * Configure /etc/systemd/resolved.conf as follows:

  Cache=yes (default)

  * Restart systemd-resolved (systemctl restart systemd-resolved.service)

  * Run a host/getent command against an entry that will return SERVFAIL
  and check the journalctl output to see that the reply gets served from
  cache.

  root@systemd-disco:/home/ubuntu# host www.no-record.cl
  Host www.montemar.cl not found: 2(SERVFAIL)
  root@systemd-disco:/home/ubuntu# journalctl -u systemd-resolved -n
  -- Logs begin at Fri 2019-07-12 18:09:42 UTC, end at Tue 2019-07-23 15:10:17 UTC. --
  Jul 23 15:10:10 systemd-disco systemd-resolved[1282]: Transaction 6222 for  on scope dns on ens3/* now complete with 
  Jul 23 15:10:10 systemd-disco systemd-resolved[1282]: Sending response packet with id 61042 on interface 1/AF_INET.
  Jul 23 15:10:10 systemd-disco systemd-resolved[1282]: Freeing transaction 6222.
  Jul 23 15:10:17 systemd-disco systemd-resolved[1282]: Got DNS stub UDP query packet for id 53580
  Jul 23 15:10:17 systemd-disco systemd-resolved[1282]: Looking up RR for www.no-record.cl IN A.
  Jul 23 15:10:17 systemd-disco systemd-resolved[1282]: RCODE SERVFAIL cache hit for www.no-record.cl IN A
  Jul 23 15:10:17 systemd-disco systemd-resolved[1282]: Transaction 58570 for <www.no-record.cl IN A> on scope dns on ens3/* now complete with  scope dns on ens3/.
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Using feature level UDP for transaction 22382.
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Sending query packet with id 22382.
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Processing incoming packet on transaction 22382 (rcode=SERVFAIL).
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Server returned error: SERVFAIL
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Not caching negative entry for: www.metaklass.org IN A, cache mode set to no-negative
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Transaction 22382 for  on scope dns on ens3/ now complete with from network (unsigned).
  Jul 12 18:48:31 systemd-disco systemd-resolved[2635]: Sending response packet with id 31060 on interface 1/AF_INET.

  The following patch https://github.com/systemd/systemd/pull/13047 implements the required changes.
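
  Until the fix lands, the workaround mentioned above is to flush the cache. A
  minimal sketch that tries the newer `resolvectl` client first and falls back
  to the older `systemd-resolve` name:

```shell
# Flush systemd-resolved's cache, trying the newer CLI first.
# Workaround sketch for the SERVFAIL negative-cache issue above;
# assumes one of the two standard resolved client binaries exists.
flush_resolved_cache() {
    if command -v resolvectl >/dev/null 2>&1; then
        resolvectl flush-caches
    elif command -v systemd-resolve >/dev/null 2>&1; then
        systemd-resolve --flush-caches
    else
        echo "no resolved client found" >&2
        return 1
    fi
}
```

  Disabling the cache entirely (Cache=no in resolved.conf) also works, at the
  cost of losing positive caching too.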

  [Other Info]

  Note that systemd in Eoan is being upgraded to upstream 242, so I am
  not adding this to Eoan now, as I don't want to disturb the merge. If
  needed after the merge, I'll add to Eoan.

To manage notifications about this bug go to:
https://bugs.launchpad.net/systemd/+bug/1668771/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1668771] Re: [SRU] systemd-resolved negative caching for extended period of time

2019-08-21 Thread Dan Streetman
** Changed in: systemd (Ubuntu Eoan)
   Status: Fix Committed => In Progress

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1668771

Title:
  [SRU] systemd-resolved negative caching for extended period of time

Status in systemd:
  New
Status in systemd package in Ubuntu:
  In Progress
Status in systemd source package in Bionic:
  Fix Released
Status in systemd source package in Disco:
  Fix Released
Status in systemd source package in Eoan:
  In Progress




[Sts-sponsors] [Bug 1834340] Re: Regression for GMail after libssl upgrade with TLSv1.3

2019-08-14 Thread Dan Streetman
> Test with an IP address should not send SNI per the patch,
> so it should fail with the certificate verification error:

just to clarify as I was not clear at first:

- with TLSv1.3, some servers (as listed in the description, e.g. gmail) require SNI
  - if the client is accessing the server via DNS name, it provides SNI
  - if the client is accessing the server via IP address, it does not provide SNI

So this means servers that require SNI when using TLSv1.3 can no longer be
accessed by their direct IP address; their hostname *must* be used.

questions:
1) did access by IP address work before updating to TLSv1.3?
2) if direct IP address access used to work, does the code need to fall back to
pre-TLSv1.3 for servers that require SNI but are being accessed by IP address?


I have sponsored this to e, d, and b, as it seems to be doing the right thing
based on the RFC: https://tools.ietf.org/html/rfc6066#page-6
as discussed in previous comments.

But I think the regression potential should be considered in case direct IP
address access worked before (i.e. before the update to openssl 1.1.1) but
isn't restored by this patch. At minimum it should be listed in the regression
potential section of the description.

Thanks!

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1834340

Title:
  Regression for GMail after libssl upgrade with TLSv1.3

Status in asterisk package in Ubuntu:
  New
Status in mailsync package in Ubuntu:
  New
Status in php-imap package in Ubuntu:
  Invalid
Status in prayer package in Ubuntu:
  New
Status in uw-imap package in Ubuntu:
  In Progress
Status in asterisk source package in Bionic:
  New
Status in mailsync source package in Bionic:
  New
Status in php-imap source package in Bionic:
  Invalid
Status in prayer source package in Bionic:
  New
Status in uw-imap source package in Bionic:
  In Progress
Status in asterisk source package in Disco:
  New
Status in mailsync source package in Disco:
  New
Status in php-imap source package in Disco:
  Invalid
Status in prayer source package in Disco:
  New
Status in uw-imap source package in Disco:
  In Progress
Status in asterisk source package in Eoan:
  New
Status in mailsync source package in Eoan:
  New
Status in php-imap source package in Eoan:
  Invalid
Status in prayer source package in Eoan:
  New
Status in uw-imap source package in Eoan:
  In Progress
Status in uw-imap package in Debian:
  Unknown


[Sts-sponsors] [Bug 1834340] Re: Regression for GMail after libssl upgrade with TLSv1.3

2019-08-14 Thread Dan Streetman
** Also affects: uw-imap (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916041
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1834340

Title:
  Regression for GMail after libssl upgrade with TLSv1.3

Status in asterisk package in Ubuntu:
  New
Status in mailsync package in Ubuntu:
  New
Status in php-imap package in Ubuntu:
  Invalid
Status in prayer package in Ubuntu:
  New
Status in uw-imap package in Ubuntu:
  In Progress
Status in asterisk source package in Bionic:
  New
Status in mailsync source package in Bionic:
  New
Status in php-imap source package in Bionic:
  Invalid
Status in prayer source package in Bionic:
  New
Status in uw-imap source package in Bionic:
  In Progress
Status in asterisk source package in Disco:
  New
Status in mailsync source package in Disco:
  New
Status in php-imap source package in Disco:
  Invalid
Status in prayer source package in Disco:
  New
Status in uw-imap source package in Disco:
  In Progress
Status in asterisk source package in Eoan:
  New
Status in mailsync source package in Eoan:
  New
Status in php-imap source package in Eoan:
  Invalid
Status in prayer source package in Eoan:
  New
Status in uw-imap source package in Eoan:
  In Progress
Status in uw-imap package in Debian:
  Unknown

Bug description:
  [Impact]

   * Users of libc-client2007e (e.g., php7.x-imap) can no longer
     connect to GMail on Bionic and later, after introduction of
     TLSv1.3 with OpenSSL 1.1.1 (normal upgrade path in Bionic).

   * GMail requires Server Name Indication (SNI) to be set when
     TLSv1.3 is used, otherwise the server provided certificate
     fails verification in the client and connection is aborted.

   * The fix is to set SNI to the hostname that the client will
     perform certificate verification against. The change is only
     enabled if the client is built with OpenSSL 1.1.1 or later
     (i.e., with TLSv1.3 support), so as not to affect the behavior
     of pre-TLSv1.3 builds.

   * The fix remains functional even if the client is built with
     OpenSSL 1.1.1 or later but an earlier TLS version (e.g.,
     TLSv1.2) ends up being negotiated during the handshake; per
     the tests below, this is not a problem.

   * Regression testing was performed against a crawled list of
     167 IMAP/POP SSL servers, and no regressions were observed.
     In fact, the change also fixed one more email provider/server.

   * OpenSSL-only demonstration with -(no)servername:

     $ echo QUIT \
       | openssl s_client \
           -connect imap.gmail.com:993 \
           -verify_hostname imap.gmail.com \
           -noservername `# or -servername imap.gmail.com` \
           -tls1_3 -brief 2>&1 \
       | grep -i ^verif

    Output with '-noservername':

    verify error:num=18:self signed certificate
    verify error:num=62:Hostname mismatch
    Verification error: Hostname mismatch

    Output with '-servername imap.gmail.com':

    Verification: OK
    Verified peername: imap.gmail.com
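
   * A network-free illustration of the same SNI difference, using
     the Python ssl module as a stand-in (illustrative only; Python
     is not part of the affected c-client code). The SNI hostname is
     recorded on the client-side TLS object as soon as it is created:

```python
import ssl

# With SNI (the fixed behavior): the hostname is queued for the
# ClientHello, so GMail would present its real certificate.
ctx = ssl.create_default_context()
with_sni = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(),
                        server_hostname="imap.gmail.com")
print(with_sni.server_hostname)    # imap.gmail.com

# Without SNI (the broken behavior): no hostname would be sent, and
# GMail answers with its "No SNI provided" placeholder certificate,
# which then fails hostname verification in the client.
ctx_no_sni = ssl.create_default_context()
ctx_no_sni.check_hostname = False  # must be off to omit server_hostname
no_sni = ctx_no_sni.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO())
print(no_sni.server_hostname)      # None
```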

  [Test Case]

   * Commands:

     $ sudo apt install uw-mailutils
     $ mailutil check "{imap.googlemail.com:993/imap/ssl}INBOX"

     $ sudo apt install php7.2-cli php7.2-imap
     $ php -r 'imap_open("{imap.gmail.com:993/imap/ssl}INBOX", "username", "password");'

   * Before:

     $ mailutil check "{imap.googlemail.com:993/imap/ssl}INBOX"
     Certificate failure for imap.googlemail.com: self signed certificate: /OU=No SNI provided; please fix your client./CN=invalid2.invalid
     Certificate failure for imap.googlemail.com: self signed certificate: /OU=No SNI provided; please fix your client./CN=invalid2.invalid

     $ php -r 'imap_open("{imap.gmail.com:993/imap/ssl}INBOX", "username", "password");'
     PHP Warning:  imap_open(): Couldn't open stream {imap.gmail.com:993/imap/ssl}INBOX in Command line code on line 1
     PHP Notice:  Unknown: Certificate failure for imap.gmail.com: self signed certificate: /OU=No SNI provided; please fix your client./CN=invalid2.invalid (errflg=2) in Unknown on line 0

   * After:

     $ mailutil check "{imap.googlemail.com:993/imap/ssl}INBOX"
     {ce-in-f16.1e100.net/imap} username:
     ^C

     $ php -r 'imap_open("{imap.gmail.com:993/imap/ssl}INBOX", "username", "password");'
     PHP Warning:  imap_open(): Couldn't open stream {imap.gmail.com:993/imap/ssl}INBOX in Command line code on line 1
     PHP Notice:  Unknown: Retrying PLAIN authentication after [ALERT] Invalid credentials (Failure) (errflg=1) in Unknown on line 0
     PHP Notice:  Unknown: Retrying PLAIN authentication after [ALERT] Invalid credentials (Failure) (errflg=1) in Unknown on line 0
     PHP Notice:  Unknown: Can not authenticate to IMAP server: [ALERT] Invalid credentials (Failure) (errflg=2) in Unknown on line 0

   * Regression testing scripts/results are provided in
     attachments/comments.
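
   * Since the fix is compiled in only when the client is built
     against OpenSSL 1.1.1 or later, a quick runtime check of the
     local TLS stack can mirror that guard. A minimal sketch
     (hypothetical helper, not part of the SRU test scripts):

```python
import ssl

# OpenSSL encodes its version as 0xMNNFFPPS; 1.1.1 is 0x10101000.
# The c-client fix is gated on builds at or above this version.
OPENSSL_1_1_1 = 0x10101000

def sni_fix_would_apply():
    """Mirror the build-time guard: True when the interpreter is
    linked against OpenSSL 1.1.1+ (a TLSv1.3-capable library)."""
    return ssl.OPENSSL_VERSION_NUMBER >= OPENSSL_1_1_1

print(ssl.OPENSSL_VERSION, sni_fix_would_apply())
```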

  
