[Sts-sponsors] [Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-14 Thread Victor Tapia
** Patch added: "lshw-bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452916/+files/lshw-bionic.debdiff

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1826737

Title:
  lshw does not list NVMe storage devices as "disk" nodes

Status in lshw package in Ubuntu:
  Fix Released
Status in lshw source package in Bionic:
  New
Status in lshw source package in Focal:
  New
Status in lshw source package in Groovy:
  New
Status in lshw source package in Hirsute:
  Fix Released
Status in lshw package in Debian:
  Unknown

Bug description:
  [Impact]

   * NVMe devices are not recognized by lshw in Ubuntu

  [Test Case]

   * Running "lshw -C disk" or "lshw -C storage" does not show NVMe devices.
     Example: https://pastebin.ubuntu.com/p/FfKGNc7W6M/
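
   An illustrative transcript of the fixed behavior (the device shown is
   an example, not output captured for this bug):

   $ sudo lshw -C disk
     *-namespace
          description: NVMe disk
          physical id: 1
          logical name: /dev/nvme0n1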

  [Where problems could occur]

   * This upload consists of four cherry-picked patches and the feature
  is self-contained, so the regression potential is quite low. If
  anything were to break, it would be in the network device scan,
  whose structure was altered by the main NVMe patch.

  * Those who do HW monitoring/inventory listing based on 'lshw' might
  observe changes in their inventory listing. It shouldn't be a
  'problem', but I want to point it out.

  [Other information]

  # Red Hat bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1695343

  [Original description]

  Ubuntu MATE 19.04, updated 2019-04-28

  sudo lshw -class disk

  Expected: info on the SSD
  Actual result: info on the USB drive only.

  Note: this has already been reported to Red Hat.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.04
  Package: lshw 02.18-0.1ubuntu7
  ProcVersionSignature: Ubuntu 5.0.0-13.14-generic 5.0.6
  Uname: Linux 5.0.0-13-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.10-0ubuntu27
  Architecture: amd64
  CurrentDesktop: MATE
  Date: Sun Apr 28 07:11:45 2019
  InstallationDate: Installed on 2019-04-25 (3 days ago)
  InstallationMedia: Ubuntu-MATE 19.04 "Disco Dingo" - Release amd64 (20190416)
  SourcePackage: lshw
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+subscriptions


[Sts-sponsors] [Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-14 Thread Victor Tapia
** Patch added: "lshw-focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452915/+files/lshw-focal.debdiff


[Sts-sponsors] [Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-14 Thread Victor Tapia
** Patch removed: "lshw-groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452323/+files/lshw-groovy.debdiff

** Patch removed: "lshw-focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452324/+files/lshw-focal.debdiff

** Patch added: "lshw-groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452914/+files/lshw-groovy.debdiff


[Sts-sponsors] [Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-12 Thread Victor Tapia
** Patch added: "lshw-focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452324/+files/lshw-focal.debdiff


[Sts-sponsors] [Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-12 Thread Victor Tapia
** Patch added: "lshw-groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452323/+files/lshw-groovy.debdiff


[Sts-sponsors] [Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "eoan.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303496/+files/eoan.debdiff

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

Status in lua-lpeg package in Ubuntu:
  Fix Released
Status in lua-lpeg source package in Xenial:
  New
Status in lua-lpeg source package in Bionic:
  New
Status in lua-lpeg source package in Disco:
  New
Status in lua-lpeg source package in Eoan:
  New
Status in lua-lpeg package in Debian:
  New

Bug description:
  [Impact]

  Under certain conditions, lpeg will crash while walking the pattern
  tree looking for TCapture nodes.

  [Test Case]

  The reproducer, taken from an upstream discussion (link in "Other
  info"), is:

  $ cat repro.lua
  #!/usr/bin/env lua
  lpeg = require "lpeg"

  p = lpeg.C(-lpeg.P{lpeg.P'x' * lpeg.V(1) + lpeg.P'y'})
  p:match("xx")

  The program crashes due to a hascaptures() infinite recursion:

  $ ./repro.lua
  Segmentation fault (core dumped)

  (gdb) bt -25
  #523984 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523985 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523986 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523987 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523988 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523989 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523990 0x77a3815c in ?? () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523991 0x77a388e3 in compile () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523992 0x77a36fab in ?? () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
  #523993 0xfd1e in ?? ()
  #523994 0x5556a5fc in ?? ()
  #523995 0x555600c8 in ?? ()
  #523996 0xf63f in ?? ()
  #523997 0x5556030f in ?? ()
  #523998 0xdc91 in lua_pcallk ()
  #523999 0xb896 in ?? ()
  #524000 0xc54b in ?? ()
  #524001 0xfd1e in ?? ()
  #524002 0x55560092 in ?? ()
  #524003 0xf63f in ?? ()
  #524004 0x5556030f in ?? ()
  #524005 0xdc91 in lua_pcallk ()
  #524006 0xb64b in ?? ()
  #524007 0x77c94bbb in __libc_start_main (main=0xb5f0, argc=2, argv=0x7fffe6d8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffe6c8) at ../csu/libc-start.c:308
  #524008 0xb70a in ?? ()

  The expected behavior is for the program to finish normally.

  [Regression potential]

  Low: this is a backport from upstream that only stops the infinite
  recursion in a scenario where it shouldn't happen to begin with (the
  TCapture node search).

  [Other info]

  This was fixed upstream in 1.0.1 by stopping the recursion in TCall
  nodes and ensuring that TRule nodes do not follow their siblings (sib2).
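
  As a rough sketch only (hand-written from the description above, reusing
  identifiers from lpeg's lptree.h such as TTree, sib1() and sib2(); this
  is not the verbatim upstream patch), the fixed walk looks like this:

  /* Illustrative sketch of the 1.0.1 hascaptures() behavior. */
  static int hascaptures (TTree *tree) {
   tailcall:
    switch (tree->tag) {
      case TCapture: case TRunTime:
        return 1;                  /* found a capture */
      case TCall: case TOpenCall:
        return 0;                  /* stop: do not re-enter the called rule */
      case TRule:
        tree = sib1(tree);         /* rule body only; never sib2 (next rule) */
        goto tailcall;
      default:
        switch (numsiblings[tree->tag]) {
          case 1:  /* return hascaptures(sib1(tree)); */
            tree = sib1(tree); goto tailcall;
          case 2:
            if (hascaptures(sib1(tree))) return 1;
            tree = sib2(tree); goto tailcall;  /* tail call on sib2 */
          default:
            return 0;              /* leaf node: no captures */
        }
    }
  }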

  The upstream discussion can be found here:
  http://lua.2524044.n2.nabble.com/LPeg-intermittent-stack-exhaustion-td7674831.html

  My analysis can be found here:
  http://pastebin.ubuntu.com/p/n4824ftZt9/plain/

  [Original description]

  The Ubuntu Error Tracker has been receiving reports about a problem
  regarding nmap.  This problem was most recently seen with version
  7.01-2ubuntu2, the problem page at
  https://errors.ubuntu.com/problem/5e852236a443bab0279d47c8a9b7e55802bfb46f
  contains more details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions


[Sts-sponsors] [Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303499/+files/bionic.debdiff


[Sts-sponsors] [Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "disco.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303497/+files/disco.debdiff


[Sts-sponsors] [Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303500/+files/xenial.debdiff


[Sts-sponsors] [Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-04-24 Thread Victor Tapia
The fix is included in sssd 1.16.4, currently in Debian experimental.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

Status in sssd package in Ubuntu:
  In Progress
Status in sssd source package in Xenial:
  New
Status in sssd source package in Bionic:
  New
Status in sssd source package in Cosmic:
  New
Status in sssd source package in Disco:
  In Progress
Status in sssd source package in Eoan:
  In Progress

Bug description:
  [Impact]

  SSSD has GPO_CROND set to "crond" in its code, while Debian/Ubuntu use
  "cron" as the PAM service name. Because of this difference, AD users
  have cron blocked by default instead of enabled.
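
  The change itself amounts to a one-word default (an illustrative diff,
  not the verbatim upstream commit):

  -#define GPO_CROND "crond"
  +#define GPO_CROND "cron"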

  [Test Case]

  - With an Active Directory user created (e.g. logonuser@TESTS.LOCAL),
  set a cron task:

  logonuser@tests.local@xenial-sssd-ad:~$ crontab -l | grep -v ^#
  * * * * * true /tmp/crontest

  - If the default is set to "crond", the task is blocked:

  # ag pam /var/log/ | grep -i denied | head -n 2
  /var/log/auth.log.1:772:Feb 21 11:00:01 xenial-sssd-ad CRON[2387]: pam_sss(cron:account): Access denied for user logonuser@tests.local: 6 (Permission denied)
  /var/log/auth.log.1:773:Feb 21 11:01:01 xenial-sssd-ad CRON[2390]: pam_sss(cron:account): Access denied for user logonuser@tests.local: 6 (Permission denied)

  - Setting GPO_CROND to "cron" or adding "ad_gpo_map_batch = +cron" to
  the configuration file solves the issue.
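
  For instance, the workaround goes in the domain section of
  /etc/sssd/sssd.conf (hypothetical domain name):

  [domain/tests.local]
  access_provider = ad
  ad_gpo_map_batch = +cron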

  [Regression potential]

  Minimal. The default value does not apply to Debian/Ubuntu, and setups
  that already circumvent the issue with "ad_gpo_map_batch = +cron" in
  their configuration will continue to work after this patch is applied.

  [Other Info]

  Upstream commit:
  https://github.com/SSSD/sssd/commit/bc65ba9a07a924a58b13a0d5a935114ab72b7524

  # git describe --contains bc65ba9a07a924a58b13a0d5a935114ab72b7524
  sssd-2_1_0~14

  # rmadison sssd
  => sssd | 1.13.4-1ubuntu1.13 | xenial-proposed
  => sssd | 1.16.1-1ubuntu1.1  | bionic-updates
  => sssd | 1.16.3-1ubuntu2    | cosmic
  => sssd | 1.16.3-3ubuntu1    | disco

  
  [Original description]

  User cron jobs fail with "Access denied for user" ("Zugriff
  verweigert" below is German for "access denied"):

  Apr 21 11:05:02 edvlw08 CRON[6848]: pam_sss(cron:account): Access denied for user : 6 (Zugriff verweigert)
  Apr 21 11:05:02 edvlw08 CRON[6848]: Zugriff verweigert
  Apr 21 11:05:02 edvlw08 cron[965]: Zugriff verweigert

  SSSD-AD login works, and I can also see my AD groups.

  Description: Ubuntu 16.04 LTS
  Release: 16.04

  sssd:
    Installed: 1.13.4-1ubuntu1
    Candidate: 1.13.4-1ubuntu1
    Version table:
   *** 1.13.4-1ubuntu1 500
  500 http://at.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status
  sssd-ad:
    Installed: 1.13.4-1ubuntu1
    Candidate: 1.13.4-1ubuntu1
    Version table:
   *** 1.13.4-1ubuntu1 500
  500 http://at.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status
  libpam-sss:
    Installed: 1.13.4-1ubuntu1
    Candidate: 1.13.4-1ubuntu1
    Version table:
   *** 1.13.4-1ubuntu1 500
  500 http://at.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  /etc/sssd/sssd.conf
  [sssd]
  services = nss, pam
  config_file_version = 2
  domains = test.at

  [nss]
  default_shell = /bin/false

  [domain/test.at]
  description = TEST - ActiveDirectory
  enumerate = false
  cache_credentials = true
  id_provider = ad
  auth_provider = ad
  chpass_provider = ad
  ad_domain = test.at
  access_provider = ad
  subdomains_provider = none
  ldap_use_tokengroups = false
  dyndns_update = true
  krb5_realm = TEST.AT
  krb5_store_password_if_offline = true
  ldap_id_mapping = false
  krb5_keytab = /etc/krb5.host.keytab
  ldap_krb5_keytab = /etc/krb5.host.keytab
  ldap_use_tokengroups = false
  ldap_referrals = false

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions


[Sts-sponsors] [Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-04-24 Thread Victor Tapia
** Also affects: sssd (Ubuntu Eoan)
   Importance: Medium
 Assignee: Victor Tapia (vtapia)
   Status: In Progress


[Sts-sponsors] [Bug 1791108] Re: open-iscsi uses domainsearch instead of search for /etc/resolv.conf

2018-10-03 Thread Victor Tapia
#VERIFICATION-BIONIC

- Before the update
ubuntu@iscsi-bionic:~$ dpkg -l | grep open-iscsi
ii  open-iscsi    2.0.874-5ubuntu2.2    amd64    iSCSI initiator tools

ubuntu@iscsi-bionic:~$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 192.168.122.1
domainsearch example.com

ubuntu@iscsi-bionic:~$ sudo strace ping -c1 www 2>&1 | grep www
execve("/bin/ping", ["ping", "-c1", "www"], 0x7ffeb4aecd70 /* 14 vars */) = 0
sendmmsg(5, [{msg_hdr={msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="\300\17\1\0\0\1\0\0\0\0\0\0\3www\0\0\1\0\1", iov_len=21}], 
msg_iovlen=1, msg_controllen=0, 
msg_flags=MSG_TRUNC|MSG_DONTWAIT|MSG_EOR|MSG_WAITALL|MSG_CONFIRM|MSG_RST|MSG_MORE|MSG_BATCH|MSG_ZEROCOPY|MSG_FASTOPEN|0xb42},
 msg_len=21}, {msg_hdr={msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="\0\31\1\0\0\1\0\0\0\0\0\0\3www\0\0\34\0\1", iov_len=21}], 
msg_iovlen=1, msg_controllen=0, 
msg_flags=MSG_CTRUNC|MSG_FIN|MSG_SYN|MSG_CONFIRM|MSG_RST|MSG_MORE|MSG_BATCH|MSG_ZEROCOPY|MSG_FASTOPEN|0xb420010},
 msg_len=21}], 2, MSG_NOSIGNAL) = 2 
recvfrom(5, "\300\17\201\203\0\1\0\0\0\0\0\0\3www\0\0\1\0\1", 2048, 0, 
{sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.122.1")}, 
[28->16]) = 21
recvfrom(5, 
"\0\31\201\203\0\1\0\0\0\1\0\0\3www\0\0\34\0\1\0\0\6\0\1\0\0\"L\0@"..., 65536, 
0, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("192.168.122.1")}, [28->16]) = 96
write(2, "ping: www: Name or service not k"..., 37ping: www: Name or service 
not known

- After the update
ubuntu@iscsi-bionic:~$ dpkg -l | grep open-iscsi
ii  open-iscsi    2.0.874-5ubuntu2.3    amd64    iSCSI initiator tools

ubuntu@iscsi-bionic:~$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 192.168.122.1
search example.com

ubuntu@iscsi-bionic:~$ sudo strace ping -c1 www 2>&1 | grep www
execve("/bin/ping", ["ping", "-c1", "www"], 0x7ffea5cd0390 /* 14 vars */) = 0
sendmmsg(5, [{msg_hdr={msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="\262\224\1\0\0\1\0\0\0\0\0\0\3www\7example\3com\0\0\1\0"...,
 iov_len=33}], msg_iovlen=1, msg_controllen=0, 
msg_flags=MSG_TRUNC|MSG_DONTWAIT|MSG_EOR|MSG_WAITALL|MSG_CONFIRM|MSG_ERRQUEUE|MSG_NOSIGNAL|MSG_MORE|MSG_BATCH|MSG_ZEROCOPY|MSG_CMSG_CLOEXEC|0x1b48},
 msg_len=33}, {msg_hdr={msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="\226\235\1\0\0\1\0\0\0\0\0\0\3www\7example\3com\0\0\34\0"...,
 iov_len=33}], msg_iovlen=1, msg_controllen=0, 
msg_flags=MSG_CTRUNC|MSG_FIN|MSG_SYN|MSG_CONFIRM|MSG_ERRQUEUE|MSG_NOSIGNAL|MSG_MORE|MSG_BATCH|MSG_ZEROCOPY|MSG_CMSG_CLOEXEC|0x1b480010},
 msg_len=33}], 2, MSG_NOSIGNAL) = 2
recvfrom(5, "\262\224\201\200\0\1\0\1\0\0\0\0\3www\7example\3com\0\0\1\0"..., 
2048, 0, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("192.168.122.1")}, [28->16]) = 49
recvfrom(5, "\226\235\201\200\0\1\0\1\0\2\0\4\3www\7example\3com\0\0\34\0"..., 
65536, 0, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("192.168.122.1")}, [28->16]) = 197
write(1, "PING www.example.com (93.184.216"..., 132PING www.example.com 
(93.184.216.34) 56(84) bytes of data.
write(1, "--- www.example.com ping statist"..., 114--- www.example.com ping 
statistics ---

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1791108

Title:
  open-iscsi uses domainsearch instead of search for /etc/resolv.conf

Status in open-iscsi package in Ubuntu:
  Fix Released
Status in open-iscsi source package in Xenial:
  Fix Committed
Status in open-iscsi source package in Bionic:
  Fix Committed
Status in open-iscsi source package in Cosmic:
  Fix Released


[Sts-sponsors] [Bug 1791108] Re: open-iscsi uses domainsearch instead of search for /etc/resolv.conf

2018-10-03 Thread Victor Tapia
#VERIFICATION-XENIAL

- Before the update
ubuntu@iscsi_base:~$ dpkg -l | grep open-iscsi
ii  open-iscsi    2.0.873+git0.3b4b4500-14ubuntu3.5    amd64    iSCSI initiator tools

ubuntu@iscsi_base:~$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.122.1
domainsearch example.com

ubuntu@iscsi_base:~$ strace ping -c1 www 2>&1 | grep www
execve("/bin/ping", ["ping", "-c1", "www"], [/* 20 vars */]) = 0
sendto(3, "=\370\1\0\0\1\0\0\0\0\0\0\3www\0\0\1\0\1", 21, MSG_NOSIGNAL, NULL, 
0) = 21
recvfrom(3, 
"=\370\201\203\0\1\0\0\0\1\0\0\3www\0\0\1\0\1\0\0\6\0\1\0\0*0\0@"..., 1024, 0, 
{sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.122.1")}, 
[16]) = 96
write(2, "ping: unknown host www\n", 23ping: unknown host www


- After the update
ubuntu@iscsi_base:~$ dpkg -l | grep open-iscsi
ii  open-iscsi    2.0.873+git0.3b4b4500-14ubuntu3.6    amd64    iSCSI initiator tools

ubuntu@iscsi_base:~$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.122.1
search example.com

ubuntu@iscsi_base:~$ strace ping -c1 www 2>&1 | grep www
execve("/bin/ping", ["ping", "-c1", "www"], [/* 20 vars */]) = 0
sendto(3, "\241\223\1\0\0\1\0\0\0\0\0\0\3www\7example\3com\0\0\1\0"..., 33, 
MSG_NOSIGNAL, NULL, 0) = 33
recvfrom(3, "\241\223\201\200\0\1\0\1\0\0\0\0\3www\7example\3com\0\0\1\0"..., 
1024, 0, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("192.168.122.1")}, [16]) = 49


[Sts-sponsors] [Bug 1791108] Re: open-iscsi uses domainsearch instead of search for /etc/resolv.conf

2018-09-20 Thread Victor Tapia
** Patch removed: "open-iscsi-bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+attachment/5190084/+files/open-iscsi-bionic.debdiff

** Patch removed: "open-iscsi-cosmic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+attachment/5190085/+files/open-iscsi-cosmic.debdiff

** Patch removed: "open-iscsi-xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+attachment/5190086/+files/open-iscsi-xenial.debdiff

** Patch added: "open-iscsi-xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+attachment/5190912/+files/open-iscsi-xenial.debdiff

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1791108

Title:
  open-iscsi uses domainsearch instead of search for /etc/resolv.conf

Status in open-iscsi package in Ubuntu:
  In Progress
Status in open-iscsi source package in Xenial:
  In Progress
Status in open-iscsi source package in Bionic:
  In Progress
Status in open-iscsi source package in Cosmic:
  In Progress

Bug description:
  [Impact]

  * open-iscsi is adding "domainsearch", a non-existent configuration
  option, instead of "search" in /etc/resolv.conf. As a result, the
  search list is ignored in the clients.

  [Test case]

  * Install an Ubuntu machine that uses iSCSI as root and does not use
  systemd-resolved.
  * Prepare the dhcp server to provide the search list to its clients. For 
instance, in dnsmasq:

  dhcp-option=option:domain-search,canonical.com

  * Boot the machine and check the content of /etc/resolv.conf
  - if domainsearch is present, the search list will be ignored:

  root@iscsi-xenial:/home/ubuntu# ping -c1 golem
  ping: unknown host golem

  root@iscsi-xenial:/home/ubuntu# strace ping -c1 golem 2>&1 | grep golem
  execve("/bin/ping", ["ping", "-c1", "golem"], [/* 19 vars */]) = 0
  sendto(4, "_(\1\0\0\1\0\0\0\0\0\0\5golem\0\0\1\0\1", 23, MSG_NOSIGNAL, NULL, 
0) = 23
  recvfrom(4, "_(\201\203\0\1\0\0\0\0\0\0\5golem\0\0\1\0\1", 1024, 0, 
{sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.122.1")}, 
[16]) = 23
  write(2, "ping: unknown host golem\n", 25ping: unknown host golem

  - if search is present, the search list will be used:

  root@iscsi-xenial:/home/ubuntu# ping -c1 golem
  PING golem.canonical.com (91.189.89.199) 56(84) bytes of data.
  64 bytes from golem.canonical.com (91.189.89.199): icmp_seq=1 ttl=57 
time=63.7 ms

  --- golem.canonical.com ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 63.735/63.735/63.735/0.000 ms

  root@iscsi-xenial:/home/ubuntu# strace ping -c1 golem 2>&1 | grep golem
  execve("/bin/ping", ["ping", "-c1", "golem"], [/* 19 vars */]) = 0
  sendto(4, "\1\\\1\0\0\1\0\0\0\0\0\0\5golem\tcanonical\3com"..., 37, 
MSG_NOSIGNAL, NULL, 0) = 37
  recvfrom(4, "\1\\\201\200\0\1\0\1\0\0\0\0\5golem\tcanonical\3com"..., 1024, 
0, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("192.168.122.1")}, [16]) = 53
  write(1, "PING golem.canonical.com (91.189"..., 145PING golem.canonical.com 
(91.189.89.199) 56(84) bytes of data.
  64 bytes from golem.canonical.com (91.189.89.199): icmp_seq=1 ttl=57 
time=63.5 ms
  write(1, "--- golem.canonical.com ping sta"..., 157--- golem.canonical.com 
ping statistics ---

  [Regression potential]

  * The change is minor (a string replacement), and the option it fixes
  is currently not working at all.
  * Any possible regression would amount to DNS resolution remaining
  broken, as it is today.

  [Other info]

  * resolv.conf man page:
  http://man7.org/linux/man-pages/man5/resolv.conf.5.html

  [Original description]

  Having an interface file such as /run/net-eno2.conf with the following
  content:

  DEVICE='eno2'
  PROTO='dhcp'
  IPV4ADDR='10.10.10.10'
  IPV4BROADCAST='10.10.10.255'
  IPV4NETMASK='255.255.255.0'
  IPV4GATEWAY='10.10.10.1'
  IPV4DNS0='169.254.169.254'
  IPV4DNS1='0.0.0.0'
  HOSTNAME=''
  DNSDOMAIN='test.com'
  NISDOMAIN=''
  ROOTSERVER='169.254.169.254'
  ROOTPATH=''
  filename='/ipxe.efi'
  UPTIME='45'
  DHCPLEASETIME='86400'
  DOMAINSEARCH='test.com'

  net-interface-handler translates it to:

  nameserver 169.254.169.254
  domainsearch test.com

  instead of:

  nameserver 169.254.169.254
  search test.com

  The problem is that domainsearch is not a valid configuration option
  for /etc/resolv.conf and is ignored.
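
  The fix is the one-word replacement in the line that the package's
  net-interface-handler script emits (shown schematically below; the
  attached debdiffs carry the actual change):

  -domainsearch test.com
  +search test.com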

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+subscriptions


[Sts-sponsors] [Bug 1791108] Re: open-iscsi uses domainsearch instead of search for /etc/resolv.conf

2018-09-20 Thread Victor Tapia
** Patch added: "open-iscsi-bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+attachment/5190913/+files/open-iscsi-bionic.debdiff

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1791108

Title:
  open-iscsi uses domainsearch instead of search for /etc/resolv.conf

Status in open-iscsi package in Ubuntu:
  In Progress
Status in open-iscsi source package in Xenial:
  In Progress
Status in open-iscsi source package in Bionic:
  In Progress
Status in open-iscsi source package in Cosmic:
  In Progress

Bug description:
  [Impact]

  * open-iscsi is adding "domainsearch", a non-existent configuration
  option, instead of "search" in /etc/resolv.conf. As a result, the
  search list is ignored in the clients.

  [Test case]

  * Install an ubuntu machine that uses iscsi as root, and does not use 
systemd-resolvd.
  * Prepare the dhcp server to provide the search list to its clients. For 
instance, in dnsmasq:

  dhcp-option=option:domain-search,canonical.com

  * Boot the machine and check the content of /etc/resolv.conf
  - if domainsearch is present, the search list will be ignored:

  root@iscsi-xenial:/home/ubuntu# ping -c1 golem
  ping: unknown host golem

  root@iscsi-xenial:/home/ubuntu# strace ping -c1 golem 2>&1 | grep golem
  execve("/bin/ping", ["ping", "-c1", "golem"], [/* 19 vars */]) = 0
  sendto(4, "_(\1\0\0\1\0\0\0\0\0\0\5golem\0\0\1\0\1", 23, MSG_NOSIGNAL, NULL, 
0) = 23
  recvfrom(4, "_(\201\203\0\1\0\0\0\0\0\0\5golem\0\0\1\0\1", 1024, 0, 
{sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.122.1")}, 
[16]) = 23
  write(2, "ping: unknown host golem\n", 25ping: unknown host golem

  - if search is present, the search list will be used:

  root@iscsi-xenial:/home/ubuntu# ping -c1 golem
  PING golem.canonical.com (91.189.89.199) 56(84) bytes of data.
  64 bytes from golem.canonical.com (91.189.89.199): icmp_seq=1 ttl=57 
time=63.7 ms

  --- golem.canonical.com ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 63.735/63.735/63.735/0.000 ms

  root@iscsi-xenial:/home/ubuntu# strace ping -c1 golem 2>&1 | grep golem
  execve("/bin/ping", ["ping", "-c1", "golem"], [/* 19 vars */]) = 0
  sendto(4, "\1\\\1\0\0\1\0\0\0\0\0\0\5golem\tcanonical\3com"..., 37, 
MSG_NOSIGNAL, NULL, 0) = 37
  recvfrom(4, "\1\\\201\200\0\1\0\1\0\0\0\0\5golem\tcanonical\3com"..., 1024, 
0, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("192.168.122.1")}, [16]) = 53
  write(1, "PING golem.canonical.com (91.189"..., 145PING golem.canonical.com 
(91.189.89.199) 56(84) bytes of data.
  64 bytes from golem.canonical.com (91.189.89.199): icmp_seq=1 ttl=57 
time=63.5 ms
  write(1, "--- golem.canonical.com ping sta"..., 157--- golem.canonical.com 
ping statistics ---

  [Regression potential]

  * The change is minor (a string replacement) and it's currently not working.
  * Any possible regression would involve continuing to break DNS resolution.

  [Other info]

  * resolv.conf man page: http://man7.org/linux/man-
  pages/man5/resolv.conf.5.html

  [Original description]

  Having an interface file such as /run/net-eno2.conf with the following
  content:

  DEVICE='eno2'
  PROTO='dhcp'
  IPV4ADDR='10.10.10.10'
  IPV4BROADCAST='10.10.10.255'
  IPV4NETMASK='255.255.255.0'
  IPV4GATEWAY='10.10.10.1'
  IPV4DNS0='169.254.169.254'
  IPV4DNS1='0.0.0.0'
  HOSTNAME=''
  DNSDOMAIN='test.com'
  NISDOMAIN=''
  ROOTSERVER='169.254.169.254'
  ROOTPATH=''
  filename='/ipxe.efi'
  UPTIME='45'
  DHCPLEASETIME='86400'
  DOMAINSEARCH='test.com'

  net-interface-handler translates it to:

  nameserver 169.254.169.254
  domainsearch test.com

  instead of:

  nameserver 169.254.169.254
  search test.com

  The problem is that domainsearch is not a valid configuration option
  for /etc/resolv.conf and is ignored.
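
  The fix itself is essentially a one-word change in the script that
  writes /etc/resolv.conf (a hedged sketch; the real change lives in
  open-iscsi's net-interface-handler, whose surrounding code differs,
  and the DOMAINSEARCH variable is taken from the interface file above):

  # Before: emits an option that resolv.conf does not understand.
  #   echo "domainsearch ${DOMAINSEARCH}" >> /etc/resolv.conf
  # After: uses resolv.conf's actual keyword.
  if [ -n "${DOMAINSEARCH}" ]; then
      echo "search ${DOMAINSEARCH}" >> /etc/resolv.conf
  fi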

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1791108/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-01-11 Thread Victor Tapia
I was wrong regarding iii) "when corosync is stopped, do not stop
pacemaker": Pacemaker can run on top of other stacks[1] (e.g. heartbeat)
instead of corosync, so this is a property we want to keep.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1740892

Title:
  corosync upgrade on 2018-01-02 caused pacemaker to fail

Status in OpenStack hacluster charm:
  Invalid
Status in corosync package in Ubuntu:
  In Progress
Status in corosync source package in Trusty:
  New
Status in corosync source package in Xenial:
  New
Status in corosync source package in Zesty:
  New
Status in corosync source package in Artful:
  New
Status in corosync source package in Bionic:
  In Progress

Bug description:
  During upgrades on 2018-01-02, corosync and its libraries were upgraded:

  (from a trusty/mitaka cloud)

  Upgrade: libcmap4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  corosync:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcfg6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcpg4:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4), libquorum5:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libcorosync-common4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libsam4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libvotequorum6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libtotem-pg5:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4)

  During this process, it appears that the pacemaker service is restarted
  and errors out:

  syslog:Jan  2 16:09:33 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-1-lxc-3[1001] - state is now lost (was member)
  syslog:Jan  2 16:09:34 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-1-lxc-3[1001] - state is now member (was lost)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
cfg_connection_destroy: Connection destroyed
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
pcmk_shutdown_worker: Shuting down Pacemaker
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
stop_child: Stopping crmd: Sent -15 to process 2050
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
mcp_cpg_destroy: Connection destroyed

  
  Also affected xenial/ocata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1740892/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-01-10 Thread Victor Tapia
In my opinion, from the list of desired properties, only the second one holds:
i) Corosync can be used on its own, regardless of whether pacemaker is
installed. Starting both of them would force us to mask pacemaker's unit
file in particular scenarios.
iii) IIRC, pacemaker requires corosync to run, so this property can't hold
(in fact, pacemaker SIGTERMs its components when corosync is not available).

I like the idea stated in point 3) (restart on upgrade instead of
stop+start). It would solve the issue without having to change the unit
files; a sketch follows below.
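
A hedged sketch of what such a restart-on-upgrade could look like in
corosync's postinst (illustrative only; the real maintainer scripts are
generated by debhelper and differ in detail):

#!/bin/sh
# corosync.postinst (sketch): restart the service on upgrade instead of
# relying on the default prerm-stop + postinst-start sequence, so that
# pacemaker never sees corosync fully stopped.
set -e
case "$1" in
    configure)
        if [ -n "$2" ]; then
            # $2 holds the previously installed version, so this run
            # is an upgrade rather than a fresh install.
            invoke-rc.d corosync restart || true
        fi
        ;;
esac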


Regarding Trusty, both corosync and pacemaker currently use SysV scripts. I ran
a short test switching to upstart using the scripts in source [1] and it seems
to work fine (thanks to the 'respawn' directive for pacemaker); see the
stanza sketched after the references.

[1] 
master/mcp/pacemaker.upstart.in
master/init/corosync.conf.in
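
For context, the pacemaker job referenced above boils down to a stanza of
this shape (a hypothetical minimal version in the spirit of
mcp/pacemaker.upstart.in, not the verbatim upstream file):

# /etc/init/pacemaker.conf (sketch)
description "Pacemaker cluster resource manager"
start on started corosync
stop on stopping corosync
# restart pacemakerd automatically if it dies, e.g. while corosync
# is being upgraded underneath it
respawn
exec /usr/sbin/pacemakerd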

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1740892

Title:
  corosync upgrade on 2018-01-02 caused pacemaker to fail

Status in OpenStack hacluster charm:
  Invalid
Status in corosync package in Ubuntu:
  In Progress
Status in corosync source package in Trusty:
  New
Status in corosync source package in Xenial:
  New
Status in corosync source package in Zesty:
  New
Status in corosync source package in Artful:
  New
Status in corosync source package in Bionic:
  In Progress

Bug description:
  During upgrades on 2018-01-02, corosync and its libraries were upgraded:

  (from a trusty/mitaka cloud)

  Upgrade: libcmap4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  corosync:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcfg6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcpg4:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4), libquorum5:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libcorosync-common4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libsam4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libvotequorum6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libtotem-pg5:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4)

  During this process, it appears that the pacemaker service is restarted
  and errors out:

  syslog:Jan  2 16:09:33 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-1-lxc-3[1001] - state is now lost (was member)
  syslog:Jan  2 16:09:34 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-1-lxc-3[1001] - state is now member (was lost)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
cfg_connection_destroy: Connection destroyed
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
pcmk_shutdown_worker: Shuting down Pacemaker
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
stop_child: Stopping crmd: Sent -15 to process 2050
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
mcp_cpg_destroy: Connection destroyed

  
  Also affected xenial/ocata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1740892/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1739033] Re: Corosync: Assertion 'sender_node != NULL' failed when bind iface is ready after corosync boots

2017-12-22 Thread Victor Tapia
#VERIFICATION FOR TRUSTY

- Packages
ii  corosync             2.3.3-1ubuntu4  amd64  Standards-based cluster framework (daemon and modules)
ii  libcorosync-common4  2.3.3-1ubuntu4  amd64  Standards-based cluster framework, common library

- Reproducer
Using a config file with bad entries (as shown in the description)
ifdown interface
/usr/sbin/corosync -f
ifup interface

- Debug output:

Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] entering GATHER 
state from 0(consensus timeout).
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] Creating commit 
token because I am the rep.
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] Saving state aru 0 
high seq received 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] Storing new sequence 
id for ring 4
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] entering COMMIT 
state.
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] got commit token
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] entering RECOVERY 
state.
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] position [0] member 
169.254.241.20:
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] previous ring seq 0 
rep 127.0.0.1
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] aru 0 high delivered 
0 received flag 1
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] Did not need to 
originate any messages in recovery.
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] got commit token
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] Sending initial ORF 
token
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] token retrans flag 
is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] install seq 0 aru 0 
high seq received 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] token retrans flag 
is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] install seq 0 aru 0 
high seq received 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] token retrans flag 
is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] install seq 0 aru 0 
high seq received 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] token retrans flag 
is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] install seq 0 aru 0 
high seq received 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] retrans flag count 4 
token aru 0 install seq 0 aru 0 0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] Resetting old ring 
state
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] recovery to regular 
1-0
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] waiting_trans_ack 
changed to 1
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [MAIN  ] Member joined: r(0) 
ip(169.254.241.20) 
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] entering OPERATIONAL 
state.
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [TOTEM ] A new membership 
(169.254.241.20:4) was formed. Members joined: 1
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [QUORUM] got nodeinfo message 
from cluster node 1
Dec 22 12:18:27 trusty-corosync corosync[3910]:   [QUORUM] nodeinfo message[1]: 
votes: 1, expected: 2 flags: 8

root@trusty-corosync:/home/vtapia# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id  = 169.254.241.20
status  = ring 0 active with no faults

Corosync starts as expected.
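
An additional check that the configured nodeid took effect (assuming the
corosync 2.x CLI tools are installed):

# The quorum view should report the nodeid from corosync.conf (1 here),
# not a generated one such as 704573706.
corosync-quorumtool -s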

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1739033

Title:
  Corosync: Assertion 'sender_node != NULL' failed when bind iface is
  ready after corosync boots

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  Fix Committed
Status in corosync source package in Xenial:
  Fix Committed
Status in corosync source package in Zesty:
  Fix Released
Status in corosync source package in Artful:
  Fix Released

Bug description:
  [Impact]

  Corosync aborts with SIGABRT if it starts before the interface it has
  to bind to is ready.

  On boot, if no interface in the bindnetaddr range is up/configured,
  corosync binds to lo (127.0.0.1). Once an applicable interface is up,
  corosync crashes with the following error message:

  corosync: votequorum.c:2019: message_handler_req_exec_votequorum_nodeinfo: 
Assertion `sender_node != NULL' failed.
  Aborted (core dumped)

  The last log entries show that the interface is trying to join the cluster.

[Sts-sponsors] [Bug 1739033] Re: Corosync: Assertion 'sender_node != NULL' failed when bind iface is ready after corosync boots

2017-12-22 Thread Victor Tapia
#VERIFICATION FOR XENIAL

- Packages
ii  corosync                   2.3.5-3ubuntu2  amd64  cluster engine daemon and utilities
ii  libcorosync-common4:amd64  2.3.5-3ubuntu2  amd64  cluster engine common library

- Reproducer
Using a config file with bad entries (as shown in the description)
ifdown interface
/usr/sbin/corosync -f
ifup interface

- Debug output:

Dec 22 11:22:01 xenial-corosync corosync[2742]:   [TOTEM ] totemudpu.c:408 
sendmsg(mcast) failed (non-critical): Invalid argument (22)
Dec 22 11:22:02 xenial-corosync corosync[2742]: message repeated 14 times: [   
[TOTEM ] totemudpu.c:408 sendmsg(mcast) failed (non-critical): Invalid argument 
(22)]
Dec 22 11:22:02 xenial-corosync corosync[2742]:   [TOTEM ] totemudpu.c:619 The 
network interface [169.254.241.20] is now up.
Dec 22 11:22:02 xenial-corosync corosync[2742]:   [TOTEM ] totemudpu.c:1125 
adding new UDPU member {169.254.241.10}
Dec 22 11:22:02 xenial-corosync corosync[2742]:   [TOTEM ] totemudpu.c:1125 
adding new UDPU member {169.254.241.20}
Dec 22 11:22:02 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2175 
entering GATHER state from 15(interface change).
Dec 22 11:22:05 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2175 
entering GATHER state from 0(consensus timeout).
Dec 22 11:22:05 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3227 
Creating commit token because I am the rep.
Dec 22 11:22:05 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:1591 
Saving state aru 0 high seq received 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2224 
entering COMMIT state.
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:4571 got 
commit token
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2261 
entering RECOVERY state.
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2307 
position [0] member 169.254.241.20:
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2311 
previous ring seq 4c rep 127.0.0.1
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2317 aru 
0 high delivered 0 received flag 1
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2415 Did 
not need to originate any messages in recovery.
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:4571 got 
commit token
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:4632 
Sending initial ORF token
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3828 
token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3839 
install seq 0 aru 0 high seq received 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3828 
token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3839 
install seq 0 aru 0 high seq received 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3828 
token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3839 
install seq 0 aru 0 high seq received 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3828 
token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3839 
install seq 0 aru 0 high seq received 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:3858 
retrans flag count 4 token aru 0 install seq 0 aru 0 0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:1607 
Resetting old ring state
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:1813 
recovery to regular 1-0
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totempg.c:286 
waiting_trans_ack changed to 1
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemudpu.c:1232 
Marking UDPU member 169.254.241.20 active
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2089 
entering OPERATIONAL state.
Dec 22 11:22:06 xenial-corosync corosync[2742]:   [TOTEM ] totemsrp.c:2095 A 
new membership (169.254.241.20:80) was formed. Members joined: 2

root@xenial-corosync:/home/vtapia# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
id  = 169.254.241.20
status  = ring 0 active with no faults

Corosync starts as expected.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1739033

Title:
  Corosync: Assertion 'sender_node != NULL' failed when bind iface is
  ready after corosync boots

[Sts-sponsors] [Bug 1739033] Re: Corosync: Assertion 'sender_node != NULL' failed when bind iface is ready after corosync boots

2017-12-21 Thread Victor Tapia
Yes, the fix is already included in zesty+ (2.4.0+)

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1739033

Title:
  Corosync: Assertion 'sender_node != NULL' failed when bind iface is
  ready after corosync boots

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  In Progress
Status in corosync source package in Xenial:
  Incomplete

Bug description:
  [Impact]

  Corosync aborts with SIGABRT if it starts before the interface it has
  to bind to is ready.

  On boot, if no interface in the bindnetaddr range is up/configured,
  corosync binds to lo (127.0.0.1). Once an applicable interface is up,
  corosync crashes with the following error message:

  corosync: votequorum.c:2019: message_handler_req_exec_votequorum_nodeinfo: 
Assertion `sender_node != NULL' failed.
  Aborted (core dumped)

  The last log entries show that the interface is trying to join the
  cluster:

  Dec 19 11:36:05 [22167] xenial-pacemaker corosync debug   [TOTEM ] 
totemsrp.c:2089 entering OPERATIONAL state.
  Dec 19 11:36:05 [22167] xenial-pacemaker corosync notice  [TOTEM ] 
totemsrp.c:2095 A new membership (169.254.241.10:444) was formed. Members 
joined: 704573706

  During the quorum calculation, the generated nodeid (704573706) for
  the node is being used instead of the nodeid specified in the
  configuration file (1), and the assert fails because the nodeid is not
  present in the member list. Corosync should use the correct nodeid and
  continue running after the interface is up, as shown in a fixed
  corosync boot:

  Dec 19 11:50:56 [4824] xenial-corosync corosync notice  [TOTEM ]
  totemsrp.c:2095 A new membership (169.254.241.10:80) was formed.
  Members joined: 1

  [Environment]

  Xenial 16.04.3

  Packages:

  ii  corosync                   2.3.5-3ubuntu1  amd64  cluster engine daemon and utilities
  ii  libcorosync-common4:amd64  2.3.5-3ubuntu1  amd64  cluster engine common library

  [Test Case]

  Config:

  totem {
  version: 2
  member {
  memberaddr: 169.254.241.10
  }
  member {
  memberaddr: 169.254.241.20
  }
  transport: udpu

  crypto_cipher: none
  crypto_hash: none
  nodeid: 1
  interface {
  ringnumber: 0
  bindnetaddr: 169.254.241.0
  mcastport: 5405
  ttl: 1
  }
  }

  quorum {
  provider: corosync_votequorum
  expected_votes: 2
  }

  nodelist {
  node {
  ring0_addr: 169.254.241.10
  nodeid: 1
  }
  node {
  ring0_addr: 169.254.241.20
  nodeid: 2
  }
  }

  1. ifdown interface (169.254.241.10)
  2. start corosync (/usr/sbin/corosync -f)
  3. ifup interface
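
  The three steps above can be scripted for repeated runs (a sketch
  assuming ifupdown manages the interface and it is named eth1):

  #!/bin/sh
  # Reproduce the race: start corosync while its bind interface is down.
  set -x
  ifdown eth1
  /usr/sbin/corosync -f &   # foreground mode, backgrounded by the shell
  sleep 2                   # give corosync time to bind to lo
  ifup eth1                 # an unpatched corosync now hits the assert
  wait $!                   # a nonzero exit here indicates the SIGABRT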

  [Regression Potential]

  This patch affects corosync boot; the regression potential is for
  other problems during corosync startup and/or configuration parsing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1739033/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1739033] Re: Corosync: Assertion 'sender_node != NULL' failed when bind iface is ready after corosync boots

2017-12-20 Thread Victor Tapia
** Patch added: "Trusty patch"
   
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1739033/+attachment/5025215/+files/fix-lp1739033-trusty.debdiff

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1739033

Title:
  Corosync: Assertion 'sender_node != NULL' failed when bind iface is
  ready after corosync boots

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  In Progress
Status in corosync source package in Xenial:
  In Progress

Bug description:
  [Description]

  Corosync aborts with SIGABRT if it starts before the interface it has
  to bind to is ready.

  On boot, if no interface in the bindnetaddr range is up/configured,
  corosync binds to lo (127.0.0.1). Once an applicable interface is up,
  corosync crashes with the following error message:

  corosync: votequorum.c:2019: message_handler_req_exec_votequorum_nodeinfo: 
Assertion `sender_node != NULL' failed.
  Aborted (core dumped)

  The last log entries show that the interface is trying to join the
  cluster:

  Dec 19 11:36:05 [22167] xenial-pacemaker corosync debug   [TOTEM ] 
totemsrp.c:2089 entering OPERATIONAL state.
  Dec 19 11:36:05 [22167] xenial-pacemaker corosync notice  [TOTEM ] 
totemsrp.c:2095 A new membership (169.254.241.10:444) was formed. Members 
joined: 704573706

  During the quorum calculation, the generated nodeid (704573706) for
  the node is being used instead of the nodeid specified in the
  configuration file (1), and the assert fails because the nodeid is not
  present in the member list. Corosync should use the correct nodeid and
  continue running after the interface is up, as shown in a fixed
  corosync boot:

  Dec 19 11:50:56 [4824] xenial-corosync corosync notice  [TOTEM ]
  totemsrp.c:2095 A new membership (169.254.241.10:80) was formed.
  Members joined: 1

  [Environment]

  Xenial 16.04.3

  Packages:

  ii  corosync                   2.3.5-3ubuntu1  amd64  cluster engine daemon and utilities
  ii  libcorosync-common4:amd64  2.3.5-3ubuntu1  amd64  cluster engine common library

  [Reproducer]

  Config:

  totem {
  version: 2
  member {
  memberaddr: 169.254.241.10
  }
  member {
  memberaddr: 169.254.241.20
  }
  transport: udpu

  crypto_cipher: none
  crypto_hash: none
  nodeid: 1
  interface {
  ringnumber: 0
  bindnetaddr: 169.254.241.0
  mcastport: 5405
  ttl: 1
  }
  }

  quorum {
  provider: corosync_votequorum
  expected_votes: 2
  }

  nodelist {
  node {
  ring0_addr: 169.254.241.10
  nodeid: 1
  }
  node {
  ring0_addr: 169.254.241.20
  nodeid: 2
  }
  }

  1. ifdown interface (169.254.241.10)
  2. start corosync (/usr/sbin/corosync -f)
  3. ifup interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1739033/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1739033] Re: Corosync: Assertion 'sender_node != NULL' failed when bind iface is ready after corosync boots

2017-12-20 Thread Victor Tapia
** Patch added: "Xenial patch"
   
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1739033/+attachment/5025216/+files/fix-lp1739033-xenial.debdiff

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1739033

Title:
  Corosync: Assertion 'sender_node != NULL' failed when bind iface is
  ready after corosync boots

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  In Progress
Status in corosync source package in Xenial:
  In Progress

Bug description:
  [Description]

  Corosync aborts with SIGABRT if it starts before the interface it has
  to bind to is ready.

  On boot, if no interface in the bindnetaddr range is up/configured,
  corosync binds to lo (127.0.0.1). Once an applicable interface is up,
  corosync crashes with the following error message:

  corosync: votequorum.c:2019: message_handler_req_exec_votequorum_nodeinfo: 
Assertion `sender_node != NULL' failed.
  Aborted (core dumped)

  The last log entries show that the interface is trying to join the
  cluster:

  Dec 19 11:36:05 [22167] xenial-pacemaker corosync debug   [TOTEM ] 
totemsrp.c:2089 entering OPERATIONAL state.
  Dec 19 11:36:05 [22167] xenial-pacemaker corosync notice  [TOTEM ] 
totemsrp.c:2095 A new membership (169.254.241.10:444) was formed. Members 
joined: 704573706

  During the quorum calculation, the generated nodeid (704573706) for
  the node is being used instead of the nodeid specified in the
  configuration file (1), and the assert fails because the nodeid is not
  present in the member list. Corosync should use the correct nodeid and
  continue running after the interface is up, as shown in a fixed
  corosync boot:

  Dec 19 11:50:56 [4824] xenial-corosync corosync notice  [TOTEM ]
  totemsrp.c:2095 A new membership (169.254.241.10:80) was formed.
  Members joined: 1

  [Environment]

  Xenial 16.04.3

  Packages:

  ii  corosync                   2.3.5-3ubuntu1  amd64  cluster engine daemon and utilities
  ii  libcorosync-common4:amd64  2.3.5-3ubuntu1  amd64  cluster engine common library

  [Reproducer]

  Config:

  totem {
  version: 2
  member {
  memberaddr: 169.254.241.10
  }
  member {
  memberaddr: 169.254.241.20
  }
  transport: udpu

  crypto_cipher: none
  crypto_hash: none
  nodeid: 1
  interface {
  ringnumber: 0
  bindnetaddr: 169.254.241.0
  mcastport: 5405
  ttl: 1
  }
  }

  quorum {
  provider: corosync_votequorum
  expected_votes: 2
  }

  nodelist {
  node {
  ring0_addr: 169.254.241.10
  nodeid: 1
  }
  node {
  ring0_addr: 169.254.241.20
  nodeid: 2
  }
  }

  1. ifdown interface (169.254.241.10)
  2. start corosync (/usr/sbin/corosync -f)
  3. ifup interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1739033/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp