[Ubuntu-ha] [Bug 1884149] Re: haproxy crashes on in __pool_get_first if unique-id-header is used

2020-07-13 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 1.8.8-1ubuntu0.11

---
haproxy (1.8.8-1ubuntu0.11) bionic; urgency=medium

  * Avoid crashes on idle connections between http requests (LP: #1884149)

 -- Christian Ehrhardt   Mon, 22 Jun 2020 10:41:43 +0200

** Changed in: haproxy (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1884149

Title:
  haproxy crashes on in __pool_get_first if unique-id-header is used

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Released
Status in haproxy package in Debian:
  Fix Released

Bug description:
  [Impact]

   * The handling of locks in haproxy could lead to a state where, between
 idle HTTP connections, a connection was indicated as destroyed. In that
 case the code went on and accessed a just-freed resource. As upstream
 puts it: "It can have random implications between requests as it may lead
 a wrong connection's polling to be re-enabled or disabled for example,
 especially with threads."

   * Backport the fix from upstream's 1.8 stable branch

  [Test Case]

   * It is a race and might be hard to trigger.
 A haproxy config to be placed in front of three webservers can be seen
 below. Setting up three Apaches locally didn't trigger the bug, but we
 know it is timing sensitive.

   * Simon (anbox) has a setup which reliably triggers this and will run
 the tests there.

   * The bad case will trigger a crash as reported below.
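Because the crash is timing sensitive, one way to apply the kind of concurrent request load that tends to expose it is a parallel curl loop. This is a sketch only; the URL, request count, and concurrency are placeholders, not values from the report:

```shell
# Fire many requests in parallel at the haproxy frontend; tune -P and the
# count for the machine. The target URL is a placeholder.
URL="http://localhost:80/"
seq 1 200 | xargs -P 16 -I{} curl -s -o /dev/null --max-time 2 "$URL" || true
echo done
```

If the race is present, haproxy typically segfaults during or shortly after such a burst rather than on any specific request.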

  [Regression Potential]

   * This change has been in >= Disco with no further bugs reported against
 it (no follow-on change), which should make it rather safe. There has
 also been no other change to that file context in 1.8 stable since then.
 The change is in the locking of connections, so if regressions were to
 appear, they would be in the handling of concurrent connections.

  [Other Info]
   
   * Strictly speaking it is a race, so triggering it depends on load and
 machine CPU/IO speed.

  
  ---

  
  Version 1.8.8-1ubuntu0.10 of haproxy in Ubuntu 18.04 (bionic) crashes with

  

  Thread 2.1 "haproxy" received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0xf77b1010 (LWP 17174)]
  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at include/common/memory.h:124
  124   include/common/memory.h: No such file or directory.
  (gdb) bt
  #0  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at include/common/memory.h:124
  #1  pool_alloc_dirty (pool=0xaac6ddd0) at include/common/memory.h:154
  #2  pool_alloc (pool=0xaac6ddd0) at include/common/memory.h:229
  #3  conn_new () at include/proto/connection.h:655
  #4  cs_new (conn=0x0) at include/proto/connection.h:683
  #5  connect_conn_chk (t=0xaacb8820) at src/checks.c:1553
  #6  process_chk_conn (t=0xaacb8820) at src/checks.c:2135
  #7  process_chk (t=0xaacb8820) at src/checks.c:2281
  #8  0xaabca0b4 in process_runnable_tasks () at src/task.c:231
  #9  0xaab76f44 in run_poll_loop () at src/haproxy.c:2399
  #10 run_thread_poll_loop (data=) at src/haproxy.c:2461
  #11 0xaaad79ec in main (argc=, argv=0xaac61b30) at src/haproxy.c:3050

  

  when running on an ARM64 system. The haproxy.cfg looks like this:

  

  global
  log /dev/log local0
  log /dev/log local1 notice
  maxconn 4096
  user haproxy
  group haproxy
  spread-checks 0
  tune.ssl.default-dh-param 1024
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:!DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:!DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:!CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

  defaults
  log global
  mode tcp
  option httplog
  option dontlognull
  retries 3
  timeout queue 2
  timeout client 5
  timeout connect 5000
  timeout server 5

  frontend anbox-stream-gateway-lb-5-80
  bind 0.0.0.0:80
  default_backend api_http
  mode http
  http-request redirect scheme https

  backend api_http
  mode http

  frontend 

[Ubuntu-ha] [Bug 1886546] Re: pacemaker - 2.0.3-3ubuntu3 needs update (2.0.4-2)

2020-07-08 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 2.0.4-2ubuntu1

---
pacemaker (2.0.4-2ubuntu1) groovy; urgency=medium

  * Merge with Debian unstable (LP: #1886546). Remaining changes:
- Dropped: debian/patches/pacemaker_is_partof_corosync.patch:
  Disagreement among Debian and Ubuntu about service dependencies.
- d/control: Demote fence-agents to Suggests, avoiding main inclusion.
- Skip autopkgtest for unprivileged containers: (LP #1828228)
  + d/t/control: mark pacemaker test as skippable
  + d/t/pacemaker: skip if memlock can't be set to unlimited by root
- Make crmsh the default management tool for now (LP #1862947)
- d/rules: Forcibly switch from ftime to clock_gettime, since building
  with ftime now results in deprecation errors
  * Dropped (from Ubuntu):
- Post 2.0.3 release fixes backported to Ubuntu (LP #1870235)
  debian/patches/ubuntu-2.0.3-fixes/:
  - lp1870235-0a8e789f9-Fix-libpengine-Options-should-be-uint.patch
  - lp1870235-186042bcb-Ref-libcrmservice-SIGCHLD-handling.patch
  - lp1870235-28bfd00e9-Low-libcrmservice-handle-child-wait-errors.patch
  - lp1870235-426f06cc0-Fix-tools-Fix-curses_indented_printf.patch
  - lp1870235-4f5207a28-Fix-tools-Correct-crm_mon-man-page.patch
  - lp1870235-5afe84e45-Fix-libstonithd-validate-arg-non-const.patch
  - lp1870235-c98987824-Fix-iso8601-Fix-crm_time_parse_offset.patch
  - lp1870235-dec326391-Log-libcrmcommon-correct-log-line-length.patch
  - lp1870235-e35908c79-Log-libcrmservice-impr-msgs-wait-child.patch
  - lp1870235-eaaa20949-Fix-libstonithd-tools-Fix-arg-stonith-event.patch
  - lp1870235-f0fe45806-Fix-scheduler-cluster-maint-mode-true.patch
  [in Debian Pacemaker-2.0.4]

pacemaker (2.0.4-2) unstable; urgency=medium

  * [a0fdbb5] The special libqb symbols aren't present on PowerPC
architectures

pacemaker (2.0.4-1) unstable; urgency=medium

  [ Rafael David Tinoco ]
  * [30838df] Omit pacemaker-resource-agents on Ubuntu/i386

  [ Ferenc Wágner ]
  * [bc43eed] New upstream release (2.0.4) (Closes: #959593)
  * [a4e8629] Drop upstreamed patch, refresh the rest
  * [42ee58f] Enable CIB secrets and ship the cibsecret tool
  * [13be83d] Update Standards-Version to 4.5.0 (no changes required)
  * [90d9610] New patch: libpacemaker calls into libstonithd directly
  * [61500a3] The obsolete ACL document was removed altogether.
In upstream commit d796f1e, because it was superseded by a new chapter
of Pacemaker Explained.
  * [dce33c1] Drop dummy packages which were already transitional in buster
  * [3f7de34] Update symbols files.
The newly disappeared symbols weren't present in the headers shipped in
pacemaker-dev, except for three functions in attrd.h.  However, those
weren't documented either and the header was renamed to attrd_internal.h
to show its internal status (see upstream commit 16c7d122e).
  * [4bbb828] Update packaging list email address
  * [7c12194] New patch: Fix typo: evalute => evaluate
  * [8d9cbd4] The bullseye toolchain defaults to linking with --as-needed

 -- Rafael David Tinoco   Mon, 06 Jul 2020 19:04:45 +

** Changed in: pacemaker (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1886546

Title:
  pacemaker - 2.0.3-3ubuntu3 needs update (2.0.4-2)

Status in pacemaker package in Ubuntu:
  Fix Released

Bug description:
  BUG: https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1886546
  PPA: https://launchpad.net/~rafaeldtinoco/+archive/ubuntu/lp1886546
  MERGE: https://tinyurl.com/y83wtxny

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1886546/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1884149] Re: haproxy crashes on in __pool_get_first if unique-id-header is used

2020-06-22 Thread Launchpad Bug Tracker
** Merge proposal linked:
   https://code.launchpad.net/~paelzer/ubuntu/+source/haproxy/+git/haproxy/+merge/386162

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1884149

Title:
  haproxy crashes on in __pool_get_first if unique-id-header is used

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Triaged
Status in haproxy package in Debian:
  Fix Released

Bug description:
  Version 1.8.8-1ubuntu0.10 of haproxy in Ubuntu 18.04 (bionic) crashes
  with

  

  Thread 2.1 "haproxy" received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0xf77b1010 (LWP 17174)]
  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at include/common/memory.h:124
  124   include/common/memory.h: No such file or directory.
  (gdb) bt
  #0  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at include/common/memory.h:124
  #1  pool_alloc_dirty (pool=0xaac6ddd0) at include/common/memory.h:154
  #2  pool_alloc (pool=0xaac6ddd0) at include/common/memory.h:229
  #3  conn_new () at include/proto/connection.h:655
  #4  cs_new (conn=0x0) at include/proto/connection.h:683
  #5  connect_conn_chk (t=0xaacb8820) at src/checks.c:1553
  #6  process_chk_conn (t=0xaacb8820) at src/checks.c:2135
  #7  process_chk (t=0xaacb8820) at src/checks.c:2281
  #8  0xaabca0b4 in process_runnable_tasks () at src/task.c:231
  #9  0xaab76f44 in run_poll_loop () at src/haproxy.c:2399
  #10 run_thread_poll_loop (data=) at src/haproxy.c:2461
  #11 0xaaad79ec in main (argc=, argv=0xaac61b30) at src/haproxy.c:3050

  

  when running on an ARM64 system. The haproxy.cfg looks like this:

  

  global
  log /dev/log local0
  log /dev/log local1 notice
  maxconn 4096
  user haproxy
  group haproxy
  spread-checks 0
  tune.ssl.default-dh-param 1024
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:!DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:!DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:!CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

  defaults
  log global
  mode tcp
  option httplog
  option dontlognull
  retries 3
  timeout queue 2
  timeout client 5
  timeout connect 5000
  timeout server 5


  frontend anbox-stream-gateway-lb-5-80
  bind 0.0.0.0:80
  default_backend api_http
  mode http
  http-request redirect scheme https

  backend api_http
  mode http

  frontend anbox-stream-gateway-lb-5-443
  bind 0.0.0.0:443 ssl crt /var/lib/haproxy/default.pem no-sslv3
  default_backend app-anbox-stream-gateway
  mode http

  backend app-anbox-stream-gateway
  mode http
  balance leastconn
  server anbox-stream-gateway-0-4000 10.212.218.61:4000 check ssl verify none inter 2000 rise 2 fall 5 maxconn 4096
  server anbox-stream-gateway-1-4000 10.212.218.93:4000 check ssl verify none inter 2000 rise 2 fall 5 maxconn 4096
  server anbox-stream-gateway-2-4000 10.212.218.144:4000 check ssl verify none inter 2000 rise 2 fall 5 maxconn 4096

  

  The crash occurs after the first few HTTP requests go through, and
  happens again when systemd restarts the service.

  The bug is already reported in Debian at
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921981 and upstream at
  https://github.com/haproxy/haproxy/issues/40

  Using the 1.8.19-1+deb10u2 package from Debian fixes the crash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/haproxy/+bug/1884149/+subscriptions



[Ubuntu-ha] [Bug 1877280] Re: attrd can segfault on exit

2020-05-19 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 1.1.14-2ubuntu1.8

---
pacemaker (1.1.14-2ubuntu1.8) xenial; urgency=medium

  * d/p/lp1877280/0001-Fix-attrd-crash-on-exit-if-initialization-fails.patch,
d/p/lp1877280/0002-Fix-attrd-ipc-Prevent-possible-segfault-on-exit.patch:
- avoid segfault on exit (LP: #1877280)

 -- Dan Streetman   Thu, 07 May 2020 06:34:35 -0400

** Changed in: pacemaker (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1877280

Title:
  attrd can segfault on exit

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Xenial:
  Fix Released

Bug description:
  [impact]

  pacemaker's attrd may segfault on exit.

  [test case]

  this is a follow on to bug 1871166, the patches added there prevented
  one segfault but this one emerged.  As with that bug, I can't
  reproduce this myself, but the original reporter is able to reproduce
  intermittently.

  [regression potential]

  any regression would likely impact the exit path of attrd, possibly
  causing a segfault or other incorrect exit.

  [scope]

  this is needed only for Xenial.

  this is fixed upstream by commit 3c62fb1d0d which is included in
  Bionic and later.

  [other info]

  this is a follow on to bug 1871166.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1877280/+subscriptions



[Ubuntu-ha] [Bug 1019833] Re: problem wtih the configuration of the cluster.conf file. when

2020-05-18 Thread Launchpad Bug Tracker
[Expired for corosync (Ubuntu Trusty) because there has been no activity for 60 days.]

** Changed in: corosync (Ubuntu Trusty)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1019833

Title:
   problem wtih the configuration of the cluster.conf file. when

Status in corosync package in Ubuntu:
  Invalid
Status in corosync source package in Trusty:
  Expired

Bug description:
  Hello,

   I'm trying to install and configure a small corosync cluster  with two
   nodes, on OS Ubuntu 12.04 LTS.
   There is a problem with the configuration of the cluster.conf file. When
   I try to validate it I get the error message: extra element rm in
   interleave.

   command used to validate: ccs_config_validate.

   Both nodes are working - checked with the clustat and cman_tool nodes
   commands.

   I am sending a copy of the cluster.conf files of both nodes.

   Corosync version: 1.4.2

   Thanks in advance,

   João Francisco
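For context, the relax-ng error "extra element rm in interleave" is raised when the <rm> (resource manager) section appears where the active schema does not allow it. A hypothetical minimal cluster.conf of the shape being validated follows; the cluster and node names are invented, not taken from the attached files:

```xml
<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
  <rm>
    <!-- rgmanager resources go here; this is the element the validator
         rejects when the schema in use does not know about it -->
  </rm>
</cluster>
```

Running ccs_config_validate against such a file reproduces the complaint when the installed schema predates or omits the <rm> definition.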

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1019833/+subscriptions



[Ubuntu-ha] [Bug 1871166] Re: lrmd crashes

2020-05-11 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 1.1.14-2ubuntu1.7

---
pacemaker (1.1.14-2ubuntu1.7) xenial; urgency=medium

  [ Victor Tapia ]
  * d/p/lp1871166/0001-Fix-libservices-prevent-use-after-free-when-freeing-.patch,
    d/p/lp1871166/0002-Fix-libservices-ensure-completed-ops-aren-t-on-block.patch,
    d/p/lp1871166/0003-Refactor-libservices-handle-in-flight-case-first-whe.patch,
    d/p/lp1871166/0004-Fix-libservices-properly-cancel-in-flight-systemd-up.patch,
    d/p/lp1871166/0005-Fix-libservices-properly-detect-in-flight-systemd-up.patch:
    - prevent use-after-free segfault (LP: #1871166)

 -- Dan Streetman   Mon, 06 Apr 2020 13:37:40 -0400

** Changed in: pacemaker (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1871166

Title:
  lrmd crashes

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Xenial:
  Fix Released

Bug description:
  [impact]

  lrmd crashes and dumps core.

  [test case]

  I cannot reproduce it myself, but it is reproducible in the specific
  setup of the person who reported the bug to me.

  [regression potential]

  this patches the cancel/cleanup part of the code, so regressions would
  likely involve possible memory leaks (instead of use-after-free
  segfaults), failure to correctly cancel or cleanup operations, or
  other failure during cancel action.

  [scope]

  this is fixed by commits:
  933d46ef20591757301784773a37e06b78906584
  94a4c58f675d163085a055f59fd6c3a2c9f57c43
  dc36d4375c049024a6f9e4d2277a3e6444fad05b
  deabcc5a6aa93dadf0b20364715b559a5b9848ac
  b85037b75255061a41d0ec3fd9b64f271351b43e

  which are all included starting with version 1.1.17, and Bionic
  includes version 1.1.18, so this is fixed already in Bionic and later.

  This is needed only for Xenial.

  [other info]

  As mentioned in the test case section, I do not have a setup where I'm
  able to reproduce this, but I can ask the initial reporter to test and
  verify the fix, and they have verified a test build fixed the problem
  for them.

  Also, the upstream commits removed two symbols, which I elided from
  the backported patches; those symbols are still available and, while
  it is unlikely there were any users of those symbols outside pacemaker
  itself, this change should not break any possible external users.  See
  patch 0002 header in the upload for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1871166/+subscriptions



[Ubuntu-ha] [Bug 1866119] Re: [bionic] fence_scsi not working properly with Pacemaker 1.1.18-2ubuntu1.1

2020-04-21 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 1.1.18-0ubuntu1.2

---
pacemaker (1.1.18-0ubuntu1.2) bionic; urgency=medium

  * Pacemaker fixes to allow fence-agents to work correctly (LP: #1866119)
- d/p/lp1866119-Fix-crmd-avoid-double-free.patch: fix double free
  causing intermittent errors
- d/p/lp1866119-Fix-attrd-ensure-node-name-is-broadcast.patch: fix
  hang on shutdown issue.
- d/p/lp1866119-Refactor-pengine-functionize.patch: small needed delta
  to allow the unfence fix.
- d/p/lp1866119-Fix-pengine-unfence-before-probing.patch: allows
  fence-agents to start correctly (LP #1865523)

 -- Rafael David Tinoco   Fri, 06 Mar 2020 02:28:20 +

** Changed in: pacemaker (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1866119

Title:
  [bionic] fence_scsi not working properly with Pacemaker
  1.1.18-2ubuntu1.1

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Bionic:
  Fix Released

Bug description:
  OBS: This bug was originally filed as part of LP: #1865523 but was split out.

   SRU: pacemaker

  [Impact]

   * fence_scsi is not currently working in a share disk environment

   * all clusters relying on fence_scsi and/or fence_scsi + watchdog
  won't be able to start the fencing agents OR, in worst case scenarios,
  the fence_scsi agent might start but won't make SCSI reservations on
  the shared SCSI disk.

   * this bug takes care of pacemaker 1.1.18 issues with fence_scsi,
  since the latter was fixed in LP: #1865523.

  [Test Case]

   * having a 3-node setup, nodes called "clubionic01, clubionic02,
  clubionic03", with a shared scsi disk (fully supporting persistent
  reservations) /dev/sda, with corosync and pacemaker operational and
  running, one might try:

  rafaeldtinoco@clubionic01:~$ crm configure
  crm(live)configure# property stonith-enabled=on
  crm(live)configure# property stonith-action=off
  crm(live)configure# property no-quorum-policy=stop
  crm(live)configure# property have-watchdog=true
  crm(live)configure# commit
  crm(live)configure# end
  crm(live)# end

  rafaeldtinoco@clubionic01:~$ crm configure primitive fence_clubionic \
  stonith:fence_scsi params \
  pcmk_host_list="clubionic01 clubionic02 clubionic03" \
  devices="/dev/sda" \
  meta provides=unfencing

  And see the following errors:

  Failed Actions:
  * fence_clubionic_start_0 on clubionic02 'unknown error' (1): call=6, status=Error, exitreason='',
  last-rc-change='Wed Mar  4 19:53:12 2020', queued=0ms, exec=1105ms
  * fence_clubionic_start_0 on clubionic03 'unknown error' (1): call=6, status=Error, exitreason='',
  last-rc-change='Wed Mar  4 19:53:13 2020', queued=0ms, exec=1109ms
  * fence_clubionic_start_0 on clubionic01 'unknown error' (1): call=6, status=Error, exitreason='',
  last-rc-change='Wed Mar  4 19:53:11 2020', queued=0ms, exec=1108ms

  and corosync.log will show:

  warning: unpack_rsc_op_failure: Processing failed op start for
  fence_clubionic on clubionic01: unknown error (1)

  [Regression Potential]

   * LP: #1865523 shows fence_scsi fully operational after SRU for that
  bug is done.

   * LP: #1865523 used pacemaker 1.1.19 (vanilla) in order to fix
  fence_scsi.

   * There are changes to: the cluster resource manager daemon, the local
  resource manager daemon and the policy engine. Of all the changes, the
  policy engine fix is the biggest, but still not big for an SRU. This
  could cause the policy engine, and thus cluster decisions, to malfunction.

   * All patches are based in upstream fixes made right after
  Pacemaker-1.1.18, used by Ubuntu Bionic and were tested with
  fence_scsi to make sure it fixed the issues.

  [Other Info]

   * Original Description:

  Trying to setup a cluster with an iscsi shared disk, using fence_scsi
  as the fencing mechanism, I realized that fence_scsi is not working in
  Ubuntu Bionic. I first thought it was related to Azure environment
  (LP: #1864419), where I was trying this environment, but then, trying
  locally, I figured out that somehow pacemaker 1.1.18 is not fencing
  the shared scsi disk properly.

  Note: I was able to "backport" vanilla 1.1.19 from upstream and
  fence_scsi worked. I then tried 1.1.18 without all the quilt patches
  and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
  might tell us which commit fixed the behaviour needed by the
  fence_scsi agent.

  (k)rafaeldtinoco@clubionic01:~$ crm conf show
  node 1: clubionic01.private
  node 2: clubionic02.private
  node 3: clubionic03.private
  primitive fence_clubionic stonith:fence_scsi \
  params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
  meta provides=unfencing
  property cib-bootstrap-options: \
  have-watchdog=false 

[Ubuntu-ha] [Bug 1870235] Re: [focal] pacemaker v2.0.3 last upstream fixes

2020-04-10 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 2.0.3-3ubuntu3

---
pacemaker (2.0.3-3ubuntu3) focal; urgency=medium

  * Post 2.0.3 release fixes backported to Ubuntu (LP: #1870235)
debian/patches/ubuntu-2.0.3-fixes/:
- lp1870235-0a8e789f9-Fix-libpengine-Options-should-be-uint.patch
- lp1870235-186042bcb-Ref-libcrmservice-SIGCHLD-handling.patch
- lp1870235-28bfd00e9-Low-libcrmservice-handle-child-wait-errors.patch
- lp1870235-426f06cc0-Fix-tools-Fix-curses_indented_printf.patch
- lp1870235-4f5207a28-Fix-tools-Correct-crm_mon-man-page.patch
- lp1870235-5afe84e45-Fix-libstonithd-validate-arg-non-const.patch
- lp1870235-c98987824-Fix-iso8601-Fix-crm_time_parse_offset.patch
- lp1870235-dec326391-Log-libcrmcommon-correct-log-line-length.patch
- lp1870235-e35908c79-Log-libcrmservice-impr-msgs-wait-child.patch
- lp1870235-eaaa20949-Fix-libstonithd-tools-Fix-arg-stonith-event.patch
- lp1870235-f0fe45806-Fix-scheduler-cluster-maint-mode-true.patch

 -- Rafael David Tinoco   Mon, 06 Apr 2020 10:48:48 -0300

** Changed in: pacemaker (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1870235

Title:
  [focal] pacemaker v2.0.3 last upstream fixes

Status in pacemaker package in Ubuntu:
  Fix Released

Bug description:
  
  [Impact]

  * Idea here is latest release stabilization with backports of fixes
  released after the current version.

  * Since this is a LTS, it is worth applying "straightforward" fixes
  before the final release.

  [Test Case]

  * https://discourse.ubuntu.com/t/ubuntu-high-availability-corosync-
  pacemaker-shared-disk-environments/

  [Regression Potential]

  Areas of potential regression: stonith, pengine, crmservice:

  * stonith: I have regression tests and they look good.

  * pengine: could not identify bad decisions when fencing resources but
  haven't explored in detail as there are many possible combinations and
  decisions that pengine can take -> changes are very minimal here.

  * crmservice: pacemaker tests don't show regressions, resource agents
  start/stop/probe/migrate correctly afaik so the fix looks good.

  [Other Info]

  * This bug backport/cherrypick fixes released after 2.0.3 release.

  * The correct patches are going to be defined within this bug
  comments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1870235/+subscriptions



[Ubuntu-ha] [Bug 1437359] Re: A PIDFILE is double-defined for the corosync-notifyd init script

2020-04-09 Thread Launchpad Bug Tracker
This bug was fixed in the package corosync - 3.0.3-2ubuntu2

---
corosync (3.0.3-2ubuntu2) focal; urgency=medium

  [Jorge Niedbalski]
  * d/control: corosync binary depends on libqb-dev (LP: #1677684)

  [Rafael David Tinoco]
  * debian/corosync-notifyd.init: fix for 2 PIDFILEs declared (LP: #1437359)
  * Post v3.0.3 release fixes backported to Ubuntu (LP: #1869622)
debian/patches/ubuntu-v3.0.3-fixes/:
- lp1869622-09f6d34a-logconfig-Remove-double-free-of-value.patch
- lp1869622-0c118d8f-totemknet-Check-result-of-fcntl-O_NONBLOCK-call.patch
- lp1869622-0c16442f-votequorum-Change-check-of-expected_votes.patch
- lp1869622-1fb095b0-notifyd-Check-cmap_track_add-result.patch
- lp1869622-29109683-totemknet-Assert-strcpy-length.patch
- lp1869622-35c312f8-votequorum-Assert-copied-strings-length.patch
- lp1869622-380b744e-totemknet-Don-t-mix-corosync-and-knet-error-codes.patch
- lp1869622-56ee8503-quorumtool-Assert-copied-string-length.patch
- lp1869622-5f543465-quorumtool-exit-on-invalid-expected-votes.patch
- lp1869622-624b6a47-stats-Assert-value_len-when-value-is-needed.patch
- lp1869622-74eed54a-sync-Assert-sync_callbacks.name-length.patch
- lp1869622-89b0d62f-stats-Check-return-code-of-stats_map_get.patch
- lp1869622-8ce65bf9-votequorum-Reflect-runtime-change-of-2Node-to-WFA.patch
- lp1869622-8ff7760c-cmapctl-Free-bin_value-on-error.patch
- lp1869622-a24cbad5-totemconfig-Initialize-warnings-variable.patch
- lp1869622-c631951e-icmap-icmap_init_r-leaks-if-trie_create-fails.patch
- lp1869622-ca320bea-votequorum-set-wfa-status-only-on-startup.patch
- lp1869622-efe48120-totemconfig-Free-leaks-found-by-coverity.patch

 -- Rafael David Tinoco   Sun, 29 Mar 2020 21:50:35 +

** Changed in: corosync (Ubuntu Focal)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1437359

Title:
  A PIDFILE is double-defined for the corosync-notifyd init script

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Triaged
Status in corosync source package in Bionic:
  Triaged
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Triaged
Status in corosync source package in Focal:
  Fix Released

Bug description:
  A /etc/init.d/corosync-notifyd contains two definitions for the PIDFILE:
  > PIDFILE=/var/run/$NAME.pid
  > SCRIPTNAME=/etc/init.d/$NAME
  > PIDFILE=/var/run/corosync.pid

  The first one is correct and the second one is wrong, as it refers to
  the corosync service's pidfile instead.

  The corosync package version is 2.3.3-1ubuntu1
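A minimal sketch of the corrected variable block in the init script, with names taken from the quoted lines above; the point is simply that PIDFILE is defined once, derived from $NAME, and the stray second assignment is dropped:

```shell
# Corrected head of /etc/init.d/corosync-notifyd (sketch): PIDFILE is
# derived from $NAME exactly once; the wrong second line
# "PIDFILE=/var/run/corosync.pid" is removed entirely.
NAME=corosync-notifyd
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
echo "$PIDFILE"   # /var/run/corosync-notifyd.pid
```

With the duplicate gone, start/stop/status actions in the script all track corosync-notifyd's own pid rather than corosync's.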

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359/+subscriptions



[Ubuntu-ha] [Bug 1677684] Re: /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

2020-04-09 Thread Launchpad Bug Tracker
This bug was fixed in the package corosync - 3.0.3-2ubuntu2

---
corosync (3.0.3-2ubuntu2) focal; urgency=medium

  [Jorge Niedbalski]
  * d/control: corosync binary depends on libqb-dev (LP: #1677684)

  [Rafael David Tinoco]
  * debian/corosync-notifyd.init: fix for 2 PIDFILEs declared (LP: #1437359)
  * Post v3.0.3 release fixes backported to Ubuntu (LP: #1869622)
debian/patches/ubuntu-v3.0.3-fixes/:
- lp1869622-09f6d34a-logconfig-Remove-double-free-of-value.patch
- lp1869622-0c118d8f-totemknet-Check-result-of-fcntl-O_NONBLOCK-call.patch
- lp1869622-0c16442f-votequorum-Change-check-of-expected_votes.patch
- lp1869622-1fb095b0-notifyd-Check-cmap_track_add-result.patch
- lp1869622-29109683-totemknet-Assert-strcpy-length.patch
- lp1869622-35c312f8-votequorum-Assert-copied-strings-length.patch
- lp1869622-380b744e-totemknet-Don-t-mix-corosync-and-knet-error-codes.patch
- lp1869622-56ee8503-quorumtool-Assert-copied-string-length.patch
- lp1869622-5f543465-quorumtool-exit-on-invalid-expected-votes.patch
- lp1869622-624b6a47-stats-Assert-value_len-when-value-is-needed.patch
- lp1869622-74eed54a-sync-Assert-sync_callbacks.name-length.patch
- lp1869622-89b0d62f-stats-Check-return-code-of-stats_map_get.patch
- lp1869622-8ce65bf9-votequorum-Reflect-runtime-change-of-2Node-to-WFA.patch
- lp1869622-8ff7760c-cmapctl-Free-bin_value-on-error.patch
- lp1869622-a24cbad5-totemconfig-Initialize-warnings-variable.patch
- lp1869622-c631951e-icmap-icmap_init_r-leaks-if-trie_create-fails.patch
- lp1869622-ca320bea-votequorum-set-wfa-status-only-on-startup.patch
- lp1869622-efe48120-totemconfig-Free-leaks-found-by-coverity.patch

 -- Rafael David Tinoco   Sun, 29 Mar 2020
21:50:35 +

** Changed in: corosync (Ubuntu Focal)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1677684

Title:
  /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Confirmed
Status in corosync source package in Zesty:
  Won't Fix
Status in corosync source package in Bionic:
  Confirmed
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Confirmed
Status in corosync source package in Focal:
  Fix Released

Bug description:
  [Environment]

  Ubuntu Xenial 16.04
  Amd64

  [Test Case]

  1) sudo apt-get install corosync
  2) sudo corosync-blackbox.

  root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L corosync | grep black
  /usr/bin/corosync-blackbox

  Expected results: corosync-blackbox runs OK.

  Current results:

  $ sudo corosync-blackbox
  /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

  [Impact]

   * Cannot run corosync-blackbox

  [Regression Potential]

  * None identified.

  [Fix]
  Make the package depend on libqb-dev

  root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L libqb-dev | grep qb-bl
  /usr/sbin/qb-blackbox
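  Based on the changelog line above ("corosync binary depends on libqb-dev"),
  the fix can be sketched as a debian/control stanza. The exact field layout
  below is an assumption for illustration, not the uploaded packaging:

```
Package: corosync
Architecture: any
Depends: ${shlibs:Depends},
         ${misc:Depends},
         libqb-dev
```

  With libqb-dev in Depends, /usr/sbin/qb-blackbox is guaranteed to be
  present when the corosync-blackbox wrapper calls it.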

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1677684/+subscriptions



[Ubuntu-ha] [Bug 1869622] Re: [focal] corosync v3.0.3 last upstream fixes

2020-04-09 Thread Launchpad Bug Tracker
This bug was fixed in the package corosync - 3.0.3-2ubuntu2

---
corosync (3.0.3-2ubuntu2) focal; urgency=medium

  [Jorge Niedbalski]
  * d/control: corosync binary depends on libqb-dev (LP: #1677684)

  [Rafael David Tinoco]
  * debian/corosync-notifyd.init: fix for 2 PIDFILEs declared (LP: #1437359)
  * Post v3.0.3 release fixes backported to Ubuntu (LP: #1869622)
debian/patches/ubuntu-v3.0.3-fixes/:
- lp1869622-09f6d34a-logconfig-Remove-double-free-of-value.patch
- lp1869622-0c118d8f-totemknet-Check-result-of-fcntl-O_NONBLOCK-call.patch
- lp1869622-0c16442f-votequorum-Change-check-of-expected_votes.patch
- lp1869622-1fb095b0-notifyd-Check-cmap_track_add-result.patch
- lp1869622-29109683-totemknet-Assert-strcpy-length.patch
- lp1869622-35c312f8-votequorum-Assert-copied-strings-length.patch
- lp1869622-380b744e-totemknet-Don-t-mix-corosync-and-knet-error-codes.patch
- lp1869622-56ee8503-quorumtool-Assert-copied-string-length.patch
- lp1869622-5f543465-quorumtool-exit-on-invalid-expected-votes.patch
- lp1869622-624b6a47-stats-Assert-value_len-when-value-is-needed.patch
- lp1869622-74eed54a-sync-Assert-sync_callbacks.name-length.patch
- lp1869622-89b0d62f-stats-Check-return-code-of-stats_map_get.patch
- lp1869622-8ce65bf9-votequorum-Reflect-runtime-change-of-2Node-to-WFA.patch
- lp1869622-8ff7760c-cmapctl-Free-bin_value-on-error.patch
- lp1869622-a24cbad5-totemconfig-Initialize-warnings-variable.patch
- lp1869622-c631951e-icmap-icmap_init_r-leaks-if-trie_create-fails.patch
- lp1869622-ca320bea-votequorum-set-wfa-status-only-on-startup.patch
- lp1869622-efe48120-totemconfig-Free-leaks-found-by-coverity.patch

 -- Rafael David Tinoco   Sun, 29 Mar 2020
21:50:35 +

** Changed in: corosync (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1869622

Title:
  [focal] corosync v3.0.3 last upstream fixes

Status in corosync package in Ubuntu:
  Fix Released

Bug description:
  Together with fixes for:

  https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359
  https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1677684

  This bug will also backport/cherrypick the following post release
  fixes:

  ->  c631951e icmap: icmap_init_r() leaks if trie_create() fails = fix
  ->  ca320bea votequorum: set wfa status only on startup = fix
  ->  5f543465 quorumtool: exit on invalid expected votes = fix
  ->  0c16442f votequorum: Change check of expected_votes = fix
  ->  8ce65bf9 votequorum: Reflect runtime change of 2Node to WFA = fix
  ->  89b0d62f stats: Check return code of stats_map_get = fix
  ->  56ee8503 quorumtool: Assert copied string length = assert
  ->  1fb095b0 notifyd: Check cmap_track_add result = assert
  ->  8ff7760c cmapctl: Free bin_value on error = fix
  ->  35c312f8 votequorum: Assert copied strings length = assert
  ->  29109683 totemknet: Assert strcpy length = assert
  ->  0c118d8f totemknet: Check result of fcntl O_NONBLOCK call = assert
  ->  a24cbad5 totemconfig: Initialize warnings variable = build fix
  ->  74eed54a sync: Assert sync_callbacks.name length = assert
  ->  380b744e totemknet: Don't mix corosync and knet error codes = fix
  ->  624b6a47 stats: Assert value_len when value is needed = assert
  ->  09f6d34a logconfig: Remove double free of value = fix
  ->  efe48120 totemconfig: Free leaks found by coverity = fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1869622/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1033470] [NEW] ocft make fails always

2020-04-06 Thread Launchpad Bug Tracker
You have been subscribed to a public bug by Rafael David Tinoco (rafaeldtinoco):

root@p1c2ppx01 /var/lib/resource-agents/ocft/cases # ocft make apache
Making 'apache': 
ERROR: apache: line 21: Macro 'default_status' not found.

# ls /var/lib/resource-agents/ocft/cases
apache_macro.prepare  apache_macro.required_args  apache.preparse

Running ocft with bash -xv shows this:

+ '[' '!' -r /var/lib/resource-agents/ocft/cases/apache_macro.default_status ']'
+ parse_die 'Macro '\''default_status'\'' not found.'
+ local str
+ str='Macro '\''default_status'\'' not found.'
+ die 'apache: line 21: Macro '\''default_status'\'' not found.'
+ local str
+ str='apache: line 21: Macro '\''default_status'\'' not found.'
+ echo 'ERROR: apache: line 21: Macro '\''default_status'\'' not found.'

But that file is not present.  It seems CASE-BLOCKs are not saved to
file as expected?

ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: resource-agents 1:3.9.2-5ubuntu4.1
ProcVersionSignature: Ubuntu 3.2.0-27.43-generic 3.2.21
Uname: Linux 3.2.0-27-generic x86_64
ApportVersion: 2.0.1-0ubuntu11
Architecture: amd64
Date: Mon Aug  6 12:37:22 2012
InstallationMedia: Ubuntu-Server 12.04 LTS "Precise Pangolin" - Release amd64 
(20120424.1)
SourcePackage: resource-agents
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: resource-agents (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug precise
-- 
ocft make fails always
https://bugs.launchpad.net/bugs/1033470
You received this bug notification because you are a member of Ubuntu High 
Availability Team, which is subscribed to the bug report.



[Ubuntu-ha] [Bug 1869751] Re: [focal] pacemaker FTBFS because of deprecated ftime()

2020-04-03 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pacemaker/+git/pacemaker/+merge/381684

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1869751

Title:
  [focal] pacemaker FTBFS because of deprecated ftime()

Status in pacemaker package in Ubuntu:
  Confirmed

Bug description:
  https://people.canonical.com/~doko/ftbfs-report/test-rebuild-20200327
  -focal-focal.html

  shows that pacemaker started to be FTBFS because of:

  """
  gcc -DHAVE_CONFIG_H -I. -I../../include  -DSUPPORT_REMOTE -I../../include 
-I../../include -I../../libltdl -I../../libltdl -DPCMK_TIME_EMERGENCY_CGT 
-Wdate-time -D_FORTIFY_SOURCE=2 -I/usr/include/glib-2.0 
-I/usr/lib/i386-linux-gnu/glib-2.0/include -I/usr/include/libxml2 
-I/usr/include/heartbeat -I/usr/include/dbus-1.0 
-I/usr/lib/i386-linux-gnu/dbus-1.0/include -fPIE -g -O2 
-fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat 
-Werror=format-security   -ggdb  -fgnu89-inline -Wall -Waggregate-return 
-Wbad-function-cast -Wcast-align -Wdeclaration-after-statement -Wendif-labels 
-Wfloat-equal -Wformat-security -Wmissing-prototypes -Wmissing-declarations 
-Wnested-externs -Wno-long-long -Wno-strict-aliasing -Wpointer-arith 
-Wwrite-strings -Wunused-but-set-variable -Wformat=2 -Wformat-nonliteral 
-fstack-protector-strong -Werror -c -o pacemaker_remoted-pacemaker-execd.o 
`test -f 'pacemaker-execd.c' || echo './'`pacemaker-execd.c
  execd_commands.c: In function ‘stonith_recurring_op_helper’:
  execd_commands.c:257:5: error: ‘ftime’ is deprecated [-Werror=deprecated-declarations]
    257 |     ftime(&cmd->t_queue);
        |     ^
  In file included from execd_commands.c:23:
  /usr/include/i386-linux-gnu/sys/timeb.h:39:12: note: declared here
     39 | extern int ftime (struct timeb *__timebuf)
        |            ^
  execd_commands.c: In function ‘schedule_lrmd_cmd’:
  execd_commands.c:389:5: error: ‘ftime’ is deprecated [-Werror=deprecated-declarations]
    389 |     ftime(&cmd->t_queue);
        |     ^
  """

  And man page shows:

  
  SYNOPSIS
 #include <sys/timeb.h>

 int ftime(struct timeb *tp);

  DESCRIPTION
 NOTE: This function is deprecated, and will be removed in a future 
version of the GNU C library.  Use clock_gettime(2) instead.

  
  I'll fix this together with other fixes, opening this bug to track the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1869751/+subscriptions



[Ubuntu-ha] [Bug 1870235] Re: [focal] pacemaker v2.0.3 last upstream fixes

2020-04-03 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pacemaker/+git/pacemaker/+merge/381684

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1870235

Title:
  [focal] pacemaker v2.0.3 last upstream fixes

Status in pacemaker package in Ubuntu:
  In Progress

Bug description:
  Together with fix for:

  https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1869751
  (FTBFS)

  This bug will also backport/cherrypick fixes released after 2.0.3
  release.

  The correct patches are going to be defined within this bug comments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1870235/+subscriptions



[Ubuntu-ha] [Bug 1437359] Re: A PIDFILE is double-defined for the corosync-notifyd init script

2020-03-29 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/corosync/+git/corosync/+merge/381355

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1437359

Title:
  A PIDFILE is double-defined for the corosync-notifyd init script

Status in corosync package in Ubuntu:
  In Progress
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Triaged
Status in corosync source package in Bionic:
  Triaged
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Triaged
Status in corosync source package in Focal:
  In Progress

Bug description:
  The /etc/init.d/corosync-notifyd script contains two definitions of PIDFILE:
  > PIDFILE=/var/run/$NAME.pid
  > SCRIPTNAME=/etc/init.d/$NAME
  > PIDFILE=/var/run/corosync.pid

  The first one is correct and the second one is wrong, as it refers to
  the corosync service's pidfile instead.

  The corosync package version is 2.3.3-1ubuntu1
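  A minimal sketch of the corrected init-script header, keeping only the
  $NAME-derived pidfile and dropping the stray corosync.pid line (the
  surrounding script is omitted; this is an illustration, not the shipped fix):

```shell
# Hypothetical corrected excerpt of /etc/init.d/corosync-notifyd:
# PIDFILE is defined once, from $NAME, so it points at the notifyd daemon.
NAME=corosync-notifyd
PIDFILE=/var/run/$NAME.pid          # -> /var/run/corosync-notifyd.pid
SCRIPTNAME=/etc/init.d/$NAME
echo "$PIDFILE"
```

  With the duplicate definition removed, start/stop actions track the
  notifyd process instead of the main corosync daemon.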

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359/+subscriptions



[Ubuntu-ha] [Bug 1677684] Re: /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

2020-03-29 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/corosync/+git/corosync/+merge/381355

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1677684

Title:
  /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

Status in corosync package in Ubuntu:
  In Progress
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Confirmed
Status in corosync source package in Zesty:
  Won't Fix
Status in corosync source package in Bionic:
  Confirmed
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Confirmed
Status in corosync source package in Focal:
  In Progress

Bug description:
  [Environment]

  Ubuntu Xenial 16.04
  Amd64

  [Test Case]

  1) sudo apt-get install corosync
  2) sudo corosync-blackbox.

  root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L corosync | grep black
  /usr/bin/corosync-blackbox

  Expected results: corosync-blackbox runs OK.

  Current results:

  $ sudo corosync-blackbox
  /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

  [Impact]

   * Cannot run corosync-blackbox

  [Regression Potential]

  * None identified.

  [Fix]
  Make the package depend on libqb-dev

  root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L libqb-dev | grep qb-bl
  /usr/sbin/qb-blackbox

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1677684/+subscriptions



[Ubuntu-ha] [Bug 1869622] Re: [focal] corosync v3.0.3 last upstream fixes

2020-03-29 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/corosync/+git/corosync/+merge/381355

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1869622

Title:
  [focal] corosync v3.0.3 last upstream fixes

Status in corosync package in Ubuntu:
  In Progress

Bug description:
  Together with fixes for:

  https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359
  https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1677684

  This bug will also backport/cherrypick the following post release
  fixes:

  ->  c631951e icmap: icmap_init_r() leaks if trie_create() fails = fix
  ->  ca320bea votequorum: set wfa status only on startup = fix
  ->  5f543465 quorumtool: exit on invalid expected votes = fix
  ->  0c16442f votequorum: Change check of expected_votes = fix
  ->  8ce65bf9 votequorum: Reflect runtime change of 2Node to WFA = fix
  ->  89b0d62f stats: Check return code of stats_map_get = fix
  ->  56ee8503 quorumtool: Assert copied string length = assert
  ->  1fb095b0 notifyd: Check cmap_track_add result = assert
  ->  8ff7760c cmapctl: Free bin_value on error = fix
  ->  35c312f8 votequorum: Assert copied strings length = assert
  ->  29109683 totemknet: Assert strcpy length = assert
  ->  0c118d8f totemknet: Check result of fcntl O_NONBLOCK call = assert
  ->  a24cbad5 totemconfig: Initialize warnings variable = build fix
  ->  74eed54a sync: Assert sync_callbacks.name length = assert
  ->  380b744e totemknet: Don't mix corosync and knet error codes = fix
  ->  624b6a47 stats: Assert value_len when value is needed = assert
  ->  09f6d34a logconfig: Remove double free of value = fix
  ->  efe48120 totemconfig: Free leaks found by coverity = fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1869622/+subscriptions



[Ubuntu-ha] [Bug 1019833] [NEW] problem with the configuration of the cluster.conf file. when

2020-03-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug by Rafael David Tinoco (rafaeldtinoco):

Hello,

 I'm trying to install and configure a small corosync cluster with two
 nodes, on OS Ubuntu 12.04 LTS.
 There is a problem with the configuration of the cluster.conf file. When
 I try to validate it I get
 the error message: extra element rm in interleave.

 command used to validate: ccs_config_validate.

 Both nodes are working. - Used command clustat and cman_tool nodes

 I am sending a copy of the cluster.conf files of both nodes.

 Corosync version: 1.4.2

 Thanks in advance,

 João Francisco

** Affects: corosync (Ubuntu)
 Importance: Medium
 Assignee: Joao (nobrefr)
 Status: Incomplete

-- 
 problem with the configuration of the cluster.conf file. when
https://bugs.launchpad.net/bugs/1019833
You received this bug notification because you are a member of Ubuntu High 
Availability Team, which is subscribed to the bug report.



[Ubuntu-ha] [Bug 1828496] Re: service haproxy reload sometimes fails to pick up new TLS certificates

2020-03-15 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: haproxy (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1828496

Title:
  service haproxy reload sometimes fails to pick up new TLS certificates

Status in haproxy package in Ubuntu:
  Confirmed
Status in haproxy source package in Xenial:
  Confirmed
Status in haproxy source package in Bionic:
  Confirmed

Bug description:
  I suspect this is the same thing reported on StackOverflow:

  "I had this same issue where even after reloading the config, haproxy
  would randomly serve old certs. After looking around for many days the
  issue was that "reload" operation created a new process without
  killing the old one. Confirm this by "ps aux | grep haproxy"."

  https://stackoverflow.com/questions/46040504/haproxy-wont-recognize-
  new-certificate

  In our setup, we automate Let's Encrypt certificate renewals, and a
  fresh certificate will trigger a reload of the service. But
  occasionally this reload doesn't seem to do anything.

  Will update with details next time it happens, and hopefully confirm
  the multiple process theory.
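  The multiple-process theory can be checked from a shell. This is a hedged
  sketch (the process names are assumptions about a typical haproxy setup):
  after a clean reload there should be one master process, so a leftover
  pre-reload master still serving the old certificates stands out in a count.

```shell
# Count haproxy processes in `ps` output fed on stdin; an unexpectedly
# high count after a reload suggests an old master process survived.
count_haproxy() {
    grep -c '[h]aproxy'     # bracket trick: never matches the grep itself
}

# Usage on a live system (not run here):
#   ps aux | count_haproxy
```

  Comparing the count before and after `service haproxy reload` makes the
  stale-process case easy to spot.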

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1828496/+subscriptions



[Ubuntu-ha] [Bug 1864116] Re: Package: pacemaker (debian: 2.0.3-3, ubuntu: 2.0.1-5ubuntu5) needs merge

2020-03-06 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 2.0.3-3ubuntu1

---
pacemaker (2.0.3-3ubuntu1) focal; urgency=medium

  * Merge with Debian unstable (LP: #1864116). Remaining changes:
- Skip autopkgtest for unprivileged containers: (LP: 1828228)
  - d/t/control: mark pacemaker test as skippable
  - d/t/pacemaker: skip if memlock can't be set to unlimited by root
- Make crmsh the default management tool for now (LP #1862947)
- Fix: Last attempt i386 binary packages removal was wrong (-Nlibkate)
- Ubuntu i386 binary compatibility only effort: (LP #1863437)
  - i386 binary package removal:
- pacemaker
- pacemaker-cli-utils
- pacemaker-remote
- pacemaker-resource-agents
  * Dropped (from Debian):
- debian/patches/pacemaker_is_partof_corosync.patch
  [fixed in corosync debian/2.4.2-3+deb9u1_bpo8+1]
- Omit pacemaker, pacemaker-cli-utils, pacemaker-remote binary
  packages on i386.
  [merged in debian/2.0.3-3]

pacemaker (2.0.3-3) unstable; urgency=medium

  * [543574f] Omit pacemaker{, -cli-utils, -remote} on Ubuntu/i386
(Closes: #948379)
  * [327889e] Reenable dwz, it already works with the magic sections from libqb

pacemaker (2.0.3-2) unstable; urgency=medium

  * [2b80438] Special libqb symbols not present on Power architectures
  * [99905eb] New patch: Avoid clashes with glibc error codes on HPPA
  * [b3928b4] Add -dev package dependencies
  * [712627c] Suppress stderr output of autopkgtest instead of ignoring it
  * [8f47294] Generate an autopkgtest artifact
  * [1e66b70] Run Salsa CI reprotest with diffoscope
  * [4560320] Recognize the autopkgtest artifacts in Salsa CI

pacemaker (2.0.3-1) unstable; urgency=medium

  * [20ccd21] Shorten and explain the autopkgtest wait
  * [d2eb58b] Ship /var/log/pacemaker, the new default directory of the detail
logs.
Without this directory the default configuration emits errors and the
detail log is not written.  The old /var/log/pacemaker.log* detail log
files are not moved automatically on upgrade, but the /var/log/pacemaker
directory is removed when purging pacemaker-common.
  * [6d373b3] Drop a patch: libtransitioner does not use liblrmd since 092281b
  * [5b8f4bf] New upstream pre-release (2.0.2~rc1)
  * [ab9d200] Remove obsolete patch, refresh the rest
  * [21ab824] Update changelog for 2.0.2~rc1-1 release
  * [3cc6291] New patch: libpacemaker directly uses libcrmcommon
  * [edc559c] The doxygen target moved into the doc subdirectory (5c77ae3)
  * [d9c423f] libpengine and libtransitioner merged into libpacemaker
  * [d361d67] pacemaker-dev is our single umbrella dev package
  * [a407d6e] Some agents use top, so Depend on procps
  * [7cb9d3a] Ship the new HealthIOWait resource agent
  * [313cb68] Ship the new crm_rule utility
  * [72dc571] New upstream release (2.0.2)
  * [2a58663] libpengine and libtransitioner merged into libpacemaker
  * [1198ccb] Update symbols files (all removed symbols were internal)
  * [dec7dcf] New patch: Don't reference the build path in the documentation
  * [de1a3f1] Update Standards-Version to 4.4.1 (no changes required)
  * [1dc4e5a] Advertise Rules-Requires-Root: no
  * [075b6a5] New upstream release (2.0.3)
  * [5fad681] Remove included patches, refresh the rest
  * [84788f8] Ship the new arch-independent pacemaker-schemas.pc
  * [c8bcad3] Update symbols files
  * [38cb849] Fix formatting of long license text
  * [fa95063] acls.txt was retired, the short placeholder text is not compressed
  * [b7a160d] Enroll to basic Salsa CI

 -- Rafael David Tinoco   Fri, 21 Feb 2020
01:18:12 +

** Changed in: pacemaker (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1864116

Title:
  Package: pacemaker (debian: 2.0.3-3, ubuntu: 2.0.1-5ubuntu5) needs
  merge

Status in pacemaker package in Ubuntu:
  Fix Released

Bug description:
  Package: pacemaker (debian: 2.0.3-3, ubuntu: 2.0.1-5ubuntu5) needs
  merge

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1864116/+subscriptions



[Ubuntu-ha] [Bug 1866119] Re: [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

2020-03-05 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pacemaker/+git/pacemaker/+merge/380337

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1866119

Title:
  [bionic] fence_scsi not working properly with Pacemaker
  1.1.18-2ubuntu1.1

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Bionic:
  In Progress

Bug description:
  Note: this bug was originally part of LP: #1865523 but was split out.

   SRU: pacemaker

  [Impact]

   * fence_scsi is not currently working in a share disk environment

   * all clusters relying in fence_scsi and/or fence_scsi + watchdog
  won't be able to start the fencing agents OR, in worst case scenarios,
  the fence_scsi agent might start but won't make scsi reservations in
  the shared scsi disk.

   * this bug is taking care of pacemaker 1.1.18 issues with fence_scsi,
  since the later was fixed at LP: #1865523.

  [Test Case]

   * having a 3-node setup, nodes called "clubionic01, clubionic02,
  clubionic03", with a shared scsi disk (fully supporting persistent
  reservations) /dev/sda, with corosync and pacemaker operational and
  running, one might try:

  rafaeldtinoco@clubionic01:~$ crm configure
  crm(live)configure# property stonith-enabled=on
  crm(live)configure# property stonith-action=off
  crm(live)configure# property no-quorum-policy=stop
  crm(live)configure# property have-watchdog=true
  crm(live)configure# commit
  crm(live)configure# end
  crm(live)# end

  rafaeldtinoco@clubionic01:~$ crm configure primitive fence_clubionic \
  stonith:fence_scsi params \
  pcmk_host_list="clubionic01 clubionic02 clubionic03" \
  devices="/dev/sda" \
  meta provides=unfencing

  And see the following errors:

  Failed Actions:
  * fence_clubionic_start_0 on clubionic02 'unknown error' (1): call=6, 
status=Error, exitreason='',
  last-rc-change='Wed Mar  4 19:53:12 2020', queued=0ms, exec=1105ms
  * fence_clubionic_start_0 on clubionic03 'unknown error' (1): call=6, 
status=Error, exitreason='',
  last-rc-change='Wed Mar  4 19:53:13 2020', queued=0ms, exec=1109ms
  * fence_clubionic_start_0 on clubionic01 'unknown error' (1): call=6, 
status=Error, exitreason='',
  last-rc-change='Wed Mar  4 19:53:11 2020', queued=0ms, exec=1108ms

  and corosync.log will show:

  warning: unpack_rsc_op_failure: Processing failed op start for
  fence_clubionic on clubionic01: unknown error (1)

  [Regression Potential]

   * LP: #1865523 shows fence_scsi fully operational after SRU for that
  bug is done.

   * LP: #1865523 used pacemaker 1.1.19 (vanilla) in order to fix
  fence_scsi.

   * TODO

  [Other Info]

   * Original Description:

  Trying to setup a cluster with an iscsi shared disk, using fence_scsi
  as the fencing mechanism, I realized that fence_scsi is not working in
  Ubuntu Bionic. I first thought it was related to Azure environment
  (LP: #1864419), where I was trying this environment, but then, trying
  locally, I figured out that somehow pacemaker 1.1.18 is not fencing
  the shared scsi disk properly.

  Note: I was able to "backport" vanilla 1.1.19 from upstream and
  fence_scsi worked. I then tried 1.1.18 without all the quilt patches
  and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
  might tell us which commit has fixed the behaviour needed by the
  fence_scsi agent.
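  The bisect idea above can be sketched end to end. The snippet below builds
  a tiny synthetic repository (the v1.1.18/v1.1.19 tags and the "fix
  fence_scsi" commit are invented stand-ins for the Pacemaker tree) and lets
  `git bisect` with old/new terms locate the fixing commit; on the real tree
  the oracle in the loop would be "build and run the fence_scsi test":

```shell
# Synthetic demonstration of bisecting for a commit that FIXED something.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
commit() { git commit -q --allow-empty -m "$1"; }
commit "release 1.1.18"; git tag v1.1.18
commit "unrelated change 1"
commit "unrelated change 2"
commit "fix fence_scsi unfencing"     # the commit bisect should find
commit "unrelated change 3"
commit "release 1.1.19"; git tag v1.1.19

git bisect start --term-old=old --term-new=new
git bisect new v1.1.19                # fence_scsi works at 1.1.19
git bisect old v1.1.18                # fence_scsi broken at 1.1.18
# Stand-in oracle: mark "new" iff the fix is an ancestor of the candidate.
until git bisect log | grep -q '# first new commit'; do
  if git log --format=%s v1.1.18..HEAD | grep -q 'fix fence_scsi'; then
    git bisect new
  else
    git bisect old
  fi
done
git bisect log | grep '# first new commit'
```

  The old/new terms are used instead of good/bad because here the "new"
  state is the working one; the mechanics are otherwise identical.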

  (k)rafaeldtinoco@clubionic01:~$ crm conf show
  node 1: clubionic01.private
  node 2: clubionic02.private
  node 3: clubionic03.private
  primitive fence_clubionic stonith:fence_scsi \
  params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" 
devices="/dev/sda" \
  meta provides=unfencing
  property cib-bootstrap-options: \
  have-watchdog=false \
  dc-version=1.1.18-2b07d5c5a9 \
  cluster-infrastructure=corosync \
  cluster-name=clubionic \
  stonith-enabled=on \
  stonith-action=off \
  no-quorum-policy=stop \
  symmetric-cluster=true

  

  (k)rafaeldtinoco@clubionic02:~$ sudo crm_mon -1
  Stack: corosync
  Current DC: clubionic01.private (version 1.1.18-2b07d5c5a9) - partition with 
quorum
  Last updated: Mon Mar 2 15:55:30 2020
  Last change: Mon Mar 2 15:45:33 2020 by root via cibadmin on 
clubionic01.private

  3 nodes configured
  1 resource configured

  Online: [ clubionic01.private clubionic02.private clubionic03.private
  ]

  Active resources:

   fence_clubionic (stonith:fence_scsi): Started clubionic01.private

  

  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist --in --read-keys 
--device=/dev/sda
    LIO-ORG cluster.bionic. 4.0
    Peripheral device type: disk
    PR generation=0x0, there are NO registered reservation keys

  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist -r /dev/sda
    LIO-ORG cluster.bionic. 4.0
    Peripheral device type: 

[Ubuntu-ha] [Bug 1864087] Re: Package: corosync (debian: 3.0.3-2, ubuntu: 3.0.2-1ubuntu2) needs merge

2020-03-04 Thread Launchpad Bug Tracker
This bug was fixed in the package corosync - 3.0.3-2ubuntu1

---
corosync (3.0.3-2ubuntu1) focal; urgency=medium

  * Merge with Debian unstable (LP: #1864087). Remaining changes:
- Skip autopkgtest for unprivileged containers: (LP: 1828228)
  - d/t/control: allow stderr and mark tests as skippable
  - d/t/{cfgtool,quorumtool}: skips when inside unpriv containers

corosync (3.0.3-2) unstable; urgency=medium

  * [d0a06e5] Separate the autopkgtests and make them generate artifacts
  * [8680d48] Run Salsa CI reprotest with diffoscope
  * [1d89c4f] Recognize the autopkgtest artifacts in Salsa CI
  * [8e09226] New patch: man: move cmap_keys man page from section 8 to 7

corosync (3.0.3-1) unstable; urgency=medium

  * [d103a33] New upstream release (3.0.3)
  * [e6f6831] Refresh our patches
  * [f1e85a3] Enable nozzle support
  * [19d3dd3] Package the votequorum simulator in corosync-vqsim
  * [8ae3235] Update Standards-Version to 4.4.1 (no changes required)
  * [bfd9560] Advertise Rules-Requires-Root: no

 -- Rafael David Tinoco   Fri, 21 Feb 2020
04:33:11 +

** Changed in: corosync (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1864087

Title:
  Package: corosync (debian: 3.0.3-2, ubuntu: 3.0.2-1ubuntu2) needs
  merge

Status in corosync package in Ubuntu:
  Fix Released

Bug description:
  Package: corosync (debian: 3.0.3-2, ubuntu: 3.0.2-1ubuntu2) needs
  merge

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1864087/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1865523] Re: [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

2020-03-04 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/fence-agents/+git/fence-agents/+merge/380242

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1865523

Title:
  [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

Status in fence-agents package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in fence-agents source package in Bionic:
  Confirmed
Status in pacemaker source package in Bionic:
  Confirmed
Status in fence-agents source package in Disco:
  Confirmed
Status in fence-agents source package in Eoan:
  Fix Released
Status in fence-agents source package in Focal:
  Fix Released

Bug description:
  Trying to setup a cluster with an iscsi shared disk, using fence_scsi
  as the fencing mechanism, I realized that fence_scsi is not working in
  Ubuntu Bionic. I first thought it was related to Azure environment
  (LP: #1864419), where I was trying this environment, but then, trying
  locally, I figured out that somehow pacemaker 1.1.18 is not fencing
  the shared scsi disk properly.

  Note: I was able to "backport" vanilla 1.1.19 from upstream and
  fence_scsi worked. I then tried 1.1.18 without all the quilt patches
  and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
  might tell us which commit has fixed the behaviour needed by the
  fence_scsi agent.

  (k)rafaeldtinoco@clubionic01:~$ crm conf show
  node 1: clubionic01.private
  node 2: clubionic02.private
  node 3: clubionic03.private
  primitive fence_clubionic stonith:fence_scsi \
  params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" 
devices="/dev/sda" \
  meta provides=unfencing
  property cib-bootstrap-options: \
  have-watchdog=false \
  dc-version=1.1.18-2b07d5c5a9 \
  cluster-infrastructure=corosync \
  cluster-name=clubionic \
  stonith-enabled=on \
  stonith-action=off \
  no-quorum-policy=stop \
  symmetric-cluster=true

  

  (k)rafaeldtinoco@clubionic02:~$ sudo crm_mon -1
  Stack: corosync
  Current DC: clubionic01.private (version 1.1.18-2b07d5c5a9) - partition with 
quorum
  Last updated: Mon Mar  2 15:55:30 2020
  Last change: Mon Mar  2 15:45:33 2020 by root via cibadmin on 
clubionic01.private

  3 nodes configured
  1 resource configured

  Online: [ clubionic01.private clubionic02.private clubionic03.private
  ]

  Active resources:

   fence_clubionic(stonith:fence_scsi):   Started
  clubionic01.private

  

  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist --in --read-keys 
--device=/dev/sda
LIO-ORG   cluster.bionic.   4.0
Peripheral device type: disk
PR generation=0x0, there are NO registered reservation keys

  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist -r /dev/sda
LIO-ORG   cluster.bionic.   4.0
Peripheral device type: disk
PR generation=0x0, there is NO reservation held

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fence-agents/+bug/1865523/+subscriptions



[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in unprivileged containers - autopkgtest failure

2020-02-20 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/corosync/+git/corosync/+merge/379581

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in unprivileged containers - autopkgtest
  failure

Status in Auto Package Testing:
  Invalid
Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in pcs package in Ubuntu:
  In Progress

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea, which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 

[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in unprivileged containers - autopkgtest failure

2020-02-20 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pacemaker/+git/pacemaker/+merge/379595

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in unprivileged containers - autopkgtest
  failure

Status in Auto Package Testing:
  Invalid
Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in pcs package in Ubuntu:
  In Progress

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea, which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 

[Ubuntu-ha] [Bug 1862947] Re: crmsh should be taken back to -main

2020-02-17 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 2.0.1-5ubuntu5

---
pacemaker (2.0.1-5ubuntu5) focal; urgency=medium

  * Fix: Last attempt i386 binary packages removal was wrong (-Nlibkate)
  * Ubuntu i386 binary compatibility only effort: (LP: #1863437)
- i386 binary package removal:
  - pacemaker
  - pacemaker-cli-utils
  - pacemaker-remote
  - pacemaker-resource-agents

 -- Rafael David Tinoco   Sat, 15 Feb 2020
18:52:20 +

** Changed in: pacemaker (Ubuntu Focal)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1862947

Title:
  crmsh should be taken back to -main

Status in crmsh package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in crmsh source package in Focal:
  Fix Released
Status in pacemaker source package in Focal:
  Fix Released

Bug description:
  crmsh is currently the most tested cluster resource manager
  configuration tool we have for Ubuntu (pcs being the second one, and
  already scheduled for more tests in order to replace it in a near
  future).

  With that said, currently crmsh is found at -universe:

  rafaeldtinoco@workstation:~$ rmadison crmsh
   crmsh | 1.2.5+hg1034-1ubuntu3 | trusty  | source, all
   crmsh | 1.2.5+hg1034-1ubuntu4 | trusty-updates  | source, all
   crmsh | 2.2.0-1   | xenial  | source, ...
   crmsh | 3.0.1-3ubuntu1| bionic/universe | source, all
   crmsh | 3.0.1-3ubuntu1| disco/universe  | source, all
   crmsh | 4.1.0-2ubuntu2| eoan/universe   | source, all
   crmsh | 4.2.0-2ubuntu1| focal/universe  | source, all

  It should be placed back in main, since there is no other way to
  configure pacemaker short of manually editing a CIB file. There has
  to be a supported (-main) way to configure clustering.

  There was already a MIR for crmsh:

  https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1205019

  So I'm including crmsh as a pacemaker dependency for it to be
  triggered as a component mismatch. This will likely be enough after
  some IRC discussions with other Ubuntu developers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1862947/+subscriptions



[Ubuntu-ha] [Bug 1863437] Re: [focal] pacemaker i386 should drop a few i386 only packages

2020-02-17 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 2.0.1-5ubuntu5

---
pacemaker (2.0.1-5ubuntu5) focal; urgency=medium

  * Fix: Last attempt i386 binary packages removal was wrong (-Nlibkate)
  * Ubuntu i386 binary compatibility only effort: (LP: #1863437)
- i386 binary package removal:
  - pacemaker
  - pacemaker-cli-utils
  - pacemaker-remote
  - pacemaker-resource-agents

 -- Rafael David Tinoco   Sat, 15 Feb 2020
18:52:20 +

** Changed in: pacemaker (Ubuntu Focal)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1863437

Title:
  [focal] pacemaker i386 should drop a few i386 only packages

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Focal:
  Fix Released

Bug description:
  When executing pacemaker i386 autopkgtests I realized that package
  "resource-agents" wasn't available for i386. When discussing this with
  @vorlon we came to the conclusion that some i386-only cluster
  packages could be removed from the repository, towards the effort of
  having *only the essential* packages available in i386 (to be run
  together with an amd64 host).

  IRC log:

  """
   resource-agents i386 binary package
   the pacemaker binary is /not/ present on i386 in the release pocket
   that may have been an overly aggressive removal
   vorlon: are u keeping pacemaker because of dependencies ?
   yeah, I removed it when I shouldn't have
  (https://launchpad.net/ubuntu/focal/i386/pacemaker)
   rafaeldtinoco: pacemaker-dev is a build-dep of something else we 
need,
  see the referenced germinate output for full details

  https://people.canonical.com/~ubuntu-archive/germinate-
  output/i386.focal/i386+build-depends

   pacemaker-dev is a build-dep of dlm
   and libesmtp-dev is a build-dep of pacemaker, not the other way 
around
   (and dlm is a build-dep of lvm2)
   ah gotcha
   dlm -> corosync -> pacemaker
   so, even though I removed the binary in error from the release 
pocket,
  the right answer is still for pacemaker/i386 binary to go away (leaving only 
the
  -dev and lib packages)

   do you want me to fix that up, or do you want to?
   to fix that we should do like we did to samba ?
   yeah

   looks like the binaries you'll need to drop are pacemaker,
  pacemaker-cli-utils, pacemaker-remote
   and I'll remove those from -proposed right now, so that those don't
  hold up migration

   but I'll hold off on adding the hint until they're dropped from the
  source
   deal, and next hint ill do with proper branch
  """

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1863437/+subscriptions



[Ubuntu-ha] [Bug 1863677] [NEW] fence_scsi needs lvm2 package so it should depend on it

2020-02-17 Thread Launchpad Bug Tracker
You have been subscribed to a public bug by Rafael David Tinoco (rafaeldtinoco):

Playing with fence_scsi I have noticed that one of my images did not
have the lvm2 package installed. With that, the following error occurred:

$ sudo fence_scsi -n clufocal03 --action=status
2020-02-17 19:57:02,324 ERROR: Unable to run /sbin/vgs --noheadings --separator 
: --sort pv_uuid --options vg_attr,pv_name --config 'global { locking_type = 0 
} devices { preferred_names = [ "^/dev/dm" ] }'

when trying to scsi fence one of my nodes.

fence-agents should depend on and/or suggest all packages required for
the correct execution of the provided agents.
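Until the packaging dependency is fixed, a quick workaround is to verify up front that the LVM tooling the agent shells out to is present; a minimal hedged sketch (the agent's exact requirements may vary by release):

```shell
# fence_scsi invokes /sbin/vgs to enumerate volume groups; warn early
# if the binary is missing instead of failing mid-fence
if ! command -v vgs >/dev/null 2>&1; then
    echo "vgs not found: install lvm2 before using fence_scsi" >&2
fi
```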

** Affects: fence-agents (Ubuntu)
 Importance: Undecided
 Status: New

-- 
fence_scsi needs lvm2 package so it should depend on it
https://bugs.launchpad.net/bugs/1863677
You received this bug notification because you are a member of Ubuntu High 
Availability Team, which is subscribed to the bug report.



[Ubuntu-ha] [Bug 1852122] Re: ocfs2-tools is causing kernel panics in Ubuntu Focal (Ubuntu-5.4.0-9.12)

2020-02-17 Thread Launchpad Bug Tracker
This bug was fixed in the package linux - 5.3.0-40.32

---
linux (5.3.0-40.32) eoan; urgency=medium

  * eoan/linux: 5.3.0-40.32 -proposed tracker (LP: #1861214)

  * No sof soundcard for 'ASoC: CODEC DAI intel-hdmi-hifi1 not registered' after
modprobe sof (LP: #1860248)
- ASoC: SOF: Intel: fix HDA codec driver probe with multiple controllers

  * ocfs2-tools is causing kernel panics in Ubuntu Focal (Ubuntu-5.4.0-9.12)
(LP: #1852122)
- ocfs2: fix the crash due to call ocfs2_get_dlm_debug once less

  * QAT drivers for C3XXX and C62X not included as modules (LP: #1845959)
- [Config] CRYPTO_DEV_QAT_C3XXX=m, CRYPTO_DEV_QAT_C62X=m and
  CRYPTO_DEV_QAT_DH895xCC=m

  * Eoan update: upstream stable patchset 2020-01-24 (LP: #1860816)
- scsi: lpfc: Fix discovery failures when target device connectivity bounces
- scsi: mpt3sas: Fix clear pending bit in ioctl status
- scsi: lpfc: Fix locking on mailbox command completion
- Input: atmel_mxt_ts - disable IRQ across suspend
- f2fs: fix to update time in lazytime mode
- iommu: rockchip: Free domain on .domain_free
- iommu/tegra-smmu: Fix page tables in > 4 GiB memory
- dmaengine: xilinx_dma: Clear desc_pendingcount in xilinx_dma_reset
- scsi: target: compare full CHAP_A Algorithm strings
- scsi: lpfc: Fix SLI3 hba in loop mode not discovering devices
- scsi: csiostor: Don't enable IRQs too early
- scsi: hisi_sas: Replace in_softirq() check in hisi_sas_task_exec()
- powerpc/pseries: Mark accumulate_stolen_time() as notrace
- powerpc/pseries: Don't fail hash page table insert for bolted mapping
- powerpc/tools: Don't quote $objdump in scripts
- dma-debug: add a schedule point in debug_dma_dump_mappings()
- leds: lm3692x: Handle failure to probe the regulator
- clocksource/drivers/asm9260: Add a check for of_clk_get
- clocksource/drivers/timer-of: Use unique device name instead of timer
- powerpc/security/book3s64: Report L1TF status in sysfs
- powerpc/book3s64/hash: Add cond_resched to avoid soft lockup warning
- ext4: update direct I/O read lock pattern for IOCB_NOWAIT
- ext4: iomap that extends beyond EOF should be marked dirty
- jbd2: Fix statistics for the number of logged blocks
- scsi: tracing: Fix handling of TRANSFER LENGTH == 0 for READ(6) and 
WRITE(6)
- scsi: lpfc: Fix duplicate unreg_rpi error in port offline flow
- f2fs: fix to update dir's i_pino during cross_rename
- clk: qcom: Allow constant ratio freq tables for rcg
- clk: clk-gpio: propagate rate change to parent
- irqchip/irq-bcm7038-l1: Enable parent IRQ if necessary
- irqchip: ingenic: Error out if IRQ domain creation failed
- fs/quota: handle overflows of sysctl fs.quota.* and report as unsigned 
long
- scsi: lpfc: fix: Coverity: lpfc_cmpl_els_rsp(): Null pointer dereferences
- PCI: rpaphp: Fix up pointer to first drc-info entry
- scsi: ufs: fix potential bug which ends in system hang
- powerpc/pseries/cmm: Implement release() function for sysfs device
- PCI: rpaphp: Don't rely on firmware feature to imply drc-info support
- PCI: rpaphp: Annotate and correctly byte swap DRC properties
- PCI: rpaphp: Correctly match ibm, my-drc-index to drc-name when using drc-
  info
- powerpc/security: Fix wrong message when RFI Flush is disable
- scsi: atari_scsi: sun3_scsi: Set sg_tablesize to 1 instead of SG_NONE
- clk: pxa: fix one of the pxa RTC clocks
- bcache: at least try to shrink 1 node in bch_mca_scan()
- HID: quirks: Add quirk for HP MSU1465 PIXART OEM mouse
- HID: logitech-hidpp: Silence intermittent get_battery_capacity errors
- ARM: 8937/1: spectre-v2: remove Brahma-B53 from hardening
- libnvdimm/btt: fix variable 'rc' set but not used
- HID: Improve Windows Precision Touchpad detection.
- HID: rmi: Check that the RMI_STARTED bit is set before unregistering the 
RMI
  transport device
- watchdog: Fix the race between the release of watchdog_core_data and cdev
- scsi: pm80xx: Fix for SATA device discovery
- scsi: ufs: Fix error handing during hibern8 enter
- scsi: scsi_debug: num_tgts must be >= 0
- scsi: NCR5380: Add disconnect_mask module parameter
- scsi: iscsi: Don't send data to unbound connection
- scsi: target: iscsi: Wait for all commands to finish before freeing a
  session
- gpio: mpc8xxx: Don't overwrite default irq_set_type callback
- apparmor: fix unsigned len comparison with less than zero
- scripts/kallsyms: fix definitely-lost memory leak
- powerpc: Don't add -mabi= flags when building with Clang
- cdrom: respect device capabilities during opening action
- perf script: Fix brstackinsn for AUXTRACE
- perf regs: Make perf_reg_name() return "unknown" instead of NULL
- s390/zcrypt: handle new reply code FILTERED_BY_HYPERVISOR
- libfdt: define INT32_MAX and UINT32_MAX in libfdt_env.h
- 

[Ubuntu-ha] [Bug 1863437] Re: [focal] pacemaker i386 should drop a few i386 only packages

2020-02-15 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pacemaker/+git/pacemaker/+merge/379254

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1863437

Title:
  [focal] pacemaker i386 should drop a few i386 only packages

Status in pacemaker package in Ubuntu:
  In Progress
Status in pacemaker source package in Focal:
  In Progress

Bug description:
  When executing pacemaker i386 autopkgtests I realized that package
  "resource-agents" wasn't available for i386. When discussing this with
   @vorlon we came to the conclusion that some i386-only cluster
  packages could be removed from the repository, towards the effort of
  having *only the essential* packages available in i386 (to be run
  together with an amd64 host).

  IRC log:

  """
   resource-agents i386 binary package
   the pacemaker binary is /not/ present on i386 in the release pocket
   that may have been an overly aggressive removal
   vorlon: are u keeping pacemaker because of dependencies ?
   yeah, I removed it when I shouldn't have
  (https://launchpad.net/ubuntu/focal/i386/pacemaker)
   rafaeldtinoco: pacemaker-dev is a build-dep of something else we 
need,
  see the referenced germinate output for full details

  https://people.canonical.com/~ubuntu-archive/germinate-
  output/i386.focal/i386+build-depends

   pacemaker-dev is a build-dep of dlm
   and libesmtp-dev is a build-dep of pacemaker, not the other way 
around
   (and dlm is a build-dep of lvm2)
   ah gotcha
   dlm -> corosync -> pacemaker
   so, even though I removed the binary in error from the release 
pocket,
  the right answer is still for pacemaker/i386 binary to go away (leaving only 
the
  -dev and lib packages)

   do you want me to fix that up, or do you want to?
   to fix that we should do like we did to samba ?
   yeah

   looks like the binaries you'll need to drop are pacemaker,
  pacemaker-cli-utils, pacemaker-remote
   and I'll remove those from -proposed right now, so that those don't
  hold up migration

   but I'll hold off on adding the hint until they're dropped from the
  source
   deal, and next hint ill do with proper branch
  """

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1863437/+subscriptions



[Ubuntu-ha] [Bug 1862947] Re: crmsh should be taken back to -main

2020-02-12 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pacemaker/+git/pacemaker/+merge/378958

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1862947

Title:
  crmsh should be taken back to -main

Status in crmsh package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  In Progress
Status in crmsh source package in Focal:
  Fix Released
Status in pacemaker source package in Focal:
  In Progress

Bug description:
  crmsh is currently the most tested cluster resource manager
  configuration tool we have for Ubuntu (pcs being the second one, and
  already scheduled for more tests in order to replace it in a near
  future).

  With that said, currently crmsh is found at -universe:

  rafaeldtinoco@workstation:~$ rmadison crmsh
   crmsh | 1.2.5+hg1034-1ubuntu3 | trusty  | source, all
   crmsh | 1.2.5+hg1034-1ubuntu4 | trusty-updates  | source, all
   crmsh | 2.2.0-1   | xenial  | source, ...
   crmsh | 3.0.1-3ubuntu1| bionic/universe | source, all
   crmsh | 3.0.1-3ubuntu1| disco/universe  | source, all
   crmsh | 4.1.0-2ubuntu2| eoan/universe   | source, all
   crmsh | 4.2.0-2ubuntu1| focal/universe  | source, all

  It should be placed back in main, since there is no other way to
  configure pacemaker short of manually editing a CIB file. There has
  to be a supported (-main) way to configure clustering.

  There was already a MIR for crmsh:

  https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1205019

  So I'm including crmsh as a pacemaker dependency for it to be
  triggered as a component mismatch. This will likely be enough after
  some IRC discussions with other Ubuntu developers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1862947/+subscriptions



[Ubuntu-ha] [Bug 613793] Re: o2cb stopping Failed

2020-02-08 Thread Launchpad Bug Tracker
[Expired for ocfs2-tools (Ubuntu) because there has been no activity for
60 days.]

** Changed in: ocfs2-tools (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/613793

Title:
  o2cb stopping Failed

Status in ocfs2-tools package in Ubuntu:
  Expired

Bug description:
  Binary package hint: ocfs2-tools

  Ubuntu release:
  Description:Ubuntu 10.04.1 LTS
  Release:10.04
  Package version:
  ocfs2-tools  1.4.3-1

  The script /etc/init.d/o2cb exits with an error when stopped and the services 
do not stop.
  Here the error message:

  /etc/init.d/o2cb stop
  Stopping O2CB cluster ocfs2: Failed
  Unable to stop cluster as heartbeat region still active

  I have identified a first error in the script. In the function
  clean_heartbeat the following if:

  if [ ! -f "$(configfs_path)/cluster/${CLUSTER}/heartbeat/*" ]
  then
  return
  fi

  is always true and the function returns. If the intention was to check
  for the existence of the directory, the code must be:

  if [ ! -d "$(configfs_path)/cluster/${CLUSTER}/heartbeat/" ]
  then
  echo "OK"
  return
  fi
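  The underlying pitfall: inside double quotes the `*` is not expanded, so
  the original test looks for a file literally named `*`, which never
  exists, making the `! -f` branch always true. A standalone demonstration
  with illustrative paths:

```shell
# create a heartbeat-like directory containing one region file
tmpd=$(mktemp -d)
mkdir -p "$tmpd/heartbeat"
touch "$tmpd/heartbeat/region1"

# quoted glob: -f tests for a file literally named '*', which does not
# exist, so this branch is always taken despite the region file
if [ ! -f "$tmpd/heartbeat/*" ]; then
    echo "quoted glob test is always true"
fi

# directory test, as in the proposed fix, behaves as intended
if [ -d "$tmpd/heartbeat/" ]; then
    echo "directory exists"
fi

rm -rf "$tmpd"
```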

  An error persists even after these changes.

  /etc/init.d/o2cb stop
  Cleaning heartbeat on ocfs2: Failed
  At least one heartbeat region still active

  I added some lines for debugging by changing the function so:

  #
  # clean_heartbeat()
  # Removes the inactive heartbeat regions
  #
  clean_heartbeat()
  {
  if [ "$#" -lt "1" -o -z "$1" ]
  then
  echo "clean_heartbeat(): Requires an argument" >&2
  return 1
  fi
  CLUSTER="$1"

  if [ ! -d "$(configfs_path)/cluster/${CLUSTER}/heartbeat/" ]
  then
  echo "OK"
  return
  fi

  echo -n "Cleaning heartbeat on ${CLUSTER}: "

  ls -1 "$(configfs_path)/cluster/${CLUSTER}/heartbeat/" | while read HBUUID
  do
  if [ ! -d "$(configfs_path)/cluster/${CLUSTER}/heartbeat/${HBUUID}" ]
  then
  continue
  fi

  echo
  echo "DEBUG ocfs2_hb_ctl -I -u ${HBUUID} 2>&1"
  OUTPUT="`ocfs2_hb_ctl -I -u ${HBUUID} 2>&1`"
  if [ $? != 0 ]
  then
  echo "Failed"
  echo "${OUTPUT}" >&2
  exit 1
  fi

  echo "DEBUG ${OUTPUT}"
  REF="`echo ${OUTPUT} | awk '/refs/ {print $2; exit;}' 2>&1`"
  echo "DEBUG REF=$REF"
  if [ $REF != 0 ]
  then
     echo "Failed"
     echo "At least one heartbeat region still active" >&2
     exit 1
  else
     OUTPUT="`ocfs2_hb_ctl -K -u ${HBUUID} 2>&1`"
  fi
  done
  if [ $? = 1 ]
  then
  exit 1
  fi
  echo "OK"
  }

  The new output is:

  /etc/init.d/o2cb stop
  Cleaning heartbeat on ocfs2:
  DEBUG ocfs2_hb_ctl -I -u FC046AD7B2584E7EB12A7293993C81B0 2>&1
  DEBUG FC046AD7B2584E7EB12A7293993C81B0: 2 refs
  DEBUG REF=2
  Failed
  At least one heartbeat region still active

  At this point I checked the ocfs2_hb_ctl source code. The command
  ocfs2_hb_ctl -I -u ${HBUUID} returns the number of references in a
  semaphore used by programs that manage the ocfs2 filesystem. In the
  source file libo2cb/o2cb_api.c:
  - the function o2cb_mutex_down increases the second semaphore;
  - the function o2cb_mutex_up decreases the first semaphore;
  - the function __o2cb_get_ref increases the first semaphore;
  - the function __o2cb_drop_ref decreases the first semaphore.

  I have not found the point where the second semaphore is decreased.
  This could be the cause of the error.
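  As an aside, the awk extraction used in the debug patch above can be
  exercised standalone against the observed output; a minimal sketch using
  the sample line from the debug run:

```shell
# sample line as printed by 'ocfs2_hb_ctl -I -u <uuid>' in the debug output
OUTPUT="FC046AD7B2584E7EB12A7293993C81B0: 2 refs"

# match the line containing "refs" and print the second whitespace-separated
# field, i.e. the reference count
REF=$(echo "$OUTPUT" | awk '/refs/ {print $2; exit}')
echo "$REF"   # prints 2
```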

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/613793/+subscriptions



[Ubuntu-ha] [Bug 1745155] Re: o2image fails on s390x

2020-01-20 Thread Launchpad Bug Tracker
This bug was fixed in the package ocfs2-tools - 1.8.6-2ubuntu1

---
ocfs2-tools (1.8.6-2ubuntu1) focal; urgency=medium

  * debian/control: specify only supported architectures (LP: #1745155)

 -- Rafael David Tinoco   Fri, 29 Nov 2019
13:12:00 +

** Changed in: ocfs2-tools (Ubuntu Focal)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1745155

Title:
  o2image fails on s390x

Status in OCFS2 Tools:
  New
Status in ocfs2-tools package in Ubuntu:
  Fix Released
Status in ocfs2-tools source package in Eoan:
  Won't Fix
Status in ocfs2-tools source package in Focal:
  Fix Released

Bug description:
  o2image fails on s390x:

  dd if=/dev/zero of=/tmp/disk bs=1M count=200
  losetup --find --show /tmp/disk
  mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 /dev/loop0 # loop dev 
found in prev step

  Then this comand:
  o2image /dev/loop0 /tmp/disk.image

  Results in:
  Segmentation fault (core dumped)

  dmesg:
  [  862.642556] ocfs2: Registered cluster interface o2cb
  [  870.880635] User process fault: interruption code 003b ilc:3 in 
o2image[10c18+2e000]
  [  870.880643] Failing address:  TEID: 0800
  [  870.880644] Fault in primary space mode while using user ASCE.
  [  870.880646] AS:3d8f81c7 R3:0024 
  [  870.880650] CPU: 0 PID: 1484 Comm: o2image Not tainted 4.13.0-30-generic 
#33-Ubuntu
  [  870.880651] Hardware name: IBM 2964 N63 400 (KVM/Linux)
  [  870.880652] task: 3cb81200 task.stack: 3d50c000
  [  870.880653] User PSW : 070500018000 00010c184212
  [  870.880654]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:1 AS:0 CC:0 PM:0 
RI:0 EA:3
  [  870.880655] User GPRS: 000144f0cc10 0001 0001 

  [  870.880655] 000144ef6090 000144f13cc0 
0001
  [  870.880656]000144ef6000 000144ef3280 000144f13cd8 
00037ee8
  [  870.880656]03ff965a6000 03ffe5e7e410 00010c183bc6 
03ffe5e7e370
  [  870.880663] User Code: 00010c184202: b9080034  agr  %r3,%r4
 00010c184206: c02b0007  nilf %r2,7
#00010c18420c: eb2120df  sllk %r2,%r1,0(%r2)
>00010c184212: e3103090  llgc %r1,0(%r3)
 00010c184218: b9f61042  ork  %r4,%r2,%r1
 00010c18421c: 1421  nr   %r2,%r1
 00010c18421e: 42403000  stc  %r4,0(%r3)
 00010c184222: 1322  lcr  %r2,%r2
  [  870.880672] Last Breaking-Event-Address:
  [  870.880675]  [<00010c18e4ca>] 0x10c18e4ca

  Upstream issue:
  https://github.com/markfasheh/ocfs2-tools/issues/22

  This was triggered by our ocfs2-tools dep8 tests:
  http://autopkgtest.ubuntu.com/packages/o/ocfs2-tools/bionic/s390x

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1745155/+subscriptions



[Ubuntu-ha] [Bug 1858485] Re: Won't build with py2, since python-marko is gone

2020-01-17 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/haproxy/+git/haproxy/+merge/36

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1858485

Title:
  Won't build with py2, since python-marko is gone

Status in haproxy package in Ubuntu:
  In Progress
Status in haproxy package in Debian:
  New

Bug description:
  haproxy runs debian/dconv/haproxy-dconv.py when building its
  documentation. That script is py2, and requires python2 and python-
  marko.

  python-marko is gone and is an NBS currently. The src:marko package
  now only builds the python3 version, so we need to convert haproxy-
  dconv.py to py3.

  haproxy-dconv.py comes from https://github.com/cbonte/haproxy-dconv and
  already supports py3, so this boils down to updating the copy of
  haproxy-dconv in the debian package and updating the debian/patches
  /debianize-dconv.patch patch

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1858485/+subscriptions



[Ubuntu-ha] [Bug 1858485] Re: Won't build with py2, since python-marko is gone

2020-01-15 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/haproxy/+git/haproxy/+merge/377643



[Ubuntu-ha] [Bug 1855568] Re: pcs depends on python3-tornado (>= 6) but it won't be installed

2020-01-11 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pcs/+git/pcs/+merge/377171

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1855568

Title:
  pcs depends on python3-tornado (>= 6) but it won't be installed

Status in pcs package in Ubuntu:
  In Progress

Bug description:
  PCS currently depends on python3-tornado >= 6 but that package does
  not exist.

  
  (c)rafaeldtinoco@pcsdevel:~$ apt-get install pcs
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Some packages could not be installed. This may mean that you have
  requested an impossible situation or if you are using the unstable
  distribution that some required packages have not yet been created
  or been moved out of Incoming.
  The following information may help to resolve the situation:

  The following packages have unmet dependencies:
   pcs : Depends: python3-tornado (>= 6) but it is not going to be installed
 Recommends: pacemaker (>= 2.0) but it is not going to be installed
  E: Unable to correct problems, you have held broken packages.
  

  In Debian:

  python3-tornado | 6.0.3+really5.1.1-2 | testing  | amd64, arm64, armel, armhf, i386, mips64el, mipsel, ppc64el, s390x
  python3-tornado | 6.0.3+really5.1.1-2 | unstable | amd64, arm64, armel, armhf, i386, mips64el, mipsel, ppc64el, s390x

  Currently PCS needs python3-tornado to be upgraded to 6 OR to have the
  fixes for 5.1.1 to be re-added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pcs/+bug/1855568/+subscriptions



[Ubuntu-ha] [Bug 1854988] Re: Block automatic sync from debian for the next LTS

2019-12-08 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 2.0.10-1ubuntu1

---
haproxy (2.0.10-1ubuntu1) focal; urgency=medium

  * Add Ubuntu version to block automatic sync from Debian, as we want
to stay in the 2.0.x LTS series for Focal (LP: #1854988)

 -- Andreas Hasenack   Tue, 03 Dec 2019 15:38:53
-0300

** Changed in: haproxy (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1854988

Title:
  Block automatic sync from debian for the next LTS

Status in haproxy package in Ubuntu:
  Fix Released

Bug description:
  Upstream haproxy has the concept of an LTS release, and a stable (non-
  LTS) release. Currently, these are (see table at
  http://www.haproxy.org/):

  stable: 2.1.x
  stable LTS: 2.0.x

  Debian unstable is at the moment tracking 2.0.x, and debian
  experimental is tracking 2.1.x.

  For the next ubuntu lts release, we would like to stay on the 2.0.x
  track, which is upstream's LTS.

  From the 2.0 upstream announcement[2]:
  """
  The development will go on with 2.1 which will not be LTS, so it will
  experience quite some breakage to prepare 2.2 which will be LTS and
  expected approximately at the same date next year.
  """

  "same date next year" would be roughly June 2020, so not in time for
  Ubuntu 20.04.

  Since this package is a sync, whenever the debian maintainers decide
  to push 2.1.x into unstable, we will automatically sync it (if we are
  not yet in feature freeze mode). To avoid that, a few options were
  explored[1], and adding an ubuntu version to the package seems the
  best one.

  Therefore, the package will have an ubuntu version, have its
  maintainer changed, but no extra delta, just to stop the automatic
  sync from happening. Whenever new versions of the 2.0.x track are
  uploaded to debian/sid, we will merge it manually into ubuntu.

  1. https://lists.ubuntu.com/archives/ubuntu-devel/2019-December/040853.html
  2. https://www.mail-archive.com/haproxy@formilux.org/msg34215.html
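
  The mechanics can be illustrated with the versions from this changelog
  (sort -V approximates dpkg's version comparison for this particular case):

```shell
# An "ubuntuN" suffix on the Debian revision sorts higher than the plain
# revision, so the Ubuntu upload is considered newer and the automatic
# sync from Debian no longer overwrites it.
newest=$(printf '2.0.10-1\n2.0.10-1ubuntu1\n' | sort -V | tail -n 1)
echo "$newest"
```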

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1854988/+subscriptions



[Ubuntu-ha] [Bug 1855568] Re: pcs depends on python3-tornado (>= 6) but it won't be installed

2019-12-07 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/pcs/+git/pcs/+merge/376492



[Ubuntu-ha] [Bug 1855568] [NEW] pcs depends on python3-tornado (>= 6) but it won't be installed

2019-12-07 Thread Launchpad Bug Tracker
You have been subscribed to a public bug by Rafael David Tinoco (rafaeldtinoco):

PCS currently depends on python3-tornado >= 6 but that package does not
exist.


(c)rafaeldtinoco@pcsdevel:~$ apt-get install pcs
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 pcs : Depends: python3-tornado (>= 6) but it is not going to be installed
   Recommends: pacemaker (>= 2.0) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.


In Debian:

python3-tornado | 6.0.3+really5.1.1-2 | testing  | amd64, arm64, armel, 
armhf, i386, mips64el, mipsel, ppc64el, s390x
python3-tornado | 6.0.3+really5.1.1-2 | unstable | amd64, arm64, armel, 
armhf, i386, mips64el, mipsel, ppc64el, s390x

Currently PCS needs python3-tornado to be upgraded to 6 OR to have the
fixes for 5.1.1 to be re-added.

** Affects: pcs (Ubuntu)
 Importance: High
 Assignee: Rafael David Tinoco (rafaeldtinoco)
 Status: In Progress

-- 
pcs depends on python3-tornado (>= 6) but it won't be installed
https://bugs.launchpad.net/bugs/1855568
You received this bug notification because you are a member of Ubuntu High 
Availability Team, which is subscribed to the bug report.



[Ubuntu-ha] [Bug 1848902] Re: haproxy in bionic can get stuck

2019-12-02 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 1.8.8-1ubuntu0.8

---
haproxy (1.8.8-1ubuntu0.8) bionic; urgency=medium

  * d/p/lp-1848902-MINOR-systemd-consider-exit-status-143-as-successful.patch:
fix potential hang in haproxy (LP: #1848902)

 -- Christian Ehrhardt   Tue, 12 Nov
2019 13:16:22 +0100

** Changed in: haproxy (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1848902

Title:
  haproxy in bionic can get stuck

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Released

Bug description:
  [Impact]

   * The master process will exit with the status of the last worker.
     When the worker is killed with SIGTERM, it is expected to get 143 as an
     exit status. Therefore, we consider this exit status as normal from a
     systemd point of view. If it happens when not stopping, the systemd
     unit is configured to always restart, so it has no adverse effect.

   * Backport upstream fix - adding another accepted RC to the systemd
     service
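
   Concretely, the backported change amounts to whitelisting that exit code
   in the unit file; a sketch of the relevant fragment (surrounding lines
   abbreviated, the exact unit content may differ slightly):

```ini
# /lib/systemd/system/haproxy.service (fragment)
[Service]
# SIGTERM makes the worker (and thus the master) exit with 143;
# treat that as a clean exit instead of a failure.
SuccessExitStatus=143
```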

  [Test Case]

   * You want to install haproxy and have it running, then SIGTERM it a lot.
     With the fix systemd will restart the service every time (up to the
     restart limit). In the bad case it will just stay down and systemd
     won't even try to restart it.

     $ apt install haproxy
     $ for x in {1..100}; do pkill -TERM -x haproxy ; sleep 0.1 ; done
     $ systemctl status haproxy

 The above is a hacky way to trigger some A/B behavior on the fix.
 It isn't perfect as systemd restart counters will kick in and you 
 essentially check a secondary symptom.
 I'd recommend in addition running the following:

     $ apt install haproxy
     $ for x in {1..1000}; do pkill -TERM -x haproxy ; sleep 0.001 ; \
         systemctl reset-failed haproxy.service ; done
     $ systemctl status haproxy

 You can do so with even smaller sleeps; that should keep the service up 
 and running (this isn't changing with the fix, but should work with the 
 new code).

  [Regression Potential]

   * This eventually is a conffile modification, so if there are other
     modifications done by the user they will get a prompt. But that isn't a
     regression. I checked the code and I can't think of another RC=143
     that would, because of this change, no longer be detected as an error.
     I really think that other than the update itself triggering a restart
     (as usual for services) there is no further regression potential.

  [Other Info]

   * Fix has already been active in the IS hosted cloud without issues for
     a while
   * Also, reports (comment #5) show that others use this in production as
     well

  ---

  On a Bionic/Stein cloud, after a network partition, we saw several
  units (glance, swift-proxy and cinder) fail to start haproxy, like so:

  root@juju-df624b-6-lxd-4:~# systemctl status haproxy.service
  ● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor 
preset: enabled)
     Active: failed (Result: exit-code) since Sun 2019-10-20 00:23:18 UTC; 1h 
35min ago
   Docs: man:haproxy(1)
     file:/usr/share/doc/haproxy/configuration.txt.gz
    Process: 2002655 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE 
$EXTRAOPTS (code=exited, status=143)
    Process: 2002649 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS 
(code=exited, status=0/SUCCESS)
   Main PID: 2002655 (code=exited, status=143)

  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Starting HAProxy Load 
Balancer...
  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Started HAProxy Load Balancer.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopping HAProxy Load 
Balancer...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 
(2002655) : Exiting Master process...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [ALERT] 292/001652 
(2002655) : Current worker 2002661 exited with code 143
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 
(2002655) : All workers exited. Exiting... (143)
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Main process 
exited, code=exited, status=143/n/a
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Failed with 
result 'exit-code'.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopped HAProxy Load Balancer.
  root@juju-df624b-6-lxd-4:~#

  The Debian maintainer came up with the following patch for this:

    https://www.mail-archive.com/haproxy@formilux.org/msg30477.html

  Which was added to the 1.8.10-1 Debian upload and merged into upstream 1.8.13.
  Unfortunately Bionic is on 1.8.8-1ubuntu0.4 and doesn't have this patch.

  Please consider pulling this patch into an SRU for Bionic.

To manage notifications about this bug go to:

[Ubuntu-ha] [Bug 1745155] Re: o2image fails on s390x

2019-11-29 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/ocfs2-tools/+git/ocfs2-tools/+merge/376188

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1745155

Title:
  o2image fails on s390x

Status in OCFS2 Tools:
  New
Status in ocfs2-tools package in Ubuntu:
  Confirmed
Status in ocfs2-tools source package in Eoan:
  Confirmed
Status in ocfs2-tools source package in Focal:
  Confirmed

Bug description:
  o2image fails on s390x:

  dd if=/dev/zero of=/tmp/disk bs=1M count=200
  losetup --find --show /tmp/disk
  mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 /dev/loop0 # loop dev 
found in prev step

  Then this comand:
  o2image /dev/loop0 /tmp/disk.image

  Results in:
  Segmentation fault (core dumped)

  dmesg:
  [  862.642556] ocfs2: Registered cluster interface o2cb
  [  870.880635] User process fault: interruption code 003b ilc:3 in 
o2image[10c18+2e000]
  [  870.880643] Failing address:  TEID: 0800
  [  870.880644] Fault in primary space mode while using user ASCE.
  [  870.880646] AS:3d8f81c7 R3:0024 
  [  870.880650] CPU: 0 PID: 1484 Comm: o2image Not tainted 4.13.0-30-generic 
#33-Ubuntu
  [  870.880651] Hardware name: IBM 2964 N63 400 (KVM/Linux)
  [  870.880652] task: 3cb81200 task.stack: 3d50c000
  [  870.880653] User PSW : 070500018000 00010c184212
  [  870.880654]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:1 AS:0 CC:0 PM:0 
RI:0 EA:3
  [  870.880655] User GPRS: 000144f0cc10 0001 0001 

  [  870.880655] 000144ef6090 000144f13cc0 
0001
  [  870.880656]000144ef6000 000144ef3280 000144f13cd8 
00037ee8
  [  870.880656]03ff965a6000 03ffe5e7e410 00010c183bc6 
03ffe5e7e370
  [  870.880663] User Code: 00010c184202: b9080034  agr  %r3,%r4
                            00010c184206: c02b0007  nilf %r2,7
                           #00010c18420c: eb2120df  sllk %r2,%r1,0(%r2)
                           >00010c184212: e3103090  llgc %r1,0(%r3)
                            00010c184218: b9f61042  ork  %r4,%r2,%r1
                            00010c18421c: 1421      nr   %r2,%r1
                            00010c18421e: 42403000  stc  %r4,0(%r3)
                            00010c184222: 1322      lcr  %r2,%r2
  [  870.880672] Last Breaking-Event-Address:
  [  870.880675]  [<00010c18e4ca>] 0x10c18e4ca

  Upstream issue:
  https://github.com/markfasheh/ocfs2-tools/issues/22

  This was triggered by our ocfs2-tools dep8 tests:
  http://autopkgtest.ubuntu.com/packages/o/ocfs2-tools/bionic/s390x

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1745155/+subscriptions



[Ubuntu-ha] [Bug 1848902] Re: haproxy in bionic can get stuck

2019-11-12 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/haproxy/+git/haproxy/+merge/375433

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1848902

Title:
  haproxy in bionic can get stuck

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Triaged

Bug description:
  [Impact]

   * The master process will exit with the status of the last worker. 
 When the worker is killed with SIGTERM, it is expected to get 143 as an
 exit status. Therefore, we consider this exit status as normal from a
 systemd point of view. If it happens when not stopping, the systemd
 unit is configured to always restart, so it has no adverse effect.

   * Backport upstream fix - adding another accepted RC to the systemd 
 service

  [Test Case]

   * You want to install haproxy and have it running, then SIGTERM it a lot.
 With the fix systemd will restart the service every time (up to the
 restart limit). In the bad case it will just stay down and systemd
 won't even try to restart it.

 $ apt install haproxy
 $ for x in {1..100}; do pkill -TERM -x haproxy ; sleep 0.1 ; done
 $ systemctl status haproxy

  [Regression Potential]

   * This eventually is a conffile modification, so if there are other 
 modifications done by the user they will get a prompt. But that isn't a 
 regression. I checked the code and I can't think of another RC=143
 that would, because of this change, no longer be detected as an error.
 I really think that other than the update itself triggering a restart
 (as usual for services) there is no further regression potential.

  [Other Info]
   
   * Fix has already been active in the IS hosted cloud without issues for
 a while
   * Also, reports (comment #5) show that others use this in production as 
 well

  ---

  On a Bionic/Stein cloud, after a network partition, we saw several
  units (glance, swift-proxy and cinder) fail to start haproxy, like so:

  root@juju-df624b-6-lxd-4:~# systemctl status haproxy.service
  ● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor 
preset: enabled)
     Active: failed (Result: exit-code) since Sun 2019-10-20 00:23:18 UTC; 1h 
35min ago
   Docs: man:haproxy(1)
     file:/usr/share/doc/haproxy/configuration.txt.gz
    Process: 2002655 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE 
$EXTRAOPTS (code=exited, status=143)
    Process: 2002649 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS 
(code=exited, status=0/SUCCESS)
   Main PID: 2002655 (code=exited, status=143)

  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Starting HAProxy Load 
Balancer...
  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Started HAProxy Load Balancer.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopping HAProxy Load 
Balancer...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 
(2002655) : Exiting Master process...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [ALERT] 292/001652 
(2002655) : Current worker 2002661 exited with code 143
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 
(2002655) : All workers exited. Exiting... (143)
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Main process 
exited, code=exited, status=143/n/a
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Failed with 
result 'exit-code'.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopped HAProxy Load Balancer.
  root@juju-df624b-6-lxd-4:~#

  The Debian maintainer came up with the following patch for this:

    https://www.mail-archive.com/haproxy@formilux.org/msg30477.html

  Which was added to the 1.8.10-1 Debian upload and merged into upstream 1.8.13.
  Unfortunately Bionic is on 1.8.8-1ubuntu0.4 and doesn't have this patch.

  Please consider pulling this patch into an SRU for Bionic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1848902/+subscriptions



[Ubuntu-ha] [Bug 1841936] Re: Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing builds against 1.1.1 (dh key size)

2019-11-04 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 1.8.8-1ubuntu0.6

---
haproxy (1.8.8-1ubuntu0.6) bionic; urgency=medium

  * Fix issues around dh_params when building against openssl 1.1.1
to avoid regressing the minimal key size (LP: 1841936)
- d/p/lp-1841936-BUG-MEDIUM-ssl-tune.ssl.default-dh-param-value-ignor.patch
- d/p/lp-1841936-CLEANUP-ssl-make-ssl_sock_load_dh_params-handle-errc.patch

haproxy (1.8.8-1ubuntu0.5) bionic; urgency=medium

  * no change rebuild to pick up openssl 1.1.1 and via that
TLSv1.3 (LP: #1841936)

 -- Christian Ehrhardt   Wed, 23 Oct
2019 11:37:53 +0200

** Changed in: haproxy (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing
  builds against 1.1.1 (dh key size)

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Committed
Status in haproxy source package in Bionic:
  Fix Released
Status in haproxy source package in Disco:
  Fix Released
Status in haproxy source package in Eoan:
  Fix Released
Status in haproxy source package in Focal:
  Fix Committed

Bug description:
  [Impact-Bionic]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just with a no-change rebuild it would
     pick that up.

  [Impact Disco-Focal]

   * openssl >=1.1.1 is in Disco-Focal already and thereby it was built
     against that already. That made it pick up TLSv1.3, but also a related
     bug that broke the ability to control the DHE key, it was always in
     "ECDH auto" mode. Therefore the daemon didn't follow the config
     anymore.
     Upgraders would regress having their DH key behavior changed
     unexpectedly.

  [Test Case]

   A)
   * run "haproxy -vv" and check the reported TLS versions to include 1.3
   B)
   * download https://github.com/drwetter/testssl.sh
   * Install haproxy
   * ./testssl.sh --pfs :443
   * Check the reported DH key/group (should stay 1024)
   * Check if settings work to bump it, like adding
 tune.ssl.default-dh-param 2048
   to
 /etc/haproxy/haproxy.cfg
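
  For reference, a sketch of where that setting lives in a stock config
  (fragment only; the rest of haproxy.cfg is unchanged):

```
global
    # raise the DHE parameter size from the 1024-bit compatibility default
    tune.ssl.default-dh-param 2048
```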

  [Regression Potential-Bionic]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3]
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further
     changes to TLS v1.3 handling enhancements [5] but also fixes [6] which
     aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it, for an SRU the bar to not break old
     behavior is intentionally high - I tried to provide as much as possible
     background, the decision is up to the SRU team.

  [Regression Potential-Disco-Focal]

   * The fixes let the admin regain control of the DH key configuration
     which is the fix. But remember that the default config didn't specify
     any. Therefore we have two scenarios:
     a) an admin had set custom DH parameters which were ignored. He had no
    chance to control them and needs the fix. He might have been under
    the impression that his keys are safe (there is a CVE against small
    ones) and only now is he really safe -> gain high, regression low
     b) an admin had not set anything, the default config is meant to use
    (compatibility) and the program reported "I'm using 1024, but you
    should set it higher". But what really happened was ECDH auto mode
    which has longer keys and different settings. Those systems will
    be "fixed" by finally following the config, but that means the key
    will "now" after the fix be vulnerable.
    -> for their POV a formerly secure setup will become vulnerable
     I'd expect that any professional setup 

[Ubuntu-ha] [Bug 1841936] Re: Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing builds against 1.1.1 (dh key size)

2019-11-04 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 1.8.19-1ubuntu1.1

---
haproxy (1.8.19-1ubuntu1.1) disco; urgency=medium

  * Fix configurability of dh_params that regressed since building
against openssl 1.1.1 (LP: #1841936)
- d/p/lp-1841936-BUG-MEDIUM-ssl-tune.ssl.default-dh-param-value-ignor.patch
- d/p/lp-1841936-CLEANUP-ssl-make-ssl_sock_load_dh_params-handle-errc.patch

 -- Christian Ehrhardt   Wed, 23 Oct
2019 12:34:38 +0200

** Changed in: haproxy (Ubuntu Disco)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing
  builds against 1.1.1 (dh key size)

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Committed
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Disco:
  Fix Released
Status in haproxy source package in Eoan:
  Fix Released
Status in haproxy source package in Focal:
  Fix Committed

Bug description:
  [Impact-Bionic]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just with a no-change rebuild it would
     pick that up.

  [Impact Disco-Focal]

   * openssl >=1.1.1 is in Disco-Focal already and thereby it was built
     against that already. That made it pick up TLSv1.3, but also a related
     bug that broke the ability to control the DHE key, it was always in
     "ECDH auto" mode. Therefore the daemon didn't follow the config
     anymore.
     Upgraders would regress having their DH key behavior changed
     unexpectedly.

  [Test Case]

   A)
   * run "haproxy -vv" and check the reported TLS versions to include 1.3
   B)
   * download https://github.com/drwetter/testssl.sh
   * Install haproxy
   * ./testssl.sh --pfs :443
   * Check the reported DH key/group (should stay 1024)
   * Check if settings work to bump it, like adding
 tune.ssl.default-dh-param 2048
   to
 /etc/haproxy/haproxy.cfg

  [Regression Potential-Bionic]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3]
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further
     changes to TLS v1.3 handling enhancements [5] but also fixes [6] which
     aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it, for an SRU the bar to not break old
     behavior is intentionally high - I tried to provide as much as possible
     background, the decision is up to the SRU team.

  [Regression Potential-Disco-Focal]

   * The fixes let the admin regain control of the DH key configuration
     which is the fix. But remember that the default config didn't specify
     any. Therefore we have two scenarios:
     a) an admin had set custom DH parameters which were ignored. He had no
    chance to control them and needs the fix. He might have been under
    the impression that his keys are safe (there is a CVE against small
    ones) and only now is he really safe -> gain high, regression low
     b) an admin had not set anything, the default config is meant to use
    (compatibility) and the program reported "I'm using 1024, but you
    should set it higher". But what really happened was ECDH auto mode
    which has longer keys and different settings. Those systems will
    be "fixed" by finally following the config, but that means the key
    will "now" after the fix be vulnerable.
    -> for their POV a formerly secure setup will become vulnerable
     I'd expect that any professional setup would use explicit config as it
     has seen the warning since day #1 and also any kind of deployment
     recipes should use big keys. So the majority of 

[Ubuntu-ha] [Bug 1841936] Re: Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing builds against 1.1.1 (dh key size)

2019-10-23 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/haproxy/+git/haproxy/+merge/374595

** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/haproxy/+git/haproxy/+merge/374596

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing
  builds against 1.1.1 (dh key size)

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Triaged
Status in haproxy source package in Bionic:
  Triaged
Status in haproxy source package in Disco:
  Triaged
Status in haproxy source package in Eoan:
  Triaged
Status in haproxy source package in Focal:
  Triaged

Bug description:
  [Impact-Bionic]

   * openssl 1.1.1 has been backported to Bionic for its longer upstream
     support period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages, seemingly "for free": a no-change rebuild would pick it
     up.

  [Impact Disco-Focal]

   * openssl >=1.1.1 is already in Disco-Focal, so haproxy there was built
     against it. That made it pick up TLSv1.3, but also a related bug that
     broke the ability to control the DHE key: it was always in
     "ECDH auto" mode, so the daemon no longer followed the config.
     Upgraders would regress by having their DH key behavior changed
     unexpectedly.

  [Test Case]

   * run "haproxy -vv" and check that the reported TLS versions include
     TLSv1.3
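
   As a companion sanity check (an addition of mine, not part of the
   original test plan): Python links against the same system OpenSSL, so
   its ssl module can show whether the library itself offers TLSv1.3,
   independent of what haproxy was built against.

```python
# Companion sanity check (assumption: the Python interpreter on the
# machine links the system OpenSSL). `haproxy -vv` remains the
# authoritative test; this only shows what the library offers.
import ssl

print(ssl.OPENSSL_VERSION)  # library version string
print(ssl.HAS_TLSv1_3)      # True once an OpenSSL >= 1.1.1 is in use
```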

  [Regression Potential-Bionic]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2] indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to v1.3, as that would be configured in
     [2].
     It is good to have no entry for [2] yet, as following the defaults of
     openssl is safest: those would be updated as new insights/CVEs
     become known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things, and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3].
   * Yet AFAIK this fine-grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further
     changes to TLS v1.3 handling, enhancements [5] but also fixes [6], which
     aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it; for an SRU the bar to not break old
     behavior is intentionally high. I tried to provide as much background
     as possible; the decision is up to the SRU team.

  [Regression Potential-Disco-Focal]

   * The fixes let the admin regain control of the DH key configuration,
     which is the fix. But remember that the default config didn't specify
     any. Therefore we have the following scenarios:
     a) An admin had set custom DH parameters which were ignored. They had
        no chance to control them and need the fix. They might have been
        under the impression that their keys are safe (there is a CVE
        against small ones) and only now really are -> gain high,
        regression low.
     b) An admin had not set anything, the default config is meant to be
        used (compatibility) and the program reported "I'm using 1024, but
        you should set it higher". But what really happened was ECDH auto
        mode, which has longer keys and different settings. Those systems
        will be "fixed" by finally following the config, but that means the
        key will "now", after the fix, be vulnerable.
        -> from their POV a formerly secure setup will become vulnerable.
     I'd expect that any professional setup would use explicit config, as it
     has seen the warning since day #1, and any kind of deployment recipe
     should use big keys. So the majority of users should be in (a).
     c) And OTOH there are people, like the reporter, who strictly NEED the
        key to be small for the devices they have to support. They will
        finally be able to use small keys again, which formerly was
        impossible.

   Summary: (a) good, (c) good, (b) good, but a regression in some POV
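
   To make scenarios (a) and (b) concrete, here is a minimal sketch (a
   hypothetical helper of mine, not code from the bug) that checks whether
   an haproxy config sets `tune.ssl.default-dh-param`, the global
   directive whose value the fix makes effective again; admins in (a)
   have it set, admins in (b) rely on the old 1024-bit default.

```python
# Hypothetical helper: return the explicitly configured DH parameter
# size from an haproxy config, or None when the admin relies on the
# default (scenario (b) above). `tune.ssl.default-dh-param` is the
# real haproxy global directive involved.
def explicit_dh_param(config_text):
    for line in config_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "tune.ssl.default-dh-param":
            return int(parts[1])
    return None

print(explicit_dh_param("global\n    tune.ssl.default-dh-param 2048\n"))  # 2048
print(explicit_dh_param("global\n    daemon\n"))  # None -> scenario (b)
```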

  [1]:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-ssl-default-bind-ciphers
  [2]:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-ssl-default-bind-ciphersuites
  [3]:
https://www.haproxy.com/documentation/hapee/1-8r2/traffic-management/tls/#define-bind-directives-on-the-frontend
  [4]: https://github.com/haproxy/haproxy/blob/master/CHANGELOG#L2131
  [5]:

[Ubuntu-ha] [Bug 1841936] Re: Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing builds against 1.1.1 (dh key size)

2019-10-23 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/haproxy/+git/haproxy/+merge/374592

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

[Ubuntu-ha] [Bug 1841936] Re: Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing builds against 1.1.1 (dh key size)

2019-10-23 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/haproxy/+git/haproxy/+merge/374589

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936


[Ubuntu-ha] [Bug 1848902] Re: haproxy in bionic can get stuck

2019-10-20 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: haproxy (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1848902

Title:
  haproxy in bionic can get stuck

Status in haproxy package in Ubuntu:
  Confirmed

Bug description:
  On a Bionic/Stein cloud, after a network partition, we saw several
  units (glance, swift-proxy and cinder) fail to start haproxy, like so:

  root@juju-df624b-6-lxd-4:~# systemctl status haproxy.service
  ● haproxy.service - HAProxy Load Balancer
 Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
 Active: failed (Result: exit-code) since Sun 2019-10-20 00:23:18 UTC; 1h 35min ago
   Docs: man:haproxy(1)
 file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 2002655 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS (code=exited, status=143)
Process: 2002649 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
   Main PID: 2002655 (code=exited, status=143)

  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Starting HAProxy Load Balancer...
  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Started HAProxy Load Balancer.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopping HAProxy Load Balancer...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 (2002655) : Exiting Master process...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [ALERT] 292/001652 (2002655) : Current worker 2002661 exited with code 143
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 (2002655) : All workers exited. Exiting... (143)
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Main process exited, code=exited, status=143/n/a
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Failed with result 'exit-code'.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopped HAProxy Load Balancer.
  root@juju-df624b-6-lxd-4:~#
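
  One detail worth decoding from the log above: status=143 is not an
  haproxy-specific error code but the conventional exit status of a
  process killed by SIGTERM (128 + signal number), i.e. the worker was
  terminated during the stop, it did not crash on its own.

```python
# status=143 in the systemd log = 128 + SIGTERM (15): a process killed
# by a signal conventionally exits with 128 plus the signal number.
import signal

print(128 + int(signal.SIGTERM))  # 143
```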

  The Debian maintainer came up with the following patch for this:

https://www.mail-archive.com/haproxy@formilux.org/msg30477.html

  Which was added to the 1.8.10-1 Debian upload and merged into upstream 1.8.13.
  Unfortunately Bionic is on 1.8.8-1ubuntu0.4 and doesn't have this patch.

  Please consider pulling this patch into an SRU for Bionic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1848902/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1819074] Re: Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-networkd

2019-10-12 Thread Launchpad Bug Tracker
*** This bug is a duplicate of bug 1815101 ***
https://bugs.launchpad.net/bugs/1815101

Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: systemd (Ubuntu Bionic)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1819074

Title:
  Keepalived < 2.0.x in Ubuntu 18.04 LTS  not compatible with systemd-
  networkd

Status in keepalived package in Ubuntu:
  Fix Released
Status in netplan.io package in Ubuntu:
  Fix Released
Status in systemd package in Ubuntu:
  Fix Released
Status in keepalived source package in Bionic:
  Triaged
Status in netplan.io source package in Bionic:
  Confirmed
Status in systemd source package in Bionic:
  Confirmed

Bug description:
  Systemd-networkd clobbers VIPs placed by other daemons on any
  reconfiguration triggering systemd-networkd restart (netplan apply for
  example).  Keepalived < version 2.0.x will not restore a VIP lost in
  this fashion, breaking high availability on Ubuntu 18.04 LTS.  A
  backport for keepalived >= 2.0.x should fix the issue.
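
  The failure mode can be sketched in a few lines (a hypothetical helper
  of mine, reusing address shapes from the related reports): after
  systemd-networkd rewrites the interface, only statically configured
  addresses survive, and keepalived < 2.0.x never notices the missing
  VIP.

```python
# Hypothetical watchdog predicate: would the VIP need to be re-added?
# keepalived < 2.0.x never re-checks, which is the gap described above.
def vip_missing(configured_vip, interface_addresses):
    return configured_vip not in interface_addresses

# VIP still present alongside the static address: nothing to do.
print(vip_missing("10.22.14.3/24", ["10.22.14.6/24", "10.22.14.3/24"]))  # False
# After a systemd-networkd restart only the static address survives.
print(vip_missing("10.22.14.3/24", ["10.22.14.6/24"]))  # True
```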

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1819074/+subscriptions



[Ubuntu-ha] [Bug 1819074] Re: Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-networkd

2019-10-12 Thread Launchpad Bug Tracker
*** This bug is a duplicate of bug 1815101 ***
https://bugs.launchpad.net/bugs/1815101

Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: netplan.io (Ubuntu Bionic)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1819074



[Ubuntu-ha] [Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)

2019-10-11 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/systemd/+git/systemd/+merge/374027

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1815101

Title:
  [master] Restarting systemd-networkd breaks keepalived, heartbeat,
  corosync, pacemaker (interface aliases are restarted)

Status in Keepalived Charm:
  New
Status in netplan:
  Confirmed
Status in heartbeat package in Ubuntu:
  Triaged
Status in keepalived package in Ubuntu:
  In Progress
Status in systemd package in Ubuntu:
  In Progress
Status in heartbeat source package in Bionic:
  Triaged
Status in keepalived source package in Bionic:
  Confirmed
Status in systemd source package in Bionic:
  Confirmed
Status in heartbeat source package in Disco:
  Triaged
Status in keepalived source package in Disco:
  Confirmed
Status in systemd source package in Disco:
  Confirmed
Status in heartbeat source package in Eoan:
  Triaged
Status in keepalived source package in Eoan:
  In Progress
Status in systemd source package in Eoan:
  In Progress

Bug description:
  Configure netplan for interfaces, for example (a working config with
  IP addresses obfuscated)

  network:
    ethernets:
      eth0:
        addresses: [192.168.0.5/24]
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth2:
        addresses:
          - 12.13.14.18/29
          - 12.13.14.19/29
        gateway4: 12.13.14.17
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth3:
        addresses: [10.22.11.6/24]
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth4:
        addresses: [10.22.14.6/24]
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth7:
        addresses: [9.5.17.34/29]
        dhcp4: false
        optional: true
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
    version: 2

  Configure keepalived (again, a working config with IP addresses
  obfuscated)

  global_defs   # Block id
  {
      notification_email {
          sysadm...@blah.com
      }
      notification_email_from keepali...@system3.hq.blah.com
      smtp_server 10.22.11.7       # IP
      smtp_connect_timeout 30      # integer, seconds
      router_id system3            # string identifying the machine,
                                   # (doesn't have to be hostname).
      vrrp_mcast_group4 224.0.0.18 # optional, default 224.0.0.18
      vrrp_mcast_group6 ff02::12   # optional, default ff02::12
      enable_traps                 # enable SNMP traps
  }
  vrrp_sync_group collection {
      group {
          wan
          lan
          phone
      }
  }
  vrrp_instance wan {
      state MASTER
      interface eth2
      virtual_router_id 77
      priority 150
      advert_int 1
      smtp_alert
      authentication {
          auth_type PASS
          auth_pass BlahBlah
      }
      virtual_ipaddress {
          12.13.14.20
      }
  }
  vrrp_instance lan {
      state MASTER
      interface eth3
      virtual_router_id 78
      priority 150
      advert_int 1
      smtp_alert
      authentication {
          auth_type PASS
          auth_pass MoreBlah
      }
      virtual_ipaddress {
          10.22.11.13/24
      }
  }
  vrrp_instance phone {
      state MASTER
      interface eth4
      virtual_router_id 79
      priority 150
      advert_int 1
      smtp_alert
      authentication {
          auth_type PASS
          auth_pass MostBlah
      }
      virtual_ipaddress {
          10.22.14.3/24
      }
  }

  At boot the affected interfaces have:
  5: eth4:  mtu 1500 qdisc mq state UP group default qlen 1000
  link/ether ab:cd:ef:90:c0:e3 brd ff:ff:ff:ff:ff:ff
  inet 10.22.14.6/24 brd 10.22.14.255 scope global eth4
     valid_lft forever preferred_lft forever
  inet 10.22.14.3/24 scope global secondary eth4
     valid_lft forever preferred_lft forever
  inet6 fe80::ae1f:6bff:fe90:c0e3/64 scope link
     valid_lft 

[Ubuntu-ha] [Bug 1828496] Re: service haproxy reload sometimes fails to pick up new TLS certificates

2019-10-06 Thread Launchpad Bug Tracker
[Expired for haproxy (Ubuntu) because there has been no activity for 60
days.]

** Changed in: haproxy (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1828496

Title:
  service haproxy reload sometimes fails to pick up new TLS certificates

Status in haproxy package in Ubuntu:
  Expired

Bug description:
  I suspect this is the same thing reported on StackOverflow:

  "I had this same issue where even after reloading the config, haproxy
  would randomly serve old certs. After looking around for many days the
  issue was that "reload" operation created a new process without
  killing the old one. Confirm this by "ps aux | grep haproxy"."

  https://stackoverflow.com/questions/46040504/haproxy-wont-recognize-
  new-certificate

  In our setup, we automate Let's Encrypt certificate renewals, and a
  fresh certificate will trigger a reload of the service. But
  occasionally this reload doesn't seem to do anything.

  Will update with details next time it happens, and hopefully confirm
  the multiple process theory.
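
  The "old process kept running" theory is also easy to check
  programmatically; here is a sketch (a hypothetical helper of mine,
  Linux-only, scanning /proc instead of parsing ps output):

```python
# Hypothetical check for the stale-process theory: list PIDs whose
# command name matches. Several long-lived haproxy masters after a
# reload would support the StackOverflow explanation quoted above.
import os

def pids_by_comm(name):
    pids = []
    if not os.path.isdir("/proc"):  # Linux-only sketch
        return pids
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % entry) as f:
                if f.read().strip() == name:
                    pids.append(int(entry))
        except OSError:
            continue  # process exited while we were scanning
    return pids

print(pids_by_comm("haproxy"))  # e.g. [] on a machine without haproxy
```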

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1828496/+subscriptions



[Ubuntu-ha] [Bug 1771335] Re: haproxy fails at startup when using server name instead of IP

2019-09-06 Thread Launchpad Bug Tracker
[Expired for haproxy (Ubuntu) because there has been no activity for 60
days.]

** Changed in: haproxy (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1771335

Title:
  haproxy fails at startup when using server name instead of IP

Status in haproxy package in Ubuntu:
  Expired

Bug description:
  This is similar to #689734 I believe.

  When starting haproxy using a DNS name on the 'server' line haproxy
  fails to start, giving the message:

  ```
  May 05 19:09:40 hyrule systemd[1]: Starting HAProxy Load Balancer...
  May 05 19:09:40 hyrule haproxy[1146]: [ALERT] 124/190940 (1146) : parsing [/etc/haproxy/haproxy.cfg:157] : 'server scanmon' :
  May 05 19:09:40 hyrule haproxy[1146]: [ALERT] 124/190940 (1146) : Failed to initialize server(s) addr.
  May 05 19:09:40 hyrule systemd[1]: haproxy.service: Control process exited, code=exited status=1
  May 05 19:09:40 hyrule systemd[1]: haproxy.service: Failed with result 'exit-code'.
  May 05 19:09:40 hyrule systemd[1]: Failed to start HAProxy Load Balancer.
  ```

  In this case the server statement was:

  `  server scanmon myservername.mydomain.org:8000`

  Changing it to use the IP address corrected the problem.

  I believe there is a missing dependency for DNS in the unit file.
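
  If the missing-DNS-dependency theory is right, the usual mitigation
  (an assumption on my part, not a fix confirmed by the maintainers) is
  a systemd drop-in that orders the unit after the network is actually
  online:

```ini
# /etc/systemd/system/haproxy.service.d/wait-online.conf
# Hedged sketch: only helps if systemd-networkd-wait-online (or the
# equivalent waiter for the network stack in use) is enabled.
[Unit]
Wants=network-online.target
After=network-online.target
```

  followed by `systemctl daemon-reload` and a restart of haproxy.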

  --Info:
  Description:  Ubuntu 18.04 LTS
  Release:  18.04

  haproxy:
Installed: 1.8.8-1
Candidate: 1.8.8-1
Version table:
   *** 1.8.8-1 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1771335/+subscriptions



[Ubuntu-ha] [Bug 1745155] Re: o2image fails on s390x

2019-09-05 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/ocfs2-tools/+git/ocfs2-tools/+merge/372384

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1745155

Title:
  o2image fails on s390x

Status in OCFS2 Tools:
  New
Status in ocfs2-tools package in Ubuntu:
  In Progress

Bug description:
  o2image fails on s390x:

  dd if=/dev/zero of=/tmp/disk bs=1M count=200
  losetup --find --show /tmp/disk
  mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 /dev/loop0  # loop dev found in prev step

  Then this command:
  o2image /dev/loop0 /tmp/disk.image

  Results in:
  Segmentation fault (core dumped)

  dmesg:
  [  862.642556] ocfs2: Registered cluster interface o2cb
  [  870.880635] User process fault: interruption code 003b ilc:3 in o2image[10c18+2e000]
  [  870.880643] Failing address:  TEID: 0800
  [  870.880644] Fault in primary space mode while using user ASCE.
  [  870.880646] AS:3d8f81c7 R3:0024 
  [  870.880650] CPU: 0 PID: 1484 Comm: o2image Not tainted 4.13.0-30-generic #33-Ubuntu
  [  870.880651] Hardware name: IBM 2964 N63 400 (KVM/Linux)
  [  870.880652] task: 3cb81200 task.stack: 3d50c000
  [  870.880653] User PSW : 070500018000 00010c184212
  [  870.880654]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:1 AS:0 CC:0 PM:0 RI:0 EA:3
  [  870.880655] User GPRS: 000144f0cc10 0001 0001 
  [  870.880655] 000144ef6090 000144f13cc0 0001
  [  870.880656]000144ef6000 000144ef3280 000144f13cd8 00037ee8
  [  870.880656]03ff965a6000 03ffe5e7e410 00010c183bc6 03ffe5e7e370
  [  870.880663] User Code: 00010c184202: b9080034  agr  %r3,%r4
                 00010c184206: c02b0007  nilf %r2,7
                #00010c18420c: eb2120df  sllk %r2,%r1,0(%r2)
                >00010c184212: e3103090  llgc %r1,0(%r3)
                 00010c184218: b9f61042  ork  %r4,%r2,%r1
                 00010c18421c: 1421  nr   %r2,%r1
                 00010c18421e: 42403000  stc  %r4,0(%r3)
                 00010c184222: 1322  lcr  %r2,%r2
  [  870.880672] Last Breaking-Event-Address:
  [  870.880675]  [<00010c18e4ca>] 0x10c18e4ca

  Upstream issue:
  https://github.com/markfasheh/ocfs2-tools/issues/22

  This was triggered by our ocfs2-tools dep8 tests:
  http://autopkgtest.ubuntu.com/packages/o/ocfs2-tools/bionic/s390x

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1745155/+subscriptions



[Ubuntu-ha] [Bug 1840958] Re: defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

2019-09-03 Thread Launchpad Bug Tracker
This bug was fixed in the package ocfs2-tools - 1.8.6-1ubuntu1

---
ocfs2-tools (1.8.6-1ubuntu1) eoan; urgency=medium

  * d/p/defrag.ocfs2-make-getopt-portable.patch:
make defragfs.ocfs2 portable to ARM64 (LP: #1840958)

 -- Rafael David Tinoco   Mon, 02 Sep 2019
21:21:13 +

** Changed in: ocfs2-tools (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1840958

Title:
  defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

Status in OCFS2 Tools:
  Unknown
Status in ocfs2-tools package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * The ocfs2 defrag tool does not work on the ARM64 architecture.

  [Test Case]

   * Run the following script: 
 https://pastebin.ubuntu.com/p/dYG2xct6dz/

   * When it opens a new shell, run:
 $ defragfs.ocfs2 -v /mnt (as root)

   * Watch defragfs.ocfs2 consume 100% CPU with no output.

  [Regression Potential]

   * I'm basically changing a (char) to an (int). The potential for
     regression is almost non-existent in this case.

  [Other Info]

  The new defragfs.ocfs2 test added in the 1.8.6-1 version of the
  package hangs (or takes too long) in our dep8 infrastructure.

  I reproduced this on an arm64 VM. The command stays silent, consuming
  99% of CPU. There is no I/O being done (checked with iostat and
  iotop).

  strace -f shows it stopping at this write:
  2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

  This is just the version print.

  Also tested with kernel 5.2.0-13-generic from eoan-proposed.

  Debian's CI only runs this test on amd64, it seems.

  On an amd64 VM in the same cloud, this test completes in less than 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1840958/+subscriptions



[Ubuntu-ha] [Bug 1840958] Re: defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

2019-09-02 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/ocfs2-tools/+git/ocfs2-tools/+merge/372170

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1840958

Title:
  defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

Status in OCFS2 Tools:
  Unknown
Status in ocfs2-tools package in Ubuntu:
  In Progress

Bug description:
  [Impact]

   * The ocfs2 defrag tool does not work on the ARM64 architecture.

  [Test Case]

   * Run the following script: 
 https://pastebin.ubuntu.com/p/dYG2xct6dz/

   * When it opens a new shell, run:
 $ defragfs.ocfs2 -v /mnt (as root)

   * Watch defragfs.ocfs2 consume 100% CPU with no output.

  [Regression Potential]

   * I'm basically changing a (char) to an (int). The potential for
     regression is almost non-existent in this case.

  [Other Info]

  The new defragfs.ocfs2 test added in the 1.8.6-1 version of the
  package hangs (or takes too long) in our dep8 infrastructure.

  I reproduced this on an arm64 VM. The command stays silent, consuming
  99% of CPU. There is no I/O being done (checked with iostat and
  iotop).

  strace -f shows it stopping at this write:
  2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

  This is just the version print.

  Also tested with kernel 5.2.0-13-generic from eoan-proposed.

  Debian's CI only runs this test on amd64, it seems.

  On an amd64 VM in the same cloud, this test completes in less than 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1840958/+subscriptions



[Ubuntu-ha] [Bug 768471] Re: corosync segfaults on startup joining another node

2019-09-01 Thread Launchpad Bug Tracker
[Expired for corosync (Ubuntu) because there has been no activity for 60
days.]

** Changed in: corosync (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/768471

Title:
  corosync segfaults on startup joining another node

Status in corosync package in Ubuntu:
  Expired

Bug description:
  Binary package hint: corosync

  Architecture: amd64
  Date: Thu Apr 21 12:00:35 2011
  Dependencies:
adduser 3.112ubuntu1
base-files 5.0.0ubuntu20.10.04.3
base-passwd 3.5.22
coreutils 7.4-2ubuntu3
debconf 1.5.28ubuntu4
debconf-i18n 1.5.28ubuntu4
debianutils 3.2.2
dpkg 1.15.5.6ubuntu4.5
findutils 4.4.2-1ubuntu1
gcc-4.4-base 4.4.3-4ubuntu5
libacl1 2.2.49-2
libattr1 1:2.4.44-1
libc-bin 2.11.1-0ubuntu7.8
libc6 2.11.1-0ubuntu7.8
libcorosync4 1.2.0-0ubuntu1
libdb4.8 4.8.24-1ubuntu1
libgcc1 1:4.4.3-4ubuntu5
liblocale-gettext-perl 1.05-6
libncurses5 5.7+20090803-2ubuntu3
libnspr4-0d 4.8.6-0ubuntu0.10.04.2
libnss3-1d 3.12.9+ckbi-1.82-0ubuntu0.10.04.1
libpam-modules 1.1.1-2ubuntu5
libpam0g 1.1.1-2ubuntu5
libselinux1 2.0.89-4
libsqlite3-0 3.6.22-1
libstdc++6 4.4.3-4ubuntu5
libtext-charwidth-perl 0.04-6
libtext-iconv-perl 1.7-2
libtext-wrapi18n-perl 0.06-7
lsb-base 4.0-0ubuntu8
lzma 4.43-14ubuntu2
ncurses-bin 5.7+20090803-2ubuntu3
passwd 1:4.1.4.2-1ubuntu2.2
perl-base 5.10.1-8ubuntu2
sed 4.2.1-6
sensible-utils 0.0.1ubuntu3
tzdata 2011e-0ubuntu0.10.04
zlib1g 1:1.2.3.3.dfsg-15ubuntu1
  DistroRelease: Ubuntu 10.04
  InstallationMedia: Ubuntu-Server 10.04 LTS "Lucid Lynx" - Release amd64 (20100427)
  Package: corosync 1.2.0-0ubuntu1
  PackageArchitecture: amd64
  ProblemType: Bug
  ProcEnviron:
PATH=(custom, no user)
LANG=en_US.UTF-8
SHELL=/bin/bash
  ProcVersionSignature: Ubuntu 2.6.32-30.59-server 2.6.32.29+drm33.13
  SourcePackage: corosync
  Tags: lucid
  Uname: Linux 2.6.32-30-server x86_64

  It will segfault with or without the other server online. Oddly, these are
  built on similar machines with essentially the exact same config throughout.
  The only difference is that I originally forgot to set the MTU to 9000 on
  this particular machine, and therefore corosync failed to communicate
  initially. I brought it back up with the correct eth MTU and it started
  segfaulting. I've tried cleaning up the machine and starting fresh with just
  the config, but that doesn't work either. I suspect this is cured in a newer
  package of corosync.
   
  corosync.conf:

  # Please read the corosync.conf.5 manual page

  totem {
version: 2
secauth: off
threads: 0
netmtu: 9000
token: 3000
token_retransmits_before_loss_const: 10
join: 60
consensus: 5000
vsftype: none
max_messages: 20
clear_node_high_bit: yes
interface {
ringnumber: 0
bindnetaddr: 10.24.98.0
mcastaddr: 239.18.110.1
mcastport: 4172
}
  }

  logging {
fileline: off
to_stderr: yes
to_logfile: no
to_syslog: yes
syslog_facility: daemon
debug: on
timestamp: on
logger_subsys {
subsys: AMF
debug: on
}
  }

  amf {
mode: disabled
  }

  aisexec {
user: root
group: root
  }

  service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 0
  }

  daemon.log: attached

  I have the strace too, if it would help.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/768471/+subscriptions



[Ubuntu-ha] [Bug 1840958] Re: defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

2019-08-23 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: ocfs2-tools (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1840958

Title:
  defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

Status in OCFS2 Tools:
  Unknown
Status in ocfs2-tools package in Ubuntu:
  Confirmed

Bug description:
  The new defragfs.ocfs2 test added in the 1.8.6-1 version of the
  package hangs (or takes too long) in our dep8 infrastructure.

  I reproduced this on an arm64 VM. The command stays silent, consuming
  99% of CPU. There is no I/O being done (checked with iostat and
  iotop).

  strace -f shows it stopping at this write:
  2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

  This is just the version print.

  Also tested with kernel 5.2.0-13-generic from eoan-proposed.

  Debian's CI only runs this test on amd64, it seems.

  On an amd64 VM in the same cloud, this test completes in less than 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1840958/+subscriptions



[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in unprivileged containers - autopkgtest failure

2019-07-22 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/corosync/+git/corosync/+merge/370440

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in unprivileged containers - autopkgtest
  failure

Status in Auto Package Testing:
  Invalid
Status in corosync package in Ubuntu:
  In Progress

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 
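
  The report enumerates two limit bumps. A hedged shell sketch of both
  (the exact values are illustrative assumptions, not taken from the
  report, and are written to local example files rather than the real
  system paths):

```shell
# 1) Bump memlock limits (illustrative: unlimited for all users).
#    Normally appended to /etc/security/limits.conf.
printf '%s\n' '* soft memlock unlimited' '* hard memlock unlimited' \
    > limits.conf.example

# 2) Bump the socket receive-buffer maximum (illustrative value).
#    Normally applied with: sudo sysctl -w net.core.rmem_max=8388608
printf 'net.core.rmem_max = 8388608\n' > sysctl.conf.example

cat limits.conf.example sysctl.conf.example
```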

[Ubuntu-ha] [Bug 1830033] Re: Connection synchronization daemon fails at start due to a bug in launch script

2019-06-04 Thread Launchpad Bug Tracker
This bug was fixed in the package ipvsadm - 1:1.28-3ubuntu0.16.04.1

---
ipvsadm (1:1.28-3ubuntu0.16.04.1) xenial; urgency=medium

  * d/ipvsadm.init: remove duplicate syncid on daemon invocation (LP:
#1830033)

 -- Christian Ehrhardt   Fri, 24 May
2019 09:51:59 +0200

** Changed in: ipvsadm (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ipvsadm in Ubuntu.
https://bugs.launchpad.net/bugs/1830033

Title:
  Connection synchronization daemon fails at start due to a bug in
  launch script

Status in ipvsadm package in Ubuntu:
  Fix Released
Status in ipvsadm source package in Xenial:
  Fix Released
Status in ipvsadm source package in Bionic:
  Fix Released
Status in ipvsadm source package in Cosmic:
  Fix Released
Status in ipvsadm package in Debian:
  Fix Released

Bug description:
  [Impact]

   * the init script passes an argument twice, which makes the service
     fail to start

   * Without the fix the service is rather unusable as it can't be
  started

  [Test Case]

   * Needs a VM (no container)
 $ sudo apt install ipvsadm
 $ echo 'AUTO="true"' | sudo tee -a /etc/default/ipvsadm
 $ echo 'DAEMON="master"' | sudo tee -a /etc/default/ipvsadm
 $ sudo systemctl restart ipvsadm
 $ systemctl status ipvsadm

 With the bug this will show the service failing
 [...]
 Try `/sbin/ipvsadm -h' or '/sbin/ipvsadm --help' for more information.
 ...fail!

  
  [Regression Potential] 

   * Even in the default config (just enabling it to run) this doesn't work,
     hence the risk should be next to none.
     I can think of setups that "are meant to work" but so far didn't
     start, and would now start properly; for example, if someone
     configured the service but ignored that it failed to start.
     I hope and expect that this is a rather unimportant risk; it actually
     is the fix we are intending to make.

  [Other Info]
   
   * TBH it is in main for a dependency from keepalived, but I wonder
     whether that could be a Suggests instead. But that is for the future.

  ---

  
  The launch script for ipvsadm has a bug that prevents LVS from starting
  in synchronization mode.

  How to reproduce.
  1. Install ipvsadm on Ubuntu Server 16.04 and modify /etc/default/ipvsadm
  to launch LVS in master mode (or backup):
  AUTO="true"
  DAEMON="master"
  IFACE="eno1"
  SYNCID="0"

  2. Start "ipvsadm" service and check systemd unit log:
  # systemctl start ipvsadm
  # journalctl -u ipvsadm
  What you expected to happen
  The log output without error:
  May 22 12:41:46 xxx systemd[1]: Starting LSB: ipvsadm daemon...
  May 22 12:41:46 xxx ipvsadm[4619]:  * Clearing the current IPVS 
table...
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx ipvsadm[4619]:  * Loading IPVS configuration...
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx ipvsadm[4619]:  * Starting IPVS Connection 
Synchronization Daemon master
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx systemd[1]: Started LSB: ipvsadm daemon.

  What happened instead:
  May 22 12:32:59 xxx systemd[1]: Starting LSB: ipvsadm daemon...
  May 22 12:32:59 xxx ipvsadm[15743]:  * Clearing the current IPVS table...
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx ipvsadm[15743]:  * Loading IPVS configuration...
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx ipvsadm[15743]:  * Starting IPVS Connection 
Synchronization Daemon master
  May 22 12:32:59 xxx ipvsadm[15743]: Try `/sbin/ipvsadm -h' or 
'/sbin/ipvsadm --help' for more information.
  May 22 12:32:59 xxx ipvsadm[15743]:...fail!
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx systemd[1]: Started LSB: ipvsadm daemon.

  As you can see in the log output, there is a message: "Try `/sbin/ipvsadm
  -h' or '/sbin/ipvsadm --help' for more information". This message
  relates to a bug in the script /etc/init.d/ipvsadm.

  Here is a diff showing how the script should be updated to make it work:
  --- /etc/init.d/ipvsadm   2019-05-22 12:41:34.429916226 +
  +++ /root/ipvsadm 2019-05-22 11:18:04.307344255 +
  @@ -29,16 +29,16 @@
   case $DAEMON in
    master|backup)
    log_daemon_msg "Starting IPVS Connection Synchronization Daemon" 
"$DAEMON"
  - $IPVSADM --start-daemon $DAEMON --mcast-interface \
  + $IPVSADM --syncid $SYNCID --start-daemon $DAEMON --mcast-interface \
   $IFACE --syncid $SYNCID || log_end_msg 1
    log_end_msg 0
    ;;
    both)
    log_daemon_msg "Starting IPVS Connection Synchronization Daemon" 
"master"
  - $IPVSADM --start-daemon master --mcast-interface \
  + $IPVSADM --syncid $SYNCID --start-daemon master --mcast-interface \
  

[Ubuntu-ha] [Bug 1830033] Re: Connection synchronization daemon fails at start due to a bug in launch script

2019-06-04 Thread Launchpad Bug Tracker
This bug was fixed in the package ipvsadm - 1:1.28-3ubuntu0.18.04.1

---
ipvsadm (1:1.28-3ubuntu0.18.04.1) bionic; urgency=medium

  * d/ipvsadm.init: remove duplicate syncid on daemon invocation (LP:
#1830033)

 -- Christian Ehrhardt   Fri, 24 May
2019 09:51:59 +0200

** Changed in: ipvsadm (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ipvsadm in Ubuntu.
https://bugs.launchpad.net/bugs/1830033

Title:
  Connection synchronization daemon fails at start due to a bug in
  launch script

Status in ipvsadm package in Ubuntu:
  Fix Released
Status in ipvsadm source package in Xenial:
  Fix Released
Status in ipvsadm source package in Bionic:
  Fix Released
Status in ipvsadm source package in Cosmic:
  Fix Released
Status in ipvsadm package in Debian:
  Fix Released

Bug description:
  [Impact]

   * the init script passes an argument twice, which makes the service
     fail to start

   * Without the fix the service is rather unusable as it can't be
  started

  [Test Case]

   * Needs a VM (no container)
 $ sudo apt install ipvsadm
 $ echo 'AUTO="true"' | sudo tee -a /etc/default/ipvsadm
 $ echo 'DAEMON="master"' | sudo tee -a /etc/default/ipvsadm
 $ sudo systemctl restart ipvsadm
 $ systemctl status ipvsadm

 With the bug this will show the service failing
 [...]
 Try `/sbin/ipvsadm -h' or '/sbin/ipvsadm --help' for more information.
 ...fail!

  
  [Regression Potential] 

   * Even in the default config (just enabling it to run) this doesn't work,
     hence the risk should be next to none.
     I can think of setups that "are meant to work" but so far didn't
     start, and would now start properly; for example, if someone
     configured the service but ignored that it failed to start.
     I hope and expect that this is a rather unimportant risk; it actually
     is the fix we are intending to make.

  [Other Info]
   
   * TBH it is in main for a dependency from keepalived, but I wonder
     whether that could be a Suggests instead. But that is for the future.

  ---

  
  The launch script for ipvsadm has a bug that prevents LVS from starting
  in synchronization mode.

  How to reproduce.
  1. Install ipvsadm on Ubuntu Server 16.04 and modify /etc/default/ipvsadm
  to launch LVS in master mode (or backup):
  AUTO="true"
  DAEMON="master"
  IFACE="eno1"
  SYNCID="0"

  2. Start "ipvsadm" service and check systemd unit log:
  # systemctl start ipvsadm
  # journalctl -u ipvsadm
  What you expected to happen
  The log output without error:
  May 22 12:41:46 xxx systemd[1]: Starting LSB: ipvsadm daemon...
  May 22 12:41:46 xxx ipvsadm[4619]:  * Clearing the current IPVS 
table...
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx ipvsadm[4619]:  * Loading IPVS configuration...
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx ipvsadm[4619]:  * Starting IPVS Connection 
Synchronization Daemon master
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx systemd[1]: Started LSB: ipvsadm daemon.

  What happened instead:
  May 22 12:32:59 xxx systemd[1]: Starting LSB: ipvsadm daemon...
  May 22 12:32:59 xxx ipvsadm[15743]:  * Clearing the current IPVS table...
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx ipvsadm[15743]:  * Loading IPVS configuration...
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx ipvsadm[15743]:  * Starting IPVS Connection 
Synchronization Daemon master
  May 22 12:32:59 xxx ipvsadm[15743]: Try `/sbin/ipvsadm -h' or 
'/sbin/ipvsadm --help' for more information.
  May 22 12:32:59 xxx ipvsadm[15743]:...fail!
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx systemd[1]: Started LSB: ipvsadm daemon.

  As you can see in the log output, there is a message: "Try `/sbin/ipvsadm
  -h' or '/sbin/ipvsadm --help' for more information". This message
  relates to a bug in the script /etc/init.d/ipvsadm.

  Here is a diff showing how the script should be updated to make it work:
  --- /etc/init.d/ipvsadm   2019-05-22 12:41:34.429916226 +
  +++ /root/ipvsadm 2019-05-22 11:18:04.307344255 +
  @@ -29,16 +29,16 @@
   case $DAEMON in
    master|backup)
    log_daemon_msg "Starting IPVS Connection Synchronization Daemon" 
"$DAEMON"
  - $IPVSADM --start-daemon $DAEMON --mcast-interface \
  + $IPVSADM --syncid $SYNCID --start-daemon $DAEMON --mcast-interface \
   $IFACE --syncid $SYNCID || log_end_msg 1
    log_end_msg 0
    ;;
    both)
    log_daemon_msg "Starting IPVS Connection Synchronization Daemon" 
"master"
  - $IPVSADM --start-daemon master --mcast-interface \
  + $IPVSADM --syncid $SYNCID --start-daemon master --mcast-interface \
  

[Ubuntu-ha] [Bug 1830033] Re: Connection synchronization daemon fails at start due to a bug in launch script

2019-05-24 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/ipvsadm/+git/ipvsadm/+merge/367890

** Merge proposal linked:
   
https://code.launchpad.net/~paelzer/ubuntu/+source/ipvsadm/+git/ipvsadm/+merge/367891

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ipvsadm in Ubuntu.
https://bugs.launchpad.net/bugs/1830033

Title:
  Connection synchronization daemon fails at start due to a bug in
  launch script

Status in ipvsadm package in Ubuntu:
  Fix Released
Status in ipvsadm source package in Xenial:
  Triaged
Status in ipvsadm source package in Bionic:
  Triaged
Status in ipvsadm source package in Cosmic:
  Fix Released
Status in ipvsadm package in Debian:
  Fix Released

Bug description:
  [Impact]

   * the init script passes an argument twice, which makes the service
     fail to start

   * Without the fix the service is rather unusable as it can't be
  started

  [Test Case]

   * Needs a VM (no container)
 $ sudo apt install ipvsadm
 $ echo 'AUTO="true"' | sudo tee -a /etc/default/ipvsadm
 $ echo 'DAEMON="master"' | sudo tee -a /etc/default/ipvsadm
 $ sudo systemctl restart ipvsadm
 $ systemctl status ipvsadm

 With the bug this will show the service failing
 [...]
 Try `/sbin/ipvsadm -h' or '/sbin/ipvsadm --help' for more information.
 ...fail!

  
  [Regression Potential] 

   * Even in the default config (just enabling it to run) this doesn't work,
     hence the risk should be next to none.
     I can think of setups that "are meant to work" but so far didn't
     start, and would now start properly; for example, if someone
     configured the service but ignored that it failed to start.
     I hope and expect that this is a rather unimportant risk; it actually
     is the fix we are intending to make.

  [Other Info]
   
   * TBH it is in main for a dependency from keepalived, but I wonder
     whether that could be a Suggests instead. But that is for the future.

  ---

  
  The launch script for ipvsadm has a bug that prevents LVS from starting
  in synchronization mode.

  How to reproduce.
  1. Install ipvsadm on Ubuntu Server 16.04 and modify /etc/default/ipvsadm
  to launch LVS in master mode (or backup):
  AUTO="true"
  DAEMON="master"
  IFACE="eno1"
  SYNCID="0"

  2. Start "ipvsadm" service and check systemd unit log:
  # systemctl start ipvsadm
  # journalctl -u ipvsadm
  What you expected to happen
  The log output without error:
  May 22 12:41:46 xxx systemd[1]: Starting LSB: ipvsadm daemon...
  May 22 12:41:46 xxx ipvsadm[4619]:  * Clearing the current IPVS 
table...
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx ipvsadm[4619]:  * Loading IPVS configuration...
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx ipvsadm[4619]:  * Starting IPVS Connection 
Synchronization Daemon master
  May 22 12:41:46 xxx ipvsadm[4619]:...done.
  May 22 12:41:46 xxx systemd[1]: Started LSB: ipvsadm daemon.

  What happened instead:
  May 22 12:32:59 xxx systemd[1]: Starting LSB: ipvsadm daemon...
  May 22 12:32:59 xxx ipvsadm[15743]:  * Clearing the current IPVS table...
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx ipvsadm[15743]:  * Loading IPVS configuration...
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx ipvsadm[15743]:  * Starting IPVS Connection 
Synchronization Daemon master
  May 22 12:32:59 xxx ipvsadm[15743]: Try `/sbin/ipvsadm -h' or 
'/sbin/ipvsadm --help' for more information.
  May 22 12:32:59 xxx ipvsadm[15743]:...fail!
  May 22 12:32:59 xxx ipvsadm[15743]:...done.
  May 22 12:32:59 xxx systemd[1]: Started LSB: ipvsadm daemon.

  As you can see in the log output, there is a message: "Try `/sbin/ipvsadm
  -h' or '/sbin/ipvsadm --help' for more information". This message
  relates to a bug in the script /etc/init.d/ipvsadm.

  Here is a diff showing how the script should be updated to make it work:
  --- /etc/init.d/ipvsadm   2019-05-22 12:41:34.429916226 +
  +++ /root/ipvsadm 2019-05-22 11:18:04.307344255 +
  @@ -29,16 +29,16 @@
   case $DAEMON in
    master|backup)
    log_daemon_msg "Starting IPVS Connection Synchronization Daemon" 
"$DAEMON"
  - $IPVSADM --start-daemon $DAEMON --mcast-interface \
  + $IPVSADM --syncid $SYNCID --start-daemon $DAEMON --mcast-interface \
   $IFACE --syncid $SYNCID || log_end_msg 1
    log_end_msg 0
    ;;
    both)
    log_daemon_msg "Starting IPVS Connection Synchronization Daemon" 
"master"
  - $IPVSADM --start-daemon master --mcast-interface \
  + $IPVSADM --syncid $SYNCID --start-daemon master --mcast-interface \
   $IFACE --syncid $SYNCID  || FAILURE=1
    log_progress_msg "backup"
  - $IPVSADM --start-daemon backup --mcast-interface \
  

[Ubuntu-ha] [Bug 1820281] Re: Restarting systemd-networkd breaks keepalived clusters

2019-05-15 Thread Launchpad Bug Tracker
*** This bug is a duplicate of bug 1815101 ***
https://bugs.launchpad.net/bugs/1815101

Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: keepalived (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1820281

Title:
  Restarting systemd-networkd breaks keepalived clusters

Status in keepalived package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  Hello,

  Managing an HA floating IP with netplan and keepalived brings trouble.
  I'm managing a two-node keepalived cluster with a floating IP.
  Everything works out except if I run "netplan apply" on the master node
  (which holds the floating IP). Presumably netplan resets the whole
  interface, leaving keepalived in a broken state.
  This works without any trouble using the old iproute2 approach.

  Even setting net.ipv4.conf.*.promote_secondaries does not help.

  We definitely need a non-strict reset option for interfaces in this
  case.
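
  For reference, a minimal keepalived VRRP configuration of the kind
  described (interface name, router id, priority, and address are
  illustrative assumptions, not taken from the reporter's setup):

```
vrrp_instance VI_1 {
    state MASTER           # this node initially holds the floating IP
    interface eth0         # illustrative interface name
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24      # the floating IP (documentation address range)
    }
}
```

  When "netplan apply" rebuilds the interface, any address keepalived
  added under virtual_ipaddress is removed behind its back, which matches
  the broken state described above.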

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1820281/+subscriptions



[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-05-07 Thread Launchpad Bug Tracker
This bug was fixed in the package libselinux - 2.9-1

---
libselinux (2.9-1) experimental; urgency=medium

  [ Laurent Bigonville ]
  * New upstream release
- Bump libsepol1-dev build-dependency to >= 2.9 to match the release
  * debian/ruby.mk: Do not override RUBYLIBS anymore, upstream build system
seems to do the right thing now
  * debian/control: Bump Standards-Version to 4.3.0 (no further changes)
  * debian/watch: Adjust the URL
  * debian/selinux-utils.install: Install manpages in Russian
  * debian/libselinux1.symbols: Add new exported symbol
  * debian/patches/python_nodefs.patch: Do not FTBFS if we have missing
symbols because we are not linking against the libpython

  [ Michael Biebl ]
  * Build against PCRE2. (Closes: #913921, LP: #1792544)

 -- Laurent Bigonville   Sun, 17 Mar 2019 20:22:24
+0100

** Changed in: libselinux (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Triaged
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Released
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Fix Released
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in tilix package in Ubuntu:
  New
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis what
  needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just poorly 
named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions



[Ubuntu-ha] [Bug 1735355] Re: heartbeat: port to Python3

2019-05-01 Thread Launchpad Bug Tracker
This bug was fixed in the package heartbeat - 1:3.0.6-9ubuntu1

---
heartbeat (1:3.0.6-9ubuntu1) eoan; urgency=medium

  * Force use python3 in cts. LP: #1735355

 -- Dimitri John Ledkov   Wed, 01 May 2019 14:54:30
+0100

** Changed in: heartbeat (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to heartbeat in Ubuntu.
https://bugs.launchpad.net/bugs/1735355

Title:
  heartbeat: port to Python3

Status in heartbeat package in Ubuntu:
  Fix Released
Status in heartbeat package in Debian:
  New

Bug description:
  heartbeat: port to Python3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/heartbeat/+bug/1735355/+subscriptions



[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-04-29 Thread Launchpad Bug Tracker
This bug was fixed in the package zsh - 5.7.1-1ubuntu1

---
zsh (5.7.1-1ubuntu1) eoan; urgency=low

  * Merge from Debian unstable.  Remaining changes:
- debian/zshrc: Enable completions by default, unless skip_global_compinit
  is set.
- Support cross-compiling:
  - Adjust upstream autoconf cross-compile default fallbacks.
  - Skip zcompile when cross-compiling.
  - Add libelf-dev dependency.
  * Dropped changes, included upstream:
- debian/patches/CVE-2018-0502-and-CVE-2018-13259.patch:
  fix in Src/exec.c and add test Test/A05execution.ztst.
- Cherrypick upstream patch to fix A05execution.ztst test case. Resolves
  autopkgtest failure.

zsh (5.7.1-1) unstable; urgency=medium

  * [8b89d0d5,ea17cf89] New upstream bugfix release
+ [205859e6] Drop fix-vcs-info-p4.patch, fixed upstream

  [ Daniel Shahaf ]
  * [14d26260] d/upstream/signing-key.asc: Add the 5.7 RM's signing key.

zsh (5.7-2) unstable; urgency=low

  [ Frank Terbeck ]
  * [cb6fba8] Fix perforce detection in vcs-info

zsh (5.7-1) unstable; urgency=low

  * [9799d0f9,7b75d97c] New upstream release
+ Upload to unstable again.

zsh (5.6.2-test-3-1) experimental; urgency=low

  * [325fceab,7e6a3d734] New upstream release candidate
+ [f64cd71d] Fixes FTBFS on 32-bits architectures
  * [2c01967c] debian/pkg-zsh-workflow-new-upstream-release.md fine-tuning
  * [30825099] Bump debhelper compatibility level to 12.
+ Use b-d on "debhelper-compat (= 12)" instead of debian/compat.
  * [087d6045] Change shebang path in example script to Debian's zsh path.
  * [6cbe67e6] Make autopkgtest pass by including config.modules in
zsh-dev. (Test/V07pcre.ztst now tries to access ../config.modules to
check if the PCRE module was enabled and needs to be tested. The test
emits a warning on STDERR if ../config.modules is not found.)

zsh (5.6.2-test-2-1) experimental; urgency=low

  * [9dbde9e,bf8b7f7] New upstream release candidate
+ [dc2bfeee] Have V07pcre fail if PCRE was enabled by configure
  (config.modules) but failed to load for any reason. (Closes: #909114)
+ [44b0a973] Remove all cherry-picked patches.
+ [01c3153e] debian/rules: Build zshbuiltins.1 out-of-tree like the
  rest.

  [ Daniel Shahaf ]
  * [19fed9e6] debian/upstream/signing-key.asc: Add the upstream signing
key used since 5.5.1-test-1.

  [ Ondřej Nový ]
  * [b402fc2e] debian/tests: Use AUTOPKGTEST_TMP instead of ADTTMP.

  [ Axel Beckert ]
  * [24256bb7] Add zsh-autosuggestions to list of zsh-related packages in
bug-script.
  * [1239bb28] Declare compliance with Debian Policy 4.3.0. (No other
changes were necessary.)
  * [06946d43] debian/TODO.md: Mention that switching zsh-dev to arch:all
is non-trivial.
  * [e0798c2f] Declare debian/pkg-zsh-workflow.md as outdated.  Add
debian/pkg-zsh-workflow-new-upstream-release.md as massively reduced
replacement.

zsh (5.6.2-3) unstable; urgency=medium

  * [92175749] Revert "Switch from the deprecated libpcre3 to the newer
(!) libpcre2." libpcre2 is not a drop-in replacement and not detected
by zsh's configure script. (Closes: #909084, reopens LP#1792544)

zsh (5.6.2-2) unstable; urgency=medium

  [ Axel Beckert ]
  * [a5db2473] Switch from the deprecated libpcre3 to the newer (!)
libpcre2. (LP: #1792544)

  [ Daniel Shahaf ]
  * [3d8f4eee] Cherry-pick 551ff842 from upstream: Another attachtty()
fix. (Closes: #908818)

zsh (5.6.2-1) unstable; urgency=medium

  * [92bef885,3d116bbb] New upstream bugfix release.
+ [490c395d] Refresh further-mitigate-test-suite-hangs.patch.
+ [7c5241ed] 43446: More entersubsh() / addproc() wiring.
  (Closes: #908033, #908818)
+ [07ad7fd9] Fix windowsize when reattaching to terminal on process
  exit. (Closes: #908625)

  [ Daniel Shahaf ]
  * [77265d89] d/README.Debian: Fix typo.

  [ Axel Beckert ]
  * [5484e5d4] Cherry-pick decc78c72 from upstream: _svn: Allow hyphens in
command name aliases.

zsh (5.6.1-1) unstable; urgency=medium

  * [b3239c5e,b531f38d] New upstream bugfix release.
+ [0d5275c6] Fix process group setting in main shell. (Mitigates:
  #908033)
  * [5379b4c6] Retroactively mention #908000 in previous changelog entry.
  * [a04e81b6] Update README.Debian wrt. -dbg vs -dbgsym packages and
apt-get vs apt.

zsh (5.6-1) unstable; urgency=high

  * [b30b8941,6e6500bfa] New upstream release.
+ [ec2144bb] Refresh patch further-mitigate-test-suite-hangs.patch.
+ [1c4c7b6a] Fixes two security issues in shebang line
  parsing. (CVE-2018-0502, CVE-2018-13259; Closes: #908000)
+ Upload to unstable with high urgency.
  * [c605c9a1] Install symlinks to NEWS.gz (Policy 4.2.0) + README.Debian
to /usr/share/doc/zsh/. Drop no more used symlink from changelog.gz to
../zsh-common/changelog.gz from debian/zsh.links. Update
debian/TODO.md with some minor, but non-trivial documentation issues
recently noticed.

zsh 

[Ubuntu-ha] [Bug 1819074] Re: Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-networkd

2019-04-12 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: systemd (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1819074

Title:
  Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-
  networkd

Status in keepalived package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  Systemd-networkd clobbers VIPs placed by other daemons on any
  reconfiguration triggering systemd-networkd restart (netplan apply for
  example).  Keepalived < version 2.0.x will not restore a VIP lost in
  this fashion, breaking high availability on Ubuntu 18.04 LTS.  A
  backport for keepalived >= 2.0.x should fix the issue.
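
  As a point of reference, systemd releases newer than the v237 shipped in
  18.04 gained a networkd option aimed at this class of problem; the option
  and section names below are taken from systemd.network(5) (v243+), and this
  sketch is illustrative only — it is not part of the keepalived backport
  discussed in this bug, and whether it fully covers the VIP case would need
  verification:

  # /etc/systemd/network/eth0.network (sketch; interface name assumed)
  [Match]
  Name=eth0

  [Network]
  # Ask networkd to keep existing static addresses/routes instead of
  # flushing them when it (re)configures the link
  KeepConfiguration=static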

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1819074/+subscriptions



[Ubuntu-ha] [Bug 1819074] Re: Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-networkd

2019-04-12 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: netplan.io (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1819074

Title:
  Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-
  networkd

Status in keepalived package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  Systemd-networkd clobbers VIPs placed by other daemons on any
  reconfiguration triggering systemd-networkd restart (netplan apply for
  example).  Keepalived < version 2.0.x will not restore a VIP lost in
  this fashion, breaking high availability on Ubuntu 18.04 LTS.  A
  backport for keepalived >= 2.0.x should fix the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1819074/+subscriptions



[Ubuntu-ha] [Bug 1819074] Re: Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-networkd

2019-04-12 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: keepalived (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1819074

Title:
  Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-
  networkd

Status in keepalived package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  Systemd-networkd clobbers VIPs placed by other daemons on any
  reconfiguration triggering systemd-networkd restart (netplan apply for
  example).  Keepalived < version 2.0.x will not restore a VIP lost in
  this fashion, breaking high availability on Ubuntu 18.04 LTS.  A
  backport for keepalived >= 2.0.x should fix the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1819074/+subscriptions



[Ubuntu-ha] [Bug 1819046] Re: Systemd unit file reads settings from wrong path

2019-03-21 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 1.1.14-2ubuntu1.5

---
pacemaker (1.1.14-2ubuntu1.5) xenial; urgency=medium

  * Change systemd unit to source /etc/default files by default
(LP: #1819046)

 -- hal...@canonical.com (Heitor R. Alves de Siqueira)  Thu, 07 Mar 2019
15:13:34 -0300

** Changed in: pacemaker (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1819046

Title:
  Systemd unit file reads settings from wrong path

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  Systemd Unit file doesn't read any settings by default

  [Description]
  The unit file shipped with the Xenial pacemaker package tries to read 
environment settings from /etc/sysconfig/ instead of /etc/default/. The result 
is that settings defined in /etc/default/pacemaker are not effective.
  Since the /etc/default/pacemaker file is created with default values when the 
pacemaker package is installed, we should source that in the systemd unit file.
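
  The shape of the fix, per the description above and the "-" prefix
  semantics documented in systemd.exec(5) (a sketch of the kind of change
  involved, not the exact packaged diff):

  [Service]
  # Before (as shipped in Xenial): path that doesn't exist on Debian/Ubuntu
  #EnvironmentFile=-/etc/sysconfig/pacemaker
  # After: source the file the package actually creates; the "-" prefix
  # means a missing file is silently ignored rather than failing the unit
  EnvironmentFile=-/etc/default/pacemaker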

  [Test Case]
  1) Deploy a Xenial container:
  $ lxc launch ubuntu:xenial pacemaker
  2) Update container and install pacemaker:
  root@pacemaker:~# apt update && apt install pacemaker -y
  3) Change default pacemaker log location:
  root@pacemaker:~# echo "PCMK_logfile=/tmp/pacemaker.log" >> 
/etc/default/pacemaker
  4) Restart pacemaker service and verify that log file exists:
  root@pacemaker:~# systemctl restart pacemaker.service
  root@pacemaker:~# ls -l /tmp/pacemaker.log
  ls: cannot access '/tmp/pacemaker.log': No such file or directory

  After fixing the systemd unit, changes to /etc/default/pacemaker get picked 
up correctly:
  root@pacemaker:~# ls -l /tmp/pacemaker.log
  -rw-rw 1 hacluster haclient 27335 Mar  7 20:46 /tmp/pacemaker.log

  
  [Regression Potential]
  The regression potential for this should be very low, since the configuration 
file is already being created by default and other systemd unit files are using 
the /etc/default config. In case the file doesn't exist or the user removed it, 
the "-" prefix will gracefully ignore the missing file according to the 
systemd.exec manual [0].
  Nonetheless, the new package will be tested with autopkgtests and the fix 
will be validated in a reproduction environment.

  [0] https://www.freedesktop.org/software/systemd/man/systemd.exec.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1819046/+subscriptions



[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-07 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 1.8.8-1ubuntu0.4

---
haproxy (1.8.8-1ubuntu0.4) bionic; urgency=medium

  * d/p/stksess-align.patch: Make sure stksess is properly aligned.
(LP: #1804069)
  * d/t/control, d/t/proxy-localhost: simple DEP8 test to actually
generate traffic through haproxy.

 -- Andreas Hasenack   Thu, 24 Jan 2019 10:20:49
-0200

** Changed in: haproxy (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Released
Status in haproxy source package in Cosmic:
  Fix Released

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:

  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html          100%[===================>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]
  Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.

  [Other Info]
  After writing the testing instructions for this bug, I decided they could be 
easily converted to a DEP8 test, which I did and included in this SRU. This new 
test, very simple but effective, shows that arm64 is working, and that the 
other architectures didn't break.

  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a
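
  The crash class here is unaligned access: arm64 faults on misaligned
  multi-byte loads that x86 silently tolerates. The following self-contained
  C sketch illustrates the round-up-to-alignment technique such a fix
  typically uses; it is an analogy only, not the actual haproxy patch, and
  the struct and field names are invented (the real code involves struct
  stksess, whose layout is not reproduced here):

  ```c
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Invented stand-in for a pool object containing an 8-byte field. */
  struct entry {
      uint64_t expire;   /* 8-byte load: arm64 requires proper alignment */
  };

  int main(void) {
      /* Over-allocate so we can realign inside the buffer. */
      unsigned char *raw = malloc(sizeof(struct entry) + _Alignof(struct entry));
      if (!raw)
          return 1;

      /* Simulate a misaligned placement like the unpatched code could hit. */
      uintptr_t p = (uintptr_t)raw + 1;

      /* Round up to the next multiple of the required alignment -- the
       * essence of an "align the object properly" style fix. */
      uintptr_t a = (p + _Alignof(struct entry) - 1)
                    & ~(uintptr_t)(_Alignof(struct entry) - 1);

      assert(a % _Alignof(struct entry) == 0);
      printf("aligned ok\n");
      free(raw);
      return 0;
  }
  ```

  On x86 the misaligned pointer would usually work anyway, which is why the
  bug only surfaced on arm64 deployments.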

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-07 Thread Launchpad Bug Tracker
This bug was fixed in the package haproxy - 1.8.13-2ubuntu0.2

---
haproxy (1.8.13-2ubuntu0.2) cosmic; urgency=medium

  * d/p/stksess-align.patch: Make sure stksess is properly aligned.
(LP: #1804069)
  * d/t/control, d/t/proxy-localhost: simple DEP8 test to actually
generate traffic through haproxy.

 -- Andreas Hasenack   Wed, 23 Jan 2019 17:24:30
-0200

** Changed in: haproxy (Ubuntu Cosmic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Cosmic:
  Fix Released

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:

  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html          100%[===================>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]
  Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.

  [Other Info]
  After writing the testing instructions for this bug, I decided they could be 
easily converted to a DEP8 test, which I did and included in this SRU. This new 
test, very simple but effective, shows that arm64 is working, and that the 
other architectures didn't break.

  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu 

[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-01-29 Thread Launchpad Bug Tracker
This bug was fixed in the package git - 1:2.20.1-2ubuntu1

---
git (1:2.20.1-2ubuntu1) disco; urgency=medium

  * Merge with Debian; remaining change:
- Build diff-highlight in the contrib dir (closes: #868871, LP: #1713690)
  * Dropped change:
- Build against pcre3 (pcre2 is now in main) (LP: #1792544)

git (1:2.20.1-2) unstable; urgency=low

  * package git-gui: actually Suggests: meld for mergetool support;
describe what meld is used for in package description (thx Jens
Reyer; closes: #707790).
  * package gitweb: Depends: libhttp-date-perl | libtime-parsedate-perl
instead of ... | libtime-modules-perl (thx gregor herrmann; closes:
#879165).
  * debian/control: use https in Vcs-Browser URL.
  * debian/rules: build and test quietly if DEB_BUILD_OPTIONS=terse.
  * debian/control: Standards-Version: 4.3.0.1.

git (1:2.20.1-1) unstable; urgency=medium

  * new upstream point release (see RelNotes/2.20.1.txt).
  * package git-gui: Suggests: meld for mergetool support (thx Jens
Reyer; closes: #707790).

git (1:2.20.0-1) unstable; urgency=medium

  * new upstream release (see RelNotes/2.20.0.txt).
  * package git: Recommends: ca-certificates for https support (thx HJ;
closes: #915644).

git (1:2.20.0~rc2-1) unstable; urgency=low

  * new upstream release candidate.
* rebase: specify 'rebase -i' in reflog for interactive rebase
  (closes: #914695).

git (1:2.20.0~rc1-1) unstable; urgency=low

  * new upstream release candidate (see RelNotes/2.20.0.txt).
  * debian/rules: target clean: don't remove t/t4256/1/mailinfo.c.orig.

git (1:2.19.2-1) unstable; urgency=high

  * new upstream point release (see RelNotes/2.19.2.txt).
* run-command: do not fall back to cwd when command is not in $PATH.

 -- Jeremy Bicha   Wed, 23 Jan 2019 11:30:48 -0500

** Changed in: git (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Triaged
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Released
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis of what
  needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just poorly 
named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions



[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/haproxy/+git/haproxy/+merge/362210

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:

  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html          100%[===================>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]

   * discussion of how regressions are most likely to manifest as a
  result of this change.

   * It is assumed that any SRU candidate patch is well-tested before
     upload and has a low overall risk of regression, but it's important
     to make the effort to think about what ''could'' happen in the
     event of a regression.

   * This both shows the SRU team that the risks have been considered,
     and provides guidance to testers in regression-testing the SRU.

  [Other Info]

   * Anything else you think is useful to include
   * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
   * and address these questions in advance

  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/haproxy/+git/haproxy/+merge/362209

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  [Impact]

   * An explanation of the effects of the bug on users and

   * justification for backporting the fix to the stable release.

   * In addition, it is helpful, but not required, to include an
     explanation of how the upload fixes this bug.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * The haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 (6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 (6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 (6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time it will work, and the haproxy logs will stay silent:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html     100%[===================>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]

   * discussion of how regressions are most likely to manifest as a
     result of this change.

   * It is assumed that any SRU candidate patch is well-tested before
     upload and has a low overall risk of regression, but it's important
     to make the effort to think about what ''could'' happen in the
     event of a regression.

   * This both shows the SRU team that the risks have been considered,
     and provides guidance to testers in regression-testing the SRU.

  [Other Info]

   * Anything else you think is useful to include
   * Anticipate questions from users, SRU, +1 maintenance, security teams
     and the Technical Board
   * and address these questions in advance

  [Original Description]

  This fault was reported via the haproxy mailing list:
  https://www.mail-archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic 

[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-01-23 Thread Launchpad Bug Tracker
This bug was fixed in the package wget - 1.20.1-1ubuntu3

---
wget (1.20.1-1ubuntu3) disco; urgency=medium

  * Revert switching to pcre3 since pcre2 is in main (LP: #1792544)

 -- Jeremy Bicha   Wed, 23 Jan 2019 11:56:34 -0500

** Changed in: wget (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Triaged
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Committed
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just
  poorly named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810583] Re: Daily cron restarts network on unattended updates but keepalived .service is not restarted as a dependency

2019-01-23 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: networkd-dispatcher (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1810583

Title:
  Daily cron restarts network on unattended updates but keepalived
  .service is not restarted as a dependency

Status in keepalived package in Ubuntu:
  Confirmed
Status in networkd-dispatcher package in Ubuntu:
  Confirmed

Bug description:
  Description: Ubuntu 18.04.1 LTS
  Release: 18.04
  ii  keepalived  1:1.3.9-1ubuntu0.18.04.1  amd64  Failover and monitoring daemon for LVS clusters

  (From unanswered
  https://answers.launchpad.net/ubuntu/+source/keepalived/+question/676267)

  Two weeks ago we lost our keepalived VRRP address on one of our
  systems; closer inspection reveals that this was due to the daily
  cronjob. Apparently something triggered a udev reload (the same seemed
  to happen again last week), which obviously triggers a network
  restart.

  Are we right in assuming the patch below is the correct way to handle
  this (and shouldn't it be part of the default install of keepalived's
  systemd service)?

  /etc/systemd/system/multi-user.target.wants/keepalived.service:
  --- keepalived.service.orig 2018-11-20 09:17:06.973924706 +0100
  +++ keepalived.service 2018-11-20 09:05:55.984773226 +0100
  @@ -4,6 +4,7 @@
   Wants=network-online.target
   # Only start if there is a configuration file
   ConditionFileNotEmpty=/etc/keepalived/keepalived.conf
  +PartOf=systemd-networkd.service
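  If the PartOf= approach above is the right one, a less invasive
  variant (a sketch, not a file shipped by any package) is a systemd
  drop-in, which survives package upgrades instead of editing the
  shipped unit in place:

```
# /etc/systemd/system/keepalived.service.d/override.conf  (drop-in sketch)
[Unit]
PartOf=systemd-networkd.service
```

  After creating the file, run 'sudo systemctl daemon-reload';
  'systemctl show keepalived -p PartOf' should then include
  systemd-networkd.service.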

  Accompanying syslog:
  Nov 20 06:34:33 ourmachine systemd[1]: Starting Daily apt upgrade and clean activities...
  Nov 20 06:34:42 ourmachine systemd[1]: Reloading.
  Nov 20 06:34:44 ourmachine systemd[1]: message repeated 2 times: [ Reloading.]
  Nov 20 06:34:44 ourmachine systemd[1]: Starting Daily apt download activities...
  Nov 20 06:34:44 ourmachine systemd[1]: Stopping udev Kernel Device Manager...
  Nov 20 06:34:44 ourmachine systemd[1]: Stopped udev Kernel Device Manager.
  Nov 20 06:34:44 ourmachine systemd[1]: Starting udev Kernel Device Manager...
  Nov 20 06:34:44 ourmachine systemd[1]: Started udev Kernel Device Manager.
  Nov 20 06:34:45 ourmachine systemd[1]: Reloading.
  Nov 20 06:34:45 ourmachine systemd[1]: Reloading.
  Nov 20 06:35:13 ourmachine systemd[1]: Reexecuting.
  Nov 20 06:35:13 ourmachine systemd[1]: Stopped Wait for Network to be Configured.
  Nov 20 06:35:13 ourmachine systemd[1]: Stopping Wait for Network to be Configured...
  Nov 20 06:35:13 ourmachine systemd[1]: Stopping Network Service..

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1810583/+subscriptions



[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-01-23 Thread Launchpad Bug Tracker
This bug was fixed in the package clamav - 0.100.2+dfsg-2ubuntu1

---
clamav (0.100.2+dfsg-2ubuntu1) disco; urgency=medium

  * Sync with Debian. Remaining change:
- clamav-daemon may fail to start due to options removed in new version
  and manually edited configuration file. (LP: #1783632)
  + debian/patches/Deprecate-unused-options-instead-of-removing-it.patch:
add patch from Debian stretch to simply warn about removed options.
  * Dropped change:
- Build-Depend on pcre3-dev (pcre2 is now in Ubuntu main) (LP: #1792544)

clamav (0.100.2+dfsg-2) unstable; urgency=medium

  * Increase clamd socket command read timeout to 30 seconds (Closes:
#915098)

clamav (0.100.2+dfsg-1) unstable; urgency=medium

  * Import new upstream
- Bump symbol version due to new version.
- CVE-2018-15378 (Closes: #910430).
  * add NEWS.md and README.md from upstream
  * Fix infinite loop in dpkg-reconfigure, Patch by Santiago Ruano Rincón
(Closes: #905044).

 -- Jeremy Bicha   Wed, 23 Jan 2019 12:08:48 -0500

** Changed in: clamav (Ubuntu)
   Status: Fix Committed => Fix Released

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2018-15378

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  New
Status in apache2 package in Ubuntu:
  New
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Committed
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Incomplete
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Committed
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  aide
  apache2
  apr-util
  clamav
  exim4
  freeradius
  git
  glib2.0
  grep
  haproxy
  libpam-mount
  libselinux
  nginx
  nmap
  php7.2
  postfix
  python-pyscss
  quagga
  rasqal
  slang2
  sssd
  wget
  zsh

  --

  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just
  poorly named (according to jbicha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions



[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-01-23 Thread Launchpad Bug Tracker
This bug was fixed in the package libpam-mount - 2.16-9ubuntu2

---
libpam-mount (2.16-9ubuntu2) disco; urgency=medium

  * Revert remaining Ubuntu changes now that pcre2 is in main (LP:
#1792544)

 -- Jeremy Bicha   Wed, 23 Jan 2019 11:19:59 -0500

** Changed in: libpam-mount (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  New
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Committed
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Committed
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Incomplete
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Committed
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  aide
  apache2
  apr-util
  clamav
  exim4
  freeradius
  git
  glib2.0
  grep
  haproxy
  libpam-mount
  libselinux
  nginx
  nmap
  php7.2
  postfix
  python-pyscss
  quagga
  rasqal
  slang2
  sssd
  wget
  zsh

  --

  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just
  poorly named (according to jbicha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions



[Ubuntu-ha] [Bug 1789045] Re: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches

2018-11-25 Thread Launchpad Bug Tracker
[Expired for keepalived (Ubuntu) because there has been no activity for
60 days.]

** Changed in: keepalived (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1789045

Title:
  keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches

Status in keepalived package in Ubuntu:
  Expired

Bug description:
  It looks like keepalived 1:1.2.24-1ubuntu0.16.04.1, which is available
  in the Xenial repositories, breaks CI functional tests in OpenStack
  Neutron on the stable/queens, stable/pike and stable/ocata branches.
  See bug https://bugs.launchpad.net/neutron/+bug/1788185 for details.
  On master and the stable/rocky branch we are using the cloud-archive
  apt repo, which already has keepalived in the newer version
  1:1.3.9-1ubuntu0.18.04.1~cloud0, and this works fine.

  Some details about the issue:
  It looks to me like the broken keepalived version clears all IP
  addresses configured on the qr- interface in the router's namespace,
  which causes the test failures; it may (probably) also cause real
  issues in production when Neutron with HA routers is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1789045/+subscriptions



[Ubuntu-ha] [Bug 1800159] [NEW] keepalived ip_vs

2018-10-26 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

1)
Description: Ubuntu 16.04.5 LTS
Release: 16.04
2) keepalived:
  Installed: 1:1.2.24-1ubuntu0.16.04.1
  Candidate: 1:1.2.24-1ubuntu0.16.04.1
  Version table:
 *** 1:1.2.24-1ubuntu0.16.04.1 500
500 http://ftp.hosteurope.de/mirror/archive.ubuntu.com xenial-updates/main amd64 Packages
100 /var/lib/dpkg/status

3) The ip_vs kernel module is not loaded automatically:
systemctl start keepalived.service
Keepalived_healthcheckers[1680]: IPVS: Protocol not available
Keepalived_healthcheckers[1680]: message repeated 8 times: [ IPVS: Protocol not available]
...

4) Loading the module manually works:
systemctl stop keepalived.service
modprobe ip_vs
kernel: [  445.363609] IPVS: ipvs loaded.
systemctl start keepalived.service
Keepalived_healthcheckers[5533]: Initializing ipvs
kernel: [  600.828683] IPVS: [wlc] scheduler registered.
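A common workaround (a sketch; it only sidesteps the symptom, since arguably the keepalived packaging should ensure the module is loaded) is to have systemd load ip_vs at every boot:

```
# /etc/modules-load.d/ipvs.conf -- modules listed here are loaded at
# boot by systemd-modules-load.service
ip_vs
```

For the already-running system, a one-off 'modprobe ip_vs' as shown above is still needed.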

** Affects: keepalived (Ubuntu)
 Importance: Undecided
 Status: New

-- 
keepalived ip_vs
https://bugs.launchpad.net/bugs/1800159
You received this bug notification because you are a member of Ubuntu High 
Availability Team, which is subscribed to keepalived in Ubuntu.



[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 in favor of pcre2

2018-09-18 Thread Launchpad Bug Tracker
This bug was fixed in the package apr-util - 1.6.1-2ubuntu1

---
apr-util (1.6.1-2ubuntu1) cosmic; urgency=medium

  * Drop build dependency on libpcre3-dev. Closes: #909077. LP:
#1792544.

 -- Matthias Klose   Tue, 18 Sep 2018 13:05:27 +0200

** Changed in: apr-util (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 in favor of pcre2

Status in aide package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  New
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Triaged
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Triaged
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Triaged
Status in libpam-mount package in Ubuntu:
  Incomplete
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php7.2 package in Ubuntu:
  Triaged
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Incomplete
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in wget package in Ubuntu:
  Incomplete
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  aide
  apache2
  apr-util
  clamav
  exim4
  freeradius
  git
  glib2.0
  grep
  haproxy
  libpam-mount
  libselinux
  nginx
  nmap
  php7.2
  postfix
  python-pyscss
  quagga
  rasqal
  slang2
  sssd
  wget
  zsh

  --

  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just
  poorly named (according to jbicha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions



[Ubuntu-ha] [Bug 1792298] Re: keepalived: MISC healthchecker's exit status is erroneously treated as a permanent error

2018-09-13 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: keepalived (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1792298

Title:
  keepalived: MISC healthchecker's exit status is erroneously treated as
  a permanent error

Status in keepalived package in Ubuntu:
  Confirmed

Bug description:
  1) The release of Ubuntu we are using
  $ lsb_release -rd
  Description:Ubuntu 16.04.5 LTS
  Release:16.04

  2) The version of the package we are using
  $ apt-cache policy keepalived
  keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
  ...

  3) What we expected to happen
  MISC healthcheckers would be treated normally.

  4) What happened instead
  We are trying to use Ubuntu 16.04's keepalived with our own MISC
  healthchecker, which is implemented to exit with exit code 3, and we
  are getting the following log messages endlessly.

  --- Note: some IP fields are masked ---
  Sep 12 06:55:09 devsvr Keepalived[16705]: Healthcheck child process(34232) died: Respawning
  Sep 12 06:55:09 devsvr Keepalived[16705]: Starting Healthcheck child process, pid=34239
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Initializing ipvs
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Registering Kernel netlink reflector
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Registering Kernel netlink command channel
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Opening file '/etc/keepalived/keepalived.conf'.
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Using LinkWatch kernel netlink reflector...
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating healthchecker for service [XX.XX.XX.18]:80
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating healthchecker for service [XX.XX.XX.19]:80
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating healthchecker for service [XX.XX.XX.18]:443
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating healthchecker for service [XX.XX.XX.19]:443
  ...
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating healthchecker for service [XX.XX.XX.52]:443
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating healthchecker for service [XX.XX.XX.53]:443
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: pid 34257 exited with permanent error CONFIG. Terminating
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service [XX.XX.XX.24]:25 from VS [YY.YY.YY.YY]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service [XX.XX.XX.25]:25 from VS [YY.YY.YY.YY]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service [XX.XX.XX.21]:56667 from VS [ZZ.ZZ.ZZ.ZZ]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service [XX.XX.XX.52]:443 from VS [WW.WW.WW.WW]:0
  Sep 12 06:55:10 devsvr Keepalived[16705]: Healthcheck child process(34239) died: Respawning
  Sep 12 06:55:10 devsvr Keepalived[16705]: Starting Healthcheck child process, pid=34260
  ...
  ---

  It looks like our MISC healthchecker's exit code 3, which should be a
  valid value according to the following description, is treated as a
  permanent error since it is equal to KEEPALIVED_EXIT_CONFIG defined in
  keepalived's lib/scheduler.h:

  ---
 # MISC healthchecker, run a program
 MISC_CHECK
 {
 # External script or program
 ...
 #   exit status 2-255: svc check success, weight
 # changed to 2 less than exit status.
 #   (for example: exit status of 255 would set
 # weight to 253)
 misc_dynamic
 }
  ---
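  The exit-status convention quoted above can be illustrated with a
  minimal misc_dynamic checker (an entirely hypothetical script; the
  health test inside it is a stand-in, not a real backend check):

```shell
#!/bin/sh
# Hypothetical MISC_CHECK script, written as a function so the status is
# easy to inspect; in a real script the function body would be the whole
# file. By the documented convention, exit status 3 means "check
# succeeded, weight = 3 - 2 = 1" -- but the affected 16.04 package
# treats 3 (== KEEPALIVED_EXIT_CONFIG) as a permanent configuration
# error and terminates the healthcheck child instead.
check_backend() {
    # stand-in health test: always succeeds
    if command -v sh >/dev/null 2>&1; then
        return 3    # svc check success, dynamic weight 1
    else
        return 1    # svc check failure
    fi
}

check_backend || rc=$?
echo "checker exit status: ${rc:-0}"
```

  Any checker that legitimately exits with status 3 under misc_dynamic
  hits this bug; renumbering the script's success codes to avoid 3 is
  only a stopgap, not a fix.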

  The problem, we think, started with this patch (we did not see the
  problem in Ubuntu 14.04):
    Stop respawning children repeatedly after permanent error
    - https://github.com/acassen/keepalived/commit/4ae9314af448eb8ea4f3d8ef39bcc469779b0fec

  The problem will be fixed by this patch (not included in Ubuntu 16.04):
    Make report_child_status() check for vrrp and checker child processes
    - https://github.com/acassen/keepalived/commit/ca955a7c1a6af324428ff04e24be68a180be127f

  Please consider backporting it to Ubuntu 16.04's keepalived.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1792298/+subscriptions



[Ubuntu-ha] [Bug 1779543] Re: package haproxy (not installed) failed to install/upgrade: installed haproxy package post-installation script subprocess returned error exit status 1

2018-08-31 Thread Launchpad Bug Tracker
[Expired for haproxy (Ubuntu) because there has been no activity for 60
days.]

** Changed in: haproxy (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1779543

Title:
  package haproxy (not installed) failed to install/upgrade: installed
  haproxy package post-installation script subprocess returned error
  exit status 1

Status in haproxy package in Ubuntu:
  Expired

Bug description:
  I have removed haproxy from my machine, but I am still getting this
  issue.

  ProblemType: Package
  DistroRelease: Ubuntu 18.04
  Package: haproxy (not installed)
  ProcVersionSignature: Ubuntu 4.15.0-24.26-generic 4.15.18
  Uname: Linux 4.15.0-24-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.9-0ubuntu7.2
  Architecture: amd64
  Date: Tue Jun 26 19:31:19 2018
  ErrorMessage: installed haproxy package post-installation script subprocess returned error exit status 1
  InstallationDate: Installed on 2017-06-24 (371 days ago)
  InstallationMedia: Ubuntu 16.04.2 LTS "Xenial Xerus" - Release amd64 (20170215.2)
  Python3Details: /usr/bin/python3.6, Python 3.6.5, python3-minimal, 3.6.5-3ubuntu1
  PythonDetails: /usr/bin/python2.7, Python 2.7.15rc1, python-minimal, 2.7.15~rc1-1
  RelatedPackageVersions:
   dpkg 1.19.0.5ubuntu2
   apt  1.6.2
  SourcePackage: haproxy
  Title: package haproxy (not installed) failed to install/upgrade: installed haproxy package post-installation script subprocess returned error exit status 1
  UpgradeStatus: Upgraded to bionic on 2018-04-15 (77 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1779543/+subscriptions



[Ubuntu-ha] [Bug 1789045] Re: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches

2018-08-28 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: keepalived (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1789045

Title:
  keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches

Status in keepalived package in Ubuntu:
  Confirmed

Bug description:
  It looks that keepalived 1:1.2.24-1ubuntu0.16.04.1 which is available in 
Xenial repositories breaks CI functional tests in OpenStack Neutron in 
stable/queens, stable/pike and stable/ocata branches.
  See bug https://bugs.launchpad.net/neutron/+bug/1788185 for details.
  In master and stable/rocky branch we are using cloud-archive apt repo which 
already have keepalived in newer version 1:1.3.9-1ubuntu0.18.04.1~cloud0 and 
this works fine.

  Some details about issue:
  It looks for me that keepalived in broken version clears all IP addresses 
configured on qr- interface in router's namespace and that cause failures of 
tests but it also may (probably) cause to some real issue in production when 
Neutron with HA routers is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1789045/+subscriptions



[Ubuntu-ha] [Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time

2018-08-14 Thread Launchpad Bug Tracker
This bug was fixed in the package keepalived - 1:1.2.24-1ubuntu0.16.04.1

---
keepalived (1:1.2.24-1ubuntu0.16.04.1) xenial; urgency=medium

  * New upstream version for Ubuntu 16.04 (LP: #1783583).
  * d/p/fix_message_truncation_with_large_pagesizes.patch: Rebased.
  * d/p/fix-removing-left-over-addresses-if-keepalived-abort.patch:
Cherry-picked from upstream to ensure left-over VIPs and eVIPs are
properly removed on restart if keepalived terminates abnormally. This
fix is from the upstream 1.4.0 release (LP: #1744062).

keepalived (1:1.2.24-1) unstable; urgency=medium

  * [d378a6f] New upstream version 1.2.24

keepalived (1:1.2.23-1) unstable; urgency=medium

  * [94beb84] Imported Upstream version 1.2.23
(Closes: #821941)
- fix some segfaults (Closes: #830955)

keepalived (1:1.2.20-1) unstable; urgency=medium

  * [2a22d69] Imported Upstream version 1.2.20
enable support for:
 - nfnetlink
 - ipset
 - iptc
 - snmp rfcv2 and rfcv3

 -- Corey Bryant   Wed, 25 Jul 2018 10:52:54 -0400

** Changed in: keepalived (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1744062

Title:
  [SRU] L3 HA: multiple agents are active at the same time

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in neutron:
  New
Status in keepalived package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Invalid
Status in keepalived source package in Xenial:
  Fix Released
Status in neutron source package in Xenial:
  Invalid
Status in keepalived source package in Bionic:
  Fix Released
Status in neutron source package in Bionic:
  Invalid

Bug description:
  [Impact]

  This is the same issue reported in
  https://bugs.launchpad.net/neutron/+bug/1731595; however, that bug is
  marked as 'Fix Released' while the issue is still occurring, and I can't
  change it back to 'New', so it seems best to open a new one.

  It seems as if this bug surfaces due to load issues. While the fix
  provided by Venkata in https://bugs.launchpad.net/neutron/+bug/1731595
  (https://review.openstack.org/#/c/522641/) should help clean things up
  at the time of l3 agent restart, issues seem to come back later down
  the line in some circumstances. xavpaice mentioned he saw multiple
  routers active at the same time when they had 464 routers configured
  on 3 neutron gateway hosts using L3HA, and each router was scheduled
  to all 3 hosts. However, jhebden mentions that things seem stable at
  the 400 L3HA router mark, and it's worth noting this is the same
  deployment that xavpaice was referring to.

  keepalived has a patch upstream in 1.4.0 that provides a fix for
  removing left-over addresses if keepalived aborts. That patch will be
  cherry-picked to Ubuntu keepalived packages.

  [Test Case]
  The following SRU process will be followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed. A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  [Regression Potential]
  The regression potential is lowered because the fix is cherry-picked
  without change from upstream. To mitigate the remaining risk, the results
  of the aforementioned tests are attached to this bug.

  [Discussion]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1744062/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1783583] Re: [SRU] keepalived v1.2.24

2018-08-14 Thread Launchpad Bug Tracker
This bug was fixed in the package keepalived - 1:1.2.24-1ubuntu0.16.04.1

---
keepalived (1:1.2.24-1ubuntu0.16.04.1) xenial; urgency=medium

  * New upstream version for Ubuntu 16.04 (LP: #1783583).
  * d/p/fix_message_truncation_with_large_pagesizes.patch: Rebased.
  * d/p/fix-removing-left-over-addresses-if-keepalived-abort.patch:
Cherry-picked from upstream to ensure left-over VIPs and eVIPs are
properly removed on restart if keepalived terminates abnormally. This
fix is from the upstream 1.4.0 release (LP: #1744062).

keepalived (1:1.2.24-1) unstable; urgency=medium

  * [d378a6f] New upstream version 1.2.24

keepalived (1:1.2.23-1) unstable; urgency=medium

  * [94beb84] Imported Upstream version 1.2.23
(Closes: #821941)
- fix some segfaults (Closes: #830955)

keepalived (1:1.2.20-1) unstable; urgency=medium

  * [2a22d69] Imported Upstream version 1.2.20
enable support for:
 - nfnetlink
 - ipset
 - iptc
 - snmp rfcv2 and rfcv3

 -- Corey Bryant   Wed, 25 Jul 2018 10:52:54 -0400

** Changed in: keepalived (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1783583

Title:
  [SRU] keepalived v1.2.24

Status in keepalived package in Ubuntu:
  Invalid
Status in keepalived source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  This release contains mostly bug fixes, and we would like to make sure all
  of our supported customers have access to these improvements.

  [Test Case]
  The following SRU process was followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed.  A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  This SRU will also get/require additional testing alongside
  https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1744062.

  [Regression Potential]
  In order to mitigate the regression potential, the results of the
  aforementioned tests are attached to this bug.

  [Discussion]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1783583/+subscriptions



[Ubuntu-ha] [Bug 1774765] Re: package haproxy 1.8.8-1ubuntu0.1 failed to install/upgrade: installed haproxy package post-installation script subprocess returned error exit status 1

2018-08-03 Thread Launchpad Bug Tracker
[Expired for haproxy (Ubuntu) because there has been no activity for 60
days.]

** Changed in: haproxy (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1774765

Title:
  package haproxy 1.8.8-1ubuntu0.1 failed to install/upgrade: installed
  haproxy package post-installation script subprocess returned error
  exit status 1

Status in haproxy package in Ubuntu:
  Expired

Bug description:
  While upgrading the packages I got this report.

  ProblemType: Package
  DistroRelease: Ubuntu 18.04
  Package: haproxy 1.8.8-1ubuntu0.1
  ProcVersionSignature: Ubuntu 4.15.0-21.22-generic 4.15.17
  Uname: Linux 4.15.0-21-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.9-0ubuntu7.2
  Architecture: amd64
  Date: Sat Jun  2 12:56:12 2018
  ErrorMessage: installed haproxy package post-installation script subprocess 
returned error exit status 1
  InstallationDate: Installed on 2017-06-24 (342 days ago)
  InstallationMedia: Ubuntu 16.04.2 LTS "Xenial Xerus" - Release amd64 
(20170215.2)
  Python3Details: /usr/bin/python3.6, Python 3.6.5, python3-minimal, 3.6.5-3
  PythonDetails: /usr/bin/python2.7, Python 2.7.15rc1, python-minimal, 
2.7.15~rc1-1
  RelatedPackageVersions:
   dpkg 1.19.0.5ubuntu2
   apt  1.6.1
  SourcePackage: haproxy
  Title: package haproxy 1.8.8-1ubuntu0.1 failed to install/upgrade: installed 
haproxy package post-installation script subprocess returned error exit status 1
  UpgradeStatus: Upgraded to bionic on 2018-04-15 (47 days ago)
  modified.conffile..etc.default.haproxy: [modified]
  modified.conffile..etc.haproxy.haproxy.cfg: [modified]
  mtime.conffile..etc.default.haproxy: 2018-03-03T20:50:28.580803
  mtime.conffile..etc.haproxy.haproxy.cfg: 2018-03-03T23:41:14.936352

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1774765/+subscriptions



[Ubuntu-ha] [Bug 1740927] Re: FTBFS: unknown type name ‘errcode_t’

2018-07-30 Thread Launchpad Bug Tracker
** Merge proposal linked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/ocfs2-tools/+git/ocfs2-tools/+merge/351821

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1740927

Title:
  FTBFS: unknown type name ‘errcode_t’

Status in ocfs2-tools package in Ubuntu:
  Fix Released
Status in ocfs2-tools source package in Artful:
  New

Bug description:
  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
   from ../include/ocfs2/ocfs2.h:41,
   from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
   errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
   ^
   mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed

  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

  Debian sid is unaffected at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1740927/+subscriptions



[Ubuntu-ha] [Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time

2018-07-30 Thread Launchpad Bug Tracker
This bug was fixed in the package keepalived - 1:1.3.9-1ubuntu0.18.04.1

---
keepalived (1:1.3.9-1ubuntu0.18.04.1) bionic; urgency=medium

  * d/p/fix-removing-left-over-addresses-if-keepalived-abort.patch:
Cherry-picked from upstream to ensure left-over VIPs and eVIPs are
properly removed on restart if keepalived terminates abnormally. This
fix is from the upstream 1.4.0 release (LP: #1744062).

 -- Corey Bryant   Tue, 03 Jul 2018 10:40:59 -0400

** Changed in: keepalived (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1744062

Title:
  [SRU] L3 HA: multiple agents are active at the same time

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in neutron:
  New
Status in keepalived package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Invalid
Status in keepalived source package in Xenial:
  Triaged
Status in neutron source package in Xenial:
  Invalid
Status in keepalived source package in Bionic:
  Fix Released
Status in neutron source package in Bionic:
  Invalid

Bug description:
  [Impact]

  This is the same issue reported in
  https://bugs.launchpad.net/neutron/+bug/1731595; however, that bug is
  marked as 'Fix Released' while the issue is still occurring, and I can't
  change it back to 'New', so it seems best to open a new one.

  It seems as if this bug surfaces due to load issues. While the fix
  provided by Venkata in https://bugs.launchpad.net/neutron/+bug/1731595
  (https://review.openstack.org/#/c/522641/) should help clean things up
  at the time of l3 agent restart, issues seem to come back later down
  the line in some circumstances. xavpaice mentioned he saw multiple
  routers active at the same time when they had 464 routers configured
  on 3 neutron gateway hosts using L3HA, and each router was scheduled
  to all 3 hosts. However, jhebden mentions that things seem stable at
  the 400 L3HA router mark, and it's worth noting this is the same
  deployment that xavpaice was referring to.

  keepalived has a patch upstream in 1.4.0 that provides a fix for
  removing left-over addresses if keepalived aborts. That patch will be
  cherry-picked to Ubuntu keepalived packages.

  [Test Case]
  The following SRU process will be followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed. A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  [Regression Potential]
  The regression potential is lowered because the fix is cherry-picked
  without change from upstream. To mitigate the remaining risk, the results
  of the aforementioned tests are attached to this bug.

  [Discussion]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1744062/+subscriptions



[Ubuntu-ha] [Bug 1771335] Re: haproxy fails at startup when using server name instead of IP

2018-07-17 Thread Launchpad Bug Tracker
[Expired for haproxy (Ubuntu) because there has been no activity for 60
days.]

** Changed in: haproxy (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1771335

Title:
  haproxy fails at startup when using server name instead of IP

Status in haproxy package in Ubuntu:
  Expired

Bug description:
  This is similar to #689734 I believe.

  When starting haproxy using a DNS name on the 'server' line haproxy
  fails to start, giving the message:

  ```
  May 05 19:09:40 hyrule systemd[1]: Starting HAProxy Load Balancer...
  May 05 19:09:40 hyrule haproxy[1146]: [ALERT] 124/190940 (1146) : parsing 
[/etc/haproxy/haproxy.cfg:157] : 'server scanmon' :
  May 05 19:09:40 hyrule haproxy[1146]: [ALERT] 124/190940 (1146) : Failed to 
initialize server(s) addr.
  May 05 19:09:40 hyrule systemd[1]: haproxy.service: Control process exited, 
code=exited status=1
  May 05 19:09:40 hyrule systemd[1]: haproxy.service: Failed with result 
'exit-code'.
  May 05 19:09:40 hyrule systemd[1]: Failed to start HAProxy Load Balancer.
  ```

  In this case the server statement was:

  `  server scanmon myservername.mydomain.org:8000`

  Changing it to use the IP address corrected the problem.

  I believe there is a missing dependency for DNS in the unit file.
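Since a missing systemd ordering dependency is suspected, one possible mitigation is a drop-in that delays haproxy until the network (and thus DNS) is up. This is a hedged sketch, not a confirmed fix: the drop-in is written to a local demo directory here, while on a real system it would live under /etc/systemd/system/haproxy.service.d/.

```shell
# Sketch of a systemd drop-in ordering haproxy after network-online.target.
# DROPIN_DIR is a local demo path; on a real host it would be
# /etc/systemd/system/haproxy.service.d (an assumption, not a verified fix).
DROPIN_DIR="demo-haproxy.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/wait-online.conf" <<'EOF'
[Unit]
# Delay startup until the network is fully up, so DNS names on
# 'server' lines can be resolved when the config is parsed.
Wants=network-online.target
After=network-online.target
EOF
echo "wrote $DROPIN_DIR/wait-online.conf"
# On a real system, follow with:
#   systemctl daemon-reload && systemctl restart haproxy
```

As an alternative, haproxy 1.8 also supports a `resolvers` section for runtime DNS resolution, which avoids the parse-time lookup entirely; that is a config-level workaround rather than a unit-file fix.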

  --Info:
  Description:  Ubuntu 18.04 LTS
  Release:  18.04

  haproxy:
Installed: 1.8.8-1
Candidate: 1.8.8-1
Version table:
   *** 1.8.8-1 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1771335/+subscriptions



[Ubuntu-ha] [Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time

2018-07-03 Thread Launchpad Bug Tracker
This bug was fixed in the package keepalived - 1:1.3.9-1ubuntu1

---
keepalived (1:1.3.9-1ubuntu1) cosmic; urgency=medium

  * d/p/fix-removing-left-over-addresses-if-keepalived-abort.patch:
Cherry-picked from upstream to ensure left-over VIPs and eVIPs are
properly removed on restart if keepalived terminates abonormally. This
fix is from the upstream 1.4.0 release (LP: #1744062).

 -- Corey Bryant   Tue, 03 Jul 2018 10:26:45 -0400

** Changed in: keepalived (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1744062

Title:
  [SRU] L3 HA: multiple agents are active at the same time

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in neutron:
  New
Status in keepalived package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in keepalived source package in Xenial:
  Triaged
Status in neutron source package in Xenial:
  New
Status in keepalived source package in Bionic:
  Triaged
Status in neutron source package in Bionic:
  New

Bug description:
  [Impact]

  This is the same issue reported in
  https://bugs.launchpad.net/neutron/+bug/1731595; however, that bug is
  marked as 'Fix Released' while the issue is still occurring, and I can't
  change it back to 'New', so it seems best to open a new one.

  It seems as if this bug surfaces due to load issues. While the fix
  provided by Venkata in https://bugs.launchpad.net/neutron/+bug/1731595
  (https://review.openstack.org/#/c/522641/) should help clean things up
  at the time of l3 agent restart, issues seem to come back later down
  the line in some circumstances. xavpaice mentioned he saw multiple
  routers active at the same time when they had 464 routers configured
  on 3 neutron gateway hosts using L3HA, and each router was scheduled
  to all 3 hosts. However, jhebden mentions that things seem stable at
  the 400 L3HA router mark, and it's worth noting this is the same
  deployment that xavpaice was referring to.

  keepalived has a patch upstream in 1.4.0 that provides a fix for
  removing left-over addresses if keepalived aborts. That patch will be
  cherry-picked to Ubuntu keepalived packages.

  [Test Case]
  The following SRU process will be followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed. A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  [Regression Potential]
  The regression potential is lowered because the fix is cherry-picked
  without change from upstream. To mitigate the remaining risk, the results
  of the aforementioned tests are attached to this bug.

  [Discussion]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1744062/+subscriptions



[Ubuntu-ha] [Bug 1747411] Re: Change of default database file format to SQL

2018-05-12 Thread Launchpad Bug Tracker
This bug was fixed in the package nss - 2:3.36.1-1ubuntu1

---
nss (2:3.36.1-1ubuntu1) cosmic; urgency=medium

  * Merge with Debian unstable. Remaining changes:
- d/libnss3.links: make freebl3 available as library (LP 1744328)
  - d/control: add dh-exec to Build-Depends
  - d/rules: make mkdir tolerate debian/tmp existing (due to dh-exec)
- d/rules: when building with -O3 on ppc64el this FTBFS, build with
  -Wno-error=maybe-uninitialized to avoid that
  * Dropped changes:
- revert switching to SQL default format (LP: 1746947) Dropping this
addresses (LP: #1747411) and effectively means we now switch to the new
  default format after we ensured all depending packages are ready.
  * Added changes:
- d/rules: extended the FTBFS to -O3 on ppc64el to only apply on ppc64el

nss (2:3.36.1-1) unstable; urgency=medium

  * New upstream release.
  * debian/control: Update Maintainer and Vcs fields, moving off alioth.

nss (2:3.36-1) unstable; urgency=medium

  * New upstream release. Closes: #894981.

 -- Christian Ehrhardt   Mon, 07 May 2018 17:08:46 +0200

** Changed in: nss (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1747411

Title:
  Change of default database file format to SQL

Status in certmonger package in Ubuntu:
  Fix Released
Status in corosync package in Ubuntu:
  New
Status in dogtag-pki package in Ubuntu:
  Fix Released
Status in freeipa package in Ubuntu:
  Fix Released
Status in libapache2-mod-nss package in Ubuntu:
  Won't Fix
Status in nss package in Ubuntu:
  Fix Released

Bug description:
  Upstream nss 3.35 changed [2] the default database file format [1] (used
  when no explicit format is specified).
  For now we reverted that change in bug 1746947 until all packages depending
  on nss are ready to work with the new format correctly.

  This bug is here to track when the revert can be dropped.
  Therefore we list all known-to-be-affected packages; once all of them are
  resolved, the revert can be dropped.

  [1]: https://fedoraproject.org/wiki/Changes/NSSDefaultFileFormatSql
  [2]: 
https://github.com/nss-dev/nss/commit/33b114e38278c4ffbb6b244a0ebc9910e5245cd3
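To see the two backends side by side, the sketch below (an illustration, assuming `certutil` from libnss3-tools is available) creates a database explicitly in each format, independent of whatever compiled-in default the package ships:

```shell
# Create NSS databases with an explicit backend prefix, so the behaviour
# does not depend on the compiled-in default discussed in this bug.
DB=$(mktemp -d)
if command -v certutil >/dev/null 2>&1; then
    certutil -N -d "dbm:$DB" --empty-password   # legacy format: cert8.db/key3.db
    ls "$DB"
    rm -f "$DB"/*
    certutil -N -d "sql:$DB" --empty-password   # SQL format: cert9.db/key4.db
    ls "$DB"
else
    echo "certutil not installed; skipping demo"
fi
```

Applications that pass a bare directory path get whichever format is the default, which is exactly why the revert above had to wait for all dependent packages.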

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/certmonger/+bug/1747411/+subscriptions



[Ubuntu-ha] [Bug 1763085] Re: Investigate updating to pacemaker 1.1.18 and corosync 2.4.3

2018-04-18 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 1.1.18-0ubuntu1

---
pacemaker (1.1.18-0ubuntu1) bionic; urgency=medium

  * New upstream release (1.1.18)
- Drop upstreamed patches and refresh others.
- LP: #1763085

 -- Nishanth Aravamudan   Wed, 11 Apr 2018 09:23:27 -0700

** Changed in: pacemaker (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1763085

Title:
  Investigate updating to pacemaker 1.1.18 and corosync 2.4.3

Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released

Bug description:
  Request provided offline.

  No FFE needed, as both updates are bugfix only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1763085/+subscriptions



[Ubuntu-ha] [Bug 1763085] Re: Investigate updating to pacemaker 1.1.18 and corosync 2.4.3

2018-04-18 Thread Launchpad Bug Tracker
This bug was fixed in the package corosync - 2.4.3-0ubuntu1

---
corosync (2.4.3-0ubuntu1) bionic; urgency=medium

  * New upstream release (2.4.3)
- Drop upstreamed patches and refresh others.
- LP: #1763085

 -- Nishanth Aravamudan   Wed, 11 Apr 2018 09:14:52 -0700

** Changed in: corosync (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1763085

Title:
  Investigate updating to pacemaker 1.1.18 and corosync 2.4.3

Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Committed

Bug description:
  Request provided offline.

  No FFE needed, as both updates are bugfix only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1763085/+subscriptions



[Ubuntu-ha] [Bug 1763493] Re: Update to 1.8.7

2018-04-12 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: haproxy (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1763493

Title:
  Update to 1.8.7

Status in haproxy package in Ubuntu:
  Confirmed

Bug description:
  Hey!

  Would it be possible to update HAProxy to 1.8.7, despite the freeze?
  This is a stable update and it fixes some important bugs. In Debian,
  we kept previous releases in experimental for Ubuntu to not use them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1763493/+subscriptions



[Ubuntu-ha] [Bug 1316970] Re: g_dbus memory leak in lrmd

2018-04-12 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker -
1.1.10+git20130802-1ubuntu2.5

---
pacemaker (1.1.10+git20130802-1ubuntu2.5) trusty; urgency=medium

  * Fixing memory leak (LP: #1316970)
- adding free function where is missing

 -- Seyeong Kim   Tue, 20 Feb 2018 19:36:53 +0900

** Changed in: pacemaker (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1316970

Title:
  g_dbus memory leak in lrmd

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Trusty:
  Fix Released

Bug description:
  [Impact]
  The lrmd daemon has a memory leak in Trusty when managing upstart
  resources.

  This affects pacemaker 1.1.10.
  It also affects glib2.0 2.40.2-0ubuntu1; a separate LP bug was opened for
  glib2.0 [1].

  Please note that the patch for pacemaker was created by myself.

  [Test Case]

  You can check for the memory leak with this script:
  https://pastebin.ubuntu.com/p/fqK6Cx3SKK/
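As a rough alternative (a hedged sketch, not the contents of the pastebin above), the leak can be observed by sampling the resident set size of lrmd over time; steady growth while upstart resources are being monitored points at the leak.

```shell
# Sample the resident set size (kB) of a process by name; prints 0 if
# the process is not running. Repeated samples that only ever grow
# suggest a leak.
sample_rss() {
    pid=$(pgrep -x "$1" 2>/dev/null | head -n1)
    if [ -n "$pid" ]; then
        awk '/^VmRSS:/ {print $2}' "/proc/$pid/status"
    else
        echo 0
    fi
}
# On an affected node, run for a while:
#   while true; do date; sample_rss lrmd; sleep 60; done
sample_rss lrmd
```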

  [Regression]
  Restarting the daemon after upgrading this package will be needed. This
  patch adds free() for dynamically allocated memory that was never freed,
  so it resolves the memory leak.

  [Others]

  This patch was written and tested by myself.

  Please review it carefully.

  [1] https://bugs.launchpad.net/ubuntu/+source/glib2.0/+bug/1750741

  [Original Description]

  I'm running Pacemaker 1.1.10+git20130802-1ubuntu1 on Ubuntu Saucy
  (13.10) and have encountered a memory leak in lrmd.

  The details of the bug are covered here in this thread
  (http://oss.clusterlabs.org/pipermail/pacemaker/2014-May/021689.html)
  but to summarise, the Pacemaker developers believe the leak is caused
  by the g_dbus API, the use of which was removed in Pacemaker 1.11.

  I've also attached the Valgrind output from the run that exposed the
  issue.

  Given that this issue affects production stability (a periodic restart
  of Pacemaker is required), will a version of 1.11 be released for
  Trusty? (I'm happy to upgrade the OS to Trusty to get it).

  If not, can you advise which version of the OS will be the first to
  take 1.11 please?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1316970/+subscriptions



[Ubuntu-ha] [Bug 1747411] Re: Change of default database file format to SQL

2018-04-05 Thread Launchpad Bug Tracker
This bug was fixed in the package certmonger - 0.79.5-3

---
certmonger (0.79.5-3) experimental; urgency=medium

  * Merge changes from upstream git to support sqlite nssdb's.
(LP: #1747411)
  * force-utf-8.diff: Dropped, upstream.
  * fix-apache-path.diff: Use proper path to apache nssdb.

 -- Timo Aaltonen   Fri, 30 Mar 2018 09:57:57 +0300

** Changed in: certmonger (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1747411

Title:
  Change of default database file format to SQL

Status in certmonger package in Ubuntu:
  Fix Released
Status in corosync package in Ubuntu:
  New
Status in dogtag-pki package in Ubuntu:
  In Progress
Status in freeipa package in Ubuntu:
  New
Status in libapache2-mod-nss package in Ubuntu:
  New
Status in nss package in Ubuntu:
  New

Bug description:
  Upstream nss 3.35 changed [2] the default database file format [1] (used
  when no explicit format is specified).
  For now we reverted that change in bug 1746947 until all packages depending
  on nss are ready to work with the new format correctly.

  This bug is here to track when the revert can be dropped.
  Therefore we list all known-to-be-affected packages; once all of them are
  resolved, the revert can be dropped.

  [1]: https://fedoraproject.org/wiki/Changes/NSSDefaultFileFormatSql
  [2]: 
https://github.com/nss-dev/nss/commit/33b114e38278c4ffbb6b244a0ebc9910e5245cd3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/certmonger/+bug/1747411/+subscriptions



[Ubuntu-ha] [Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-03-05 Thread Launchpad Bug Tracker
This bug was fixed in the package corosync - 2.4.2-3ubuntu0.17.10.1

---
corosync (2.4.2-3ubuntu0.17.10.1) artful; urgency=high

  * Properly restart corosync and pacemaker together (LP: #1740892)
- d/rules: pass --restart-after-upgrade to dh_installinit
- d/control: indicate this version breaks all older pacemaker, to
  force an upgrade of pacemaker.
- d/corosync.postinst: if flagged to do so by pacemaker, start
  pacemaker on upgrade.

 -- Eric Desrochers   Mon, 26 Feb 2018 08:49:19 -0500

** Changed in: corosync (Ubuntu Artful)
   Status: Fix Committed => Fix Released

** Changed in: pacemaker (Ubuntu Artful)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1740892

Title:
  corosync upgrade on 2018-01-02 caused pacemaker to fail

Status in OpenStack hacluster charm:
  Invalid
Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  Won't Fix
Status in pacemaker source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Fix Released
Status in pacemaker source package in Xenial:
  Fix Released
Status in corosync source package in Artful:
  Fix Released
Status in pacemaker source package in Artful:
  Fix Released
Status in corosync source package in Bionic:
  Fix Released
Status in corosync package in Debian:
  New

Bug description:
  [Impact]

  When corosync and pacemaker are both installed, a corosync upgrade
  causes pacemaker to fail. pacemaker then needs to be restarted manually
  to work again; it won't recover by itself.

  [Test Case]

  1) Have corosync (< 2.3.5-3ubuntu2) and pacemaker (< 1.1.14-2ubuntu1.3) 
installed
  2) Make sure corosync & pacemaker are running via systemctl status cmd.
  3) Upgrade corosync
  4) Look corosync and pacemaker via systemctl status cmd again.

  You will notice pacemaker is dead (inactive) and doesn't recover,
  unless a systemctl start pacemaker is done manually.
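The status checks in steps 2 and 4 can be scripted. This is a small sketch (it assumes systemctl is present; on a machine without systemd it simply reports unknown):

```shell
# Report whether corosync and pacemaker are active, as in steps 2 and 4
# of the test case. After upgrading corosync on an affected system,
# pacemaker would show up as inactive here.
check_units() {
    for u in corosync pacemaker; do
        state=$(systemctl is-active "$u" 2>/dev/null)
        echo "$u: ${state:-unknown}"
    done
}
check_units
# Full test flow (root required):
#   check_units; apt-get install --only-upgrade corosync; check_units
```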

  [Regression Potential]

  Regression potential is low; the change doesn't touch corosync/pacemaker
  core functionality. This patch makes sure things go smoothly at the
  packaging level during a corosync upgrade when pacemaker is
  installed/involved.

  This is particularly useful where the system has "unattended-upgrades"
  enabled (software upgrades without supervision) and no sysadmin is
  available to start pacemaker manually, because the upgrade isn't scheduled
  maintenance.

  For the symbol tag change in Artful to (optional), please refer to
  comment #60 from slangasek.

  For the asctime change in Artful, please refer to comment #51 &
  comment #52.

  Note that both Artful changes in pacemaker above are only needed for
  the package to build (even as-is, without this patch). They are not a
  requirement for the patch to work, only for the source package to
  build.

  [Other Info]

  XENIAL Merge-proposal:
  
https://code.launchpad.net/~nacc/ubuntu/+source/corosync/+git/corosync/+merge/336338
  
https://code.launchpad.net/~nacc/ubuntu/+source/pacemaker/+git/pacemaker/+merge/336339

  [Original Description]

  During upgrades on 2018-01-02, corosync and its libraries were upgraded:

  (from a trusty/mitaka cloud)

  Upgrade: libcmap4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  corosync:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcfg6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcpg4:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4), libquorum5:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libcorosync-common4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libsam4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libvotequorum6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libtotem-pg5:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4)

  During this process, it appears that pacemaker service is restarted
  and it errors:

  syslog:Jan  2 16:09:33 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node juju-machine-1-lxc-3[1001] - state is now lost (was member)
  syslog:Jan  2 16:09:34 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node juju-machine-1-lxc-3[1001] - state is now member (was lost)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: cfg_connection_destroy: Connection destroyed
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: pcmk_shutdown_worker: Shuting down Pacemaker
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: stop_child: Stopping crmd: Sent -15 to process 2050
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: mcp_cpg_destroy: Connection destroyed

  Also affected xenial/ocata

[Ubuntu-ha] [Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-03-05 Thread Launchpad Bug Tracker
This bug was fixed in the package pacemaker - 1.1.14-2ubuntu1.4

---
pacemaker (1.1.14-2ubuntu1.4) xenial; urgency=high

  * Properly restart corosync and pacemaker together (LP: #1740892)
- d/pacemaker.preinst: flag corosync to restart pacemaker on
  upgrade.

 -- Eric Desrochers   Mon, 19 Feb 2018
09:37:35 -0500
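
The changelog entry is terse; the underlying mechanism is a flag-file
handshake between the two packages' maintainer scripts. The sketch below
illustrates the idea only; the flag path and function names are
assumptions for this example and do not reflect the actual
d/pacemaker.preinst contents:

```shell
#!/bin/sh
# Illustrative flag-file handshake between maintainer scripts.
# The FLAG path and both helper names are assumptions for this sketch.
set -eu

FLAG="${TMPDIR:-/tmp}/pacemaker-restart-needed"

# Idea behind pacemaker's preinst: while the upgrade runs, record
# whether pacemaker was active so it can be brought back afterwards.
mark_restart_needed() {
    was_active=$1        # stand-in for a real "is pacemaker running?" probe
    [ "$was_active" = "yes" ] && touch "$FLAG" || true
}

# Idea behind corosync's postinst: once corosync is upgraded and running
# again, restart pacemaker only if the flag was left, then clean up so a
# second run is a no-op.
restart_if_flagged() {
    if [ -f "$FLAG" ]; then
        echo "restarting pacemaker"   # stand-in for invoke-rc.d pacemaker restart
        rm -f "$FLAG"
    fi
}
```

On the real system the probe and restart would go through the usual
maintainer-script helpers (invoke-rc.d / deb-systemd-invoke); the point
of the handshake is that pacemaker is only restarted when it was running
before the upgrade began.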

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1740892/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : 
