[Ubuntu-ha] [Bug 1858485] Re: Won't build with py2, since python-mako is gone

2020-01-15 Thread Andreas Hasenack
Oops, back pedalling on this:
 doko: I'm taking advantage of you reintroducing python-mako (py2), 
to build haproxy with it (not a runtime dep). Is that why you reintroduced it, 
or are other packages needing it?
 ahasenack, ginggs: I just removed python2-scipy, so it's not needed for 
that anymore

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1858485

Title:
  Won't build with py2, since python-mako is gone

Status in haproxy package in Ubuntu:
  In Progress
Status in haproxy package in Debian:
  New

Bug description:
  haproxy runs debian/dconv/haproxy-dconv.py when building its
  documentation. That script is py2, and requires python2 and
  python-mako.

  python-mako is gone and is currently NBS (no longer built from
  source). The src:mako package now only builds the python3 version, so
  we need to convert haproxy-dconv.py to py3.

  haproxy-dconv.py comes from https://github.com/cbonte/haproxy-dconv
  and already supports py3, so this boils down to updating the copy of
  haproxy-dconv in the debian package and updating the
  debian/patches/debianize-dconv.patch patch.
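
  For illustration only, the update described above could be done roughly
  like this (the clone path and the quilt steps are assumptions about the
  packaging workflow, not taken from the haproxy source package):

  git clone https://github.com/cbonte/haproxy-dconv /tmp/haproxy-dconv
  cp -a /tmp/haproxy-dconv/. debian/dconv/      # refresh the embedded copy
  quilt push debianize-dconv.patch              # re-apply the Debian-specific changes
  quilt refresh                                 # regenerate the patch against the new copy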

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1858485/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1858485] Re: Won't build with py2, since python-mako is gone

2020-01-15 Thread Andreas Hasenack
python-mako (py2) was reinstated, using it.

** Changed in: haproxy (Ubuntu)
   Status: New => In Progress

** Changed in: haproxy (Ubuntu)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: haproxy (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1858485

Title:
  Won't build with py2, since python-mako is gone

Status in haproxy package in Ubuntu:
  In Progress
Status in haproxy package in Debian:
  New

Bug description:
  haproxy runs debian/dconv/haproxy-dconv.py when building its
  documentation. That script is py2, and requires python2 and
  python-mako.

  python-mako is gone and is currently NBS (no longer built from
  source). The src:mako package now only builds the python3 version, so
  we need to convert haproxy-dconv.py to py3.

  haproxy-dconv.py comes from https://github.com/cbonte/haproxy-dconv
  and already supports py3, so this boils down to updating the copy of
  haproxy-dconv in the debian package and updating the
  debian/patches/debianize-dconv.patch patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1858485/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in unprivileged containers - autopkgtest failure

2020-01-10 Thread Andreas Hasenack
** Tags added: update-excuse

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in unprivileged containers - autopkgtest
  failure

Status in Auto Package Testing:
  Invalid
Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in pcs package in Ubuntu:
  In Progress

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 
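
  For illustration, the two bumps described above might look like this
  (the specific values here are guesses for a sketch, not the ones from
  the original report):

  printf '* soft memlock unlimited\n* hard memlock unlimited\n' >> /etc/security/limits.conf
  sysctl -w net.core.rmem_max=8388608    # raise the socket receive-buffer ceiling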

[Ubuntu-ha] [Bug 1855568] Re: pcs depends on python3-tornado (>= 6) but it won't be installed

2020-01-10 Thread Andreas Hasenack
** Tags added: update-excuse

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1855568

Title:
  pcs depends on python3-tornado (>= 6) but it won't be installed

Status in pcs package in Ubuntu:
  In Progress

Bug description:
  PCS currently depends on python3-tornado >= 6 but that package does
  not exist.

  
  (c)rafaeldtinoco@pcsdevel:~$ apt-get install pcs
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Some packages could not be installed. This may mean that you have
  requested an impossible situation or if you are using the unstable
  distribution that some required packages have not yet been created
  or been moved out of Incoming.
  The following information may help to resolve the situation:

  The following packages have unmet dependencies:
   pcs : Depends: python3-tornado (>= 6) but it is not going to be installed
 Recommends: pacemaker (>= 2.0) but it is not going to be installed
  E: Unable to correct problems, you have held broken packages.
  

  In Debian:

  python3-tornado | 6.0.3+really5.1.1-2 | testing  | amd64, arm64, armel, 
armhf, i386, mips64el, mipsel, ppc64el, s390x
  python3-tornado | 6.0.3+really5.1.1-2 | unstable | amd64, arm64, armel, 
armhf, i386, mips64el, mipsel, ppc64el, s390x

  Currently PCS needs python3-tornado to be upgraded to 6 OR to have the
  fixes for 5.1.1 to be re-added.
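
  The "+really" version above is how Debian ships older code while keeping
  apt's version ordering intact; an illustrative check (the comparison
  baseline is just an example):

  dpkg --compare-versions 6.0.3+really5.1.1-2 gt 6.0.3-1 && echo "sorts as an upgrade"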

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pcs/+bug/1855568/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in unprivileged containers - autopkgtest failure

2020-01-07 Thread Andreas Hasenack
** Also affects: pcs (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: pcs (Ubuntu)
   Status: New => In Progress

** Changed in: pcs (Ubuntu)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in unprivileged containers - autopkgtest
  failure

Status in Auto Package Testing:
  Invalid
Status in corosync package in Ubuntu:
  Fix Released
Status in pacemaker package in Ubuntu:
  Fix Released
Status in pcs package in Ubuntu:
  In Progress

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 

[Ubuntu-ha] [Bug 1858485] [NEW] Won't build with py2, since python-mako is gone

2020-01-06 Thread Andreas Hasenack
Public bug reported:

haproxy runs debian/dconv/haproxy-dconv.py when building its
documentation. That script is py2, and requires python2 and
python-mako.

python-mako is gone and is currently NBS (no longer built from source).
The src:mako package now only builds the python3 version, so we need to
convert haproxy-dconv.py to py3.

haproxy-dconv.py comes from https://github.com/cbonte/haproxy-dconv and
already supports py3, so this boils down to updating the copy of
haproxy-dconv in the debian package and updating the
debian/patches/debianize-dconv.patch patch.

** Affects: haproxy (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: haproxy (Debian)
 Importance: Unknown
 Status: Unknown


** Tags: ftbfs update-excuse

** Tags added: update-excuse

** Bug watch added: Debian Bug tracker #948296
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948296

** Also affects: haproxy (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948296
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1858485

Title:
  Won't build with py2, since python-mako is gone

Status in haproxy package in Ubuntu:
  New
Status in haproxy package in Debian:
  Unknown

Bug description:
  haproxy runs debian/dconv/haproxy-dconv.py when building its
  documentation. That script is py2, and requires python2 and
  python-mako.

  python-mako is gone and is currently NBS (no longer built from
  source). The src:mako package now only builds the python3 version, so
  we need to convert haproxy-dconv.py to py3.

  haproxy-dconv.py comes from https://github.com/cbonte/haproxy-dconv
  and already supports py3, so this boils down to updating the copy of
  haproxy-dconv in the debian package and updating the
  debian/patches/debianize-dconv.patch patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1858485/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1854988] Re: Block automatic sync from debian for the next LTS

2019-12-03 Thread Andreas Hasenack
** Changed in: haproxy (Ubuntu)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Description changed:

  Upstream haproxy has the concept of an LTS release, and a stable (non-
  LTS) release. Currently, these are (see table at
  http://www.haproxy.org/):
  
  stable: 2.1.x
  stable LTS: 2.0.x
  
  Debian unstable is at the moment tracking 2.0.x, and debian experimental
  is tracking 2.1.x.
  
  For the next ubuntu lts release, we would like to stay on the 2.0.x
  track, which is upstream's LTS.
  
+ From the 2.0 upstream announcement[2]:
+ """
+ The development will go on with 2.1 which will not be LTS, so it will
+ experience quite some breakage to prepare 2.2 which will be LTS and
+ expected approximately at the same date next year.
+ """
+ 
+ "same date next year" would be roughly June 2020, so not in time for
+ Ubuntu 20.04.
+ 
  Since this package is a sync, whenever the debian maintainers decide to
  push 2.1.x into unstable, we will automatically sync it (if we are not
  yet in feature freeze mode). To avoid that, a few options were
  explored[1], and adding an ubuntu version to the package seems the best
  one.
  
  Therefore, the package will have an ubuntu version, have its maintainer
  changed, but no extra delta, just to stop the automatic sync from
  happening. Whenever new versions of the 2.0.x track are uploaded to
  debian/sid, we will merge it manually into ubuntu.
  
- 1. https://lists.ubuntu.com/archives/ubuntu-
- devel/2019-December/040853.html
+ 1. https://lists.ubuntu.com/archives/ubuntu-devel/2019-December/040853.html
+ 2. https://www.mail-archive.com/haproxy@formilux.org/msg34215.html

** Merge proposal linked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/haproxy/+git/haproxy/+merge/376302

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1854988

Title:
  Block automatic sync from debian for the next LTS

Status in haproxy package in Ubuntu:
  In Progress

Bug description:
  Upstream haproxy has the concept of an LTS release, and a stable (non-
  LTS) release. Currently, these are (see table at
  http://www.haproxy.org/):

  stable: 2.1.x
  stable LTS: 2.0.x

  Debian unstable is at the moment tracking 2.0.x, and debian
  experimental is tracking 2.1.x.

  For the next ubuntu lts release, we would like to stay on the 2.0.x
  track, which is upstream's LTS.

  From the 2.0 upstream announcement[2]:
  """
  The development will go on with 2.1 which will not be LTS, so it will
  experience quite some breakage to prepare 2.2 which will be LTS and
  expected approximately at the same date next year.
  """

  "same date next year" would be roughly June 2020, so not in time for
  Ubuntu 20.04.

  Since this package is a sync, whenever the debian maintainers decide
  to push 2.1.x into unstable, we will automatically sync it (if we are
  not yet in feature freeze mode). To avoid that, a few options were
  explored[1], and adding an ubuntu version to the package seems the
  best one.

  Therefore, the package will have an ubuntu version, have its
  maintainer changed, but no extra delta, just to stop the automatic
  sync from happening. Whenever new versions of the 2.0.x track are
  uploaded to debian/sid, we will merge it manually into ubuntu.

  1. https://lists.ubuntu.com/archives/ubuntu-devel/2019-December/040853.html
  2. https://www.mail-archive.com/haproxy@formilux.org/msg34215.html
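
  A minimal sketch of the no-change delta described above (the version
  string below is made up for illustration, not the actual upload):

  dch -v 2.0.12-1ubuntu1 "No-change upload to prevent automatic syncs from Debian until after the 20.04 LTS."
  update-maintainer    # from ubuntu-dev-tools; switches Maintainer to Ubuntu Developers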

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1854988/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1854988] [NEW] Block automatic sync from debian for the next LTS

2019-12-03 Thread Andreas Hasenack
Public bug reported:

Upstream haproxy has the concept of an LTS release, and a stable (non-
LTS) release. Currently, these are (see table at
http://www.haproxy.org/):

stable: 2.1.x
stable LTS: 2.0.x

Debian unstable is at the moment tracking 2.0.x, and debian experimental
is tracking 2.1.x.

For the next ubuntu lts release, we would like to stay on the 2.0.x
track, which is upstream's LTS.

Since this package is a sync, whenever the debian maintainers decide to
push 2.1.x into unstable, we will automatically sync it (if we are not
yet in feature freeze mode). To avoid that, a few options were
explored[1], and adding an ubuntu version to the package seems the best
one.

Therefore, the package will have an ubuntu version, have its maintainer
changed, but no extra delta, just to stop the automatic sync from
happening. Whenever new versions of the 2.0.x track are uploaded to
debian/sid, we will merge it manually into ubuntu.

1. https://lists.ubuntu.com/archives/ubuntu-
devel/2019-December/040853.html

** Affects: haproxy (Ubuntu)
 Importance: Medium
 Assignee: Andreas Hasenack (ahasenack)
 Status: In Progress


** Tags: ahasenack

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1854988

Title:
  Block automatic sync from debian for the next LTS

Status in haproxy package in Ubuntu:
  In Progress

Bug description:
  Upstream haproxy has the concept of an LTS release, and a stable (non-
  LTS) release. Currently, these are (see table at
  http://www.haproxy.org/):

  stable: 2.1.x
  stable LTS: 2.0.x

  Debian unstable is at the moment tracking 2.0.x, and debian
  experimental is tracking 2.1.x.

  For the next ubuntu lts release, we would like to stay on the 2.0.x
  track, which is upstream's LTS.

  Since this package is a sync, whenever the debian maintainers decide
  to push 2.1.x into unstable, we will automatically sync it (if we are
  not yet in feature freeze mode). To avoid that, a few options were
  explored[1], and adding an ubuntu version to the package seems the
  best one.

  Therefore, the package will have an ubuntu version, have its
  maintainer changed, but no extra delta, just to stop the automatic
  sync from happening. Whenever new versions of the 2.0.x track are
  uploaded to debian/sid, we will merge it manually into ubuntu.

  1. https://lists.ubuntu.com/archives/ubuntu-
  devel/2019-December/040853.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1854988/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1852122] Re: ocfs2-tools autopkgtest is causing kernel panics on ppc64el

2019-11-14 Thread Andreas Hasenack
Reproduced. The left part shows the o2cb test being run, and the right
side is dmesg. It's focal, not eoan as the screenshot says; that's
because I had to start with eoan and dist-upgrade to focal.

** Attachment added: "Screenshot from 2019-11-14 17-13-34.png"
   
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1852122/+attachment/5305453/+files/Screenshot%20from%202019-11-14%2017-13-34.png

** Changed in: ocfs2-tools (Ubuntu)
   Status: New => Confirmed

** Changed in: ocfs2-tools (Ubuntu)
   Status: Confirmed => Triaged

** Changed in: ocfs2-tools (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1852122

Title:
  ocfs2-tools autopkgtest is causing kernel panics on ppc64el

Status in ocfs2-tools package in Ubuntu:
  Triaged

Bug description:
  I noticed the tests for ocfs2-tools/1.8.6-1ubuntu1 were constantly
  retrying themselves. It's a feature we have so that transient /
  occasional failures are auto-retried, but it's misfiring here because
  we're not detecting that it's a consistent failure. That particular
  bug is fixed, but it means that ocfs2-tools is failing on ppc64el.
  Here's the important part of the log, full output attached.

  [   85.605738] BUG: Unable to handle kernel data access at 0x01744098
  [   85.605850] Faulting instruction address: 0xc0e81168
  [   85.605901] Oops: Kernel access of bad area, sig: 11 [#1]
  [   85.605970] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  [   85.606029] Modules linked in: ocfs2 quota_tree ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue iptable_mangle xt_TCPMSS xt_tcpudp bpfilter dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua vmx_crypto crct10dif_vpmsum sch_fq_codel ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq libcrc32c crc32c_vpmsum virtio_net virtio_blk net_failover failover
  [   85.606291] CPU: 0 PID: 1 Comm: systemd Not tainted 5.3.0-18-generic #19-Ubuntu
  [   85.606350] NIP:  c0e81168 LR: c054f240 CTR:
  [   85.606410] REGS: c0005a3e3700 TRAP: 0300   Not tainted  (5.3.0-18-generic)
  [   85.606469] MSR:  80009033   CR: 28024448  XER:
  [   85.606531] CFAR: 701f9806f638 DAR: 01744098 DSISR: 4000 IRQMASK: 0
  [   85.606531] GPR00: 7374 c0005a3e3990 c19c9100 c0004fe462a8
  [   85.606531] GPR04: c0005856d840 000e 74656772 c0004fe4a568
  [   85.606531] GPR08: c00058568004 01744090
  [   85.606531] GPR12: e8086002 c1d6 7fffddd522d0
  [   85.606531] GPR16: c755e07c
  [   85.606531] GPR20: c000598caca8 c0005a3e3a58 c00058292f00
  [   85.606531] GPR24: c0eea710 c0005856d840 c755e074

[Ubuntu-ha] [Bug 1848902] Re: haproxy in bionic can get stuck

2019-11-06 Thread Andreas Hasenack
** Changed in: haproxy (Ubuntu)
   Importance: Undecided => High

** Changed in: haproxy (Ubuntu)
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1848902

Title:
  haproxy in bionic can get stuck

Status in haproxy package in Ubuntu:
  Triaged

Bug description:
  On a Bionic/Stein cloud, after a network partition, we saw several
  units (glance, swift-proxy and cinder) fail to start haproxy, like so:

  root@juju-df624b-6-lxd-4:~# systemctl status haproxy.service
  ● haproxy.service - HAProxy Load Balancer
 Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor 
preset: enabled)
 Active: failed (Result: exit-code) since Sun 2019-10-20 00:23:18 UTC; 1h 
35min ago
   Docs: man:haproxy(1)
 file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 2002655 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE 
$EXTRAOPTS (code=exited, status=143)
Process: 2002649 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS 
(code=exited, status=0/SUCCESS)
   Main PID: 2002655 (code=exited, status=143)

  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Starting HAProxy Load 
Balancer...
  Oct 20 00:16:52 juju-df624b-6-lxd-4 systemd[1]: Started HAProxy Load Balancer.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopping HAProxy Load 
Balancer...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 
(2002655) : Exiting Master process...
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [ALERT] 292/001652 
(2002655) : Current worker 2002661 exited with code 143
  Oct 20 00:23:18 juju-df624b-6-lxd-4 haproxy[2002655]: [WARNING] 292/001652 
(2002655) : All workers exited. Exiting... (143)
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Main process 
exited, code=exited, status=143/n/a
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: haproxy.service: Failed with 
result 'exit-code'.
  Oct 20 00:23:18 juju-df624b-6-lxd-4 systemd[1]: Stopped HAProxy Load Balancer.
  root@juju-df624b-6-lxd-4:~#

  The Debian maintainer came up with the following patch for this:

https://www.mail-archive.com/haproxy@formilux.org/msg30477.html

  Which was added to the 1.8.10-1 Debian upload and merged into upstream 1.8.13.
  Unfortunately Bionic is on 1.8.8-1ubuntu0.4 and doesn't have this patch.

  Please consider pulling this patch into an SRU for Bionic.
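
  A quick way to check whether a given machine still carries the affected
  build (the version numbers are the ones from this report; the command
  itself is just illustrative):

  dpkg-query -W -f='${Version}\n' haproxy    # 1.8.8-1ubuntu0.4 lacks the fix; upstream gained it in 1.8.13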

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1848902/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1848834] Re: ClusterMon resource creation core-dumps while created with extra_option -E

2019-11-06 Thread Andreas Hasenack
Good to know. I also couldn't reproduce it just by using those commands;
some more setup is probably needed. If we get reproducible steps, we
might be able to identify the commit (or series of commits) that fixed
it after 1.1.14. I took a quick look at the ChangeLog file but failed to
spot anything obvious.
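
If reproducible steps do turn up, one rough way to narrow down the fixing
commit would be something like the following (the tag names and file path
are assumptions about the upstream repository layout):

  git clone https://github.com/ClusterLabs/pacemaker
  cd pacemaker
  git log --oneline Pacemaker-1.1.14..Pacemaker-1.1.18 -- tools/crm_mon.c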

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1848834

Title:
  ClusterMon resource creation core-dumps while created with
  extra_option -E

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Xenial:
  Incomplete

Bug description:
  Hi
  I have a 2 nodes cluster with a number of resources working fine.
  I am using Ubuntu 16.04 with Pacemaker: 1.1.14
  The moment I create a ClusterMon resource with an extra_option "-E" to run a script, it crashes and I can see the following in dmesg:

  
  [73880.444953] crm_mon[20739]: segfault at 0 ip 7f948cc5c746 sp 
7ffed0cb0fb8 error 4 in libc-2.23.so[7f948cbd1000+1c]

  I am using the following command to create the resource:
  pcs resource create newRes ClusterMon user="root" extra_options="-E 
/usr/local/bin/new.sh "
  OR
  pcs resource create newRes ocf:pacemaker:ClusterMon user="root" 
extra_options="-E /usr/local/bin/new.sh "

  
  and immediately I see the following in /var/log/messages:

  2019-10-19T01:53:11.783763-04:00 master daemon notice crmd 17042   notice: 
Operation newRes_monitor_0: not running (node=master.dhcp, call=85, rc=7, 
cib-update=58, confirmed=true)
  2019-10-19T01:53:12.097529-04:00 master daemon info systemd - Started Session 
c75 of user root.
  2019-10-19T01:53:12.105468-04:00 master auth info systemd-logind - New 
session c75 of user root.
  2019-10-19T01:53:12.150340-04:00 master daemon notice lrmd 17039   notice: 
newRes_start_0:30376:stderr [ mesg: ttyname failed: Inappropriate ioctl for 
device ]
  2019-10-19T01:53:12.186340-04:00 master daemon notice crmd 17042   notice: 
Operation newRes_start_0: ok (node=master.dhcp, call=86, rc=0, cib-update=59, 
confirmed=true)
  2019-10-19T01:53:12.195312-04:00 master kern info kernel - crm_mon[30398]: 
segfault at 0 ip 7f9cfbe41746 sp 7ffd971060e8 error 4 in 
libc-2.23.so[7f9cfbdb6000+1c]
  2019-10-19T01:53:12.216644-04:00 master auth info systemd-logind - Removed 
session c75.
  2019-10-19T01:53:12.241439-04:00 master daemon notice lrmd 17039   notice: 
newRes_monitor_1:30406:stderr [ 
/usr/lib/ocf/resource.d/heartbeat/ClusterMon: 155: kill: No such process ]
  2019-10-19T01:53:12.241980-04:00 master daemon notice lrmd 17039   notice: 
newRes_monitor_1:30406:stderr [  ]
  2019-10-19T01:53:12.245273-04:00 master daemon notice crmd 17042   notice: 
master.dhcp-newRes_monitor_1:87 [ 
/usr/lib/ocf/resource.d/heartbeat/ClusterMon: 155: kill: No such process\n\n ]

  
  Note:
  - All other types of resources, i.e. IPAddr, Drbd, systemd, are working fine.
  - Also, if newRes is created without -E, it works fine.
  - The script has no complicated code. Even without the "echo" command I am seeing the same issue.
  cat /usr/local/bin/new.sh
  #!/bin/sh
  echo "HELLO from Crm_mon script" >> /var/log/messages
  exit

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1848834/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828496] Re: service haproxy reload sometimes fails to pick up new TLS certificates

2019-10-07 Thread Andreas Hasenack
** Changed in: haproxy (Ubuntu)
   Status: Expired => New

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1828496

Title:
  service haproxy reload sometimes fails to pick up new TLS certificates

Status in haproxy package in Ubuntu:
  New

Bug description:
  I suspect this is the same thing reported on StackOverflow:

  "I had this same issue where even after reloading the config, haproxy
  would randomly serve old certs. After looking around for many days the
  issue was that "reload" operation created a new process without
  killing the old one. Confirm this by "ps aux | grep haproxy"."

  https://stackoverflow.com/questions/46040504/haproxy-wont-recognize-
  new-certificate

  In our setup, we automate Let's Encrypt certificate renewals, and a
  fresh certificate will trigger a reload of the service. But
  occasionally this reload doesn't seem to do anything.

  Will update with details next time it happens, and hopefully confirm
  the multiple process theory.
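
  For context, the renewal automation mentioned above typically ends in a
  deploy hook of roughly this shape (a sketch only; the domain, certificate
  paths and bundle location are assumptions):

  certbot renew --deploy-hook 'cat /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem > /etc/haproxy/certs/example.com.pem && systemctl reload haproxy'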

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1828496/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1840958] Re: defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

2019-08-21 Thread Andreas Hasenack
** Summary changed:

- defragfs.ocfs2 hangs (or takes too long) on arm64
+ defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1840958

Title:
  defragfs.ocfs2 hangs (or takes too long) on arm64, ppc64el

Status in OCFS2 Tools:
  Unknown
Status in ocfs2-tools package in Ubuntu:
  New

Bug description:
  The new defragfs.ocfs2 test added in the 1.8.6-1 version of the
  package hangs (or takes too long) in our dep8 infrastructure.

  I reproduced this on an arm64 VM. The command stays silent, consuming
  99% of CPU. There is no I/O being done (checked with iostat and
  iotop).

  strace -f shows it stopping at this write:
  2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

  Which is just a version print.

  Also tested with kernel 5.2.0-13-generic from eoan-proposed.

  Debian's CI only runs this test on amd64, it seems.

  On an amd64 VM in the same cloud this test completes in less than 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1840958/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1840958] Re: defragfs.ocfs2 hangs (or takes too long) on arm64

2019-08-21 Thread Andreas Hasenack
** Bug watch added: github.com/markfasheh/ocfs2-tools/issues #42
   https://github.com/markfasheh/ocfs2-tools/issues/42

** Also affects: ocfs2-tools via
   https://github.com/markfasheh/ocfs2-tools/issues/42
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1840958

Title:
  defragfs.ocfs2 hangs (or takes too long) on arm64

Status in OCFS2 Tools:
  Unknown
Status in ocfs2-tools package in Ubuntu:
  New

Bug description:
  The new defragfs.ocfs2 test added in the 1.8.6-1 version of the
  package hangs (or takes too long) in our dep8 infrastructure.

  I reproduced this on an arm64 VM. The command stays silent, consuming
  99% of CPU. There is no I/O being done (checked with iostat and
  iotop).

  strace -f shows it stopping at this write:
  2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

  Which is just a version print.

  Also tested with kernel 5.2.0-13-generic from eoan-proposed.

  Debian's CI only runs this test on amd64, it seems.

  On an amd64 VM in the same cloud this test completes in less than 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1840958/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1840958] [NEW] defragfs.ocfs2 hangs (or takes too long) on arm64

2019-08-21 Thread Andreas Hasenack
Public bug reported:

The new defragfs.ocfs2 test added in the 1.8.6-1 version of the package
hangs (or takes too long) in our dep8 infrastructure.

I reproduced this on an arm64 VM. The command stays silent, consuming
99% of CPU. There is no I/O being done (checked with iostat and iotop).

strace -f shows it stopping at this write:
2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

Which is just a version print.

Also tested with kernel 5.2.0-13-generic from eoan-proposed.

Debian's CI only runs this test on amd64, it seems.

On an amd64 VM in the same cloud this test completes in less than 1s.

** Affects: ocfs2-tools (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1840958

Title:
  defragfs.ocfs2 hangs (or takes too long) on arm64

Status in ocfs2-tools package in Ubuntu:
  New

Bug description:
  The new defragfs.ocfs2 test added in the 1.8.6-1 version of the
  package hangs (or takes too long) in our dep8 infrastructure.

  I reproduced this on an arm64 VM. The command stays silent, consuming
  99% of CPU. There is no I/O being done (checked with iostat and
  iotop).

  strace -f shows it stopping at this write:
  2129  write(1, "defragfs.ocfs2 1.8.6\n", 21) = 21

  Which is just a version print.

  Also tested with kernel 5.2.0-13-generic from eoan-proposed.

  Debian's CI only runs this test on amd64, it seems.

  On an amd64 VM in the same cloud this test completes in less than 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1840958/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-07-31 Thread Andreas Hasenack
sssd master has pcre2 support via
https://github.com/SSSD/sssd/pull/677#issuecomment-508238642

eoan is getting 2.2.0, which doesn't have that yet, but getting closer!

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Triaged
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Released
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Fix Released
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in tilix package in Ubuntu:
  New
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just poorly 
named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828496] Re: service haproxy reload sometimes fails to pick up new TLS certificates

2019-07-17 Thread Andreas Hasenack
Going over the details from comment #7

This is the state before the reload:
ubuntu@foo:~$ ps auxfwww | grep haproxy
root  1346  0.0  0.0   4356   684 ?Ss   May22   0:00 
/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p 
/run/haproxy.pid
haproxy   2210  0.0  0.2  42644 10520 ?SMay22   0:00  \_ 
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 1378
haproxy   2215  2.7  0.8  68576 36308 ?Ss   May22  84:46  \_ 
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 1378

-sf means to send the finish signal (which is SIGTTOU and SIGUSR1
according to haproxy(1)) to the pids listed after startup, which is pid
1378 in this case. There is no haproxy 1378 in this list, so I wonder if
the "before" state was already a bit borked and what haproxy does if the
pids listed after -sf do not exist.

After reload, we have:
ubuntu@foo:~$ ps auxfwww | grep haproxy
root  1346  0.0  0.0   4356   724 ?Ss   May22   0:00 
/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p 
/run/haproxy.pid
haproxy   2210  0.0  0.2  42644 10520 ?SMay22   0:00  \_ 
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 1378
haproxy   2215  2.7  0.8  68496 36228 ?Ss   May22  84:47  |   \_ 
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 1378
haproxy   8151  0.0  0.2  42644 10456 ?S07:36   0:00  \_ 
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 2215
haproxy   8152  2.0  0.2  43048 10568 ?Ss   07:36   0:00  \_ 
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 2215
ubuntu@foo:~$ 

Here we can see new haproxy processes with -sf pointing at the previous
one, 2215. The ones with -sf 1378 are still there, and will probably
remain there until a full restart.
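
A quick illustrative check for workers that survived a reload (not part
of the original report):

  ps -o pid,ppid,lstart,args -C haproxy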

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1828496

Title:
  service haproxy reload sometimes fails to pick up new TLS certificates

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  I suspect this is the same thing reported on StackOverflow:

  "I had this same issue where even after reloading the config, haproxy
  would randomly serve old certs. After looking around for many days the
  issue was that "reload" operation created a new process without
  killing the old one. Confirm this by "ps aux | grep haproxy"."

  https://stackoverflow.com/questions/46040504/haproxy-wont-recognize-
  new-certificate

  In our setup, we automate Let's Encrypt certificate renewals, and a
  fresh certificate will trigger a reload of the service. But
  occasionally this reload doesn't seem to do anything.

  Will update with details next time it happens, and hopefully confirm
  the multiple process theory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1828496/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810926] Re: initscript status check is too fragile

2019-07-10 Thread Andreas Hasenack
You are forcing the compatibility between sysv and systemd into a corner
here. I understand that your idea is to provide a simple case to
reproduce the bug, but let's step back for a second and look at this
statement:

"""
The initscript is used as a LSB RA for pacemaker deployments; this bug 
effectively prevents pacemaker from realizing that haproxy is down (in some 
cases).

"""

The above is the original reason for this bug. Is masking the service
via systemd one of those cases? As you saw in /lib/lsb/init-
functions.d/40-systemd, the 0 exit status is quite explicit for masked
services, and not something we should change lightly.

Another option, I'm guessing, would be to change pacemaker to use
systemctl instead of the initscript directly. No idea how feasible that
is.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1810926

Title:
  initscript status check is too fragile

Status in haproxy package in Ubuntu:
  Confirmed

Bug description:
  `/etc/init.d/haproxy status` will return 0 if the pidfile is present, 
regardless of whether haproxy is actually running.
  I suggest the check be rewritten to actually look for a process with
  that PID (e.g. `pgrep -F $PIDFILE`).
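
  A minimal sketch of that stricter check (variable names and exit codes
  chosen for illustration, not taken from the packaged initscript):

  PIDFILE=/run/haproxy.pid
  if [ -s "$PIDFILE" ] && pgrep -F "$PIDFILE" haproxy >/dev/null; then
      exit 0    # running
  else
      exit 3    # LSB: program is not running
  fi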

  How to reproduce:
  chmod -x /usr/sbin/haproxy
  service haproxy restart

  Result:
  RC of `service haproxy status` is 3, but RC of `/etc/init.d/haproxy status` 
is 0

  Impact:
  The initscript is used as a LSB RA for pacemaker deployments; this bug 
effectively prevents pacemaker from realizing that haproxy is down (in some 
cases).

  Tested on:
  root@juju-8d5e58-14:~# lsb_release -rd
  Description:Ubuntu 16.04.5 LTS
  Release:16.04
  root@juju-8d5e58-14:~# dpkg -s haproxy | grep Version
  Version: 1.6.3-1ubuntu0.1


  This bug is present in the most recent debian package upstream as well
  (1.9.0-1), but I think it would make sense to track this here first as
  we have a lot of OpenStack installations on Xenial that would benefit
  from receiving a fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1810926/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1771335] Re: haproxy fails at startup when using server name instead of IP

2019-07-08 Thread Andreas Hasenack
Do you have fully qualified hostnames in the haproxy config? Or a bare
name?

** Changed in: haproxy (Ubuntu)
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1771335

Title:
  haproxy fails at startup when using server name instead of IP

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  This is similar to #689734 I believe.

  When starting haproxy using a DNS name on the 'server' line haproxy
  fails to start, giving the message:

  ```
  May 05 19:09:40 hyrule systemd[1]: Starting HAProxy Load Balancer...
  May 05 19:09:40 hyrule haproxy[1146]: [ALERT] 124/190940 (1146) : parsing 
[/etc/haproxy/haproxy.cfg:157] : 'server scanmon' :
  May 05 19:09:40 hyrule haproxy[1146]: [ALERT] 124/190940 (1146) : Failed to 
initialize server(s) addr.
  May 05 19:09:40 hyrule systemd[1]: haproxy.service: Control process exited, 
code=exited status=1
  May 05 19:09:40 hyrule systemd[1]: haproxy.service: Failed with result 
'exit-code'.
  May 05 19:09:40 hyrule systemd[1]: Failed to start HAProxy Load Balancer.
  ```

  In this case the server statement was:

  `  server scanmon myservername.mydomain.org:8000`

  Changing it to use the IP address corrected the problem.

  I believe there is a missing dependency for DNS in the unit file.
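
  One common way to express that ordering is a systemd drop-in, shown here
  only as a sketch of the idea (the drop-in file name is made up), not as
  the packaged fix:

  mkdir -p /etc/systemd/system/haproxy.service.d
  printf '[Unit]\nWants=network-online.target\nAfter=network-online.target\n' > /etc/systemd/system/haproxy.service.d/wait-online.conf
  systemctl daemon-reload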

  --Info:
  Description:  Ubuntu 18.04 LTS
  Release:  18.04

  haproxy:
Installed: 1.8.8-1
Candidate: 1.8.8-1
Version table:
   *** 1.8.8-1 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1771335/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1795420] Re: Keepalived update from 1.2.19 to 1.2.24 breaks support for /dev/tcp health check

2019-06-04 Thread Andreas Hasenack
Still in the queue.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1795420

Title:
  Keepalived update from 1.2.19 to 1.2.24 breaks support for /dev/tcp
  health check

Status in keepalived package in Ubuntu:
  Triaged

Bug description:
  Previous configuration that works fine:

  vrrp_script chk_trigger_port {
  script "https://github.com/acassen/keepalived/commit/5cd5fff78de11178c51ca245ff5de61a86b85049

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1795420/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1795420] Re: Keepalived update from 1.2.19 to 1.2.24 breaks support for /dev/tcp health check

2019-06-03 Thread Andreas Hasenack
** Changed in: keepalived (Ubuntu)
 Assignee: Karl Stenerud (kstenerud) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1795420

Title:
  Keepalived update from 1.2.19 to 1.2.24 breaks support for /dev/tcp
  health check

Status in keepalived package in Ubuntu:
  Triaged

Bug description:
  Previous configuration that works fine:

  vrrp_script chk_trigger_port {
  script "https://github.com/acassen/keepalived/commit/5cd5fff78de11178c51ca245ff5de61a86b85049

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1795420/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828496] Re: service haproxy reload sometimes fails to pick up new TLS certificates

2019-05-23 Thread Andreas Hasenack
Note that there is a systemd wrapper process in xenial:
  411 ?Ss 0:00 /usr/sbin/haproxy-systemd-wrapper -f 
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid
  413 ?S  0:00  \_ /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p 
/run/haproxy.pid -Ds
  432 ?Ss 0:00  \_ /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

After a reload (not restart), that particular process stays (411), but its 
children, which is what actually serves the content, are restarted:
  411 ?Ss 0:00 /usr/sbin/haproxy-systemd-wrapper -f 
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid
  671 ?S  0:00  \_ /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p 
/run/haproxy.pid -Ds -sf 432
  675 ?Ss 0:00  \_ /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 432


Maybe there is a bad interaction between reload, certs, and existing 
connections. The tests I've done so far are rather static, with a simple 
frontend and backend.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1828496

Title:
  service haproxy reload sometimes fails to pick up new TLS certificates

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  I suspect this is the same thing reported on StackOverflow:

  "I had this same issue where even after reloading the config, haproxy
  would randomly serve old certs. After looking around for many days the
  issue was that "reload" operation created a new process without
  killing the old one. Confirm this by "ps aux | grep haproxy"."

  https://stackoverflow.com/questions/46040504/haproxy-wont-recognize-
  new-certificate

  In our setup, we automate Let's Encrypt certificate renewals, and a
  fresh certificate will trigger a reload of the service. But
  occasionally this reload doesn't seem to do anything.

  Will update with details next time it happens, and hopefully confirm
  the multiple process theory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1828496/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1825992] Re: Upgrade to version 2.0 to satisfy pcs dependency

2019-05-07 Thread Andreas Hasenack
maybe disco will need a pcs with a +really version to go back to 0.9. I
think we will know more once we get pacemaker 2 into eoan.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1825992

Title:
  Upgrade to version 2.0 to satisfy pcs dependency

Status in pacemaker package in Ubuntu:
  Confirmed
Status in pacemaker source package in Disco:
  New
Status in pacemaker source package in Eoan:
  Confirmed

Bug description:
  Is there a way we could upgrade the version to 2.0? The pcs package
  requires a version of pacemaker that is greater than or equal to 2.0,
  and there is already a debian version packaged for v2. Installing the
  Ubuntu package for pcs will delete the pacemaker package as there is
  no version of pacemaker that is greater than 2.0.

  I can help with build/testing and packaging for 2.0 based on the
  existing debian deb if needed.

  https://pkgs.org/download/pacemaker

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1825992/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-05-02 Thread Andreas Hasenack
Switching zsh back to incomplete, according to previous comment. Thanks
for noticing that.

** Changed in: zsh (Ubuntu)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Triaged
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Released
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in tilix package in Ubuntu:
  New
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Incomplete

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just poorly 
named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-05-01 Thread Andreas Hasenack
Thanks for the update, I missed that comment (again). I guess it can be
moved back to "triaged" then.

** Changed in: apache2 (Ubuntu)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Triaged
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Released
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in tilix package in Ubuntu:
  New
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Fix Released

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis of
  what needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just poorly 
named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2 (10.x)

2019-05-01 Thread Andreas Hasenack
Regarding apache2, https://bz.apache.org/bugzilla/show_bug.cgi?id=57471
is still open, and I just tried a 2.4.38 build with pcre2, and it
doesn't work. Switching the task back to incomplete according to the bug
instructions.

** Changed in: apache2 (Ubuntu)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 (8.x) a.k.a pcre (without the 3) in favor of pcre2
  (10.x)

Status in aide package in Ubuntu:
  Incomplete
Status in anope package in Ubuntu:
  Incomplete
Status in apache2 package in Ubuntu:
  Incomplete
Status in apr-util package in Ubuntu:
  Fix Released
Status in clamav package in Ubuntu:
  Fix Released
Status in exim4 package in Ubuntu:
  Incomplete
Status in freeradius package in Ubuntu:
  Incomplete
Status in git package in Ubuntu:
  Fix Released
Status in glib2.0 package in Ubuntu:
  Incomplete
Status in grep package in Ubuntu:
  Incomplete
Status in haproxy package in Ubuntu:
  Fix Released
Status in libpam-mount package in Ubuntu:
  Fix Released
Status in libselinux package in Ubuntu:
  Triaged
Status in nginx package in Ubuntu:
  Incomplete
Status in nmap package in Ubuntu:
  Incomplete
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php-defaults package in Ubuntu:
  Triaged
Status in php7.2 package in Ubuntu:
  Won't Fix
Status in postfix package in Ubuntu:
  Incomplete
Status in python-pyscss package in Ubuntu:
  Incomplete
Status in quagga package in Ubuntu:
  Invalid
Status in rasqal package in Ubuntu:
  Incomplete
Status in slang2 package in Ubuntu:
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged
Status in tilix package in Ubuntu:
  New
Status in ubuntu-core-meta package in Ubuntu:
  New
Status in vte2.91 package in Ubuntu:
  Fix Released
Status in wget package in Ubuntu:
  Fix Released
Status in zsh package in Ubuntu:
  Fix Released

Bug description:
  https://people.canonical.com/~ubuntu-
  archive/transitions/html/pcre2-main.html

  demotion of pcre3 in favor of pcre2. These packages need analysis of what
  needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  --
  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just poorly 
named.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1815101] Re: netplan removes keepalived configuration

2019-03-11 Thread Andreas Hasenack
Seems a dupe to me.

For the bionic case, with keepalived < 2.0, is there some keepalived
script that can be run to restore the vip, after networkd removed it? We
could run it as a network-dispatcher hook then. Has this been
considered?
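
If keepalived itself offers nothing suitable, a minimal sketch of such a hook
could look like the following (assumptions: networkd-dispatcher's routable.d
hook point, the interface name being exported as $IFACE, and the eth3 VIP
taken from the reporter's config purely as an example; a real hook would also
have to check the current VRRP state before re-adding anything):

#!/bin/sh
# /etc/networkd-dispatcher/routable.d/50-restore-vip (hypothetical)
# Re-add the VRRP VIP if networkd reconfigured the interface and dropped it.
[ "$IFACE" = "eth3" ] || exit 0
# "ip addr replace" is idempotent: it only adds the address if it is missing
ip addr replace 10.22.11.13/24 dev eth3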

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1815101

Title:
  netplan removes keepalived configuration

Status in netplan:
  Invalid
Status in keepalived package in Ubuntu:
  Incomplete
Status in systemd package in Ubuntu:
  Triaged

Bug description:
  Configure netplan for interfaces, for example (a working config with
  IP addresses obfuscated)

  network:
    ethernets:
      eth0:
        addresses: [192.168.0.5/24]
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth2:
        addresses:
          - 12.13.14.18/29
          - 12.13.14.19/29
        gateway4: 12.13.14.17
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth3:
        addresses: [10.22.11.6/24]
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth4:
        addresses: [10.22.14.6/24]
        dhcp4: false
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
      eth7:
        addresses: [9.5.17.34/29]
        dhcp4: false
        optional: true
        nameservers:
          search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, phone.blah.com]
          addresses: [10.22.11.1]
    version: 2

  Configure keepalived (again, a working config with IP addresses
  obfuscated)

  global_defs   # Block id
  {
  notification_email {
  sysadm...@blah.com
  }
  notification_email_from keepali...@system3.hq.blah.com
  smtp_server 10.22.11.7 # IP
  smtp_connect_timeout 30  # integer, seconds
  router_id system3  # string identifying the machine,
   # (doesn't have to be hostname).
  vrrp_mcast_group4 224.0.0.18 # optional, default 224.0.0.18
  vrrp_mcast_group6 ff02::12   # optional, default ff02::12
  enable_traps # enable SNMP traps
  }
  vrrp_sync_group collection {
  group {
  wan
  lan
  phone
  }
  vrrp_instance wan {
  state MASTER
  interface eth2
  virtual_router_id 77
  priority 150
  advert_int 1
  smtp_alert
  authentication {
  auth_type PASS
  auth_pass BlahBlah
  }
  virtual_ipaddress {
  12.13.14.20
  }
  }
  vrrp_instance lan {
  state MASTER
  interface eth3
  virtual_router_id 78
  priority 150
  advert_int 1
  smtp_alert
  authentication {
  auth_type PASS
  auth_pass MoreBlah
  }
  virtual_ipaddress {
  10.22.11.13/24
  }
  }
  vrrp_instance phone {
  state MASTER
  interface eth4
  virtual_router_id 79
  priority 150
  advert_int 1
  smtp_alert
  authentication {
  auth_type PASS
  auth_pass MostBlah
  }
  virtual_ipaddress {
  10.22.14.3/24
  }
  }

  At boot the affected interfaces have:
  5: eth4:  mtu 1500 qdisc mq state UP group 
default qlen 1000
  link/ether ab:cd:ef:90:c0:e3 brd ff:ff:ff:ff:ff:ff
  inet 10.22.14.6/24 brd 10.22.14.255 scope global eth4
 valid_lft forever preferred_lft forever
  inet 10.22.14.3/24 scope global secondary eth4
 valid_lft forever preferred_lft forever
  inet6 fe80::ae1f:6bff:fe90:c0e3/64 scope link 
 valid_lft forever preferred_lft forever
  7: eth3:  mtu 1500 qdisc mq state UP group 
default qlen 1000
  link/ether ab:cd:ef:b0:26:29 brd ff:ff:ff:ff:ff:ff
  inet 10.22.11.6/24 brd 10.22.11.255 scope global eth3
 valid_lft forever preferred_lft forever
  inet 10.22.11.13/24 scope global secondary eth3
 valid_lft forever preferred_lft forever
  inet6 fe80::ae1f:6bff:feb0:2629/64 scope link 
 valid_lft forever preferred_lft forever
  9: eth2:  mtu 1500 qdisc mq state UP group 
default qlen 1000
  link/ether ab:cd:ef:b0:26:2b 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-06 Thread Andreas Hasenack
Bionic verification

arm system:
root@bionic-haproxy-1804069:~# uname -a
Linux bionic-haproxy-1804069 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 
16:32:18 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux

using affected package at first:
root@bionic-haproxy-1804069:~# apt-cache policy haproxy
haproxy:
  Installed: 1.8.8-1ubuntu0.3
  Candidate: 1.8.8-1ubuntu0.3
  Version table:
 *** 1.8.8-1ubuntu0.3 500
500 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 
Packages


wget fails as expected:
root@bionic-haproxy-1804069:~# wget -t1 http://localhost:8080
--2019-02-06 11:57:56--  http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... No data received.
Giving up.

root@bionic-haproxy-1804069:~# echo $?
4

haproxy logs show errors:
Feb  6 11:57:57 bionic-haproxy-1804069 haproxy[10191]: [ALERT] 036/115744 
(10191) : Current worker 10192 exited with code 135
Feb  6 11:57:57 bionic-haproxy-1804069 haproxy[10191]: [ALERT] 036/115744 
(10191) : exit-on-failure: killing every workers with SIGTERM
Feb  6 11:57:57 bionic-haproxy-1804069 haproxy[10191]: [WARNING] 036/115744 
(10191) : All workers exited. Exiting... (135)


Using updated package:
root@bionic-haproxy-1804069:~# apt-cache policy haproxy
haproxy:
  Installed: 1.8.8-1ubuntu0.4
  Candidate: 1.8.8-1ubuntu0.4
  Version table:
 *** 1.8.8-1ubuntu0.4 500
500 http://ports.ubuntu.com/ubuntu-ports bionic-proposed/main arm64 
Packages


wget now works:
root@bionic-haproxy-1804069:~# wget -t1 http://localhost:8080
--2019-02-06 12:00:17--  http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10918 (11K) [text/html]
Saving to: ‘index.html’

index.html          100%[>]  10.66K  --.-KB/s    in 0s

2019-02-06 12:00:17 (165 MB/s) - ‘index.html’ saved [10918/10918]

root@bionic-haproxy-1804069:~# echo $?
0

and haproxy logs stay silent.



bionic verification succeeded

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Cosmic:
  Fix Committed

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-06 Thread Andreas Hasenack
Cosmic verification

Using arm64:
root@cosmic-haproxy-1804069:~# uname -a
Linux cosmic-haproxy-1804069 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 
16:32:18 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux


First, confirming the bug:
root@cosmic-haproxy-1804069:~# apt-cache policy haproxy
haproxy:
  Installed: 1.8.13-2ubuntu0.1
  Candidate: 1.8.13-2ubuntu0.1
  Version table:
 *** 1.8.13-2ubuntu0.1 500
500 http://ports.ubuntu.com/ubuntu-ports cosmic-updates/main arm64 
Packages

wget fails as expected:
root@cosmic-haproxy-1804069:~# wget -t1 http://localhost:8080
--2019-02-06 11:43:52--  http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... No data received.
Giving up.

root@cosmic-haproxy-1804069:~# echo $?
4

Haproxy error logs show:
Feb  6 11:43:52 cosmic-haproxy-1804069 haproxy[4539]: [ALERT] 036/114339 (4539) 
: Current worker 4540 exited with code 135  
 
Feb  6 11:43:52 cosmic-haproxy-1804069 haproxy[4539]: [ALERT] 036/114339 (4539) 
: exit-on-failure: killing every workers with SIGTERM   
 
Feb  6 11:43:52 cosmic-haproxy-1804069 haproxy[4539]: [WARNING] 036/114339 
(4539) : All workers exited. Exiting... (135)   


Upgraded haproxy package:
root@cosmic-haproxy-1804069:~# apt-cache policy haproxy
haproxy:
  Installed: 1.8.13-2ubuntu0.2
  Candidate: 1.8.13-2ubuntu0.2
  Version table:
 *** 1.8.13-2ubuntu0.2 500
500 http://ports.ubuntu.com/ubuntu-ports cosmic-proposed/main arm64 
Packages

wget now works:
root@cosmic-haproxy-1804069:~# wget -t1 http://localhost:8080
--2019-02-06 11:46:19--  http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10918 (11K) [text/html]
Saving to: ‘index.html’

index.html          100%[>]  10.66K  --.-KB/s    in 0s

2019-02-06 11:46:19 (154 MB/s) - ‘index.html’ saved [10918/10918]

root@cosmic-haproxy-1804069:~# echo $?
0
root@cosmic-haproxy-1804069:~# 

And the haproxy logs stay quiet.


Cosmic verification succeeded.

** Tags removed: verification-needed-cosmic
** Tags added: verification-done-cosmic

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Cosmic:
  Fix Committed

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-06 Thread Andreas Hasenack
Ok, thanks for trying. I can complete the verification and will post
results later today.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Cosmic:
  Fix Committed

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:

  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html          100%[>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]
  Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.

  [Other Info]
  After writing the testing instructions for this bug, I decided they could be 
easily converted to a DEP8 test, which I did and included in this SRU. This new 
test, very simple but effective, shows that arm64 is working, and that the 
other architectures didn't break.
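
  For readers unfamiliar with DEP8 (autopkgtest), such a test boils down to a
  stanza in debian/tests/control plus a small script. A rough sketch along the
  lines of the manual test case above (file names and contents are purely
  illustrative; the test actually shipped in the package may differ):

  #!/bin/sh
  # debian/tests/proxy-localhost (illustrative), declared in debian/tests/control
  # with roughly: Tests: proxy-localhost, Depends: haproxy apache2 wget,
  # Restrictions: needs-root allow-stderr
  set -e
  # drop in the haproxy.cfg from the test case above, then restart haproxy
  install -m 644 debian/tests/haproxy.cfg /etc/haproxy/haproxy.cfg
  systemctl restart haproxy
  # a single proxied request through haproxy to apache must succeed
  wget -t1 -O /dev/null http://localhost:8080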

  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-04 Thread Andreas Hasenack
You can boot some other release for the host and use an LXD container for
the cosmic test.
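
For example (a sketch; the image alias assumes the standard ubuntu: image
remote):

lxc launch ubuntu:cosmic cosmic-haproxy-1804069
lxc exec cosmic-haproxy-1804069 -- bash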

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Cosmic:
  Fix Committed

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:

  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html          100%[>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]
  Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.

  [Other Info]
  After writing the testing instructions for this bug, I decided they could be 
easily converted to a DEP8 test, which I did and included in this SRU. This new 
test, very simple but effective, shows that arm64 is working, and that the 
other architectures didn't break.

  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-02-04 Thread Andreas Hasenack
Jonathan, could you please detail what exactly you tested, and with
which package version? The SRU team will appreciate that level of
detail, which gives more confidence in the update.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Cosmic:
  Fix Committed

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.

  [Test Case]

  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y

  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096

  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096

  frontend test-front
  bind *:8080
  mode http
  default_backend test-back

  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80

  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log

  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy

  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)

  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4

  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)

  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:

  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’

  index.html          100%[>]  10.66K  --.-KB/s    in 0s

  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]

  [Regression Potential]
  Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.

  [Other Info]
  After writing the testing instructions for this bug, I decided they could be 
easily converted to a DEP8 test, which I did and included in this SRU. This new 
test, very simple but effective, shows that arm64 is working, and that the 
other architectures didn't break.

  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions


[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-30 Thread Andreas Hasenack
** Description changed:

  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.
  
  [Test Case]
  
  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y
  
  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096
  
  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096
  
  frontend test-front
  bind *:8080
  mode http
  default_backend test-back
  
  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80
  
  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log
  
  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy
  
  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)
  
  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4
  
  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)
  
  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:
  
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’
  
  index.html          100%[>]  10.66K  --.-KB/s    in 0s
  
  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]
  
  [Regression Potential]
  Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.
  
  [Other Info]
- It's bad that our DEP8 test didn't catch this, since all it does is start the 
service and briefly talk to it, but there is no actual proxying going on.
+ After writing the testing instructions for this bug, I decided they could be 
easily converted to a DEP8 test, which I did and included in this SRU. This new 
test, very simple but effective, shows that arm64 is working, and that the 
other architectures didn't break.
  
  [Original Description]
  
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html
  
  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a
  
  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
** Description changed:

  [Impact]
  haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.
  
  [Test Case]
  
  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y
  
  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096
  
  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096
  
  frontend test-front
  bind *:8080
  mode http
  default_backend test-back
  
  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80
  
  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log
  
  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy
  
  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)
  
  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4
  
  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)
  
  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:
  
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’
  
  index.html          100%[>]  10.66K  --.-KB/s    in 0s
  
  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]
  
  [Regression Potential]
- 
-  * discussion of how regressions are most likely to manifest as a result
- of this change.
- 
-  * It is assumed that any SRU candidate patch is well-tested before
-    upload and has a low overall risk of regression, but it's important
-    to make the effort to think about what ''could'' happen in the
-    event of a regression.
- 
-  * This both shows the SRU team that the risks have been considered,
-    and provides guidance to testers in regression-testing the SRU.
+ Patch was applied upstream in 1.8.15 and is available in the same form in the 
latest 1.8.17 release. The patch is a bit low level, but seems to have been 
well understood.
  
  [Other Info]
- 
-  * Anything else you think is useful to include
-  * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
-  * and address these questions in advance
+ It's bad that our DEP8 test didn't catch this, since all it does is start the 
service and briefly talk to it, but there is no actual proxying going on.
  
  [Original Description]
  
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html
  
  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a
  
  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
** Description changed:

  [Impact]
- 
-  * An explanation of the effects of the bug on users and
- 
-  * justification for backporting the fix to the stable release.
- 
-  * In addition, it is helpful, but not required, to include an
-    explanation of how the upload fixes this bug.
+ haproxy as shipped with bionic and cosmic doesn't work on arm64 
architectures, crashing the moment it serves a request.
  
  [Test Case]
  
  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y
  
  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  maxconn 4096
  
  defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout client 50s
  timeout connect 10s
  timeout http-request 5s
  timeout server 50s
  maxconn 4096
  
  frontend test-front
  bind *:8080
  mode http
  default_backend test-back
  
  backend test-back
  mode http
  stick store-request src
  stick-table type ip size 256k expire 30m
  server test-1 localhost:80
  
  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log
  
  * in another terminal, restart haproxy:
  sudo systemctl restart haproxy
  
  * The haproxy log will become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)
  
  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4
  
  * the haproxy logs will show errors:
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
  Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)
  
  * Update the haproxy package and try the wget one more time. This time
  it will work, and the haproxy logs will stay silent:
  
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’
  
  index.html          100%[>]  10.66K  --.-KB/s    in 0s
  
  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]
  
  [Regression Potential]
  
   * discussion of how regressions are most likely to manifest as a result
  of this change.
  
   * It is assumed that any SRU candidate patch is well-tested before
     upload and has a low overall risk of regression, but it's important
     to make the effort to think about what ''could'' happen in the
     event of a regression.
  
   * This both shows the SRU team that the risks have been considered,
     and provides guidance to testers in regression-testing the SRU.
  
  [Other Info]
  
   * Anything else you think is useful to include
   * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
   * and address these questions in advance
  
  [Original Description]
  
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html
  
  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a
  
  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
** Description changed:

  [Impact]
  
   * An explanation of the effects of the bug on users and
  
   * justification for backporting the fix to the stable release.
  
   * In addition, it is helpful, but not required, to include an
     explanation of how the upload fixes this bug.
  
  [Test Case]
  
  * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
  sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y
  
  * Create /etc/haproxy/haproxy.cfg with the following contents:
  global
- chroot /var/lib/haproxy
- user haproxy
- group haproxy
- daemon
- maxconn 4096
+ chroot /var/lib/haproxy
+ user haproxy
+ group haproxy
+ daemon
+ maxconn 4096
  
  defaults
- log global
- option dontlognull
- option redispatch
- retries 3
- timeout client 50s
- timeout connect 10s
- timeout http-request 5s
- timeout server 50s
- maxconn 4096
+ log global
+ option dontlognull
+ option redispatch
+ retries 3
+ timeout client 50s
+ timeout connect 10s
+ timeout http-request 5s
+ timeout server 50s
+ maxconn 4096
  
  frontend test-front
- bind *:8080
- mode http
- default_backend test-back
+ bind *:8080
+ mode http
+ default_backend test-back
  
  backend test-back
- mode http
- stick store-request src
- stick-table type ip size 256k expire 30m
- server test-1 localhost:80
- 
+ mode http
+ stick store-request src
+ stick-table type ip size 256k expire 30m
+ server test-1 localhost:80
  
  * in one terminal, keep tailing the (still nonexistent) haproxy log file:
  tail -F /var/log/haproxy.log
  
- * in another terminal, restart haproxy and tail its logs:
+ * in another terminal, restart haproxy:
  sudo systemctl restart haproxy
- tail -f /var/log/haproxy.log
  
  * in another terminal, start haproxy:
  sudo systemctl restart haproxy
  
  * The haproxy log fill become live, and already show errors:
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
  Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)
  
  * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
  $ wget -t1 http://localhost:8080
  --2019-01-24 19:23:51--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... No data received.
  Giving up.
  $ echo $?
  4
  
  * the haproxy logs will show errors:
- Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135   

- Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM

- Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)  
+ Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135
+ Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM
+ Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)
  
  * Update the haproxy package and try the wget one more time. This time it 
will work, and the haproxy logs will stay silent:
  wget -t1 http://localhost:8080
  --2019-01-24 19:26:14--  http://localhost:8080/
  Resolving localhost (localhost)... 127.0.0.1
  Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 10918 (11K) [text/html]
  Saving to: ‘index.html’
  
  index.html          100%[>]  10.66K  --.-KB/s    in 0s
  
  2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]
- 
  
  [Regression Potential]
  
   * discussion of how regressions are most likely to manifest as a result
  of this change.
  
   * It is assumed that any SRU candidate patch is well-tested before
     upload and has a low overall risk of regression, but it's important
     to make the effort to think about what ''could'' happen in the
     event of a 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
** Description changed:

  [Impact]
  
-  * An explanation of the effects of the bug on users and
+  * An explanation of the effects of the bug on users and
  
-  * justification for backporting the fix to the stable release.
+  * justification for backporting the fix to the stable release.
  
-  * In addition, it is helpful, but not required, to include an
-explanation of how the upload fixes this bug.
+  * In addition, it is helpful, but not required, to include an
+    explanation of how the upload fixes this bug.
  
  [Test Case]
  
-  * detailed instructions how to reproduce the bug
+ * install haproxy and apache in an up-to-date ubuntu release you are testing, 
in an arm64 system:
+ sudo apt update && sudo apt dist-upgrade -y && sudo apt install haproxy 
apache2 -y
  
-  * these should allow someone who is not familiar with the affected
-package to reproduce the bug and verify that the updated package fixes
-the problem.
+ * Create /etc/haproxy/haproxy.cfg with the following contents:
+ global
+ chroot /var/lib/haproxy
+ user haproxy
+ group haproxy
+ daemon
+ maxconn 4096
+ 
+ defaults
+ log global
+ option dontlognull
+ option redispatch
+ retries 3
+ timeout client 50s
+ timeout connect 10s
+ timeout http-request 5s
+ timeout server 50s
+ maxconn 4096
+ 
+ frontend test-front
+ bind *:8080
+ mode http
+ default_backend test-back
+ 
+ backend test-back
+ mode http
+ stick store-request src
+ stick-table type ip size 256k expire 30m
+ server test-1 localhost:80
+ 
+ 
+ * in one terminal, keep tailing the (still nonexistent) haproxy log file:
+ tail -F /var/log/haproxy.log
+ 
+ * in another terminal, restart haproxy and tail its logs:
+ sudo systemctl restart haproxy
+ tail -f /var/log/haproxy.log
+ 
+ * in another terminal, start haproxy:
+ sudo systemctl restart haproxy
+ 
+ * The haproxy log fill become live, and already show errors:
+ Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : Exiting Master process...
+ Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [ALERT] 023/191958 
(2286) : Current worker 2287 exited with code 143
+ Jan 24 19:22:23 cosmic-haproxy-1804069 haproxy[2286]: [WARNING] 023/191958 
(2286) : All workers exited. Exiting... (143)
+ 
+ * Run wget to try to fetch the apache frontpage, via haproxy, limited to one 
attempt. It will fail:
+ $ wget -t1 http://localhost:8080
+ --2019-01-24 19:23:51--  http://localhost:8080/
+ Resolving localhost (localhost)... 127.0.0.1
+ Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
+ HTTP request sent, awaiting response... No data received.
+ Giving up.
+ $ echo $?
+ 4
+ 
+ * the haproxy logs will show errors:
+ Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : Current worker 6412 exited with code 135   

+ Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [ALERT] 023/192351 
(6411) : exit-on-failure: killing every workers with SIGTERM

+ Jan 24 19:24:36 cosmic-haproxy-1804069 haproxy[6411]: [WARNING] 023/192351 
(6411) : All workers exited. Exiting... (135)  
+ 
+ * Update the haproxy package and try the wget one more time. This time it 
will work, and the haproxy logs will stay silent:
+ wget -t1 http://localhost:8080
+ --2019-01-24 19:26:14--  http://localhost:8080/
+ Resolving localhost (localhost)... 127.0.0.1
+ Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
+ HTTP request sent, awaiting response... 200 OK
+ Length: 10918 (11K) [text/html]
+ Saving to: ‘index.html’
+ 
+ index.html          100%[>]  10.66K  --.-KB/s    in 0s
+ 
+ 2019-01-24 19:26:14 (75.3 MB/s) - ‘index.html’ saved [10918/10918]
+ 
  
  [Regression Potential]
  
-  * discussion of how regressions are most likely to manifest as a result
+  * discussion of how regressions are most likely to manifest as a result
  of this change.
  
-  * It is assumed that any SRU candidate patch is well-tested before
-upload and has a low overall risk of regression, but it's important
-to make the effort to think about what ''could'' happen in the
-event of a regression.
+  * It is assumed that any SRU candidate patch is well-tested before
+    upload and has a low overall risk of regression, but it's important
+    to make the effort to think about what ''could'' happen in the
+    event of a regression.
  
-  * This both shows the SRU team that the risks have been considered,
-and provides guidance to testers in regression-testing the SRU.
+  * This both shows the SRU team that the risks have been 

[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
** Description changed:

+ [Impact]
+ 
+  * An explanation of the effects of the bug on users and
+ 
+  * justification for backporting the fix to the stable release.
+ 
+  * In addition, it is helpful, but not required, to include an
+explanation of how the upload fixes this bug.
+ 
+ [Test Case]
+ 
+  * detailed instructions how to reproduce the bug
+ 
+  * these should allow someone who is not familiar with the affected
+package to reproduce the bug and verify that the updated package fixes
+the problem.
+ 
+ [Regression Potential]
+ 
+  * discussion of how regressions are most likely to manifest as a result
+ of this change.
+ 
+  * It is assumed that any SRU candidate patch is well-tested before
+upload and has a low overall risk of regression, but it's important
+to make the effort to think about what ''could'' happen in the
+event of a regression.
+ 
+  * This both shows the SRU team that the risks have been considered,
+and provides guidance to testers in regression-testing the SRU.
+ 
+ [Other Info]
+  
+  * Anything else you think is useful to include
+  * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
+  * and address these questions in advance
+ 
+ 
+ [Original Description]
+ 
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html
  
  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a
  
  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  [Impact]

   * An explanation of the effects of the bug on users and

   * justification for backporting the fix to the stable release.

   * In addition, it is helpful, but not required, to include an
 explanation of how the upload fixes this bug.

  [Test Case]

   * detailed instructions how to reproduce the bug

   * these should allow someone who is not familiar with the affected
 package to reproduce the bug and verify that the updated package fixes
 the problem.

  [Regression Potential]

   * discussion of how regressions are most likely to manifest as a
  result of this change.

   * It is assumed that any SRU candidate patch is well-tested before
 upload and has a low overall risk of regression, but it's important
 to make the effort to think about what ''could'' happen in the
 event of a regression.

   * This both shows the SRU team that the risks have been considered,
 and provides guidance to testers in regression-testing the SRU.

  [Other Info]
   
   * Anything else you think is useful to include
   * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
   * and address these questions in advance

  
  [Original Description]

  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
Tests kicked off:
cosmic: https://bileto.ubuntu.com/#/ticket/3610
bionic: https://bileto.ubuntu.com/#/ticket/3611

Those tickets have links to respective ppa builds, if someone wants to
test early.
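
For anyone testing from those silo PPAs, the usual flow is roughly as
follows (the exact PPA line is shown on the ticket page; the silo name
below is a placeholder):

  sudo add-apt-repository ppa:ci-train-ppa-service/<silo>
  sudo apt update && sudo apt install haproxy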

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-24 Thread Andreas Hasenack
Bug reproduced, and fix confirmed.
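
A minimal sketch of the kind of config that exercises the stick-table code
path touched by the upstream fix (illustrative only; the reproduction
details are in the mailing list thread linked in the description):

  listen l1
      bind *:8080
      mode tcp
      stick-table type ip size 1m expire 10m
      stick on src
      server s1 127.0.0.1:8000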

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-23 Thread Andreas Hasenack
** Changed in: haproxy (Ubuntu Bionic)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: haproxy (Ubuntu Cosmic)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: haproxy (Ubuntu Bionic)
   Status: Triaged => In Progress

** Changed in: haproxy (Ubuntu Cosmic)
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  In Progress
Status in haproxy source package in Cosmic:
  In Progress

Bug description:
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2019-01-23 Thread Andreas Hasenack
This was fixed upstream in 1.8.15:
2018/12/13 : 1.8.15
- MINOR: threads: Make sure threads_sync_pipe is initialized before using 
it.
...
- BUG/MEDIUM: Make sure stksess is properly aligned. <--

Marking main task as fix released, as disco has 1.8.17. But confirmed
for bionic and cosmic (assuming it was introduced in the 1.8.x series
and not present earlier).
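
A quick way to double-check which series carry which upstream version
(rmadison is in devscripts):

  rmadison haproxy
  apt-cache policy haproxy    # on an individual release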

** Also affects: haproxy (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Changed in: haproxy (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: haproxy (Ubuntu Cosmic)
   Status: New => Triaged

** Changed in: haproxy (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: haproxy (Ubuntu Cosmic)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Triaged
Status in haproxy source package in Cosmic:
  Triaged

Bug description:
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810583] Re: Daily cron restarts network on unattended updates but keepalived .service is not restarted as a dependency

2019-01-15 Thread Andreas Hasenack
Do all of you have daily network restarts? What's the reason? Or was
this a one-off update that just by chance had a package upgrade that
required such a restart?

That being said, I of course agree that losing the virtual IP in such a
situation is bad.
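
If tying keepalived's lifecycle to the network service does turn out to be
the right approach, a drop-in override avoids patching the shipped unit; a
minimal sketch, assuming systemd-networkd is the network manager in use:

  # systemctl edit keepalived
  # (creates /etc/systemd/system/keepalived.service.d/override.conf)
  [Unit]
  PartOf=systemd-networkd.service
  After=systemd-networkd.service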

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1810583

Title:
  Daily cron restarts network on unattended updates but keepalived
  .service is not restarted as a dependency

Status in keepalived package in Ubuntu:
  Confirmed
Status in networkd-dispatcher package in Ubuntu:
  New

Bug description:
  Description:Ubuntu 18.04.1 LTS
  Release:18.04
  ii  keepalived  1:1.3.9-1ubuntu0.18.04.1  amd64  Failover and monitoring
daemon for LVS clusters

  (From unanswered
  https://answers.launchpad.net/ubuntu/+source/keepalived/+question/676267)

  Two weeks ago we lost our keepalived VRRP address on one of our
  systems; closer inspection revealed that this was due to the daily
  cron job. Apparently something triggered a udev reload (and last week
  the same seemed to happen), which obviously triggers a network restart.

  Are we right in assuming the below patch is the correct way (and
  shouldn't this be in the default install of the systemd service of
  keepalived)?

  /etc/systemd/system/multi-user.target.wants/keepalived.service:
  --- keepalived.service.orig 2018-11-20 09:17:06.973924706 +0100
  +++ keepalived.service 2018-11-20 09:05:55.984773226 +0100
  @@ -4,6 +4,7 @@
   Wants=network-online.target
   # Only start if there is a configuration file
   ConditionFileNotEmpty=/etc/keepalived/keepalived.conf
  +PartOf=systemd-networkd.service

  Accompanying syslog:
  Nov 20 06:34:33 ourmachine systemd[1]: Starting Daily apt upgrade and clean 
activities...
  Nov 20 06:34:42 ourmachine systemd[1]: Reloading.
  Nov 20 06:34:44 ourmachine systemd[1]: message repeated 2 times: [ Reloading.]
  Nov 20 06:34:44 ourmachine systemd[1]: Starting Daily apt download 
activities...
  Nov 20 06:34:44 ourmachine systemd[1]: Stopping udev Kernel Device Manager...
  Nov 20 06:34:44 ourmachine systemd[1]: Stopped udev Kernel Device Manager.
  Nov 20 06:34:44 ourmachine systemd[1]: Starting udev Kernel Device Manager...
  Nov 20 06:34:44 ourmachine systemd[1]: Started udev Kernel Device Manager.
  Nov 20 06:34:45 ourmachine systemd[1]: Reloading.
  Nov 20 06:34:45 ourmachine systemd[1]: Reloading.
  Nov 20 06:35:13 ourmachine systemd[1]: Reexecuting.
  Nov 20 06:35:13 ourmachine systemd[1]: Stopped Wait for Network to be 
Configured.
  Nov 20 06:35:13 ourmachine systemd[1]: Stopping Wait for Network to be 
Configured...
  Nov 20 06:35:13 ourmachine systemd[1]: Stopping Network Service..

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1810583/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810844] Re: libknet-dev universe dependency

2019-01-09 Thread Andreas Hasenack
MIR being prepared:
https://bugs.launchpad.net/ubuntu/+source/kronosnet/+bug/1811139

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1810844

Title:
  libknet-dev universe dependency

Status in corosync package in Ubuntu:
  New

Bug description:
  corosync is a sync from debian, and in version 3.0.0-1 they added a
  dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
  universe.

  
https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
  
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270

  Some bullet points copied from 
https://github.com/corosync/corosync/wiki/Corosync-3.0.0-Release-Notes:
  - Corosync 3.0 contains many interesting features mostly related to usage of 
Kronosnet (https://kronosnet.org/) as a default (and preferred) network 
transport.
  - UDP/UDPU transports are still present, but supports only single ring (RRP 
is gone in favor of Knet) and doesn't support encryption
  - Corosync 3 is not wire compatible with previous versions. Needle was 
patched so it will display warning message when receive corosync 3 packet (and 
vice-versa).
  - NSS dependency removal - not needed anymore because crypto is now handled 
by knet, so no replacement needed. This change affects only cpgverify where 
packet format is changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1810844/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1809682] Re: "systemctl enable corosync-qdevice.service" fails

2019-01-09 Thread Andreas Hasenack
Bug confirmed. It's an odd-looking init script, as it does not implement
the usual actions itself.
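
For reference, update-rc.d expects an LSB header with runlevels; roughly
along these lines (illustrative sketch, not the packaged script):

  ### BEGIN INIT INFO
  # Provides:          corosync-qdevice
  # Required-Start:    $remote_fs $syslog corosync
  # Required-Stop:     $remote_fs $syslog
  # Default-Start:     2 3 4 5
  # Default-Stop:      0 1 6
  ### END INIT INFO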

** Changed in: corosync (Ubuntu)
   Status: New => Triaged

** Changed in: corosync (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1809682

Title:
  "systemctl enable corosync-qdevice.service" fails

Status in corosync package in Ubuntu:
  Triaged

Bug description:
  "systemctl enable corosync-qdevice.service" fails:
  Synchronizing state of corosync-qdevice.service with SysV service script with 
/lib/systemd/systemd-sysv-install.
  Executing: /lib/systemd/systemd-sysv-install enable corosync-qdevice
  update-rc.d: error: corosync-qdevice Default-Start contains no runlevels, 
aborting.

  Reason is that an init.d script as well as a systemd service definition 
exists:
  # dpkg -L corosync-qdevice
  /lib/systemd/system/corosync-qdevice.service
  /etc/init.d/corosync-qdevice

  Therefore, using pcs to add a qdevice to the cluster fails:
  pcs quorum device add model net host=qdevice-host algorithm=ffsplit
  Log:
  I, [2018-12-24T22:39:00.208158 #149]  INFO -- : Running: systemctl enable 
corosync-qdevice.service
  I, [2018-12-24T22:39:00.208322 #149]  INFO -- : CIB USER: hacluster, groups: 
  I, [2018-12-24T22:39:00.463878 #149]  INFO -- : Return Value: 1
  E, [2018-12-24T22:39:00.464110 #149] ERROR -- : Enabling corosync-qdevice 
failed

  Workaround is to manually remove the "/etc/init.d/corosync-qdevice"
  script.

  Solution is to remove the "/etc/init.d/corosync-qdevice" file from the
  "corosync-qdevice" package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1809682/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810844] Re: libknet-dev universe dependency

2019-01-07 Thread Andreas Hasenack
** Description changed:

  corosync is a sync from debian, and in version 3.0.0-1 they added a
  dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
  universe.
  
  
https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
  
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270
  
- Some bits from 
https://github.com/corosync/corosync/wiki/Corosync-3.0.0-Release-Notes:
+ Some bullet points copied from 
https://github.com/corosync/corosync/wiki/Corosync-3.0.0-Release-Notes:
  - Corosync 3.0 contains many interesting features mostly related to usage of 
Kronosnet (https://kronosnet.org/) as a default (and preferred) network 
transport.
  - UDP/UDPU transports are still present, but supports only single ring (RRP 
is gone in favor of Knet) and doesn't support encryption
  - Corosync 3 is not wire compatible with previous versions. Needle was 
patched so it will display warning message when receive corosync 3 packet (and 
vice-versa).
  - NSS dependency removal - not needed anymore because crypto is now handled 
by knet, so no replacement needed. This change affects only cpgverify where 
packet format is changed.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1810844

Title:
  libknet-dev universe dependency

Status in corosync package in Ubuntu:
  New

Bug description:
  corosync is a sync from debian, and in version 3.0.0-1 they added a
  dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
  universe.

  
https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
  
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270

  Some bullet points copied from 
https://github.com/corosync/corosync/wiki/Corosync-3.0.0-Release-Notes:
  - Corosync 3.0 contains many interesting features mostly related to usage of 
Kronosnet (https://kronosnet.org/) as a default (and preferred) network 
transport.
  - UDP/UDPU transports are still present, but supports only single ring (RRP 
is gone in favor of Knet) and doesn't support encryption
  - Corosync 3 is not wire compatible with previous versions. Needle was 
patched so it will display warning message when receive corosync 3 packet (and 
vice-versa).
  - NSS dependency removal - not needed anymore because crypto is now handled 
by knet, so no replacement needed. This change affects only cpgverify where 
packet format is changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1810844/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810844] Re: libknet-dev universe dependency

2019-01-07 Thread Andreas Hasenack
** Description changed:

  corosync is a sync from debian, and in version 3.0.0-1 they added a
  dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
  universe.
  
  
https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
  
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270
  
- I don't know at this point if this is an upstream change (deprecating
- libnss3 in favor of libknet) or just a debian change.
+ Some bits from 
https://github.com/corosync/corosync/wiki/Corosync-3.0.0-Release-Notes:
+ - Corosync 3.0 contains many interesting features mostly related to usage of 
Kronosnet (https://kronosnet.org/) as a default (and preferred) network 
transport.
+ - UDP/UDPU transports are still present, but supports only single ring (RRP 
is gone in favor of Knet) and doesn't support encryption
+ - Corosync 3 is not wire compatible with previous versions. Needle was 
patched so it will display warning message when receive corosync 3 packet (and 
vice-versa).
+ - NSS dependency removal - not needed anymore because crypto is now handled 
by knet, so no replacement needed. This change affects only cpgverify where 
packet format is changed.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1810844

Title:
  libknet-dev universe dependency

Status in corosync package in Ubuntu:
  New

Bug description:
  corosync is a sync from debian, and in version 3.0.0-1 they added a
  dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
  universe.

  
https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
  
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270

  Some bits from 
https://github.com/corosync/corosync/wiki/Corosync-3.0.0-Release-Notes:
  - Corosync 3.0 contains many interesting features mostly related to usage of 
Kronosnet (https://kronosnet.org/) as a default (and preferred) network 
transport.
  - UDP/UDPU transports are still present, but supports only single ring (RRP 
is gone in favor of Knet) and doesn't support encryption
  - Corosync 3 is not wire compatible with previous versions. Needle was 
patched so it will display warning message when receive corosync 3 packet (and 
vice-versa).
  - NSS dependency removal - not needed anymore because crypto is now handled 
by knet, so no replacement needed. This change affects only cpgverify where 
packet format is changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1810844/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810844] [NEW] libknet-dev universe dependency

2019-01-07 Thread Andreas Hasenack
Public bug reported:

corosync is a sync from debian, and in version 3.0.0-1 they added a
dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
universe.

https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270

I don't know at this point if this is an upstream change (deprecating
libnss3 in favor of libknet) or just a debian change.

** Affects: corosync (Ubuntu)
 Importance: Undecided
 Status: New

** Summary changed:

- FTBFS: libknet-dev dependency which is in universe
+ FTBFS: libknet-dev universe dependency

** Summary changed:

- FTBFS: libknet-dev universe dependency
+ libknet-dev universe dependency

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1810844

Title:
  libknet-dev universe dependency

Status in corosync package in Ubuntu:
  New

Bug description:
  corosync is a sync from debian, and in version 3.0.0-1 they added a
  dependency on libknet-dev (kronos net). In Ubuntu, libknet-dev is in
  universe.

  
https://salsa.debian.org/ha-team/corosync/commit/ca28efda4c8647ba262367c076e54d4787f529e0
  
https://salsa.debian.org/ha-team/corosync/commit/f69e2aa8b8f0fa3895f791a4706f7fdacb53b270

  I don't know at this point if this is an upstream change (deprecating
  libnss3 in favor of libknet) or just a debian change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1810844/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1792298] Re: keepalived: MISC healthchecker's exit status is erroneously treated as a permanent error

2018-12-05 Thread Andreas Hasenack
That patch is applied upstream in the package shipped in bionic and
later
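
For anyone verifying the xenial backport, a minimal sketch of a checker
setup (addresses and script path are illustrative; the script exits 3,
which per the documentation should translate into weight 1 rather than a
permanent error):

  virtual_server 192.0.2.10 80 {
      delay_loop 6
      lb_algo wlc
      lb_kind NAT
      protocol TCP
      real_server 192.0.2.11 80 {
          MISC_CHECK {
              misc_path "/usr/local/bin/my-check.sh"
              misc_timeout 5
              misc_dynamic
          }
      }
  }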

** Also affects: keepalived (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: keepalived (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: keepalived (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: keepalived (Ubuntu Xenial)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1792298

Title:
  keepalived: MISC healthchecker's exit status is erroneously treated as
  a permanent error

Status in keepalived package in Ubuntu:
  Fix Released
Status in keepalived source package in Xenial:
  Triaged

Bug description:
  1) The release of Ubuntu we are using
  $ lsb_release -rd
  Description:Ubuntu 16.04.5 LTS
  Release:16.04

  2) The version of the package we are using
  $ apt-cache policy keepalived
  keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
  ...

  3) What we expected to happen
  MISC healthcheckers would be treated normally.

  4) What happened instead
  We are trying to use Ubuntu 16.04's keepalived with our own MISC 
healthchecker, which is implemented to exit with exit code 3, and getting the 
following log messages endlessly.

  --- Note: some IP fields are masked ---
  Sep 12 06:55:09 devsvr Keepalived[16705]: Healthcheck child process(34232) 
died: Respawning
  Sep 12 06:55:09 devsvr Keepalived[16705]: Starting Healthcheck child process, 
pid=34239
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Initializing ipvs
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Registering Kernel 
netlink reflector
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Registering Kernel 
netlink command channel
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Opening file 
'/etc/keepalived/keepalived.conf'.
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Using LinkWatch 
kernel netlink reflector...
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.18]:80
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.19]:80
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.18]:443
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.19]:443
  ...
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.52]:443
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.53]:443
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: pid 34257 exited 
with permanent error CONFIG. Terminating
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.24]:25 from VS [YY.YY.YY.YY]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.25]:25 from VS [YY.YY.YY.YY]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.21]:56667 from VS [ZZ.ZZ.ZZ.ZZ]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.52]:443 from VS [WW.WW.WW.WW]:0
  Sep 12 06:55:10 devsvr Keepalived[16705]: Healthcheck child process(34239) 
died: Respawning
  Sep 12 06:55:10 devsvr Keepalived[16705]: Starting Healthcheck child process, 
pid=34260
  ...
  ---

  It looks like our MISC healthchecker's exit code 3, which should be a
  valid value according to the following description, is treated as a
  permanent error since it is equal to KEEPALIVED_EXIT_CONFIG defined in
  keepalived's lib/scheduler.h :

  ---
 # MISC healthchecker, run a program
 MISC_CHECK
 {
 # External script or program
 ...
 #   exit status 2-255: svc check success, weight
 # changed to 2 less than exit status.
 #   (for example: exit status of 255 would set
 # weight to 253)
 misc_dynamic
 }
  ---

  The problem, we think, started with this patch (we did not see the 
problem in Ubuntu 14.04):
Stop respawning children repeatedly after permanent error
- 
https://github.com/acassen/keepalived/commit/4ae9314af448eb8ea4f3d8ef39bcc469779b0fec

  The problem will be fixed by this patch (not included in Ubuntu 16.04):
Make report_child_status() check for vrrp and checker child processes
- 
https://github.com/acassen/keepalived/commit/ca955a7c1a6af324428ff04e24be68a180be127f

  Please consider backporting it to Ubuntu 16.04's keepalived.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1792298/+subscriptions

___
Mailing 

[Ubuntu-ha] [Bug 1792298] Re: keepalived: MISC healthchecker's exit status is erroneously treated as a permanent error

2018-12-05 Thread Andreas Hasenack
Thanks for the patch and testing instructions

** Tags added: bitesize server-next

** Changed in: keepalived (Ubuntu)
   Status: Incomplete => Triaged

** Changed in: keepalived (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1792298

Title:
  keepalived: MISC healthchecker's exit status is erroneously treated as
  a permanent error

Status in keepalived package in Ubuntu:
  Fix Released
Status in keepalived source package in Xenial:
  Triaged

Bug description:
  1) The release of Ubuntu we are using
  $ lsb_release -rd
  Description:Ubuntu 16.04.5 LTS
  Release:16.04

  2) The version of the package we are using
  $ apt-cache policy keepalived
  keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
  ...

  3) What we expected to happen
  MISC healthcheckers would be treated normally.

  4) What happened instead
  We are trying to use Ubuntu 16.04's keepalived with our own MISC 
healthchecker, which is implemented to exit with exit code 3, and getting the 
following log messages endlessly.

  --- Note: some IP fields are masked ---
  Sep 12 06:55:09 devsvr Keepalived[16705]: Healthcheck child process(34232) 
died: Respawning
  Sep 12 06:55:09 devsvr Keepalived[16705]: Starting Healthcheck child process, 
pid=34239
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Initializing ipvs
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Registering Kernel 
netlink reflector
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Registering Kernel 
netlink command channel
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Opening file 
'/etc/keepalived/keepalived.conf'.
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Using LinkWatch 
kernel netlink reflector...
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.18]:80
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.19]:80
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.18]:443
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.19]:443
  ...
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.52]:443
  Sep 12 06:55:09 devsvr Keepalived_healthcheckers[34239]: Activating 
healthchecker for service [XX.XX.XX.53]:443
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: pid 34257 exited 
with permanent error CONFIG. Terminating
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.24]:25 from VS [YY.YY.YY.YY]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.25]:25 from VS [YY.YY.YY.YY]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.21]:56667 from VS [ZZ.ZZ.ZZ.ZZ]:0
  Sep 12 06:55:10 devsvr Keepalived_healthcheckers[34239]: Removing service 
[XX.XX.XX.52]:443 from VS [WW.WW.WW.WW]:0
  Sep 12 06:55:10 devsvr Keepalived[16705]: Healthcheck child process(34239) 
died: Respawning
  Sep 12 06:55:10 devsvr Keepalived[16705]: Starting Healthcheck child process, 
pid=34260
  ...
  ---

  It looks like our MISC healthchecker's exit code 3, which should be a
  valid value according to the following description, is treated as a
  permanent error since it is equal to KEEPALIVED_EXIT_CONFIG defined in
  keepalived's lib/scheduler.h :

  ---
 # MISC healthchecker, run a program
 MISC_CHECK
 {
 # External script or program
 ...
 #   exit status 2-255: svc check success, weight
 # changed to 2 less than exit status.
 #   (for example: exit status of 255 would set
 # weight to 253)
 misc_dynamic
 }
  ---

  The problem, we think, started with this patch (we did not see the 
problem in Ubuntu 14.04):
Stop respawning children repeatedly after permanent error
- 
https://github.com/acassen/keepalived/commit/4ae9314af448eb8ea4f3d8ef39bcc469779b0fec

  The problem will be fixed by this patch (not included in Ubuntu 16.04):
Make report_child_status() check for vrrp and checker child processes
- 
https://github.com/acassen/keepalived/commit/ca955a7c1a6af324428ff04e24be68a180be127f

  Please consider backporting it to Ubuntu 16.04's keepalived.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1792298/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : 

[Ubuntu-ha] [Bug 1627083] Re: ipt_CLUSTERIP is deprecated and it will removed soon, use xt_cluster instead

2018-12-05 Thread Andreas Hasenack
For strongswan, I found a reference in a 2018 workshop to work on
xt_cluster support:
https://wiki.strongswan.org/projects/strongswan/wiki/Linux_IPsec_Workshop_2018

No open bug reports about moving from ipt_CLUSTERIP to xt_cluster, only
references in old bugs noting that the move was wanted but not done yet.

For pacemaker, I couldn't find results even mentioning the problem,
other than this bug.

Looks like it will be some time still until ipt_CLUSTERIP is abandoned.
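
For reference, the xt_cluster equivalent is a match rather than a target;
a sketch based on the iptables-extensions documentation (addresses, node
counts and marks are placeholders):

  iptables -A PREROUTING -t mangle -i eth0 -d 192.0.2.100 -m cluster \
      --cluster-total-nodes 2 --cluster-local-node 1 \
      --cluster-hash-seed 0xdeadbeef -j MARK --set-mark 0xffff
  iptables -A PREROUTING -t mangle -i eth0 -d 192.0.2.100 \
      -m mark ! --mark 0xffff -j DROP

Unlike CLUSTERIP, the shared (multicast) MAC also has to be configured on
the interface separately.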

** Changed in: strongswan (Ubuntu)
   Importance: Undecided => Wishlist

** Changed in: pacemaker (Ubuntu)
   Importance: Undecided => Wishlist

** Changed in: strongswan (Ubuntu)
   Status: New => Triaged

** Changed in: pacemaker (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1627083

Title:
  ipt_CLUSTERIP is deprecated and it will removed soon, use xt_cluster
  instead

Status in pacemaker package in Ubuntu:
  Triaged
Status in strongswan package in Ubuntu:
  Triaged

Bug description:
  pacemaker still uses iptable's "CLUSTERIP" -- and dmesg shows a
  deprecation warning:

  [   15.027333] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
  [   15.027464] ipt_CLUSTERIP: ipt_CLUSTERIP is deprecated and it will removed 
soon, use xt_cluster instead

  ~# iptables -L
  Chain INPUT (policy ACCEPT)
  target prot opt source   destination 
  CLUSTERIP  all  --  anywhere proxy.charite.de CLUSTERIP 
hashmode=sourceip-sourceport clustermac=EF:EE:6B:F9:7B:67 total_nodes=4 
local_node=2 hash_init=0

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: pacemaker 1.1.14-2ubuntu1.1
  ProcVersionSignature: Ubuntu 4.4.0-38.57-generic 4.4.19
  Uname: Linux 4.4.0-38-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.1
  Architecture: amd64
  Date: Fri Sep 23 17:26:01 2016
  InstallationDate: Installed on 2014-08-19 (766 days ago)
  InstallationMedia: Ubuntu-Server 14.04.1 LTS "Trusty Tahr" - Release amd64 
(20140722.3)
  SourcePackage: pacemaker
  UpgradeStatus: Upgraded to xenial on 2016-09-22 (1 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1627083/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1800159] Re: keepalived ip_vs

2018-11-19 Thread Andreas Hasenack
Can you share your config /etc/keepalived/keepalived.conf ?
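
If the module simply is not being auto-loaded on your system, a workaround
until this is sorted out is to load it at boot; a sketch:

  echo ip_vs | sudo tee /etc/modules-load.d/ip_vs.conf
  sudo modprobe ip_vs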

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1800159

Title:
  keepalived ip_vs

Status in keepalived package in Ubuntu:
  Incomplete

Bug description:
  1) 
  Description:  Ubuntu 16.04.5 LTS
  Release:  16.04
  2) keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
Candidate: 1:1.2.24-1ubuntu0.16.04.1
Version table:
   *** 1:1.2.24-1ubuntu0.16.04.1 500
  500 http://ftp.hosteurope.de/mirror/archive.ubuntu.com 
xenial-updates/main amd64 Packages
  100 /var/lib/dpkg/status

  3) not loading the kernel module
  systemctl start keepalived.service
  Keepalived_healthcheckers[1680]: IPVS: Protocol not available
  Keepalived_healthcheckers[1680]: message repeated 8 times: [ IPVS: Protocol 
not available]
  ...

  4) loading the module manually 
  systemctl stop keepalived.service
  modprobe ip_vs
  kernel: [  445.363609] IPVS: ipvs loaded.
  systemctl start keepalived.service
  Keepalived_healthcheckers[5533]: Initializing ipvs
  kernel: [  600.828683] IPVS: [wlc] scheduler registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1800159/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1795420] Re: Keepalived update from 1.2.19 to 1.2.24 breaks support for /dev/tcp health check

2018-10-02 Thread Andreas Hasenack
Thanks for filing this bug in Ubuntu, and providing a link to a patch.
This will need some backporting, but it's an excellent start.

Looks like even cosmic is affected, at version 1.3.9.
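
For context, the kind of check that regressed is a bash /dev/tcp probe in a
vrrp_script; a sketch along these lines (illustrative, not the reporter's
exact configuration):

  vrrp_script chk_trigger_port {
      script "/bin/bash -c '</dev/tcp/127.0.0.1/8080'"
      interval 2
      fall 2
      rise 2
  }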

** Changed in: keepalived (Ubuntu)
   Status: New => Triaged

** Changed in: keepalived (Ubuntu)
   Importance: Undecided => High

** Tags added: regression-update

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1795420

Title:
  Keepalived update from 1.2.19 to 1.2.24 breaks support for /dev/tcp
  health check

Status in keepalived package in Ubuntu:
  Triaged

Bug description:
  Previous configuration that works fine:

  vrrp_script chk_trigger_port {
  script "https://github.com/acassen/keepalived/commit/5cd5fff78de11178c51ca245ff5de61a86b85049

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1795420/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1792544] Re: demotion of pcre3 in favor of pcre2

2018-09-18 Thread Andreas Hasenack
I've seen the wget debian change, but just switching builddeps from pcre3-dev 
to pcre2-dev and rebuilding isn't enough. The package ends up not finding pcre 
and doesn't enable it:
checking for PCRE... no
checking pcre.h usability... no
checking pcre.h presence... no
checking for pcre.h... no
...
  Libs:  -luuid -lidn2 -lnettle -lgnutls -lz -lpsl 

So while the package builds, it's not using pcre.

I tried to switch apache, and while I could make it find pcre2-config and use 
it, pcre2 has different libraries than pcre3:
pcre3-config --libs: -lpcre
pcre2-config --libs: no such parameter

In pcre2, we have --libs8, --libs-posix, --libs32 and --libs16, but no
--libs. Is this a bug in pcre2-config?

Any quick tips about this before I dig in further?
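
One more data point: pcre2 splits the library by code unit width, so the
8-bit variant is the one most of these packages want. pkg-config may be
simpler than pcre2-config here; a sketch:

  pkg-config --libs libpcre2-8       # expected to print -lpcre2-8
  pkg-config --libs libpcre2-posix   # POSIX compatibility layer

i.e. -lpcre2-8 (plus -lpcre2-posix where the POSIX API is used) instead of
-lpcre.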

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1792544

Title:
  demotion of pcre3 in favor of pcre2

Status in aide package in Ubuntu:
  New
Status in apache2 package in Ubuntu:
  New
Status in apr-util package in Ubuntu:
  New
Status in clamav package in Ubuntu:
  Triaged
Status in exim4 package in Ubuntu:
  New
Status in freeradius package in Ubuntu:
  New
Status in git package in Ubuntu:
  Triaged
Status in glib2.0 package in Ubuntu:
  New
Status in grep package in Ubuntu:
  New
Status in haproxy package in Ubuntu:
  New
Status in libpam-mount package in Ubuntu:
  New
Status in libselinux package in Ubuntu:
  New
Status in nginx package in Ubuntu:
  Triaged
Status in nmap package in Ubuntu:
  New
Status in pcre3 package in Ubuntu:
  Confirmed
Status in php7.2 package in Ubuntu:
  Triaged
Status in postfix package in Ubuntu:
  New
Status in python-pyscss package in Ubuntu:
  New
Status in quagga package in Ubuntu:
  New
Status in rasqal package in Ubuntu:
  New
Status in slang2 package in Ubuntu:
  New
Status in sssd package in Ubuntu:
  Triaged
Status in wget package in Ubuntu:
  Triaged
Status in zsh package in Ubuntu:
  Triaged

Bug description:
  demotion of pcre3 in favor of pcre2. These packages need analysis of what
  needs to be done for the demotion of pcre3:

  Packages which are ready to build with pcre2 should be marked as
  'Triaged', packages which are not ready should be marked as
  'Incomplete'.

  aide
  apache2
  apr-util
  clamav
  exim4
  freeradius
  git
  glib2.0
  grep
  haproxy
  libpam-mount
  libselinux
  nginx
  nmap
  php7.2
  postfix
  python-pyscss
  quagga
  rasqal
  slang2
  sssd
  wget
  zsh

  --

  For clarification: pcre2 is actually newer than pcre3.  pcre3 is just
  poorly named (according to jbicha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/aide/+bug/1792544/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1740927] Re: FTBFS: unknown type name ‘errcode_t’

2018-07-31 Thread Andreas Hasenack
Artful is EOL.

** Merge proposal unlinked:
   
https://code.launchpad.net/~ahasenack/ubuntu/+source/ocfs2-tools/+git/ocfs2-tools/+merge/351821

** Changed in: ocfs2-tools (Ubuntu Artful)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1740927

Title:
  FTBFS: unknown type name ‘errcode_t’

Status in ocfs2-tools package in Ubuntu:
  Fix Released
Status in ocfs2-tools source package in Artful:
  Won't Fix

Bug description:
  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
   from ../include/ocfs2/ocfs2.h:41,
   from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
   errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
   ^
   mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed

  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

  Debian sid is unaffected at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1740927/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1779543] Re: package haproxy (not installed) failed to install/upgrade: installed haproxy package post-installation script subprocess returned error exit status 1

2018-07-02 Thread Andreas Hasenack
Thanks for filing this bug in Ubuntu.

This is the error reported in the logs:
Setting up haproxy (1.8.8-1ubuntu0.1) ...
Job for haproxy.service failed because the control process exited with error 
code.
See "systemctl status haproxy.service" and "journalctl -xe" for details.
invoke-rc.d: initscript haproxy, action "start" failed.
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: 
enabled)
   Active: activating (auto-restart) (Result: exit-code) since Tue 2018-06-26 
19:31:18 IST; 7ms ago
 Docs: man:haproxy(1)
   file:/usr/share/doc/haproxy/configuration.txt.gz
  Process: 16621 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE 
$EXTRAOPTS (code=exited, status=1/FAILURE)
  Process: 16619 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS 
(code=exited, status=0/SUCCESS)
 Main PID: 16621 (code=exited, status=1/FAILURE)

Jun 26 19:31:18 Dell systemd[1]: haproxy.service: 
Main process exited, code=exited, status=1/FAILURE
Jun 26 19:31:18 Dell systemd[1]: haproxy.service: 
Failed with result 'exit-code'.
Jun 26 19:31:18 Dell systemd[1]: Failed to start 
HAProxy Load Balancer.
dpkg: error processing package haproxy (--configure):
 installed haproxy package post-installation script subprocess returned error 
exit status 1


It doesn't tell us enough about why the service failed to start, though.

It also doesn't look like you removed haproxy, as the above shows it
being set up.

What happens when you run these commands:

sudo apt update
sudo apt -f install

If it fails again, then please attach the haproxy configuration file
(/etc/haproxy/haproxy.cfg) and the logs, if any, from
/var/log/haproxy.log
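
In addition, the output of the following usually pinpoints why the unit
failed to start (a sketch; paths are the package defaults):

  sudo haproxy -f /etc/haproxy/haproxy.cfg -c -V
  sudo journalctl -u haproxy.service -b --no-pager | tail -n 50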

Thanks


** Changed in: haproxy (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1779543

Title:
  package haproxy (not installed) failed to install/upgrade: installed
  haproxy package post-installation script subprocess returned error
  exit status 1

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  I have removed haproxy from my machine but still I am getting this
  issue.

  ProblemType: Package
  DistroRelease: Ubuntu 18.04
  Package: haproxy (not installed)
  ProcVersionSignature: Ubuntu 4.15.0-24.26-generic 4.15.18
  Uname: Linux 4.15.0-24-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.9-0ubuntu7.2
  Architecture: amd64
  Date: Tue Jun 26 19:31:19 2018
  ErrorMessage: installed haproxy package post-installation script subprocess 
returned error exit status 1
  InstallationDate: Installed on 2017-06-24 (371 days ago)
  InstallationMedia: Ubuntu 16.04.2 LTS "Xenial Xerus" - Release amd64 
(20170215.2)
  Python3Details: /usr/bin/python3.6, Python 3.6.5, python3-minimal, 
3.6.5-3ubuntu1
  PythonDetails: /usr/bin/python2.7, Python 2.7.15rc1, python-minimal, 
2.7.15~rc1-1
  RelatedPackageVersions:
   dpkg 1.19.0.5ubuntu2
   apt  1.6.2
  SourcePackage: haproxy
  Title: package haproxy (not installed) failed to install/upgrade: installed 
haproxy package post-installation script subprocess returned error exit status 1
  UpgradeStatus: Upgraded to bionic on 2018-04-15 (77 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1779543/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1774765] Re: package haproxy 1.8.8-1ubuntu0.1 failed to install/upgrade: installed haproxy package post-installation script subprocess returned error exit status 1

2018-06-04 Thread Andreas Hasenack
Thanks for filing this bug in Ubuntu.

This is the error log:
Setting up haproxy (1.8.8-1ubuntu0.1) ...
Job for haproxy.service failed because the control process exited with error 
code.
See "systemctl status haproxy.service" and "journalctl -xe" for details.
invoke-rc.d: initscript haproxy, action "start" failed.
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: 
enabled)
   Active: activating (auto-restart) (Result: exit-code) since Sat 2018-06-02 
12:55:58 IST; 5ms ago
 Docs: man:haproxy(1)
   file:/usr/share/doc/haproxy/configuration.txt.gz
  Process: 6482 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE 
$EXTRAOPTS (code=exited, status=1/FAILURE)
  Process: 6481 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS 
(code=exited, status=0/SUCCESS)
 Main PID: 6482 (code=exited, status=1/FAILURE)

Jun 02 12:55:59 Dell haproxy[6495]: [ALERT] 152/125559 (6495) : Starting proxy 
mysql: cannot bind socket [127.0.0.1:3306]
Jun 02 12:55:59 Dell systemd[1]: haproxy.service: 
Main process exited, code=exited, status=1/FAILURE
Jun 02 12:55:59 Dell systemd[1]: haproxy.service: 
Failed with result 'exit-code'.
Jun 02 12:55:59 Dell systemd[1]: Failed to start 
HAProxy Load Balancer.
Jun 02 12:55:59 Dell systemd[1]: haproxy.service: Service hold-off time over, 
scheduling restart.
Jun 02 12:55:59 Dell systemd[1]: haproxy.service: Scheduled restart job, 
restart counter is at 5.
Jun 02 12:55:59 Dell systemd[1]: Stopped HAProxy Load Balancer.
Jun 02 12:55:59 Dell systemd[1]: haproxy.service: 
Start request repeated too quickly.
Jun 02 12:55:59 Dell systemd[1]: haproxy.service: 
Failed with result 'exit-code'.
Jun 02 12:55:59 Dell systemd[1]: [0; 

It looks like you have another service already listening on port 3306.
Perhaps it's mysql itself?

Please attach these files:
/etc/default/haproxy
/etc/haproxy/haproxy.cfg

And show the output of this command:
sudo netstat -anp
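
If netstat is not available, ss shows the same information, for example:

  sudo ss -ltnp '( sport = :3306 )'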

Thanks!


** Changed in: haproxy (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1774765

Title:
  package haproxy 1.8.8-1ubuntu0.1 failed to install/upgrade: installed
  haproxy package post-installation script subprocess returned error
  exit status 1

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  While upgrading the packages I got this report.

  ProblemType: Package
  DistroRelease: Ubuntu 18.04
  Package: haproxy 1.8.8-1ubuntu0.1
  ProcVersionSignature: Ubuntu 4.15.0-21.22-generic 4.15.17
  Uname: Linux 4.15.0-21-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.9-0ubuntu7.2
  Architecture: amd64
  Date: Sat Jun  2 12:56:12 2018
  ErrorMessage: installed haproxy package post-installation script subprocess 
returned error exit status 1
  InstallationDate: Installed on 2017-06-24 (342 days ago)
  InstallationMedia: Ubuntu 16.04.2 LTS "Xenial Xerus" - Release amd64 
(20170215.2)
  Python3Details: /usr/bin/python3.6, Python 3.6.5, python3-minimal, 3.6.5-3
  PythonDetails: /usr/bin/python2.7, Python 2.7.15rc1, python-minimal, 
2.7.15~rc1-1
  RelatedPackageVersions:
   dpkg 1.19.0.5ubuntu2
   apt  1.6.1
  SourcePackage: haproxy
  Title: package haproxy 1.8.8-1ubuntu0.1 failed to install/upgrade: installed 
haproxy package post-installation script subprocess returned error exit status 1
  UpgradeStatus: Upgraded to bionic on 2018-04-15 (47 days ago)
  modified.conffile..etc.default.haproxy: [modified]
  modified.conffile..etc.haproxy.haproxy.cfg: [modified]
  mtime.conffile..etc.default.haproxy: 2018-03-03T20:50:28.580803
  mtime.conffile..etc.haproxy.haproxy.cfg: 2018-03-03T23:41:14.936352

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1774765/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1735355] Re: heartbeat: port to Python3

2018-05-29 Thread Andreas Hasenack
** Changed in: heartbeat (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to heartbeat in Ubuntu.
https://bugs.launchpad.net/bugs/1735355

Title:
  heartbeat: port to Python3

Status in heartbeat package in Ubuntu:
  Triaged
Status in heartbeat package in Debian:
  New

Bug description:
  heartbeat: port to Python3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/heartbeat/+bug/1735355/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1747411] Re: Change of default database file format to SQL

2018-04-30 Thread Andreas Hasenack
** Tags added: server-next

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1747411

Title:
  Change of default database file format to SQL

Status in certmonger package in Ubuntu:
  Fix Released
Status in corosync package in Ubuntu:
  New
Status in dogtag-pki package in Ubuntu:
  Fix Released
Status in freeipa package in Ubuntu:
  Fix Released
Status in libapache2-mod-nss package in Ubuntu:
  Won't Fix
Status in nss package in Ubuntu:
  New

Bug description:
  nss in version 3.35 in upstream changed [2] the default file format [1] (if 
no explicit one is specified).
  For now we reverted that change in bug 1746947 until all packages depending 
on it are ready to work with that correctly.

  This bug here is about to track when the revert can be dropped.
  Therefore we list all known-to-be-affected packages and once all are resolved 
this can be dropped.

  [1]: https://fedoraproject.org/wiki/Changes/NSSDefaultFileFormatSql
  [2]: 
https://github.com/nss-dev/nss/commit/33b114e38278c4ffbb6b244a0ebc9910e5245cd3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/certmonger/+bug/1747411/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1763493] Re: Update to 1.8.7

2018-04-30 Thread Andreas Hasenack
Bionic was released with 1.8.8-1, sync from debian.

** Changed in: haproxy (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1763493

Title:
  Update to 1.8.7

Status in haproxy package in Ubuntu:
  Fix Released

Bug description:
  Hey!

  Would it be possible to update HAProxy to 1.8.7, despite the freeze?
  This is a stable update and it fixes some important bugs. In Debian,
  we kept previous releases in experimental for Ubuntu to not use them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1763493/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1748210] Re: Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

2018-02-21 Thread Andreas Hasenack
haproxy 1.8.4 has hit the bionic archive. Since it's now a sync, there
is no mention of this bug in its changelog, so I'm closing this bug
manually.

** Changed in: haproxy (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1748210

Title:
  Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

Status in haproxy package in Ubuntu:
  Fix Released

Bug description:
  Please sync haproxy 1.8.3-1 (main) from Debian experimental (main)

  Explanation of the Ubuntu delta and why it can be dropped:
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)

  1.8 version includes this backported change, is a the current stable
  haproxy version and will be mainline throughout the bionic release
  cycle. It should be in the LTS version, rather than maintaining the
  current delta for five years.

  Changelog entries since current bionic version 1.7.9-1ubuntu2:

  haproxy (1.8.3-1) experimental; urgency=medium

* New upstream stable release.
* Change default configuration of stats socket to support hitless
  reload.

   -- Vincent Bernat   Tue, 02 Jan 2018 18:48:24
  +0100

  haproxy (1.8.2-1) experimental; urgency=medium

* New upstream stable release
* Refresh patches
* Bump Standards-Version to 4.1.2; no changes needed

   -- Apollon Oikonomopoulos   Sun, 24 Dec 2017
  14:28:28 +0200

  haproxy (1.8.1-1) experimental; urgency=medium

* New upstream stable release.
* Enable PCRE JIT.
* systemd: replace Wants/After=syslog.service with After=rsyslog.service
  (Closes: #882610)

   -- Apollon Oikonomopoulos   Sun, 03 Dec 2017
  23:59:03 +0200

  haproxy (1.8.0-2) experimental; urgency=medium

* Use libatomic on platforms without 64-bit atomics. Fixes FTBFS on armel,
  mips, mipsel, powerpc, powerpcspe, sh4 and m68k.
* d/rules: use variables defined in architecture.mk and buildflags.mk
* d/rules: drop unreachable else case.

   -- Apollon Oikonomopoulos   Wed, 29 Nov 2017
  01:21:40 +0200

  haproxy (1.8.0-1) experimental; urgency=medium

* New upstream stable series. Notable new features include:
  + HTTP/2 support
  + Support for multiple worker threads to allow scalability across CPUs
(e.g. for SSL termination)
  + Seamless reloads
  + HTTP small object caching
  + Dynamic backend server configuration
  See https://www.haproxy.com/blog/whats-new-haproxy-1-8/ and
  https://www.mail-archive.com/haproxy@formilux.org/msg28004.html for more
  detailed descriptions of the new features.
* Upload to experimental
* Refresh all patches.
* d/watch: switch to the 1.8.x upstream stable series
* Bump Standards to 4.1.1
  + Switch haproxy-doc to Priority: optional from extra.
* Bump compat to 10:
  + B-D on debhelper (>= 10)
  + Drop explicit dh-systemd dependency and invocation
  + Replace --no-restart-on-upgrade with --no-restart-after-upgrade
--no-stop-on-upgrade to make up for DH 10 defaults.
* B-D on libsystemd-dev and enable sd_notify() support on Linux.
* B-D on python3-sphinx instead of python-sphinx.
* d/rules: do not call dpkg-parsechangelog directly.
* d/copyright: drop obsolete section.
* Drop obsolete lintian overrides.
* Do a full-service restart when upgrading from pre-1.8 versions and running
  under systemd, to migrate to the new process model and service type.
  + Document this in d/NEWS as well.

   -- Apollon Oikonomopoulos   Tue, 28 Nov 2017
  22:25:11 +0200

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1748210/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1748210] Re: Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

2018-02-21 Thread Andreas Hasenack
Other tests I just finished:
- deploy large app with charms, haproxy on bionic. Verify app works. Upgrade 
haproxy to 1.8.4. app still works. Add new backend to app, works.
- upgrade xenial to bionic, with the ppa for haproxy 1.8.4 enabled. Tested 
simple config (haproxy as fe, one apache as backend to see the hello ubuntu 
page). Also worked.

I'm +1 for the sync.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1748210

Title:
  Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

Status in haproxy package in Ubuntu:
  In Progress

Bug description:
  Please sync haproxy 1.8.3-1 (main) from Debian experimental (main)

  Explanation of the Ubuntu delta and why it can be dropped:
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)

  1.8 version includes this backported change, is the current stable
  haproxy version and will be mainline throughout the bionic release
  cycle. It should be in the LTS version, rather than maintaining the
  current delta for five years.

  Changelog entries since current bionic version 1.7.9-1ubuntu2:

  haproxy (1.8.3-1) experimental; urgency=medium

* New upstream stable release.
* Change default configuration of stats socket to support hitless
  reload.

   -- Vincent Bernat   Tue, 02 Jan 2018 18:48:24
  +0100

  haproxy (1.8.2-1) experimental; urgency=medium

* New upstream stable release
* Refresh patches
* Bump Standards-Version to 4.1.2; no changes needed

   -- Apollon Oikonomopoulos   Sun, 24 Dec 2017
  14:28:28 +0200

  haproxy (1.8.1-1) experimental; urgency=medium

* New upstream stable release.
* Enable PCRE JIT.
* systemd: replace Wants/After=syslog.service with After=rsyslog.service
  (Closes: #882610)

   -- Apollon Oikonomopoulos   Sun, 03 Dec 2017
  23:59:03 +0200

  haproxy (1.8.0-2) experimental; urgency=medium

* Use libatomic on platforms without 64-bit atomics. Fixes FTBFS on armel,
  mips, mipsel, powerpc, powerpcspe, sh4 and m68k.
* d/rules: use variables defined in architecture.mk and buildflags.mk
* d/rules: drop unreachable else case.

   -- Apollon Oikonomopoulos   Wed, 29 Nov 2017
  01:21:40 +0200

  haproxy (1.8.0-1) experimental; urgency=medium

* New upstream stable series. Notable new features include:
  + HTTP/2 support
  + Support for multiple worker threads to allow scalability across CPUs
(e.g. for SSL termination)
  + Seamless reloads
  + HTTP small object caching
  + Dynamic backend server configuration
  See https://www.haproxy.com/blog/whats-new-haproxy-1-8/ and
  https://www.mail-archive.com/haproxy@formilux.org/msg28004.html for more
  detailed descriptions of the new features.
* Upload to experimental
* Refresh all patches.
* d/watch: switch to the 1.8.x upstream stable series
* Bump Standards to 4.1.1
  + Switch haproxy-doc to Priority: optional from extra.
* Bump compat to 10:
  + B-D on debhelper (>= 10)
  + Drop explicit dh-systemd dependency and invocation
  + Replace --no-restart-on-upgrade with --no-restart-after-upgrade
--no-stop-on-upgrade to make up for DH 10 defaults.
* B-D on libsystemd-dev and enable sd_notify() support on Linux.
* B-D on python3-sphinx instead of python-sphinx.
* d/rules: do not call dpkg-parsechangelog directly.
* d/copyright: drop obsolete section.
* Drop obsolete lintian overrides.
* Do a full-service restart when upgrading from pre-1.8 versions and running
  under systemd, to migrate to the new process model and service type.
  + Document this in d/NEWS as well.

   -- Apollon Oikonomopoulos   Tue, 28 Nov 2017
  22:25:11 +0200

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1748210/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1748210] Re: Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

2018-02-21 Thread Andreas Hasenack
I performed the following tests so far:
- used a charm bundle to deploy a complex application that uses haproxy as the 
frontend, with and without ssl (landscape-server). Added backends after the 
deployment as well. Worked as expected
- tried reload while there were ongoing live connections to the backends via 
haproxy. Saw the new -x option be automatically used, and once the connections 
ended, the old haproxy processes disappeared (see the sketch after this list).
- tried an upgrade from 1.7.x to 1.8.4 with a simple app (haproxy as the fe, 
one apache server as the be)
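
A rough shell sketch of that reload check (the frontend address and pid-file
location are assumptions for illustration, not taken from the report):

# Keep a long-lived connection open through the haproxy frontend
# (assumed to be listening on localhost:80).
curl -s --max-time 300 http://localhost/ -o /dev/null &

old_pid=$(cat /var/run/haproxy.pid)

# Reload; with "stats socket ... expose-fd listeners" configured, the
# wrapper hands the listening sockets to the new process via -x.
service haproxy reload

new_pid=$(cat /var/run/haproxy.pid)
echo "old=$old_pid new=$new_pid"

# The old process should linger only until its connections drain away.
ps -p "$old_pid" -o pid,args || echo "old haproxy process already exited"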

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1748210

Title:
  Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

Status in haproxy package in Ubuntu:
  In Progress

Bug description:
  Please sync haproxy 1.8.3-1 (main) from Debian experimental (main)

  Explanation of the Ubuntu delta and why it can be dropped:
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)

  1.8 version includes this backported change, is the current stable
  haproxy version and will be mainline throughout the bionic release
  cycle. It should be in the LTS version, rather than maintaining the
  current delta for five years.

  Changelog entries since current bionic version 1.7.9-1ubuntu2:

  haproxy (1.8.3-1) experimental; urgency=medium

* New upstream stable release.
* Change default configuration of stats socket to support hitless
  reload.

   -- Vincent Bernat   Tue, 02 Jan 2018 18:48:24
  +0100

  haproxy (1.8.2-1) experimental; urgency=medium

* New upstream stable release
* Refresh patches
* Bump Standards-Version to 4.1.2; no changes needed

   -- Apollon Oikonomopoulos   Sun, 24 Dec 2017
  14:28:28 +0200

  haproxy (1.8.1-1) experimental; urgency=medium

* New upstream stable release.
* Enable PCRE JIT.
* systemd: replace Wants/After=syslog.service with After=rsyslog.service
  (Closes: #882610)

   -- Apollon Oikonomopoulos   Sun, 03 Dec 2017
  23:59:03 +0200

  haproxy (1.8.0-2) experimental; urgency=medium

* Use libatomic on platforms without 64-bit atomics. Fixes FTBFS on armel,
  mips, mipsel, powerpc, powerpcspe, sh4 and m68k.
* d/rules: use variables defined in architecture.mk and buildflags.mk
* d/rules: drop unreachable else case.

   -- Apollon Oikonomopoulos   Wed, 29 Nov 2017
  01:21:40 +0200

  haproxy (1.8.0-1) experimental; urgency=medium

* New upstream stable series. Notable new features include:
  + HTTP/2 support
  + Support for multiple worker threads to allow scalability across CPUs
(e.g. for SSL termination)
  + Seamless reloads
  + HTTP small object caching
  + Dynamic backend server configuration
  See https://www.haproxy.com/blog/whats-new-haproxy-1-8/ and
  https://www.mail-archive.com/haproxy@formilux.org/msg28004.html for more
  detailed descriptions of the new features.
* Upload to experimental
* Refresh all patches.
* d/watch: switch to the 1.8.x upstream stable series
* Bump Standards to 4.1.1
  + Switch haproxy-doc to Priority: optional from extra.
* Bump compat to 10:
  + B-D on debhelper (>= 10)
  + Drop explicit dh-systemd dependency and invocation
  + Replace --no-restart-on-upgrade with --no-restart-after-upgrade
--no-stop-on-upgrade to make up for DH 10 defaults.
* B-D on libsystemd-dev and enable sd_notify() support on Linux.
* B-D on python3-sphinx instead of python-sphinx.
* d/rules: do not call dpkg-parsechangelog directly.
* d/copyright: drop obsolete section.
* Drop obsolete lintian overrides.
* Do a full-service restart when upgrading from pre-1.8 versions and running
  under systemd, to migrate to the new process model and service type.
  + Document this in d/NEWS as well.

   -- Apollon Oikonomopoulos   Tue, 28 Nov 2017
  22:25:11 +0200

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1748210/+subscriptions

___
Mailing list: 

[Ubuntu-ha] [Bug 1748210] Re: Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

2018-02-20 Thread Andreas Hasenack
It looks like this can become a sync indeed. I'm testing a package I
built on this ppa:
https://launchpad.net/~ahasenack/+archive/ubuntu/haproxy-18-merge-1748210/+packages

It still has the "ubuntu" suffix, just because that's what I arrived at
after doing the merge, but it has no delta.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1748210

Title:
  Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

Status in haproxy package in Ubuntu:
  In Progress

Bug description:
  Please sync haproxy 1.8.3-1 (main) from Debian experimental (main)

  Explanation of the Ubuntu delta and why it can be dropped:
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)

  1.8 version includes this backported change, is the current stable
  haproxy version and will be mainline throughout the bionic release
  cycle. It should be in the LTS version, rather than maintaining the
  current delta for five years.

  Changelog entries since current bionic version 1.7.9-1ubuntu2:

  haproxy (1.8.3-1) experimental; urgency=medium

* New upstream stable release.
* Change default configuration of stats socket to support hitless
  reload.

   -- Vincent Bernat   Tue, 02 Jan 2018 18:48:24
  +0100

  haproxy (1.8.2-1) experimental; urgency=medium

* New upstream stable release
* Refresh patches
* Bump Standards-Version to 4.1.2; no changes needed

   -- Apollon Oikonomopoulos   Sun, 24 Dec 2017
  14:28:28 +0200

  haproxy (1.8.1-1) experimental; urgency=medium

* New upstream stable release.
* Enable PCRE JIT.
* systemd: replace Wants/After=syslog.service with After=rsyslog.service
  (Closes: #882610)

   -- Apollon Oikonomopoulos   Sun, 03 Dec 2017
  23:59:03 +0200

  haproxy (1.8.0-2) experimental; urgency=medium

* Use libatomic on platforms without 64-bit atomics. Fixes FTBFS on armel,
  mips, mipsel, powerpc, powerpcspe, sh4 and m68k.
* d/rules: use variables defined in architecture.mk and buildflags.mk
* d/rules: drop unreachable else case.

   -- Apollon Oikonomopoulos   Wed, 29 Nov 2017
  01:21:40 +0200

  haproxy (1.8.0-1) experimental; urgency=medium

* New upstream stable series. Notable new features include:
  + HTTP/2 support
  + Support for multiple worker threads to allow scalability across CPUs
(e.g. for SSL termination)
  + Seamless reloads
  + HTTP small object caching
  + Dynamic backend server configuration
  See https://www.haproxy.com/blog/whats-new-haproxy-1-8/ and
  https://www.mail-archive.com/haproxy@formilux.org/msg28004.html for more
  detailed descriptions of the new features.
* Upload to experimental
* Refresh all patches.
* d/watch: switch to the 1.8.x upstream stable series
* Bump Standards to 4.1.1
  + Switch haproxy-doc to Priority: optional from extra.
* Bump compat to 10:
  + B-D on debhelper (>= 10)
  + Drop explicit dh-systemd dependency and invocation
  + Replace --no-restart-on-upgrade with --no-restart-after-upgrade
--no-stop-on-upgrade to make up for DH 10 defaults.
* B-D on libsystemd-dev and enable sd_notify() support on Linux.
* B-D on python3-sphinx instead of python-sphinx.
* d/rules: do not call dpkg-parsechangelog directly.
* d/copyright: drop obsolete section.
* Drop obsolete lintian overrides.
* Do a full-service restart when upgrading from pre-1.8 versions and running
  under systemd, to migrate to the new process model and service type.
  + Document this in d/NEWS as well.

   -- Apollon Oikonomopoulos   Tue, 28 Nov 2017
  22:25:11 +0200

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1748210/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1748210] Re: Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

2018-02-09 Thread Andreas Hasenack
** Changed in: haproxy (Ubuntu)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: haproxy (Ubuntu)
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1748210

Title:
  Sync haproxy 1.8.3-1 (main) from Debian experimental (main)

Status in haproxy package in Ubuntu:
  In Progress

Bug description:
  Please sync haproxy 1.8.3-1 (main) from Debian experimental (main)

  Explanation of the Ubuntu delta and why it can be dropped:
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)
* Backport of -x option from upstream haproxy to enable seamless
  reloading of haproxy without dropping connections.  This is enabled
  by adding
  " stats socket  expose-fd listeners
stats bind-process 1 "
  to the global section of your haproxy config, and
  setting HAPROXY_STATS_SOCKET in the haproxy.service unit file.
  (LP: #1712925)

  1.8 version includes this backported change, is the current stable
  haproxy version and will be mainline throughout the bionic release
  cycle. It should be in the LTS version, rather than maintaining the
  current delta for five years.

  Changelog entries since current bionic version 1.7.9-1ubuntu2:

  haproxy (1.8.3-1) experimental; urgency=medium

* New upstream stable release.
* Change default configuration of stats socket to support hitless
  reload.

   -- Vincent Bernat <ber...@debian.org>  Tue, 02 Jan 2018 18:48:24
  +0100

  haproxy (1.8.2-1) experimental; urgency=medium

* New upstream stable release
* Refresh patches
* Bump Standards-Version to 4.1.2; no changes needed

   -- Apollon Oikonomopoulos <apoi...@debian.org>  Sun, 24 Dec 2017
  14:28:28 +0200

  haproxy (1.8.1-1) experimental; urgency=medium

* New upstream stable release.
* Enable PCRE JIT.
* systemd: replace Wants/After=syslog.service with After=rsyslog.service
  (Closes: #882610)

   -- Apollon Oikonomopoulos <apoi...@debian.org>  Sun, 03 Dec 2017
  23:59:03 +0200

  haproxy (1.8.0-2) experimental; urgency=medium

* Use libatomic on platforms without 64-bit atomics. Fixes FTBFS on armel,
  mips, mipsel, powerpc, powerpcspe, sh4 and m68k.
* d/rules: use variables defined in architecture.mk and buildflags.mk
* d/rules: drop unreachable else case.

   -- Apollon Oikonomopoulos <apoi...@debian.org>  Wed, 29 Nov 2017
  01:21:40 +0200

  haproxy (1.8.0-1) experimental; urgency=medium

* New upstream stable series. Notable new features include:
  + HTTP/2 support
  + Support for multiple worker threads to allow scalability across CPUs
(e.g. for SSL termination)
  + Seamless reloads
  + HTTP small object caching
  + Dynamic backend server configuration
  See https://www.haproxy.com/blog/whats-new-haproxy-1-8/ and
  https://www.mail-archive.com/haproxy@formilux.org/msg28004.html for more
  detailed descriptions of the new features.
* Upload to experimental
* Refresh all patches.
* d/watch: switch to the 1.8.x upstream stable series
* Bump Standards to 4.1.1
  + Switch haproxy-doc to Priority: optional from extra.
* Bump compat to 10:
  + B-D on debhelper (>= 10)
  + Drop explicit dh-systemd dependency and invocation
  + Replace --no-restart-on-upgrade with --no-restart-after-upgrade
--no-stop-on-upgrade to make up for DH 10 defaults.
* B-D on libsystemd-dev and enable sd_notify() support on Linux.
* B-D on python3-sphinx instead of python-sphinx.
* d/rules: do not call dpkg-parsechangelog directly.
* d/copyright: drop obsolete section.
* Drop obsolete lintian overrides.
* Do a full-service restart when upgrading from pre-1.8 versions and running
  under systemd, to migrate to the new process model and service type.
  + Document this in d/NEWS as well.

   -- Apollon Oikonomopoulos <apoi...@debian.org>  Tue, 28 Nov 2017
  22:25:11 +0200

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1748210/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1745155] [NEW] o2image fails on s390x

2018-01-24 Thread Andreas Hasenack
Public bug reported:

o2image fails on s390x:

dd if=/dev/zero of=/tmp/disk bs=1M count=200
losetup --find --show /tmp/disk
mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 /dev/loop0 # loop dev 
found in prev step

Then this command:
o2image /dev/loop0 /tmp/disk.image

Results in:
Segmentation fault (core dumped)

dmesg:
[  862.642556] ocfs2: Registered cluster interface o2cb
[  870.880635] User process fault: interruption code 003b ilc:3 in 
o2image[10c18+2e000]
[  870.880643] Failing address:  TEID: 0800
[  870.880644] Fault in primary space mode while using user ASCE.
[  870.880646] AS:3d8f81c7 R3:0024 
[  870.880650] CPU: 0 PID: 1484 Comm: o2image Not tainted 4.13.0-30-generic 
#33-Ubuntu
[  870.880651] Hardware name: IBM 2964 N63 400 (KVM/Linux)
[  870.880652] task: 3cb81200 task.stack: 3d50c000
[  870.880653] User PSW : 070500018000 00010c184212
[  870.880654]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:1 AS:0 CC:0 PM:0 
RI:0 EA:3
[  870.880655] User GPRS: 000144f0cc10 0001 0001 

[  870.880655] 000144ef6090 000144f13cc0 
0001
[  870.880656]000144ef6000 000144ef3280 000144f13cd8 
00037ee8
[  870.880656]03ff965a6000 03ffe5e7e410 00010c183bc6 
03ffe5e7e370
[  870.880663] User Code: 00010c184202: b9080034agr %r3,%r4
  00010c184206: c02b0007nilf%r2,7
 #00010c18420c: eb2120dfsllk
%r2,%r1,0(%r2)
 >00010c184212: e3103090llgc
%r1,0(%r3)
  00010c184218: b9f61042ork 
%r4,%r2,%r1
  00010c18421c: 1421nr  %r2,%r1
  00010c18421e: 42403000stc 
%r4,0(%r3)
  00010c184222: 1322lcr %r2,%r2
[  870.880672] Last Breaking-Event-Address:
[  870.880675]  [<00010c18e4ca>] 0x10c18e4ca

Upstream issue:
https://github.com/markfasheh/ocfs2-tools/issues/22

This was triggered by our ocfs2-tools dep8 tests:
http://autopkgtest.ubuntu.com/packages/o/ocfs2-tools/bionic/s390x
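
For convenience, the reproduction steps above wrapped into a single throwaway
script (same commands as above, plus loop-device cleanup; run as root):

#!/bin/sh
set -e
img=/tmp/disk
dd if=/dev/zero of=$img bs=1M count=200
loopdev=$(losetup --find --show $img)
mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 "$loopdev"
o2image "$loopdev" /tmp/disk.image   # segfaults on s390x
# Cleanup is only reached if o2image does not crash.
losetup -d "$loopdev"
rm -f $img /tmp/disk.image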

** Affects: ocfs2-tools
 Importance: Unknown
 Status: New

** Affects: ocfs2-tools (Ubuntu)
 Importance: Undecided
 Status: New

** Bug watch added: github.com/markfasheh/ocfs2-tools/issues #22
   https://github.com/markfasheh/ocfs2-tools/issues/22

** Also affects: ocfs2-tools via
   https://github.com/markfasheh/ocfs2-tools/issues/22
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1745155

Title:
  o2image fails on s390x

Status in OCFS2 Tools:
  New
Status in ocfs2-tools package in Ubuntu:
  New

Bug description:
  o2image fails on s390x:

  dd if=/dev/zero of=/tmp/disk bs=1M count=200
  losetup --find --show /tmp/disk
  mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 /dev/loop0 # loop dev 
found in prev step

  Then this command:
  o2image /dev/loop0 /tmp/disk.image

  Results in:
  Segmentation fault (core dumped)

  dmesg:
  [  862.642556] ocfs2: Registered cluster interface o2cb
  [  870.880635] User process fault: interruption code 003b ilc:3 in 
o2image[10c18+2e000]
  [  870.880643] Failing address:  TEID: 0800
  [  870.880644] Fault in primary space mode while using user ASCE.
  [  870.880646] AS:3d8f81c7 R3:0024 
  [  870.880650] CPU: 0 PID: 1484 Comm: o2image Not tainted 4.13.0-30-generic 
#33-Ubuntu
  [  870.880651] Hardware name: IBM 2964 N63 400 (KVM/Linux)
  [  870.880652] task: 3cb81200 task.stack: 3d50c000
  [  870.880653] User PSW : 070500018000 00010c184212
  [  870.880654]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:1 AS:0 CC:0 PM:0 
RI:0 EA:3
  [  870.880655] User GPRS: 000144f0cc10 0001 0001 

  [  870.880655] 000144ef6090 000144f13cc0 
0001
  [  870.880656]000144ef6000 000144ef3280 000144f13cd8 
00037ee8
  [  870.880656]03ff965a6000 03ffe5e7e410 00010c183bc6 
03ffe5e7e370
  [  870.880663] User Code: 00010c184202: b9080034  agr %r3,%r4
00010c184206: c02b0007  nilf%r2,7
   #00010c18420c: eb2120df  sllk
%r2,%r1,0(%r2)
   >00010c184212: e3103090  llgc
%r1,0(%r3)
00010c184218: b9f61042  ork 
%r4,%r2,%r1
00010c18421c: 1421  nr  

[Ubuntu-ha] [Bug 1740927] Re: FTBFS: unknown type name ‘errcode_t’

2018-01-04 Thread Andreas Hasenack
** Tags added: ftbfs

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1740927

Title:
  FTBFS: unknown type name ‘errcode_t’

Status in ocfs2-tools package in Ubuntu:
  In Progress
Status in ocfs2-tools source package in Artful:
  New

Bug description:
  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
   from ../include/ocfs2/ocfs2.h:41,
   from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
   errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
   ^
   mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed

  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

  Debian sid is unaffected at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1740927/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1740927] Re: FTBFS: unknown type name ‘errcode_t’

2018-01-02 Thread Andreas Hasenack
PPA with test packages:
https://launchpad.net/~ahasenack/+archive/ubuntu/ocfs2-tools-ftbfs-1740927/ (still building atm)

** Description changed:

  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
-  from ../include/ocfs2/ocfs2.h:41,
-  from pass4.c:32:
+  from ../include/ocfs2/ocfs2.h:41,
+  from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
-  errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
-  ^
-  mode_t
+  errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
+  ^
+  mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed
- 
  
  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493
+ 
+ Debian sid is unaffected at the moment.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1740927

Title:
  FTBFS: unknown type name ‘errcode_t’

Status in ocfs2-tools package in Ubuntu:
  In Progress
Status in ocfs2-tools source package in Artful:
  New

Bug description:
  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
   from ../include/ocfs2/ocfs2.h:41,
   from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
   errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
   ^
   mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed

  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

  Debian sid is unaffected at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1740927/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1740927] Re: FTBFS: unknown type name ‘errcode_t’

2018-01-02 Thread Andreas Hasenack
artful has the same bug, but it's not worth an SRU just because of this.

I nominated artful, however, so that if an SRU is ever done, this bug
will show up and the fix is here.

** Also affects: ocfs2-tools (Ubuntu Artful)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1740927

Title:
  FTBFS: unknown type name ‘errcode_t’

Status in ocfs2-tools package in Ubuntu:
  In Progress
Status in ocfs2-tools source package in Artful:
  New

Bug description:
  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
   from ../include/ocfs2/ocfs2.h:41,
   from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
   errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
   ^
   mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed

  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

  Debian sid is unaffected at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1740927/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1740927] [NEW] FTBFS: unknown type name ‘errcode_t’

2018-01-02 Thread Andreas Hasenack
Public bug reported:

gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
In file included from /usr/include/string.h:431:0,
 from ../include/ocfs2/ocfs2.h:41,
 from pass4.c:32:
include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
 errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
 ^
 mode_t
../Postamble.make:40: recipe for target 'pass4.o' failed


Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

** Affects: ocfs2-tools (Ubuntu)
 Importance: Low
 Assignee: Andreas Hasenack (ahasenack)
 Status: In Progress

** Changed in: ocfs2-tools (Ubuntu)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1740927

Title:
  FTBFS: unknown type name ‘errcode_t’

Status in ocfs2-tools package in Ubuntu:
  In Progress

Bug description:
  gcc -g -O2 -fdebug-prefix-map=/home/ubuntu/x/ocfs2-tools-1.8.5=. 
-fstack-protector-strong -Wall -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -pipe  -Wdate-time -D_FORTIFY_SOURCE=2  -I../include -I. 
-DVERSION=\"1.8.5\"  -MD -MP -MF ./.feature_quota.d -o feature_quota.o -c 
feature_quota.c
  In file included from /usr/include/string.h:431:0,
   from ../include/ocfs2/ocfs2.h:41,
   from pass4.c:32:
  include/strings.h:37:1: error: unknown type name ‘errcode_t’; did you mean 
‘mode_t’?
   errcode_t o2fsck_strings_insert(o2fsck_strings *strings, char *string,
   ^
   mode_t
  ../Postamble.make:40: recipe for target 'pass4.o' failed

  
  Upstream issue: https://github.com/markfasheh/ocfs2-tools/issues/17
  Fix: 
https://github.com/markfasheh/ocfs2-tools/commit/0ffd58b223e24779420130522ea8ee359505f493

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1740927/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1584629] Re: Failed to start LSB: Load O2CB cluster services at system boot.

2017-09-20 Thread Andreas Hasenack
The development seems to be happening here:
https://github.com/markfasheh/ocfs2-tools

Might be worth filing a bug there:
https://github.com/markfasheh/ocfs2-tools/issues

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1584629

Title:
  Failed to start LSB: Load O2CB cluster services at system boot.

Status in ocfs2-tools package in Ubuntu:
  Triaged
Status in ocfs2-tools source package in Trusty:
  New
Status in ocfs2-tools source package in Xenial:
  New
Status in ocfs2-tools source package in Yakkety:
  New

Bug description:
  Ubuntu 16.04.

  Sometimes (not every boot) o2cb failed to start:

  systemctl status o2cb
  ● o2cb.service - LSB: Load O2CB cluster services at system boot.
 Loaded: loaded (/etc/init.d/o2cb; bad; vendor preset: enabled)
 Active: failed (Result: exit-code) since Пн 2016-05-23 11:46:43 SAMT; 2min 
12s ago
   Docs: man:systemd-sysv-generator(8)
Process: 1526 ExecStart=/etc/init.d/o2cb start (code=exited, 
status=1/FAILURE)

  май 23 11:46:43 inetgw1 systemd[1]: Starting LSB: Load O2CB cluster services 
at system boot
  май 23 11:46:43 inetgw1 o2cb[1526]: Loading filesystem "configfs": OK
  май 23 11:46:43 inetgw1 o2cb[1526]: Mounting configfs filesystem at 
/sys/kernel/config: mount: configfs is already 
  май 23 11:46:43 inetgw1 o2cb[1526]:configfs is already mounted on 
/sys/kernel/config
  май 23 11:46:43 inetgw1 o2cb[1526]: Unable to mount configfs filesystem
  май 23 11:46:43 inetgw1 o2cb[1526]: Failed
  май 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Control process exited, 
code=exited status=1
  май 23 11:46:43 inetgw1 systemd[1]: Failed to start LSB: Load O2CB cluster 
services at system boot..
  май 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Unit entered failed state.
  май 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Failed with result 
'exit-code'.

  
  next try is successful:
  systemctl status o2cb
  ● o2cb.service - LSB: Load O2CB cluster services at system boot.
 Loaded: loaded (/etc/init.d/o2cb; bad; vendor preset: enabled)
 Active: active (exited) since Пн 2016-05-23 11:49:07 SAMT; 1s ago
   Docs: man:systemd-sysv-generator(8)
Process: 2101 ExecStart=/etc/init.d/o2cb start (code=exited, 
status=0/SUCCESS)

  май 23 11:49:07 inetgw1 systemd[1]: Starting LSB: Load O2CB cluster services 
at system boot
  май 23 11:49:07 inetgw1 o2cb[2101]: Loading stack plugin "o2cb": OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Loading filesystem "ocfs2_dlmfs": OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Mounting ocfs2_dlmfs filesystem at /dlm: 
OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Setting cluster stack "o2cb": OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Starting O2CB cluster inetgw: OK
  май 23 11:49:07 inetgw1 systemd[1]: Started LSB: Load O2CB cluster services 
at system boot..

  
  I guess this is a startup dependency problem.

  Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1584629/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1582708] Re: HAproxy 1.6.3 mail alerts on DOWN but not UP

2017-06-28 Thread Andreas Hasenack
Sorry for taking so long to get back to this bug.

From the haproxy documentation 
(http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-email-alert%20level):
"""
Alerts are sent when :

* An un-paused server is marked as down and <level> is alert or lower
* A paused server is marked as down and <level> is notice or lower
* A server is marked as up or enters the drain state and <level>
  is notice or lower
* "option log-health-checks" is enabled, <level> is info or lower,
   and a health check status update occurs
"""

Notice how "server up" uses a different alert threshold: "notice or
lower", whereas the "down" event uses "alert or lower". The default
threshold is "alert".

Anyway, with this config I got an email in the "up" event:
email-alert mailers mymta
email-alert from haproxy@lxd
email-alert to ubuntu@lxd
email-alert level notice

Without the "email-alert level notice" line, I only got the email when
the server is down.

It seems to match the documentation at least.

I used haproxy 1.6.3-1ubuntu0.1 in xenial.
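
For completeness, a minimal standalone config sketch that exercises this
behaviour (the mailer address, backend and server names are placeholders, not
taken from the report); it can at least be syntax-checked with "haproxy -c":

cat > /tmp/haproxy-mail-test.cfg <<'EOF'
global
    log /dev/log local0

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

mailers mymta
    mailer local 127.0.0.1:25

backend web-backend
    email-alert mailers mymta
    email-alert from haproxy@lxd
    email-alert to ubuntu@lxd
    # Without this line only down events (level alert) generate mail.
    email-alert level notice
    server web1 127.0.0.1:80 check
EOF

# Validate the configuration before trying it for real.
haproxy -c -f /tmp/haproxy-mail-test.cfg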


** Changed in: haproxy (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1582708

Title:
  HAproxy 1.6.3 mail alerts on DOWN but not UP

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  This has plagued since I upgraded to Xenial beta and HAProxy 1.6.x
  (latest is .3).  For some reason, HAProxy will mail alerts when the
  state changes to DOWN, but not when the state changes back to UP. I
  get mails for DOWN, so the mailer settings are correct, so this must
  be something in HAProxy that's not quite right.

  This is on a fully updated Xenial

  Kernel: 4.4.0-22-generic
  Description:Ubuntu 16.04 LTS
  Release:16.04

  
  DOWN Alert:
  May 16 15:52:44 caesar haproxy[957]: Server lc-db-cluster/qa-db is DOWN, 
reason: Layer7 wrong status, code: 503, info: "Service Unavailable", check 
duration: 9ms. 1 active and 0 backup servers left. 0 sessions active, 0 
requeued, 0 remaining in queue.

  DOWN Mail:
  May 16 15:52:45 caesar postfix/smtp[1366]: 9D5CC1E0419: 
to=, relay=smtp.sendgrid.net[167.89.121.145]:587, 
delay=0.94, delays=0.08/0.03/0.67/0.16, dsn=2.0.0, status=sent (250 Ok: queued 
as ab6lg_ZkT6arW6MD-zLMlw)

  UP Alert:
  May 16 15:55:32 caesar haproxy[957]: Server lc-db-cluster/qa-db is UP, 
reason: Layer7 check passed, code: 200, info: "OK", check duration: 9ms. 2 
active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1582708/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1477198] Re: Stop doesn't work on 14.04 (start-stop-daemon --pid not supported)

2015-11-09 Thread Andreas Hasenack
This worked for me in trusty with the proposed cloud-archive repository.

Before, when I restarted haproxy, I got an extra process:
root@juju-machine-2-lxc-0:~# service haproxy restart
 * Restarting haproxy haproxy
   ...done.
root@juju-machine-2-lxc-0:~# ps fxaw|grep haproxy
  32215 ?Ss 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D 
-p /var/run/haproxy.pid
  32547 ?Ss 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D 
-p /var/run/haproxy.pid


Now it's just one as expected, replacing the one that was running before:


root@juju-machine-2-lxc-0:~# ps fxaw|grep haproxy
  33493 pts/3S+ 0:00  \_ grep --color=auto 
haproxy
  33483 ?Ss 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p 
/var/run/haproxy.pid -D -sf 33115

root@juju-machine-2-lxc-0:~# service haproxy restart
 * Restarting haproxy haproxy
   ...done.

root@juju-machine-2-lxc-0:~# ps fxaw|grep haproxy   

  
  33510 pts/3S+ 0:00  \_ grep --color=auto 
haproxy
  33507 ?Ss 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D 
-p /var/run/haproxy.pid


Using version 1.5.14-1ubuntu0.15.10.1~cloud0:
root@juju-machine-2-lxc-0:~# apt-cache policy haproxy
haproxy:
  Installed: 1.5.14-1ubuntu0.15.10.1~cloud0
  Candidate: 1.5.14-1ubuntu0.15.10.1~cloud0
  Version table:
 *** 1.5.14-1ubuntu0.15.10.1~cloud0 0
500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ 
trusty-proposed/liberty/main amd64 Packages
100 /var/lib/dpkg/status
 1.5.14-1~cloud2 0
500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ 
trusty-updates/liberty/main amd64 Packages
 1.4.24-2ubuntu0.2 0
500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages
 1.4.24-2 0
500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1477198

Title:
  Stop doesn't work on 14.04 (start-stop-daemon --pid not supported)

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Invalid
Status in Ubuntu Cloud Archive liberty series:
  In Progress
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Trusty:
  Fix Released
Status in haproxy source package in Wily:
  Fix Committed
Status in haproxy source package in Xenial:
  Fix Released
Status in haproxy package in Debian:
  New

Bug description:
  [Description]

  The stop method is not working properly. I removed the --oknodo &&
  --quiet and it is returning (No /usr/sbin/haproxy found running; none
  killed)

  I think this is a regression caused by the incorporation of these lines
  in the stop method:

  + for pid in $(cat $PIDFILE); do
  + start-stop-daemon --quiet --oknodo --stop \
  + --retry 5 --pid $pid --exec $HAPROXY || ret=$?

  root@juju-machine-1-lxc-0:~# service haproxy status
  haproxy is running.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root1513 906  0 14:33 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy restart
   * Restarting haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root2277 906  0 14:33 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy restart
   * Restarting haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2505   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root2523 906  0 14:33 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy stop
   * Stopping haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2505   1  0 

[Ubuntu-ha] [Bug 1477198] Re: Stop doesn't work on Trusty

2015-11-06 Thread Andreas Hasenack
** Summary changed:

- Stop doesn't works on Trusty
+ Stop doesn't work on Trusty

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1477198

Title:
  Stop doesn't work on Trusty

Status in Ubuntu Cloud Archive:
  Confirmed
Status in Ubuntu Cloud Archive kilo series:
  Confirmed
Status in Ubuntu Cloud Archive liberty series:
  Confirmed
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Trusty:
  Fix Released

Bug description:
  [Description]

  The stop method is not working properly. I removed the --oknodo &&
  --quiet and it is returning (No /usr/sbin/haproxy found running; none
  killed)

  I think this is a regression caused by the incorporation of these lines
  in the stop method:

  + for pid in $(cat $PIDFILE); do
  + start-stop-daemon --quiet --oknodo --stop \
  + --retry 5 --pid $pid --exec $HAPROXY || ret=$?

  root@juju-machine-1-lxc-0:~# service haproxy status
  haproxy is running.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root1513 906  0 14:33 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy restart
   * Restarting haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root2277 906  0 14:33 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy restart
   * Restarting haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2505   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root2523 906  0 14:33 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy stop
   * Stopping haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2505   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root2584 906  0 14:34 pts/600:00:00 grep --color=auto haproxy
  root@juju-machine-1-lxc-0:~# service haproxy start
   * Starting haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2505   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2591   1  0 14:34 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  root2610 906  0 14:34 pts/600:00:00 grep --color=auto haproxy

  [Impact]

  - 'service stop/restart' doesn't work properly.

  [Test Case]

  - Install latest haproxy package.
  - Run service haproxy restart
   * Restarting haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid

  - Run service haproxy stop

  root@juju-machine-1-lxc-0:~# service haproxy stop
   * Stopping haproxy haproxy
     ...done.
  root@juju-machine-1-lxc-0:~# ps -ef| grep haproxy
  haproxy 1269   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  haproxy 2169   1  0 14:33 ?00:00:00 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid

  [Other info]

  Related bugs:
   * bug 1481737:  HAProxy init script does not work correctly with nbproc 
configuration option

To manage notifications about this bug go to:

[Ubuntu-ha] [Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-18 Thread Andreas Hasenack
** Changed in: landscape/cisco-odl
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1490727

Title:
  "Invalid IPC credentials" after corosync, pacemaker service restarts

Status in Landscape Server:
  Fix Committed
Status in Landscape Server cisco-odl series:
  Fix Committed
Status in Landscape Server release-29 series:
  Fix Committed
Status in pacemaker package in Ubuntu:
  Incomplete
Status in hacluster package in Juju Charms Collection:
  Fix Released

Bug description:
  Followup from 
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1439649,
  see there comments #14, #15 as it _maybe_ related to missing uidgid ACLs for
  hacluster:haclient (as apparently presented by pacemaker).

  FYI you can find relevant IPC resources with:
  $ find /run/shm -user hacluster -group haclient -ls

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape/+bug/1490727/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-18 Thread Andreas Hasenack
** Changed in: landscape
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1490727

Title:
  "Invalid IPC credentials" after corosync, pacemaker service restarts

Status in Landscape Server:
  Fix Committed
Status in Landscape Server cisco-odl series:
  In Progress
Status in Landscape Server release-29 series:
  Fix Committed
Status in pacemaker package in Ubuntu:
  Incomplete
Status in hacluster package in Juju Charms Collection:
  Fix Released

Bug description:
  Followup from 
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1439649,
  see there comments #14, #15 as it _maybe_ related to missing uidgid ACLs for
  hacluster:haclient (as apparently presented by pacemaker).

  FYI you can find relevant IPC resources with:
  $ find /run/shm -user hacluster -group haclient -ls

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape/+bug/1490727/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-17 Thread Andreas Hasenack
** Changed in: landscape/release-29
   Status: New => In Progress

** Changed in: landscape/release-29
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: landscape
   Status: New => In Progress

** Changed in: landscape
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1490727

Title:
  "Invalid IPC credentials" after corosync, pacemaker service restarts

Status in Landscape Server:
  In Progress
Status in Landscape Server cisco-odl series:
  In Progress
Status in Landscape Server release-29 series:
  In Progress
Status in pacemaker package in Ubuntu:
  Incomplete
Status in hacluster package in Juju Charms Collection:
  Fix Released

Bug description:
  Followup from 
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1439649,
  see there comments #14, #15 as it _maybe_ related to missing uidgid ACLs for
  hacluster:haclient (as apparently presented by pacemaker).

  FYI you can find relevant IPC resources with:
  $ find /run/shm -user hacluster -group haclient -ls

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape/+bug/1490727/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-16 Thread Andreas Hasenack
** Changed in: landscape/cisco-odl
   Status: New => In Progress

** Changed in: landscape/cisco-odl
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: landscape/cisco-odl
Milestone: None => falkor-0.9

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1490727

Title:
  "Invalid IPC credentials" after corosync, pacemaker service restarts

Status in Landscape Server:
  New
Status in Landscape Server cisco-odl series:
  In Progress
Status in Landscape Server release-29 series:
  New
Status in pacemaker package in Ubuntu:
  Incomplete
Status in hacluster package in Juju Charms Collection:
  Fix Released

Bug description:
  Followup from 
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1439649,
  see there comments #14, #15 as it _maybe_ related to missing uidgid ACLs for
  hacluster:haclient (as apparently presented by pacemaker).

  FYI you can find relevant IPC resources with:
  $ find /run/shm -user hacluster -group haclient -ls

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape/+bug/1490727/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp