[Ubuntu-ha] [Bug 1884149] Re: haproxy crashes on in __pool_get_first if unique-id-header is used

2020-07-01 Thread Robie Basak
This looks good, thanks!

One minor note. The DEP-3 header says: Origin: upstream,
https://github.com/haproxy/haproxy/commit/d9a130e1962c2a5352f33088c563f4248a102c48;
however that link says "This commit does not belong to any branch on
this repository." so it isn't technically "upstream". However, it also
says "cherry picked from commit ad7f0ad" and
https://github.com/haproxy/haproxy/commit/ad7f0ad1c3c9c541a4c315b24d4500405d1383ee
*is* upstream and clearly is the same change, reported as present in the
upstream master branch since what looks like their v1.9-dev2 tag. So
everything is fine after all. Recording this here in case the d9a130
commit disappears and someone wants to track it down.
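
For future reference, a DEP-3 header recording the canonical upstream commit
alongside the Launchpad bug could look roughly like this (field names per
DEP-3; which optional fields the uploaded patch actually carries may differ):

  Origin: upstream, https://github.com/haproxy/haproxy/commit/ad7f0ad1c3c9c541a4c315b24d4500405d1383ee
  Applied-Upstream: v1.9-dev2
  Bug-Ubuntu: https://bugs.launchpad.net/bugs/1884149
  Last-Update: 2020-07-01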

** Changed in: haproxy (Ubuntu Bionic)
   Status: Triaged => Fix Committed

** Tags added: verification-needed verification-needed-bionic

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1884149

Title:
  haproxy crashes on in __pool_get_first if unique-id-header is used

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy package in Debian:
  Fix Released

Bug description:
  [Impact]

   * The handling of locks in haproxy could lead to a state where, between
 idle HTTP connections, a connection was indicated as destroyed. In that
 case the code went on and accessed the just-freed resource. As upstream
 puts it: "It can have random implications between requests as
   it may lead a wrong connection's polling to be re-enabled or disabled
   for example, especially with threads."

   * Backport the fix from upstream's 1.8 stable branch.

  [Test Case]

   * It is a race and might be hard to trigger.
 An haproxy config fronting three webservers can be seen below.
 Setting up three Apaches locally didn't trigger the same bug, but we
 know it is timing-sensitive.

   * Simon (anbox) has a setup which reliably triggers this and will run the 
 tests there.

   * The bad case will trigger a crash as reported below.

  [Regression Potential]

   * This change has been in >=Disco with no further bugs reported against it
 (no follow-on change), which should make it rather safe. Also, there has
 been no other change to that file context in 1.8 stable since then.
 The change is to the locking of connections, so if regressions were to
 appear, they would be in the handling of concurrent connections.

  [Other Info]
   
   * Strictly speaking it is a race, so triggering it depends on load and
 machine CPU/IO speed.

  
  ---

  
  Version 1.8.8-1ubuntu0.10 of haproxy in Ubuntu 18.04 (bionic) crashes with

  

  Thread 2.1 "haproxy" received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0xf77b1010 (LWP 17174)]
  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at 
include/common/memory.h:124
  124   include/common/memory.h: No such file or directory.
  (gdb) bt
  #0  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at 
include/common/memory.h:124
  #1  pool_alloc_dirty (pool=0xaac6ddd0) at include/common/memory.h:154
  #2  pool_alloc (pool=0xaac6ddd0) at include/common/memory.h:229
  #3  conn_new () at include/proto/connection.h:655
  #4  cs_new (conn=0x0) at include/proto/connection.h:683
  #5  connect_conn_chk (t=0xaacb8820) at src/checks.c:1553
  #6  process_chk_conn (t=0xaacb8820) at src/checks.c:2135
  #7  process_chk (t=0xaacb8820) at src/checks.c:2281
  #8  0xaabca0b4 in process_runnable_tasks () at src/task.c:231
  #9  0xaab76f44 in run_poll_loop () at src/haproxy.c:2399
  #10 run_thread_poll_loop (data=) at src/haproxy.c:2461
  #11 0xaaad79ec in main (argc=, argv=0xaac61b30) at 
src/haproxy.c:3050

  

  when running on an ARM64 system. The haproxy.cfg looks like this:

  

  global
  log /dev/log local0
  log /dev/log local1 notice
  maxconn 4096
  user haproxy
  group haproxy
  spread-checks 0
  tune.ssl.default-dh-param 1024
  ssl-default-bind-ciphers 

[Ubuntu-ha] [Bug 1884149] Please test proposed package

2020-07-01 Thread Robie Basak
Hello Simon, or anyone else affected,

Accepted haproxy into bionic-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/haproxy/1.8.8-1ubuntu0.11 in a few
hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us in getting this
update out to other Ubuntu users.
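
For example, one way to pull in just this package from -proposed for
verification (a rough sketch based on the wiki page above; use
ports.ubuntu.com instead of archive.ubuntu.com on arm64, and prefer the
pinning setup described on the wiki on production-like systems):

  echo 'deb http://archive.ubuntu.com/ubuntu bionic-proposed main universe' | \
      sudo tee /etc/apt/sources.list.d/bionic-proposed.list
  sudo apt-get update
  sudo apt-get install haproxy=1.8.8-1ubuntu0.11   # the version named above
  haproxy -v   # confirm the installed version before re-running the reproducer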

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, what testing has been
performed on the package and change the tag from verification-needed-
bionic to verification-done-bionic. If it does not fix the bug for you,
please add a comment stating that, and change the tag to verification-
failed-bionic. In either case, without details of your testing we will
not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1884149

Title:
  haproxy crashes on in __pool_get_first if unique-id-header is used

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy package in Debian:
  Fix Released

Bug description:
  [Impact]

   * The handling of locks in haproxy could lead to a state where, between
 idle HTTP connections, a connection was indicated as destroyed. In that
 case the code went on and accessed the just-freed resource. As upstream
 puts it: "It can have random implications between requests as
   it may lead a wrong connection's polling to be re-enabled or disabled
   for example, especially with threads."

   * Backport the fix from upstream's 1.8 stable branch.

  [Test Case]

   * It is a race and might be hard to trigger.
 An haproxy config fronting three webservers can be seen below.
 Setting up three Apaches locally didn't trigger the same bug, but we
 know it is timing-sensitive.

   * Simon (anbox) has a setup which reliably triggers this and will run the 
 tests there.

   * The bad case will trigger a crash as reported below.

  [Regression Potential]

   * This change has been in >=Disco with no further bugs reported against it
 (no follow-on change), which should make it rather safe. Also, there has
 been no other change to that file context in 1.8 stable since then.
 The change is to the locking of connections, so if regressions were to
 appear, they would be in the handling of concurrent connections.

  [Other Info]
   
   * Strictly speaking it is a race, so triggering it depends on load and
 machine CPU/IO speed.

  
  ---

  
  Version 1.8.8-1ubuntu0.10 of haproxy in Ubuntu 18.04 (bionic) crashes with

  

  Thread 2.1 "haproxy" received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0xf77b1010 (LWP 17174)]
  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at 
include/common/memory.h:124
  124   include/common/memory.h: No such file or directory.
  (gdb) bt
  #0  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at 
include/common/memory.h:124
  #1  pool_alloc_dirty (pool=0xaac6ddd0) at include/common/memory.h:154
  #2  pool_alloc (pool=0xaac6ddd0) at include/common/memory.h:229
  #3  conn_new () at include/proto/connection.h:655
  #4  cs_new (conn=0x0) at include/proto/connection.h:683
  #5  connect_conn_chk (t=0xaacb8820) at src/checks.c:1553
  #6  process_chk_conn (t=0xaacb8820) at src/checks.c:2135
  #7  process_chk (t=0xaacb8820) at src/checks.c:2281
  #8  0xaabca0b4 in process_runnable_tasks () at src/task.c:231
  #9  0xaab76f44 in run_poll_loop () at src/haproxy.c:2399
  #10 run_thread_poll_loop (data=) at src/haproxy.c:2461
  #11 0xaaad79ec in main (argc=, argv=0xaac61b30) at 
src/haproxy.c:3050

  

  when running on an ARM64 system. The haproxy.cfg looks like this:

  

  global
  log /dev/log local0
  log /dev/log local1 notice
  maxconn 4096
  user haproxy
  group haproxy
  spread-checks 0
  tune.ssl.default-dh-param 1024
  ssl-default-bind-ciphers 

[Ubuntu-ha] [Bug 1884149] Re: haproxy crashes on arm64 in __pool_get_first

2020-06-18 Thread Robie Basak
Based on the upstream report this doesn't appear to be arm64 specific.

** Tags added: bitesize server-next

** Changed in: haproxy (Ubuntu)
   Status: New => Triaged

** Changed in: haproxy (Ubuntu)
   Importance: Undecided => High

** Summary changed:

- haproxy crashes on arm64 in __pool_get_first
+ haproxy crashes on in __pool_get_first if unique-id-header is used

** Bug watch added: Debian Bug tracker #921981
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921981

** Also affects: haproxy (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921981
   Importance: Unknown
   Status: Unknown

** Bug watch added: github.com/haproxy/haproxy/issues #40
   https://github.com/haproxy/haproxy/issues/40

** Also affects: haproxy via
   https://github.com/haproxy/haproxy/issues/40
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1884149

Title:
  haproxy crashes on in __pool_get_first if unique-id-header is used

Status in HAProxy:
  Unknown
Status in haproxy package in Ubuntu:
  Triaged
Status in haproxy package in Debian:
  Unknown

Bug description:
  Version 1.8.8-1ubuntu0.10 of haproxy in Ubuntu 18.04 (bionic) crashes
  with

  

  Thread 2.1 "haproxy" received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0xf77b1010 (LWP 17174)]
  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at 
include/common/memory.h:124
  124   include/common/memory.h: No such file or directory.
  (gdb) bt
  #0  __pool_get_first (pool=0xaac6ddd0, pool=0xaac6ddd0) at 
include/common/memory.h:124
  #1  pool_alloc_dirty (pool=0xaac6ddd0) at include/common/memory.h:154
  #2  pool_alloc (pool=0xaac6ddd0) at include/common/memory.h:229
  #3  conn_new () at include/proto/connection.h:655
  #4  cs_new (conn=0x0) at include/proto/connection.h:683
  #5  connect_conn_chk (t=0xaacb8820) at src/checks.c:1553
  #6  process_chk_conn (t=0xaacb8820) at src/checks.c:2135
  #7  process_chk (t=0xaacb8820) at src/checks.c:2281
  #8  0xaabca0b4 in process_runnable_tasks () at src/task.c:231
  #9  0xaab76f44 in run_poll_loop () at src/haproxy.c:2399
  #10 run_thread_poll_loop (data=) at src/haproxy.c:2461
  #11 0xaaad79ec in main (argc=, argv=0xaac61b30) at 
src/haproxy.c:3050

  

  when running on an ARM64 system. The haproxy.cfg looks like this:

  

  global
  log /dev/log local0
  log /dev/log local1 notice
  maxconn 4096
  user haproxy
  group haproxy
  spread-checks 0
  tune.ssl.default-dh-param 1024
  ssl-default-bind-ciphers 
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:!DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:!DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:!CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

  defaults
  log global
  mode tcp
  option httplog
  option dontlognull
  retries 3
  timeout queue 2
  timeout client 5
  timeout connect 5000
  timeout server 5


  frontend anbox-stream-gateway-lb-5-80
  bind 0.0.0.0:80
  default_backend api_http
  mode http
  http-request redirect scheme https

  backend api_http
  mode http

  frontend anbox-stream-gateway-lb-5-443
  bind 0.0.0.0:443 ssl crt /var/lib/haproxy/default.pem no-sslv3
  default_backend app-anbox-stream-gateway
  mode http

  backend app-anbox-stream-gateway
  mode http
  balance leastconn
  server anbox-stream-gateway-0-4000 10.212.218.61:4000 check ssl verify 
none inter 2000 rise 2 fall 5 maxconn 4096
  server anbox-stream-gateway-1-4000 10.212.218.93:4000 check ssl verify 
none inter 2000 rise 2 fall 5 maxconn 4096
  server anbox-stream-gateway-2-4000 10.212.218.144:4000 check ssl verify 
none inter 2000 rise 2 fall 5 maxconn 4096

  

  The crash occurs after the first few HTTP requests go through, and
  happens again when systemd restarts the service.

  The bug is already reported in Debian https://bugs.debian.org/cgi-
  bin/bugreport.cgi?bug=921981 and upstream at
  https://github.com/haproxy/haproxy/issues/40

  Using the 1.8.19-1+deb10u2 package from Debian fixes the crash.

To manage notifications 

[Ubuntu-ha] [Bug 1437359] Re: A PIDFILE is double-defined for the corosync-notifyd init script

2020-06-17 Thread Robie Basak
** Tags removed: server-triage-discuss

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1437359

Title:
  A PIDFILE is double-defined for the corosync-notifyd init script

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Triaged
Status in corosync source package in Bionic:
  Triaged
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Triaged
Status in corosync source package in Focal:
  Fix Released

Bug description:
  A /etc/init.d/corosync-notifyd contains two definitions for the PIDFILE:
  > PIDFILE=/var/run/$NAME.pid
  > SCRIPTNAME=/etc/init.d/$NAME
  > PIDFILE=/var/run/corosync.pid

  The first one is correct and the second one is wrong as it refers to
  the corosync service's pidfile instead

  The corosync package version is 2.3.3-1ubuntu1
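
  The fix is presumably just to drop the stray second assignment so the
  script keeps the per-service pidfile, roughly (a sketch, assuming NAME is
  set to corosync-notifyd earlier in the script; not necessarily the exact
  packaging change):

  > PIDFILE=/var/run/$NAME.pid
  > SCRIPTNAME=/etc/init.d/$NAME
  > # second PIDFILE=/var/run/corosync.pid line removed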

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1855140] Re: How to handle tmpfiles.d in non-systemd environments

2020-02-17 Thread Robie Basak
> From my understanding this means that supporting an alternative init
> system is optional ("Packages may include support for alternate init
> systems besides systemd"). So basically this is up to the package
> maintainer whether or not the package should support an alternative init
> system.

I'm not sure that the Docker use case was intended to be within the
scope of the Debian GR. But nevertheless, I think that in Debian it's
still up to package maintainers to decide to what extent to support the
Docker use case as it always has been. The Debian GR was relevant in
that it didn't end up mandating an alternative to tmpfiles.d for example
which might have affected any technical solution to this general
problem.

For Ubuntu, I think it makes sense to support it and to send Debian
package maintainers patches as appropriate.

The question is: how, technically, can we resolve this particular issue?

I think it would be best if it could be done centrally somehow, rather
than tweaking each affected package individually. Could the systemd
tmpfiles.d mechanism somehow be used by Docker image builds as a
machine-readable list of temporary directories to arrange to be handled
automatically? If so, then that one solution could fix this entire class
of problem.
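
As a rough illustration of that idea (an untested sketch; it assumes the
image ships systemd's tmpfiles tooling plus the packages' tmpfiles.d
snippets, which is exactly how e.g. haproxy declares /run/haproxy):

  # a package declares its runtime directory declaratively, e.g. in
  # /usr/lib/tmpfiles.d/haproxy.conf (format: type path mode user group age):
  #   d /run/haproxy 0755 haproxy haproxy -
  # an image build step or container entrypoint could then materialize all
  # such directories in one generic call, with no per-package hacks:
  systemd-tmpfiles --create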

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1855140

Title:
  How to handle tmpfiles.d in non-systemd environments

Status in haproxy package in Ubuntu:
  Opinion
Status in systemd package in Ubuntu:
  New

Bug description:
  This is a general issue about systemd features like tmpfiles.d which
  won't run in some environments like docker containers.

  Packages rely on this more and more, with haproxy being the example
  that opened this bug, but it is clearly not the only one.

  I wanted to add tasks for all affected packages, but a quick check
  showed that there are almost too many.

  $ apt-file search tmpfiles.d | cut -d':' -f 1 | sort | uniq
  129 at the moment and probably increasing.

  List of affected as of Dec 2019:
  acmetool anytun apt-cacher-ng bacula-common bind9 binkd bley bzflag-server 
ceph-common certmonger cockpit-ws colord connman courier-authdaemon 
courier-imap courier-ldap courier-mlm courier-mta courier-pop cryptsetup-bin 
cyrus-common dbus dhcpcanon diaspora-common dnssec-trigger ejabberd fail2ban 
firebird3.0-server freeipa-client freeipa-server glusterfs-server gvfs-common 
haproxy hddemux heartbeat htcondor i2pd inn inspircd iodine knot knot-resolver 
krb5-otp laptop-mode-tools lemonldap-ng-fastcgi-server libreswan lighttpd lirc 
lvm2 mailman mailman3 mailman3-web man-db mandos memcached mon mpd munge 
munin-common myproxy-server nagios-nrpe-server ngircd nrpe-ng nscd nsd 
nullmailer nut-client nut-server opencryptoki opendkim opendmarc 
opendnssec-enforcer opendnssec-signer opennebula opennebula-sunstone opensips 
open-vm-tools-desktop openvpn passwd pesign php7.2-fpm pidentd ploop 
postgresql-common prads prelude-correlator prelude-lml prelude-manager puppet 
pushpin resource-agents rkt rpcbind rsyslog samba-common-bin screen 
shairport-sync shibboleth-sp2-utils slurmctld slurmd slurmdbd sogo 
spice-vdagent sqwebmail sslh sudo sudo-ldap systemd systemd-container tcpcryptd 
tinyproxy tuned ulogd2 uptimed vrfydmn vsftpd w1retap-doc wdm 
wesnoth-1.12-server x2goserver-common xpra yadifa zabbix-agent 
zabbix-java-gateway zabbix-proxy-mysql zabbix-proxy-pgsql zabbix-proxy-sqlite3 
zabbix-server-mysql zabbix-server-pgsql

  Handling of these heavily depends on the recent Debian GR [1].

  I'd suggest we wait to see how that turns out and then consider how
  (if?) to handle it in a central place, probably systemd or a
  derivative tool, as started to be discussed in [2].

  If possible I'd avoid fixes in individual packages as it encourages
  growth of various workarounds for a problem that needs a general
  solution.

  [1]: https://www.debian.org/vote/2019/vote_002
  [2]: https://lists.debian.org/debian-devel/2019/12/msg00060.html

  --- Original report below ---

  When installing the haproxy package from the current Ubuntu 18.04
  Bionic repos, the package does not install the directory /run/haproxy.
  This directory is mentioned in the default config file
  /etc/haproxy/haproxy.cfg:

   stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd
  listeners

  Starting HAProxy manually will show the following error:

  # /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
  [ALERT] 337/154339 (24) : Starting frontend GLOBAL: cannot bind UNIX socket 
[/run/haproxy/admin.sock]

  After manual creation of the directory, the start works:

  # mkdir /run/haproxy

  # /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg

  # ps auxf
  USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
  root10  0.1  0.0  18616  3416 pts/0Ss   15:42   0:00 /bin/bash
  root32  0.0  0.0  34400  2900 pts/0R+   15:45 

[Ubuntu-ha] [Bug 1841936] Please test proposed package

2019-10-28 Thread Robie Basak
Hello David, or anyone else affected,

Accepted haproxy into disco-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/haproxy/1.8.19-1ubuntu1.1 in a few
hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed. Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested and change the tag from
verification-needed-disco to verification-done-disco. If it does not fix
the bug for you, please add a comment stating that, and change the tag
to verification-failed-disco. In either case, details of your testing
will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Changed in: haproxy (Ubuntu Bionic)
   Status: Triaged => Fix Committed

** Tags removed: verification-failed-bionic
** Tags added: verification-needed-bionic

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing
  builds against 1.1.1 (dh key size)

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Committed
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Disco:
  Fix Committed
Status in haproxy source package in Eoan:
  Fix Committed
Status in haproxy source package in Focal:
  Fix Committed

Bug description:
  [Impact-Bionic]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just a no-change rebuild would pick
     that up.

  [Impact Disco-Focal]

   * openssl >=1.1.1 is already in Disco-Focal, so haproxy was already built
     against it. That made it pick up TLSv1.3, but also a related bug that
     broke the ability to control the DHE key: it was always in "ECDH auto"
     mode, so the daemon no longer followed the config.
     Upgraders would regress by having their DH key behavior changed
     unexpectedly.

  [Test Case]

   A)
   * run "haproxy -vv" and check the reported TLS versions to include 1.3
   B)
   * download https://github.com/drwetter/testssl.sh
   * Install haproxy
 * ./testssl.sh --pfs :443
 * Check the reported DH key/group (should stay 1024)
 * Check if settings work to bump it, e.g. setting
 tune.ssl.default-dh-param 2048
   in /etc/haproxy/haproxy.cfg (a rough verification sketch follows below)
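
   A rough verification sketch for A) and B) (assuming haproxy serves TLS on
   port 443 of "myhost"; host, port and the 2048 value are only the examples
   from above):

 haproxy -vv | grep 'OpenSSL library supports'  # should now list TLSv1.3
 ./testssl.sh --pfs myhost:443                  # note the reported DH group
 # then add to the global section of /etc/haproxy/haproxy.cfg and restart:
 #   tune.ssl.default-dh-param 2048
 ./testssl.sh --pfs myhost:443                  # DH size should follow the setting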

  [Regression Potential-Bionic]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things, and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3].
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further
     changes to TLS v1.3 handling enhancements [5] but also fixes [6] which
     aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it; for an SRU the bar for not breaking old
     behavior is intentionally high. I tried to provide as much background
     as possible; the decision is up to the SRU team.

  [Regression Potential-Disco-Focal]

   * The fixes let the admin regain control of the DH key configuration
     which is the fix. But remember that the default config didn't specify
     any. Therefore we have two scenarios:
     a) an admin had set custom DH parameters which were ignored. He had no
    chance to control them and needs the fix. He might have been under
    the impression that his keys are safe (there is a CVE against small
    ones) and only now is he really safe -> gain high, regression low
     b) an admin had not set anything, the 

[Ubuntu-ha] [Bug 1841936] Please test proposed package

2019-10-28 Thread Robie Basak
Hello David, or anyone else affected,

Accepted haproxy into bionic-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/haproxy/1.8.8-1ubuntu0.6 in a few
hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed. Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested and change the tag from
verification-needed-bionic to verification-done-bionic. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-failed-bionic. In either case, details of your
testing will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing
  builds against 1.1.1 (dh key size)

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Committed
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Disco:
  Fix Committed
Status in haproxy source package in Eoan:
  Fix Committed
Status in haproxy source package in Focal:
  Fix Committed

Bug description:
  [Impact-Bionic]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just a no-change rebuild would pick
     that up.

  [Impact Disco-Focal]

   * openssl >=1.1.1 is already in Disco-Focal, so haproxy was already built
     against it. That made it pick up TLSv1.3, but also a related bug that
     broke the ability to control the DHE key: it was always in "ECDH auto"
     mode, so the daemon no longer followed the config.
     Upgraders would regress by having their DH key behavior changed
     unexpectedly.

  [Test Case]

   A)
   * run "haproxy -vv" and check the reported TLS versions to include 1.3
   B)
   * download https://github.com/drwetter/testssl.sh
   * Install haproxy
 * ./testssl.sh --pfs :443
 * Check the reported DH key/group (should stay 1024)
 * Check if settings work to bump it, e.g. setting
 tune.ssl.default-dh-param 2048
   in /etc/haproxy/haproxy.cfg

  [Regression Potential-Bionic]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things, and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3].
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further
     changes to TLS v1.3 handling enhancements [5] but also fixes [6] which
     aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it; for an SRU the bar for not breaking old
     behavior is intentionally high. I tried to provide as much background
     as possible; the decision is up to the SRU team.

  [Regression Potential-Disco-Focal]

   * The fixes let the admin regain control of the DH key configuration
     which is the fix. But remember that the default config didn't specify
     any. Therefore we have two scenarios:
     a) an admin had set custom DH parameters which were ignored. He had no
    chance to control them and needs the fix. He might have been under
    the impression that his keys are safe (there is a CVE against small
    ones) and only now is he really safe -> gain high, regression low
     b) an admin had not set anything, the default config is meant to use
    (compatibility) and the program reported "I'm using 1024, but you
    should set it higher". But what really happened was 

[Ubuntu-ha] [Bug 1841936] Re: Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing builds against 1.1.1 (dh key size)

2019-10-28 Thread Robie Basak
Hello David, or anyone else affected,

Accepted haproxy into eoan-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/haproxy/2.0.5-1ubuntu0.1 in a few
hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed. Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested and change the tag from
verification-needed-eoan to verification-done-eoan. If it does not fix
the bug for you, please add a comment stating that, and change the tag
to verification-failed-eoan. In either case, details of your testing
will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Changed in: haproxy (Ubuntu Eoan)
   Status: Triaged => Fix Committed

** Tags removed: verification-failed
** Tags added: verification-needed verification-needed-eoan

** Changed in: haproxy (Ubuntu Disco)
   Status: Triaged => Fix Committed

** Tags added: verification-needed-disco

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild openssl 1.1.1 to pickup TLSv1.3 (bionic) and unbreak existing
  builds against 1.1.1 (dh key size)

Status in HAProxy:
  Fix Released
Status in haproxy package in Ubuntu:
  Fix Committed
Status in haproxy source package in Bionic:
  Fix Committed
Status in haproxy source package in Disco:
  Fix Committed
Status in haproxy source package in Eoan:
  Fix Committed
Status in haproxy source package in Focal:
  Fix Committed

Bug description:
  [Impact-Bionic]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just a no-change rebuild would pick
     that up.

  [Impact Disco-Focal]

   * openssl >=1.1.1 is already in Disco-Focal, so haproxy was already built
     against it. That made it pick up TLSv1.3, but also a related bug that
     broke the ability to control the DHE key: it was always in "ECDH auto"
     mode, so the daemon no longer followed the config.
     Upgraders would regress by having their DH key behavior changed
     unexpectedly.

  [Test Case]

   A)
   * run "haproxy -vv" and check the reported TLS versions to include 1.3
   B)
   * download https://github.com/drwetter/testssl.sh
   * Install haproxy
 * ./testssl.sh --pfs :443
 * Check the reported DH key/group (should stay 1024)
 * Check if settings work to bump it, e.g. setting
 tune.ssl.default-dh-param 2048
   in /etc/haproxy/haproxy.cfg

  [Regression Potential-Bionic]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things, and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3].
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further
     changes to TLS v1.3 handling enhancements [5] but also fixes [6] which
     aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it; for an SRU the bar for not breaking old
     behavior is intentionally high. I tried to provide as much background
     as possible; the decision is up to the SRU team.

  [Regression Potential-Disco-Focal]

   * The fixes let the admin regain control of the DH key configuration
     which is the fix. But remember that the default config didn't specify
     any. Therefore we have two scenarios:
     a) an admin had set custom DH parameters which were ignored. He had no
    chance to control them and needs the fix. He might have been under
    the impression that his keys are safe (there is a CVE against 

[Ubuntu-ha] [Bug 1841936] Re: Rebuild haproxy with openssl 1.1.1 will change features (bionic)

2019-10-08 Thread Robie Basak
** Tags added: bionic-openssl-1.1

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild haproxy with openssl 1.1.1 will change features (bionic)

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed

Bug description:
  [Impact]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just a no-change rebuild would pick
     that up.

  [Test Case]

   * run "haproxy -vv" and check the reported TLS versions to include
  1.3

  [Regression Potential]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things, and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3].
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further 
 changes to TLS v1.3 handling enhancements [5] but also fixes [6] which 
 aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it; for an SRU the bar for not breaking old
     behavior is intentionally high. I tried to provide as much background
     as possible; the decision is up to the SRU team.

  [1]: 
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-ssl-default-bind-ciphers
  [2]: 
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-ssl-default-bind-ciphersuites
  [3]: 
https://www.haproxy.com/documentation/hapee/1-8r2/traffic-management/tls/#define-bind-directives-on-the-frontend
  [4]: https://github.com/haproxy/haproxy/blob/master/CHANGELOG#L2131
  [5]: 
https://github.com/haproxy/haproxy/commit/526894ff3925d272c13e57926aa6b5d9d8ed5ee3
  [6]: 
https://github.com/haproxy/haproxy/commit/bc34cd1de2ee80de63b5c4d319a501fc0d4ea2f5

  [Other Info]

   * If this is nack'ed we will need an upload that prevents enabling
     TLSv1.3, to avoid enabling it by accident on e.g. a security update.

  ---

  haproxy needs to be rebuilt after #1797386 to take advantage of
  TLSv1.3.

  (If that's not desirable for some reason, then maybe TLSv1.3 should be
  actively disabled to avoid any surprises in case of a future bug fix
  release.)

  ---

  Output of haproxy -vv with stock package:

  Built with OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
  Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018 (VERSIONS DIFFER!)
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2

  ---

  Output after rebuilding the package from source:

  Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
  Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1841936/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1846714] Re: Merge "BUG/MEDIUM: server: Also copy "check-sni" for server templates."

2019-10-08 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

Is there an upstream bug link for this issue please?

If any upstream bug does not provide this information already, please
could you provide exact steps to reproduce the problem and an
explanation of how it impacts haproxy users? We need this information to
be able to decide if it is appropriate to patch the Ubuntu stable
releases, and to be able to QA any such fix prior to landing it.

Once done, please change the bug status back to New. Thanks!

Triage notes: probably server-next, but we need SRU information before
this can be actionable so it doesn't qualify yet.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1846714

Title:
  Merge "BUG/MEDIUM: server: Also copy "check-sni" for server
  templates."

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Triaged
Status in haproxy source package in Disco:
  Triaged

Bug description:
  The current HAProxy 1.8.8 supplied with Ubuntu Bionic and Disco
  contains a bug when using the server-template functionality for
  generating upstreams in combination with the check-sni setting.

  The bug was fixed in upstream 1.8.17:
  
https://git.haproxy.org/?p=haproxy-1.8.git;a=commit;h=5b9c962725e8352189911f2bdac7e3fa14f73846

  Could you please take the appropriate action?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1846714/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1846714] Re: Merge "BUG/MEDIUM: server: Also copy "check-sni" for server templates."

2019-10-08 Thread Robie Basak
Upstream 2.0 branch commit:
https://git.haproxy.org/?p=haproxy-2.0.git;a=commit;h=21944019cabcb46ceb95b7fd925528b9dace4e35

Looks like this was fixed before the release of the 2.0 series, so this
is Fix Released for the development release.

** Also affects: haproxy (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: haproxy (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: haproxy (Ubuntu)
   Status: New => Fix Released

** Changed in: haproxy (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: haproxy (Ubuntu Disco)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1846714

Title:
  Merge "BUG/MEDIUM: server: Also copy "check-sni" for server
  templates."

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Triaged
Status in haproxy source package in Disco:
  Triaged

Bug description:
  The current HAProxy 1.8.8 supplied with Ubuntu Bionic and Disco
  contains a bug when using the server-template functionality for
  generating upstreams in combination with the check-sni setting.

  The bug was fixed in upstream 1.8.17:
  
https://git.haproxy.org/?p=haproxy-1.8.git;a=commit;h=5b9c962725e8352189911f2bdac7e3fa14f73846

  Could you please take the appropriate action?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1846714/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1841936] Re: Rebuild haproxy with openssl 1.1.1 will change features (bionic)

2019-09-25 Thread Robie Basak
The SRU team discussed this last week and we agreed that enabling TLS
1.3 is appropriate.

> [Test Case]

> * run "haproxy -vv" and check the reported TLS versions to include 1.3

I think we should additionally check during SRU verification that TLS
functionality is working correctly, both with an existing protocol and
also with 1.3, as we know we're perturbing it.
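
For instance (a sketch, assuming a frontend terminating TLS on local port
443; OpenSSL 1.1.1's s_client provides the -tls1_2/-tls1_3 switches):

  # the existing protocol keeps working
  echo | openssl s_client -connect localhost:443 -tls1_2 2>/dev/null | grep -E 'Protocol|Cipher'
  # and the newly enabled one negotiates
  echo | openssl s_client -connect localhost:443 -tls1_3 2>/dev/null | grep -E 'Protocol|Cipher'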

** Changed in: haproxy (Ubuntu Bionic)
   Status: Triaged => Fix Committed

** Tags added: verification-needed verification-needed-bionic

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1841936

Title:
  Rebuild haproxy with openssl 1.1.1 will change features (bionic)

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Bionic:
  Fix Committed

Bug description:
  [Impact]

   * openssl 1.1.1 has been backported to Bionic for its longer
     support upstream period

   * That would allow the extra feature of TLSv1.3 in some consuming
     packages seemingly "for free": just a no-change rebuild would pick
     that up.

  [Test Case]

   * run "haproxy -vv" and check the reported TLS versions to include
  1.3

  [Regression Potential]

   * This should be low, the code already runs against the .so of the newer
     openssl library. This would only make it recognize the newer TLS
     support.
     I'd expect more trouble as-is with the somewhat big delta between what
     it was built against vs what it runs with than afterwards.
   * [1] and [2]  indicate that any config that would have been made for
     TLSv1.2 [1] would not apply to the v1.3 as it would be configured in
     [2].
     It is good to have no entry for [2] yet as following the defaults of
     openssl is the safest as that would be updated if new insights/CVEs are
     known.
     But this could IMHO be the "regression that I'd expect": one explicitly
     configured the v1.2 things, and once both ends support v1.3 that might
     be auto-negotiated. One can then set "force-tlsv12", but that is an
     administrative action [3].
   * Yet AFAIK this fine grained control [2] for TLSv1.3 only exists in
     >=1.8.15 [4] and Bionic is on haproxy 1.8.8. So any user of TLSv1.3 in
     Bionic haproxy would have to stay without that. There are further 
 changes to TLS v1.3 handling enhancements [5] but also fixes [6] which 
 aren't in 1.8.8 in Bionic.
     So one could say enabling this will enable an inferior TLSv1.3 and one
     might better not enable it; for an SRU the bar for not breaking old
     behavior is intentionally high. I tried to provide as much background
     as possible; the decision is up to the SRU team.

  [1]: 
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-ssl-default-bind-ciphers
  [2]: 
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-ssl-default-bind-ciphersuites
  [3]: 
https://www.haproxy.com/documentation/hapee/1-8r2/traffic-management/tls/#define-bind-directives-on-the-frontend
  [4]: https://github.com/haproxy/haproxy/blob/master/CHANGELOG#L2131
  [5]: 
https://github.com/haproxy/haproxy/commit/526894ff3925d272c13e57926aa6b5d9d8ed5ee3
  [6]: 
https://github.com/haproxy/haproxy/commit/bc34cd1de2ee80de63b5c4d319a501fc0d4ea2f5

  [Other Info]

   * If this is nack'ed we will need an upload that prevents enabling
     TLSv1.3, to avoid enabling it by accident on e.g. a security update.

  ---

  haproxy needs to be rebuilt after #1797386 to take advantage of
  TLSv1.3.

  (If that's not desirable for some reason, then maybe TLSv1.3 should be
  actively disabled to avoid any surprises in case of a future bug fix
  release.)

  ---

  Output of haproxy -vv with stock package:

  Built with OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
  Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018 (VERSIONS DIFFER!)
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2

  ---

  Output after rebuilding the package from source:

  Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
  Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1841936/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


Re: [Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in container (armhf) bump some limits

2019-07-16 Thread Robie Basak
Note that if this turns out to be challenging a "force-badtest" is
likely to be acceptable to get the package migrated for now.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in container (armhf) bump some limits

Status in Auto Package Testing:
  New
Status in corosync package in Ubuntu:
  In Progress
Status in pacemaker package in Ubuntu:
  In Progress

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 

[Ubuntu-ha] [Bug 1828496] Re: service haproxy reload sometimes fails to pick up new TLS certificates

2019-05-14 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

When you update with details, please make sure to provide full
reproduction steps and include details of the Ubuntu release and package
versions you used. When done, please change the bug status back to New,
and we can look at it again.

** Changed in: haproxy (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1828496

Title:
  service haproxy reload sometimes fails to pick up new TLS certificates

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  I suspect this is the same thing reported on StackOverflow:

  "I had this same issue where even after reloading the config, haproxy
  would randomly serve old certs. After looking around for many days the
  issue was that "reload" operation created a new process without
  killing the old one. Confirm this by "ps aux | grep haproxy"."

  https://stackoverflow.com/questions/46040504/haproxy-wont-recognize-
  new-certificate

  In our setup, we automate Let's Encrypt certificate renewals, and a
  fresh certificate will trigger a reload of the service. But
  occasionally this reload doesn't seem to do anything.

  Will update with details next time it happens, and hopefully confirm
  the multiple process theory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1828496/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in container (armhf) bump some limits

2019-05-09 Thread Robie Basak
Am I right in thinking that the limits being too low are causing false
positives in autopkgtests?

If so, we could check the limits in the tests themselves and skip (exit
77 and declare "skippable") if on armhf and the limits aren't high
enough. That's a reasonable action for the packages, I think.
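
Something along these lines at the top of the test script could do it (a
sketch only: the 64 MiB memlock threshold is a guess, and the test would
also need "Restrictions: skippable" in debian/tests/control for exit 77 to
be treated as a skip):

  if [ "$(dpkg --print-architecture)" = armhf ]; then
      memlock=$(ulimit -l)   # in KiB, or "unlimited"
      if [ "$memlock" != unlimited ] && [ "$memlock" -lt 65536 ]; then
          echo "memlock limit $memlock too low for corosync on armhf, skipping" >&2
          exit 77
      fi
  fi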

** Changed in: corosync (Ubuntu)
   Status: New => Triaged

** Changed in: pacemaker (Ubuntu)
   Status: New => Triaged

** Changed in: corosync (Ubuntu)
   Importance: Undecided => Medium

** Changed in: pacemaker (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in container (armhf) bump some limits

Status in Auto Package Testing:
  New
Status in corosync package in Ubuntu:
  Triaged
Status in pacemaker package in Ubuntu:
  Triaged

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 

[Ubuntu-ha] [Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived clusters

2019-05-09 Thread Robie Basak
It looks like there is some clear and actionable work in keepalived here
(even if as a workaround and the real fix ends up being in systemd), so
I'm marking it as Triaged.

FTR, the Ubuntu Server Team is aware of this as a high level issue and
it is high up in our list of priorities to determine how to address it
properly.

** Changed in: keepalived (Ubuntu)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1815101

Title:
  [master] Restarting systemd-networkd breaks keepalived clusters

Status in netplan:
  Invalid
Status in keepalived package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged

Bug description:
  Configure netplan for interfaces, for example (a working config with
  IP addresses obfuscated)

  network:
  ethernets:
  eth0:
  addresses: [192.168.0.5/24]
  dhcp4: false
  nameservers:
search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
addresses: [10.22.11.1]
  eth2:
  addresses:
- 12.13.14.18/29
- 12.13.14.19/29
  gateway4: 12.13.14.17
  dhcp4: false
  nameservers:
search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
addresses: [10.22.11.1]
  eth3:
  addresses: [10.22.11.6/24]
  dhcp4: false
  nameservers:
search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
addresses: [10.22.11.1]
  eth4:
  addresses: [10.22.14.6/24]
  dhcp4: false
  nameservers:
search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
addresses: [10.22.11.1]
  eth7:
  addresses: [9.5.17.34/29]
  dhcp4: false
  optional: true
  nameservers:
search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
addresses: [10.22.11.1]
  version: 2

  Configure keepalived (again, a working config with IP addresses
  obfuscated)

  global_defs   # Block id
  {
  notification_email {
  sysadm...@blah.com
  }
  notification_email_from keepali...@system3.hq.blah.com
  smtp_server 10.22.11.7 # IP
  smtp_connect_timeout 30  # integer, seconds
  router_id system3  # string identifying the machine,
   # (doesn't have to be hostname).
  vrrp_mcast_group4 224.0.0.18 # optional, default 224.0.0.18
  vrrp_mcast_group6 ff02::12   # optional, default ff02::12
  enable_traps # enable SNMP traps
  }
  vrrp_sync_group collection {
  group {
  wan
  lan
  phone
  }
  vrrp_instance wan {
  state MASTER
  interface eth2
  virtual_router_id 77
  priority 150
  advert_int 1
  smtp_alert
  authentication {
  auth_type PASS
  auth_pass BlahBlah
  }
  virtual_ipaddress {
  12.13.14.20
  }
  }
  vrrp_instance lan {
  state MASTER
  interface eth3
  virtual_router_id 78
  priority 150
  advert_int 1
  smtp_alert
  authentication {
  auth_type PASS
  auth_pass MoreBlah
  }
  virtual_ipaddress {
  10.22.11.13/24
  }
  }
  vrrp_instance phone {
  state MASTER
  interface eth4
  virtual_router_id 79
  priority 150
  advert_int 1
  smtp_alert
  authentication {
  auth_type PASS
  auth_pass MostBlah
  }
  virtual_ipaddress {
  10.22.14.3/24
  }
  }

  At boot the affected interfaces have:
  5: eth4:  mtu 1500 qdisc mq state UP group 
default qlen 1000
  link/ether ab:cd:ef:90:c0:e3 brd ff:ff:ff:ff:ff:ff
  inet 10.22.14.6/24 brd 10.22.14.255 scope global eth4
 valid_lft forever preferred_lft forever
  inet 10.22.14.3/24 scope global secondary eth4
 valid_lft forever preferred_lft forever
  inet6 fe80::ae1f:6bff:fe90:c0e3/64 scope link 
 valid_lft forever preferred_lft forever
  7: eth3:  mtu 1500 qdisc mq state UP group 
default qlen 1000
  link/ether ab:cd:ef:b0:26:29 brd ff:ff:ff:ff:ff:ff
  inet 10.22.11.6/24 brd 10.22.11.255 scope global eth3
 valid_lft forever preferred_lft forever
  inet 10.22.11.13/24 scope global secondary eth3
 valid_lft forever preferred_lft forever
  inet6 

[Ubuntu-ha] [Bug 1825992] Re: Upgrade to version 2.0 to support management with PCS

2019-04-24 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

In which Ubuntu release are you asking for the update? If a stable
release, please start by reading
https://wiki.ubuntu.com/StableReleaseUpdates - we certainly won't do it
without any justification, so please start by providing the
justification against our documented SRU policy in this bug.

Once you've explained what versions are needed in which releases, with
justifications, please change the bug status back to New. Thanks!

** Changed in: pacemaker (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1825992

Title:
  Upgrade to version 2.0 to support management with PCS

Status in pacemaker package in Ubuntu:
  Incomplete

Bug description:
  Is there a way we could upgrade the version to 2.0? The pcs package
  requires a version of pacemaker that is greater than or equal to 2.0,
  and there is already a Debian version packaged for v2. Installing the
  Ubuntu package for pcs will remove the pacemaker package, as there is
  no Ubuntu version of pacemaker that satisfies that requirement.

  I can help with build/testing and packaging for 2.0 based on the
  existing debian deb if needed.

  https://pkgs.org/download/pacemaker

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1825992/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1810583] Re: Daily cron restarts network on unattended updates but keepalived .service is not restarted as a dependency

2019-02-27 Thread Robie Basak
** Tags removed: server-triage-discuss

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1810583

Title:
  Daily cron restarts network on unattended updates but keepalived
  .service is not restarted as a dependency

Status in keepalived package in Ubuntu:
  Triaged
Status in networkd-dispatcher package in Ubuntu:
  Invalid

Bug description:
  Description: Ubuntu 18.04.1 LTS
  Release: 18.04
  ii  keepalived  1:1.3.9-1ubuntu0.18.04.1  amd64  Failover and monitoring daemon for LVS clusters

  (From unanswered
  https://answers.launchpad.net/ubuntu/+source/keepalived/+question/676267)

  Two weeks ago we lost our keepalived VRRP address on one of our
  systems; closer inspection revealed that this was due to the daily
  cron job. Apparently something triggered a udev reload (and last week
  the same seemed to happen), which obviously triggers a network restart.

  Are we right in assuming the patch below is the correct way to fix this
  (and shouldn't this be in the default install of the keepalived systemd
  service)?

  /etc/systemd/system/multi-user.target.wants/keepalived.service:
  --- keepalived.service.orig 2018-11-20 09:17:06.973924706 +0100
  +++ keepalived.service 2018-11-20 09:05:55.984773226 +0100
  @@ -4,6 +4,7 @@
   Wants=network-online.target
   # Only start if there is a configuration file
   ConditionFileNotEmpty=/etc/keepalived/keepalived.conf
  +PartOf=systemd-networkd.service
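
  For reference, a minimal sketch of the same change applied as a systemd
  drop-in instead of editing the unit file in place (illustrative only; the
  drop-in file name is made up, and whether PartOf= is the right approach is
  exactly the question being asked here):

  mkdir -p /etc/systemd/system/keepalived.service.d
  printf '[Unit]\nPartOf=systemd-networkd.service\n' \
      > /etc/systemd/system/keepalived.service.d/override.conf
  systemctl daemon-reload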

  Accompanying syslog:
  Nov 20 06:34:33 ourmachine systemd[1]: Starting Daily apt upgrade and clean 
activities...
  Nov 20 06:34:42 ourmachine systemd[1]: Reloading.
  Nov 20 06:34:44 ourmachine systemd[1]: message repeated 2 times: [ Reloading.]
  Nov 20 06:34:44 ourmachine systemd[1]: Starting Daily apt download 
activities...
  Nov 20 06:34:44 ourmachine systemd[1]: Stopping udev Kernel Device Manager...
  Nov 20 06:34:44 ourmachine systemd[1]: Stopped udev Kernel Device Manager.
  Nov 20 06:34:44 ourmachine systemd[1]: Starting udev Kernel Device Manager...
  Nov 20 06:34:44 ourmachine systemd[1]: Started udev Kernel Device Manager.
  Nov 20 06:34:45 ourmachine systemd[1]: Reloading.
  Nov 20 06:34:45 ourmachine systemd[1]: Reloading.
  Nov 20 06:35:13 ourmachine systemd[1]: Reexecuting.
  Nov 20 06:35:13 ourmachine systemd[1]: Stopped Wait for Network to be 
Configured.
  Nov 20 06:35:13 ourmachine systemd[1]: Stopping Wait for Network to be 
Configured...
  Nov 20 06:35:13 ourmachine systemd[1]: Stopping Network Service..

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1810583/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


Re: [Ubuntu-ha] [Bug 1800159] Re: keepalived does not autoload the ip_vs kernel module when it is required

2018-11-22 Thread Robie Basak
Great, thanks!

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1800159

Title:
  keepalived does not autoload the ip_vs kernel module when it is
  required

Status in keepalived package in Ubuntu:
  Triaged
Status in keepalived package in Debian:
  Unknown

Bug description:
  1) 
  Description:  Ubuntu 16.04.5 LTS
  Release:  16.04
  2) keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
Candidate: 1:1.2.24-1ubuntu0.16.04.1
Version table:
   *** 1:1.2.24-1ubuntu0.16.04.1 500
  500 http://ftp.hosteurope.de/mirror/archive.ubuntu.com 
xenial-updates/main amd64 Packages
  100 /var/lib/dpkg/status

  3) not loading the kernel module
  systemctl start keepalived.service
  Keepalived_healthcheckers[1680]: IPVS: Protocol not available
  Keepalived_healthcheckers[1680]: message repeated 8 times: [ IPVS: Protocol 
not available]
  ...

  4) loading the module manually 
  systemctl stop keepalived.service
  modprobe ip_vs
  kernel: [  445.363609] IPVS: ipvs loaded.
  systemctl start keepalived.service
  Keepalived_healthcheckers[5533]: Initializing ipvs
  kernel: [  600.828683] IPVS: [wlc] scheduler registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1800159/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1800159] Re: keepalived does not autoload the ip_vs kernel module when it is required

2018-11-22 Thread Robie Basak
Reproduced with your keepalived.conf, thanks - specifically, what goes
to syslog shows a problem on startup without ip_vs manually loaded, and
after a manual modprobe there is less of an error (I get other errors,
presumably because I don't have your network set up).

I'm surprised to hear of the report in the Debian bug that adding ip_vs
to /etc/modules does not work. @Thorsten, please could you try this? It's
a workaround and not a proper permanent fix, but knowing whether the
workaround works will help inform a proper fix.
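
For anyone wanting to try that workaround, a minimal sketch (the
modules-load.d file name below is illustrative, not a shipped file):

modprobe ip_vs                                 # load the module right now
echo ip_vs >> /etc/modules                     # classic persistent module list
# or, equivalently, on systemd-based releases:
echo ip_vs > /etc/modules-load.d/ip_vs.conf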

I also don't see any changes in packaging that would have caused this to
have been fixed in Bionic. I wonder if there is a change in keepalived
that landed after Xenial's version that fixes this.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1800159

Title:
  keepalived does not autoload the ip_vs kernel module when it is
  required

Status in keepalived package in Ubuntu:
  Triaged
Status in keepalived package in Debian:
  Unknown

Bug description:
  1) 
  Description:  Ubuntu 16.04.5 LTS
  Release:  16.04
  2) keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
Candidate: 1:1.2.24-1ubuntu0.16.04.1
Version table:
   *** 1:1.2.24-1ubuntu0.16.04.1 500
  500 http://ftp.hosteurope.de/mirror/archive.ubuntu.com 
xenial-updates/main amd64 Packages
  100 /var/lib/dpkg/status

  3) not loading the kernel module
  systemctl start keepalived.service
  Keepalived_healthcheckers[1680]: IPVS: Protocol not available
  Keepalived_healthcheckers[1680]: message repeated 8 times: [ IPVS: Protocol 
not available]
  ...

  4) loading the module manually 
  systemctl stop keepalived.service
  modprobe ip_vs
  kernel: [  445.363609] IPVS: ipvs loaded.
  systemctl start keepalived.service
  Keepalived_healthcheckers[5533]: Initializing ipvs
  kernel: [  600.828683] IPVS: [wlc] scheduler registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1800159/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1804069] Re: haproxy fails on arm64 due to alignment error

2018-11-22 Thread Robie Basak
** Tags added: bitesize server-next

** Also affects: haproxy (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: haproxy (Ubuntu Bionic)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1804069

Title:
  haproxy fails on arm64 due to alignment error

Status in haproxy package in Ubuntu:
  Triaged
Status in haproxy source package in Bionic:
  Triaged

Bug description:
  This fault was reported via the haproxy mailing list https://www.mail-
  archive.com/hapr...@formilux.org/msg31749.html

  And then patched in the haproxy code here
  
https://github.com/haproxy/haproxy/commit/52dabbc4fad338233c7f0c96f977a43f8f81452a

  Without this patch haproxy is not functional on aarch64/arm64.
  Experimental deployments of openstack-ansible on arm64 fail because of
  this bug, and without a fix applied to the ubuntu bionic packages we
  cannot proceed further as the openstack CI only installs the upstream
  ubuntu distribution packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1804069/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1800159] Re: keepalived ip_vs

2018-11-22 Thread Robie Basak
** Bug watch added: Debian Bug tracker #888747
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=888747

** Also affects: keepalived (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=888747
   Importance: Unknown
   Status: Unknown

** Summary changed:

- keepalived ip_vs
+ keepalived does not autoload the ip_vs kernel module when it is required

** Changed in: keepalived (Ubuntu)
   Status: New => Triaged

** Changed in: keepalived (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1800159

Title:
  keepalived does not autoload the ip_vs kernel module when it is
  required

Status in keepalived package in Ubuntu:
  Triaged
Status in keepalived package in Debian:
  Unknown

Bug description:
  1) 
  Description:  Ubuntu 16.04.5 LTS
  Release:  16.04
  2) keepalived:
Installed: 1:1.2.24-1ubuntu0.16.04.1
Candidate: 1:1.2.24-1ubuntu0.16.04.1
Version table:
   *** 1:1.2.24-1ubuntu0.16.04.1 500
  500 http://ftp.hosteurope.de/mirror/archive.ubuntu.com 
xenial-updates/main amd64 Packages
  100 /var/lib/dpkg/status

  3) not loading the kernel module
  systemctl start keepalived.service
  Keepalived_healthcheckers[1680]: IPVS: Protocol not available
  Keepalived_healthcheckers[1680]: message repeated 8 times: [ IPVS: Protocol 
not available]
  ...

  4) loading the module manually 
  systemctl stop keepalived.service
  modprobe ip_vs
  kernel: [  445.363609] IPVS: ipvs loaded.
  systemctl start keepalived.service
  Keepalived_healthcheckers[5533]: Initializing ipvs
  kernel: [  600.828683] IPVS: [wlc] scheduler registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1800159/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-01-03 Thread Robie Basak
** Tags added: regression-update

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1740892

Title:
  corosync upgrade on 2018-01-02 caused pacemaker to fail

Status in OpenStack hacluster charm:
  Invalid
Status in corosync package in Ubuntu:
  In Progress
Status in pacemaker package in Ubuntu:
  New

Bug description:
  During upgrades on 2018-01-02, corosync and its libs were upgraded:

  (from a trusty/mitaka cloud)

  Upgrade: libcmap4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  corosync:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcfg6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcpg4:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4), libquorum5:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libcorosync-common4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libsam4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libvotequorum6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libtotem-pg5:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4)

  During this process, it appears that pacemaker service is restarted
  and it errors:

  syslog:Jan  2 16:09:33 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-1-lxc-3[1001] - state is now lost (was member)
  syslog:Jan  2 16:09:34 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-1-lxc-3[1001] - state is now member (was lost)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
cfg_connection_destroy: Connection destroyed
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
pcmk_shutdown_worker: Shuting down Pacemaker
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: 
stop_child: Stopping crmd: Sent -15 to process 2050
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:error: 
mcp_cpg_destroy: Connection destroyed

  
  Also affected xenial/ocata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1740892/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1204542] Re: Feature request for init script: status function

2016-11-07 Thread Robie Basak
Marking Incomplete. See my comment 5. If this is still an issue, please
explain and set the bug status back to New. Thanks!

** Changed in: keepalived (Ubuntu)
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1204542

Title:
  Feature request for init script: status function

Status in keepalived package in Ubuntu:
  Incomplete

Bug description:
  In order to have configuration management tools like salt manage the
  keepalived service, the init script has to have a 'status' function.
  I've made a basic 'status' function patch for /etc/init.d/keepalived
  based on keepalived 1:1.2.2-3ubuntu1 on Ubuntu 12.04.2 LTS. See
  attached patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1204542/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1628747] Re: package haproxy 1.6.3-1ubuntu0.1 failed to install/upgrade: サブプロセス インストール済みの post-installation スクリプト はエラー終了ステータス 1 を返しました

2016-10-03 Thread Robie Basak
Please do not change the bug status away from Incomplete until you have
responded to the questions Christian raised in comment 2.

** Changed in: haproxy (Ubuntu)
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1628747

Title:
  package haproxy 1.6.3-1ubuntu0.1 failed to install/upgrade: サブプロセス
  インストール済みの post-installation スクリプト はエラー終了ステータス 1 を返しました

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  at haproxy install
  $ sudo apt install haproxy

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: haproxy 1.6.3-1ubuntu0.1
  ProcVersionSignature: Ubuntu 4.4.0-38.57-generic 4.4.19
  Uname: Linux 4.4.0-38-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.1
  AptOrdering:
   liblua5.3-0: Install
   haproxy: Install
   liblua5.3-0: Configure
   haproxy: Configure
   NULL: ConfigurePending
  Architecture: amd64
  Date: Thu Sep 29 11:21:28 2016
  ErrorMessage: サブプロセス インストール済みの post-installation スクリプト はエラー終了ステータス 1 を返しました
  InstallationDate: Installed on 2015-10-23 (341 days ago)
  InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1.1
   apt  1.2.12~ubuntu16.04.1
  SourcePackage: haproxy
  Title: package haproxy 1.6.3-1ubuntu0.1 failed to install/upgrade: サブプロセス 
インストール済みの post-installation スクリプト はエラー終了ステータス 1 を返しました
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.haproxy.haproxy.cfg: 2016-01-09T20:20:32.739917

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1628747/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1583503] Re: keepalived fails to start when PID file is empty

2016-09-16 Thread Robie Basak
** Changed in: keepalived (Ubuntu Xenial)
   Status: Triaged => Incomplete

** Changed in: keepalived (Ubuntu Trusty)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1583503

Title:
  keepalived fails to start when PID file is empty

Status in neutron:
  Fix Released
Status in keepalived package in Ubuntu:
  Fix Released
Status in keepalived source package in Trusty:
  Incomplete
Status in keepalived source package in Xenial:
  Incomplete

Bug description:
  After a crash of a network node, we were left with empty PID files for
  some keepalived processes:

   root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  -rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

  This causes the L3 agent to log the following errors repeating every
  minute:

  2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
  2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process 
[-] Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

  and the keepalived process fails to start. As a result, the routers
  hosted by this agent are non-functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583503/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1583503] Re: keepalived fails to start when PID file is empty

2016-09-13 Thread Robie Basak
** Changed in: keepalived (Ubuntu Xenial)
 Assignee: (unassigned) => Joshua Powers (powersj)

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1583503

Title:
  keepalived fails to start when PID file is empty

Status in neutron:
  Fix Released
Status in keepalived package in Ubuntu:
  Fix Released
Status in keepalived source package in Trusty:
  Triaged
Status in keepalived source package in Xenial:
  Triaged

Bug description:
  After a crash of a network node, we were left with empty PID files for
  some keepalived processes:

   root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  -rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

  This causes the L3 agent to log the following errors repeating every
  minute:

  2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
  2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process 
[-] Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

  and the keepalived process fails to start. As a result, the routers
  hosted by this agent are non-functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583503/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1595901] Re: Missing dependency on dbus

2016-06-29 Thread Robie Basak
** Changed in: pacemaker (Ubuntu)
   Importance: Undecided => High

** Tags added: bitesize server-next

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1595901

Title:
  Missing dependency on dbus

Status in pacemaker package in Ubuntu:
  New

Bug description:
  The systemd unit pacemaker.service depends on dbus.service, but the
  package has no Dependency on the dbus package, so on a system without
  dbus you get:

  Setting up pacemaker (1.1.14-2ubuntu1) ...
  Installing new version of config file /etc/init.d/pacemaker ...
  insserv: warning: current start runlevel(s) (2 3 4 5) of script `pacemaker' 
overrides LSB defaults (empty).
  insserv: warning: current stop runlevel(s) (0 1 6) of script `pacemaker' 
overrides LSB defaults (empty).
  Failed to start pacemaker.service: Unit dbus.service not found.
  invoke-rc.d: initscript pacemaker, action "start" failed.
  dpkg: error processing package pacemaker (--configure):
   subprocess installed post-installation script returned error exit status 5
  dpkg: dependency problems prevent configuration of pacemaker-cli-utils:
   pacemaker-cli-utils depends on pacemaker | pacemaker-remote; however:
Package pacemaker is not configured yet.
Package pacemaker-remote is not installed.

  Cheers
  Wolfgang
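
  A sketch of the workaround implied by the error above, installing dbus
  first and then finishing the interrupted configuration (assumes apt and
  dpkg are usable; the proper fix would be a dependency on dbus in the
  pacemaker packaging):

  apt-get install dbus
  dpkg --configure -a    # finish configuring pacemaker and its dependents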

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1595901/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1590208] Re: needs a no-change rebuild against openhpi

2016-06-09 Thread Robie Basak
Uploaded, thanks.

** Changed in: cluster-glue (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to cluster-glue in Ubuntu.
https://bugs.launchpad.net/bugs/1590208

Title:
  needs a no-change rebuild against openhpi

Status in cluster-glue package in Ubuntu:
  Fix Released

Bug description:
  There's a new version of openhpi sitting in -proposed that, due to an
  ABI change, ships libopenhpi3 to replace libopenhpi2. This package
  needs a no-change rebuild against it, since it currently depends on
  libopenhpi2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cluster-glue/+bug/1590208/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1524635] Re: haproxy syslog configuration causes double logging

2016-06-07 Thread Robie Basak
** Also affects: haproxy (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: haproxy (Ubuntu Trusty)
   Status: New => Triaged

** Changed in: haproxy (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: haproxy (Ubuntu Trusty)
 Assignee: (unassigned) => Nish Aravamudan (nacc)

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1524635

Title:
  haproxy syslog configuration causes double logging

Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Trusty:
  Triaged

Bug description:
  The current rsyslog configuration, as shipped by the haproxy and rsyslog
  packages, causes double logging to occur.

  Steps to Reproduce:
  1) Install haproxy via whatever normal means (apt-get etc)
  2) Configure it to listen on at least one port, even just the stats port
  3) Visit the URL configured

  You'll see logs generated in both /var/log/syslog (via
  /etc/rsyslog.d/50-default.conf) and /var/log/haproxy.log (via
  /etc/rsyslog.d/haproxy.conf).

  Steps to fix:
  1) mv /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf  # This could be any 
number less than 50, to have it read before the default.conf
  2) Restart rsyslog.
  3) Access the provided service

  This will cause the entries to be written out to only
  /var/log/haproxy.log.
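
  Spelled out as commands, a sketch of the same steps (nothing beyond what
  is described above; assumes an upstart/sysvinit system such as trusty):

  mv /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf
  service rsyslog restart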

  The testing was done on a Ubuntu 14.04 server (trusty) with haproxy
  1.4.24-2ubuntu0.3 installed:

  $ lsb_release -rd
  Description:  Ubuntu 14.04.3 LTS
  Release:  14.04

  $ dpkg-query -W haproxy
  haproxy   1.4.24-2ubuntu0.3

  Please let me know if you have any further questions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1524635/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1524635] Re: haproxy syslog configuration causes double logging

2016-05-24 Thread Robie Basak
** Changed in: haproxy (Ubuntu)
 Assignee: (unassigned) => Nish Aravamudan (nacc)

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1524635

Title:
  haproxy syslog configuration causes double logging

Status in haproxy package in Ubuntu:
  Confirmed

Bug description:
  The current rsyslog configuration, as shipped by the haproxy and rsyslog
  packages, causes double logging to occur.

  Steps to Reproduce:
  1) Install haproxy via whatever normal means (apt-get etc)
  2) Configure it to listen on at least one port, even just the stats port
  3) Visit the URL configured

  You'll see logs generated in both /var/log/syslog (via
  /etc/rsyslog.d/50-default.conf) and /var/log/haproxy.log (via
  /etc/rsyslog.d/haproxy.conf).

  Steps to fix:
  1) mv /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf  # This could be any 
number less than 50, to have it read before the default.conf
  2) Restart rsyslog.
  3) Access the provided service

  This will cause the entries to be written out to only
  /var/log/haproxy.log.

  The testing was done on a Ubuntu 14.04 server (trusty) with haproxy
  1.4.24-2ubuntu0.3 installed:

  $ lsb_release -rd
  Description:  Ubuntu 14.04.3 LTS
  Release:  14.04

  $ dpkg-query -W haproxy
  haproxy   1.4.24-2ubuntu0.3

  Please let me know if you have any further questions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1524635/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1583503] Re: keepalived fails to start when PID file is empty

2016-05-19 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

Yakkety has 1.2.20, so this is Fix Released. You asked for a backport
but you haven't said what package version you're using, what release you
want the fix backported to or in which releases you have confirmed the
bug to exist. Is it Trusty, Xenial, both or other? For each one, please
can you state whether you have confirmed the bug to exist?

** Changed in: keepalived (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1583503

Title:
  keepalived fails to start when PID file is empty

Status in neutron:
  New
Status in keepalived package in Ubuntu:
  Fix Released

Bug description:
  After a crash of a network node, we were left with empty PID files for
  some keepalived processes:

   root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  -rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

  This causes the L3 agent to log the following errors repeating every
  minute:

  2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
  2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process 
[-] Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

  and the keepalived process fails to start. As a result, the routers
  hosted by this agent are non-functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583503/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1531322] Re: package haproxy 1.5.15-1ppa1~wily failed to install/upgrade: subprocess installed post-installation script returned error exit status 1

2016-01-20 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

It looks like you're using a version of haproxy from a PPA, rather than
the package shipped with Ubuntu? In this case we can't treat this as a
bug in Ubuntu, so I'm marking this bug filed against haproxy in Ubuntu
as Invalid.

You may be able to find pointers to get help for this sort of problem
here: http://www.ubuntu.com/support/community

Or if you believe that this is really a bug in Ubuntu, then you may find
it helpful to read "How to report bugs effectively"
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html. We'd be grateful
if you would then provide a more complete description of the problem,
explain why you believe this is a bug in Ubuntu rather than a problem
specific to your system, and then change the bug status back to New.

** Changed in: haproxy (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1531322

Title:
  package haproxy 1.5.15-1ppa1~wily failed to install/upgrade:
  subprocess installed post-installation script returned error exit
  status 1

Status in haproxy package in Ubuntu:
  Invalid

Bug description:
  Downgrading haproxy from 1.6 to 1.5.15 reported a failure on reboot of the
  machine, but in fact it worked.

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: haproxy 1.5.15-1ppa1~wily
  ProcVersionSignature: Ubuntu 3.16.0-57.77~14.04.1-generic 3.16.7-ckt20
  Uname: Linux 3.16.0-57-generic x86_64
  ApportVersion: 2.19.3-0ubuntu2
  Architecture: amd64
  Date: Tue Jan  5 09:55:53 2016
  DuplicateSignature:
   Installing new version of config file /etc/init.d/haproxy ...
   Job for haproxy.service failed because the control process exited with error 
code. See "systemctl status haproxy.service" and "journalctl -xe" for details.
   invoke-rc.d: initscript haproxy, action "start" failed.
   dpkg: error processing package haproxy (--configure):
subprocess installed post-installation script returned error exit status 1
  ErrorMessage: subprocess installed post-installation script returned error 
exit status 1
  InstallationDate: Installed on 2015-07-08 (181 days ago)
  InstallationMedia: Ubuntu 14.04.2 LTS "Trusty Tahr" - Release amd64 
(20150218.1)
  RelatedPackageVersions:
   dpkg 1.18.3ubuntu1
   apt  1.1.10
  SourcePackage: haproxy
  Title: package haproxy 1.5.15-1ppa1~wily failed to install/upgrade: 
subprocess installed post-installation script returned error exit status 1
  UpgradeStatus: Upgraded to xenial on 2015-12-22 (14 days ago)
  modified.conffile..etc.haproxy.haproxy.cfg: [modified]
  mtime.conffile..etc.haproxy.haproxy.cfg: 2016-01-05T11:04:22.198685

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1531322/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1204542] Re: Feature request for init script: status function

2016-01-20 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

It looks like keepalived has a systemd service definition file now, and
Xenial uses systemd. So presumably this is moot now, as on Ubuntu salt
(I presume) will be able to use systemd's service status?
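
If so, a quick sketch of what a configuration management tool could use
instead of an init script status action (assuming the unit is named
keepalived.service):

systemctl status keepalived      # human-readable status
systemctl is-active keepalived   # exits 0 when running, handy for scripts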

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1204542

Title:
  Feature request for init script: status function

Status in keepalived package in Ubuntu:
  Confirmed

Bug description:
  In order to have configuration management tools like salt manage the
  keepalived service, the init script has to have a 'status' function.
  I've made a basic 'status' function patch for /etc/init.d/keepalived
  based on keepalived 1:1.2.2-3ubuntu1 on Ubuntu 12.04.2 LTS. See
  attached patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1204542/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1397278] Re: Fencing is not reported to DLM

2016-01-07 Thread Robie Basak
Thanks. Since this is then presumed fixed in Xenial which is at
1.1.12-0ubuntu3, I'm marking this bug as Fix Released. If you would like
this fix backported to 14.04 then someone needs to prepare a backport as
described in https://wiki.ubuntu.com/StableReleaseUpdates#Procedure.
I'll leave a task open to reflect this.

** Changed in: pacemaker (Ubuntu)
   Status: Triaged => Fix Released

** Also affects: pacemaker (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: pacemaker (Ubuntu Trusty)
   Status: New => Triaged

** Changed in: pacemaker (Ubuntu Trusty)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1397278

Title:
  Fencing is not reported to DLM

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Trusty:
  Triaged

Bug description:
  Hello,

  When dlm_controld requests the fencing of a node, the node is fenced
  but dlm_controld is not notified of the fence result.

  All the informations are reported on pacemaker mailing-list
  http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/023082.html.

  There is 3 patches available to fix this issue:
  http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/023151.html

  * David Vossel (9 months ago) 054fedf: Fix: stonith_api_time_helper now 
returns when the most recent fencing operation completed  (origin/pr/444)
  * Andrew Beekhof (9 months ago) d9921e5: Fix: Fencing: Pass the correct 
options when looking up the history by node name 
  * Andrew Beekhof (9 months ago) b0a8876: Log: Fencing: Send details of 
stonith_api_time() and stonith_api_kick() to syslog 

  Regards.

  pacemaker:
Installé : 1.1.10+git20130802-1ubuntu2.1
Candidat : 1.1.10+git20130802-1ubuntu2.1
   Table de version :
   *** 1.1.10+git20130802-1ubuntu2.1 0
  500 http://fr.archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   1.1.10+git20130802-1ubuntu2 0
  500 http://fr.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1397278/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1524635] Re: haproxy syslog configuration causes double logging

2015-12-14 Thread Robie Basak
** Changed in: haproxy (Ubuntu)
   Importance: Undecided => Medium

** Tags added: bitesize

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1524635

Title:
  haproxy syslog configuration causes double logging

Status in haproxy package in Ubuntu:
  Confirmed

Bug description:
  The current rsyslog configuration, as shipped by the haproxy and rsyslog
  packages, causes double logging to occur.

  Steps to Reproduce:
  1) Install haproxy via whatever normal means (apt-get etc)
  2) Configure it to listen on at least one port, even just the stats port
  3) Visit the URL configured

  You'll see logs generated in both /var/log/syslog (via
  /etc/rsyslog.d/50-default.conf) and /var/log/haproxy.log (via
  /etc/rsyslog.d/haproxy.conf).

  Steps to fix:
  1) mv /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf  # This could be any 
number less than 50, to have it read before the default.conf
  2) Restart rsyslog.
  3) Access the provided service

  This will cause the entries to be written out to only
  /var/log/haproxy.log.

  The testing was done on a Ubuntu 14.04 server (trusty) with haproxy
  1.4.24-2ubuntu0.3 installed:

  $ lsb_release -rd
  Description:  Ubuntu 14.04.3 LTS
  Release:  14.04

  $ dpkg-query -W haproxy
  haproxy   1.4.24-2ubuntu0.3

  Please let me know if you have any further questions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1524635/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1334958] Re: package haproxy 1.4.24-2 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1

2015-12-04 Thread Robie Basak
Thanks, marking this as Invalid based on Mark's reply.

** Changed in: haproxy (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1334958

Title:
  package haproxy 1.4.24-2 failed to install/upgrade: ErrorMessage:
  subprocess installed post-installation script returned error exit
  status 1

Status in haproxy package in Ubuntu:
  Invalid

Bug description:
  Doing release upgrade from 12.04 to 14.04.

  ProblemType: Package
  DistroRelease: Ubuntu 14.04
  Package: haproxy 1.4.24-2
  ProcVersionSignature: Ubuntu 3.2.0-36.57-generic 3.2.35
  Uname: Linux 3.2.0-36-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.2
  Architecture: amd64
  Date: Fri Jun 27 15:36:08 2014
  DuplicateSignature: package:haproxy:1.4.24-2:ErrorMessage: subprocess 
installed post-installation script returned error exit status 1
  ErrorMessage: ErrorMessage: subprocess installed post-installation script 
returned error exit status 1
  InstallationDate: Installed on 2012-10-02 (633 days ago)
  InstallationMedia: Ubuntu-Server 11.10 "Oneiric Ocelot" - Release amd64 
(20111011)
  SourcePackage: haproxy
  Title: package haproxy 1.4.24-2 failed to install/upgrade: ErrorMessage: 
subprocess installed post-installation script returned error exit status 1
  UpgradeStatus: Upgraded to trusty on 2014-06-27 (0 days ago)
  modified.conffile..etc.default.haproxy:
   # Set ENABLED to 1 if you want the init script to start haproxy.
   ENABLED=1
   # Add extra flags here.
   #EXTRAOPTS="-de -m 16"
  mtime.conffile..etc.default.haproxy: 2012-10-02T12:34:46.10
  mtime.conffile..etc.haproxy.haproxy.cfg: 2014-06-27T14:46:05.313880

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1334958/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1308314] Re: incorrect balancing in weight value 256

2015-12-04 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

By "make sure that below patch has been used on above environment" do
you mean that you are running a patched version of haproxy, or are you
proposing that this patch fixes the bug that you are reporting?

Is there a specific patch available upstream that fixes this bug, and if
so, is it released upstream and, if so, in which version? Otherwise, can
someone report this bug upstream if it still applies please?

Once answered, please change the bug status back to New.

** Changed in: haproxy (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1308314

Title:
  incorrect balancing in weight value 256

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  HAProxy cannot work correctly with weight 256: the node with weight 256
  either takes all the traffic or takes no traffic:

  1-1 weight: web1:256, web2:1, web3:1 (incorrect case.)
  (1024 accesses * 3 times to vip)
  result 1
  web1: 1024
  web2: 0
  web3: 0
  result 2
  web1: 1024
  web2: 0
  web3: 0
  result 3
  web1 counted: 1024
  web2 counted: 0
  web3 counted: 0

  1-2 weight: web1:255, web2:1, web3:1 (correct case.)
  (1024 accesses * 3 times to vip)
  result 1
  web1 counted: 1017
  web2 counted: 4
  web3 counted: 3
  result 2
  web1 counted: 1016
  web2 counted: 4
  web3 counted: 4
  result 3
  web1 counted: 1016
  web2 counted: 4
  web3 counted: 4

  2-1 web1:256, web2:128, web3:128 (incorrect case.)
  (256 accesses * 3 times to vip)
  result 1
  web1 counted: 1
  web2 counted: 128
  web3 counted: 127
  result 2
  web1 counted: 0
  web2 counted: 128
  web3 counted: 128
  result 3
  web1 counted: 0
  web2 counted: 128
  web3 counted: 128

  2-2 web1:255, web2:128, web3:128(correct case.)
  (256 accesses * 3 times to vip)
  result 1
  web1 counted: 128
  web2 counted: 64
  web3 counted: 64
  result 2
  web1 counted: 128
  web2 counted: 64
  web3 counted: 64
  result 3
  web1 counted: 128
  web2 counted: 64
  web3 counted: 64

  From the haproxy aspect, they have fixed a bug of "Roundrobin can not
  work well when a server's weight is 256" , make sure that below patch
  has been used on above environment.

Mail of "Roundrobin can not work well when a server's weight is 256"
https://www.mail-archive.com/haproxy@formilux.org/msg10613.html
Patch:

https://dev.openwrt.org/browser/packages/net/haproxy/patches/0005-BUG-MEDIUM-server-set-the-macro-for-server-s-max-wei.patch

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1308314/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1339801] Re: Illegal Instruction crash on startup with spread_checks in src/checks.c

2015-12-04 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

This seems to be an upstream bug; if that can be confirmed, it should be
reported there, and we can backport any fix to Ubuntu as required.

However Lucid is EOL now. If this bug still affects a supported release,
please change the bug status back to New.

** Tags added: needs-upstream-report

** Changed in: haproxy (Ubuntu)
   Status: New => Incomplete

** Tags removed: needs-upstream-report
** Tags added: server-triage-failure

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1339801

Title:
  Illegal Instruction crash on startup with spread_checks in
  src/checks.c

Status in haproxy package in Ubuntu:
  Incomplete

Bug description:
  haproxy crashes after starting up successfully, when the spread_checks
  option is provided.

  According to GDB, here is the backtrace of the crash:
  ```
  Program received signal SIGILL, Illegal instruction.
  0x0042b97f in process_chk (t=0x6ef7c0) at src/checks.c:1587
  1587  src/checks.c: No such file or directory.
in src/checks.c
  (gdb) bt
  #0  0x0042b97f in process_chk (t=0x6ef7c0) at src/checks.c:1587
  #1  0x004103ee in process_runnable_tasks (next=0x7fffe4cc) at 
src/task.c:240
  #2  0x00406440 in run_poll_loop () at src/haproxy.c:1304
  #3  0x00408966 in main (argc=, 
argv=0x7fffe6f8) at src/haproxy.c:1638
  ```

  it's caused by this code:
  ```
if (global.spread_checks > 0) {
rv = srv_getinter(check) * global.spread_checks / 100;
rv -= (int) (2 * rv * (rand() / (RAND_MAX + 1.0)));
}
  ```
  on line 1587  of src/checks.c (the second line in the if clause).

  We're running haproxy 1.5.0

  > lsb_release -rd
  Description:  Ubuntu 10.04.4 LTS
  Release:  10.04

  > uname -a
  Linux 789b4b1b-6f7b-44cf-accc-88d90341f17a 3.8.0-29-generic 
#42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 GNU/Linux

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1339801/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1494141] Re: HAProxy 1.5 init script does not terminate processes

2015-09-17 Thread Robie Basak
Louis,

I can't sponsor your debdiff into backports, but be careful of ordering
issues in your patch. clean() should be defined before the trap is set,
and tmp should be defined before any point that clean() could be called.
In general you should quote "$tmp" as well in case it ends up with
spaces (eg. if $TMPDIR has a space in it).
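
A minimal sketch of the ordering and quoting being described (illustrative
only, not the actual patch):

tmp="$(mktemp)" || exit 1        # define tmp before anything could call clean()
clean() { rm -f "$tmp"; }        # define clean() before the trap references it
trap clean EXIT
# ... use "$tmp" (quoted) from here on ...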

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1494141

Title:
  HAProxy 1.5 init script does not terminate processes

Status in trusty-backports:
  In Progress
Status in haproxy package in Ubuntu:
  Fix Released
Status in haproxy source package in Trusty:
  Invalid

Bug description:
  On a new installation of Ubuntu 14.04.3 LTS I installed HAProxy 1.5
  from trusty-backports (1.5.4-1ubuntu2.1~ubuntu14.04.1).

  When I restarted HAProxy, I got random HTTP 503 although the backend
  servers were all working fine. By checking netstat, I saw that HAProxy
  was listening multiple times on the frontend ports.

  It seems that the init script shipped with the installation does not
  work correctly: the processes are not terminated when using
  stop (or restart, for that matter).

  Only with a kill was I able to correctly terminate the HAProxy
  processes.

  The following output should show more clarity:

  root@mylinux:~# netstat -lntp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
  PID/Program name
  tcp0  0 0.0.0.0:80800.0.0.0:*   LISTEN
  4653/haproxy
  tcp0  0 0.0.0.0:80800.0.0.0:*   LISTEN
  4221/haproxy
  tcp0  0 0.0.0.0:80  0.0.0.0:*   LISTEN
  956/nginx
  tcp0  0 0.0.0.0:22  0.0.0.0:*   LISTEN
  855/sshd
  tcp0  0 0.0.0.0:80900.0.0.0:*   LISTEN
  4653/haproxy
  tcp0  0 0.0.0.0:80900.0.0.0:*   LISTEN
  4221/haproxy
  tcp0  0 0.0.0.0:80990.0.0.0:*   LISTEN
  4653/haproxy
  tcp0  0 0.0.0.0:80990.0.0.0:*   LISTEN
  4221/haproxy
  tcp6   0  0 :::22   :::*LISTEN
  855/sshd

  root@mylinux:~# service haproxy stop
   * Stopping haproxy haproxy   
[ OK ]

  root@mylinux:~# service haproxy status
  haproxy not running.

  root@mylinux:~# netstat -lntp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
  PID/Program name
  tcp0  0 0.0.0.0:80800.0.0.0:*   LISTEN
  4653/haproxy
  tcp0  0 0.0.0.0:80800.0.0.0:*   LISTEN
  4221/haproxy
  tcp0  0 0.0.0.0:80  0.0.0.0:*   LISTEN
  956/nginx
  tcp0  0 0.0.0.0:22  0.0.0.0:*   LISTEN
  855/sshd
  tcp0  0 0.0.0.0:80900.0.0.0:*   LISTEN
  4653/haproxy
  tcp0  0 0.0.0.0:80900.0.0.0:*   LISTEN
  4221/haproxy
  tcp0  0 0.0.0.0:80990.0.0.0:*   LISTEN
  4653/haproxy
  tcp0  0 0.0.0.0:80990.0.0.0:*   LISTEN
  4221/haproxy
  tcp6   0  0 :::22   :::*LISTEN
  855/sshd

  root@mylinux:~# killall haproxy

  root@mylinux:~# netstat -lntp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
  PID/Program name
  tcp0  0 0.0.0.0:80  0.0.0.0:*   LISTEN
  956/nginx
  tcp0  0 0.0.0.0:22  0.0.0.0:*   LISTEN
  855/sshd
  tcp6   0  0 :::22   :::*LISTEN
  855/sshd

  root@mylinux:~# service haproxy start
   * Starting haproxy haproxy   
[ OK ]

  root@mylinux:~# netstat -lntp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
  PID/Program name
  tcp0  0 0.0.0.0:80800.0.0.0:*   LISTEN
  8205/haproxy
  tcp0  0 0.0.0.0:80  0.0.0.0:*   LISTEN
  956/nginx
  tcp0  0 0.0.0.0:22  0.0.0.0:*   LISTEN
  855/sshd
  tcp0  0 0.0.0.0:80900.0.0.0:*   LISTEN
  8205/haproxy
  tcp0  0 0.0.0.0:80990.0.0.0:*   LISTEN
  8205/haproxy
  tcp6   0  

[Ubuntu-ha] [Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-07 Thread Robie Basak
Please could you complete the bug report from the perspective of the
pacemaker task? I don't think there's enough here to go on, for example
in a way that upstream or the Debian maintainer would understand.

** Changed in: pacemaker (Ubuntu)
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1490727

Title:
  "Invalid IPC credentials" after corosync, pacemaker service restarts

Status in pacemaker package in Ubuntu:
  Incomplete
Status in hacluster package in Juju Charms Collection:
  In Progress

Bug description:
  Followup from 
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1439649,
  see there comments #14, #15 as it _maybe_ related to missing uidgid ACLs for
  hacluster:haclient (as apparently presented by pacemaker).

  FYI you can find relevant IPC resources with:
  $ find /run/shm -user hacluster -group haclient -ls

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1490727/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1481737] Re: HAProxy init script is not working properly

2015-09-07 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

We're in sync with Debian on the haproxy package so this bug should be
reported there. For Ubuntu, managing haproxy with systemd units would
presumably be easiest; this could also be contributed to Debian.

Workaround: change /etc/init.d/haproxy as you described. Importance ->
Medium since a workaround is available.

** Tags added: needs-upstream-report

** Summary changed:

- HAProxy init script is not working properly
+ HAProxy init script does not work correctly with nbproc configuration option

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1481737

Title:
  HAProxy init script does not work correctly with nbproc configuration
  option

Status in haproxy package in Ubuntu:
  Confirmed

Bug description:
  If you have more than one process enabled in the haproxy config, the 
init script fails to stop all of those processes.
  To clarify, you need to have this in your haproxy.cfg:

  """
  ...
  global
  maxconn 32000
  ulimit-n65536
  userhaproxy
  group   haproxy
  nbproc  2

  ...
  """

  The problem is more visible if you set nbproc to a higher number.

  service haproxy stop --> will only stop one of the haproxy processes.

  The problem is that start-stop-daemon can't handle PID files with multiple 
lines and only stops the first one. HAProxy writes all the PIDs it starts 
into the PID file, so the problem is not in HAProxy but in start-stop-daemon, 
or more likely in the haproxy init script.
  One workaround is to remove the pidfile option of start-stop-daemon 
in the init script; then it will work like killall and will stop the haproxy 
processes properly.

  To prove it, you can try this sequence:

  service haproxy start #if its not running
  ps ax | grep haproxy | grep -v grep | wc -l #this should report 2
  service haproxy restart
  ps ax | grep haproxy | grep -v grep | wc -l #this will report 3

  The workaround as a diff:
  root@ubi1:/opt# diff /etc/init.d/haproxy /etc/init.d/haproxy.orig 
  62c62
  <   --retry 5 --exec $HAPROXY || ret=$?
  ---
  >   --retry 5 --pidfile $PIDFILE --exec $HAPROXY || ret=$?
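
  For comparison, another way around the single-pid limitation would be to stop
  every pid listed in the pid file explicitly. This is only a sketch (it reuses
  the $PIDFILE variable from the init script and is not the change that was
  shipped in any package):

  # Stop each haproxy process recorded in the pid file, one pid per line.
  if [ -f "$PIDFILE" ]; then
      while read -r pid; do
          kill "$pid" 2>/dev/null || true
      done < "$PIDFILE"
      rm -f "$PIDFILE"
  fi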

  Extra info:
  root@ubi1:/opt# lsb_release -rd
  Description:    Ubuntu 14.04.3 LTS
  Release:        14.04

  root@ubi1:/opt# apt-cache policy haproxy
  haproxy:
Installed: 1.4.24-2ubuntu0.2
Candidate: 1.4.24-2ubuntu0.2
Version table:
   1.5.3-1~ubuntu14.04.1 0
  100 http://us.archive.ubuntu.com/ubuntu/ trusty-backports/main amd64 Packages
   *** 1.4.24-2ubuntu0.2 0
  500 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages
  100 /var/lib/dpkg/status
   1.4.24-2 0
  500 http://us.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1481737/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1479661] Re: hacluster install hook fails on Vivid and Wily (pacemaker /var/lib/heartbeat home dir ownership issue)

2015-09-05 Thread Robie Basak
*** This bug is a duplicate of bug 1488453 ***
https://bugs.launchpad.net/bugs/1488453

** This bug has been marked a duplicate of bug 1488453
   Package postinst always fail on first install when using systemd

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1479661

Title:
  hacluster install hook fails on Vivid and Wily (pacemaker
  /var/lib/heartbeat home dir ownership issue)

Status in pacemaker package in Ubuntu:
  New
Status in hacluster package in Juju Charms Collection:
  Triaged

Bug description:
  pacemaker and/or openhpid in Vivid and Wily may need love.

  This occurs when manually installing on fresh Wily and Vivid instances:
  ...
  Setting up pacemaker (1.1.12-0ubuntu2) ...
  Adding group `haclient' (GID 119) ...
  Done.
  Warning: The home dir /var/lib/heartbeat you specified already exists.
  Adding system user `hacluster' (UID 111) ...
  Adding new user `hacluster' (UID 111) with group `haclient' ...
  The home directory `/var/lib/heartbeat' already exists. Not copying from 
`/etc/skel'.
  adduser: Warning: The home directory `/var/lib/heartbeat' does not belong to 
the user you are currently creating.

  
  # Observation from the charm perspective:
  http://paste.ubuntu.com/11964818/

  2015-07-30 07:27:14 INFO install Setting up openhpid (2.14.1-1.3ubuntu2) ...
  2015-07-30 07:27:15 INFO install Job for openhpid.service failed. See 
"systemctl status openhpid.service" and "journalctl -xe" for details.
  2015-07-30 07:27:15 INFO install invoke-rc.d: initscript openhpid, action 
"start" failed.
  2015-07-30 07:27:15 INFO install dpkg: error processing package openhpid 
(--configure):
  2015-07-30 07:27:15 INFO install  subprocess installed post-installation 
script returned error exit status 1
  2015-07-30 07:27:15 INFO install Errors were encountered while processing:
  2015-07-30 07:27:15 INFO install  openhpid
  2015-07-30 07:27:15 INFO install E: Sub-process /usr/bin/dpkg returned an 
error code (1)
  2015-07-30 07:27:15 INFO install Traceback (most recent call last):
  2015-07-30 07:27:15 INFO install   File 
"/var/lib/juju/agents/unit-hacluster-2/charm/hooks/install", line 405, in <module>
  2015-07-30 07:27:15 INFO install hooks.execute(sys.argv)
  2015-07-30 07:27:15 INFO install   File 
"/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/core/hookenv.py",
 line 557, in execute
  2015-07-30 07:27:15 INFO install self._hooks[hook_name]()
  2015-07-30 07:27:15 INFO install   File 
"/var/lib/juju/agents/unit-hacluster-2/charm/hooks/install", line 87, in install
  2015-07-30 07:27:15 INFO install 
apt_install(filter_installed_packages(PACKAGES), fatal=True)
  2015-07-30 07:27:15 INFO install   File 
"/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/fetch/__init__.py",
 line 183, in apt_install
  2015-07-30 07:27:15 INFO install _run_apt_command(cmd, fatal)
  2015-07-30 07:27:15 INFO install   File 
"/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/fetch/__init__.py",
 line 428, in _run_apt_command
  2015-07-30 07:27:15 INFO install result = subprocess.check_call(cmd, 
env=env)
  2015-07-30 07:27:15 INFO install   File "/usr/lib/python2.7/subprocess.py", 
line 540, in check_call
  2015-07-30 07:27:15 INFO install raise CalledProcessError(retcode, cmd)
  2015-07-30 07:27:15 INFO install subprocess.CalledProcessError: Command 
'['apt-get', '--assume-yes', '--option=Dpkg::Options::=--force-confold', 
'install', 'corosync', 'pacemaker', 'ipmitool', 'libnagios-plugin-perl']' 
returned non-zero exit status 100
  2015-07-30 07:27:15 INFO juju.worker.uniter.context context.go:543 handling 
reboot
  2015-07-30 07:27:15 ERROR juju.worker.uniter.operation runhook.go:103 hook 
"install" failed: exit status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1479661/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1481337] Re: please update keepalived to version 1.2.17 or higher

2015-08-11 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

In Ubuntu we do not update stable releases with new upstream versions in
order to keep stable releases stable, and keepalived in Ubuntu does not
currently have an exception on this. See
https://wiki.ubuntu.com/StableReleaseUpdates for the policy and
rationale.

We can still fix specific issues by backporting their fixes from newer
upstream releases, so I'll turn this bug into one to track the specific
issue you're facing.

We can have a separate bug to track the update of keepalived in the
development release of Ubuntu (currently Wily) to a newer upstream
release, but it doesn't sound like that'll help you since your focus is
currently on 14.04.

** Summary changed:

- please update keepalived to version 1.2.17 or higher
+ keepalived makes a floating IP available on more than one host after 
configuration reload

** Tags added: server-next

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1481337

Title:
  keepalived makes a floating IP available on more than one host after
  configuration reload

Status in keepalived package in Ubuntu:
  Confirmed
Status in keepalived source package in Trusty:
  Confirmed
Status in keepalived source package in Vivid:
  Confirmed
Status in keepalived source package in Wily:
  Confirmed

Bug description:
  The version in 14.04 is 1:1.2.7-1ubuntu1

  There's an issue with reloading the configuration file, where the
  state of VRRP gets 'confused' (in lack of a better description),
  resulting in a floating IP being available on more than one host.

  This issue seems to be fixed in 1.2.17. Since the package hasn't had
  an update in over 2 years, I kindly request it to be updated to at
  least 1.2.17

  For more information, see http://www.keepalived.org/changelog.html

  Extra info:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or System ->
About Ubuntu

  Description:  Ubuntu 14.04.2 LTS
  Release:  14.04

  2) The version of the package you are using, via 'apt-cache policy
  pkgname' or by checking in Software Center

  1:1.2.7-1ubuntu1

  3) What you expected to happen

  Keepalived config reloaded without interruption of services. VRRP
  should notice VIP being present on one host and do nothing.

  4) What happened instead

  Keepalived config was reloaded, but VRRP decided to activate the VIP on
  the host that didn't have it previously, resulting in routing errors,
  etc.
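
  For reference, a minimal VRRP instance of the kind affected would look roughly
  like the sketch below (interface, router id and address are illustrative only,
  not taken from the reporter's setup); the failure is that after a reload the
  virtual_ipaddress can end up active on more than one host at once:

  vrrp_instance VI_1 {
      state MASTER            # BACKUP with a lower priority on the peer host
      interface eth0
      virtual_router_id 51
      priority 150
      advert_int 1
      virtual_ipaddress {
          192.0.2.10/24       # the floating IP that should only be on one host
      }
  }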

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1481337/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1481337] Re: keepalived makes a floating IP available on more than one host after configuration reload

2015-08-11 Thread Robie Basak
** Tags removed: server-next

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1481337

Title:
  keepalived makes a floating IP available on more than one host after
  configuration reload

Status in keepalived package in Ubuntu:
  Confirmed
Status in keepalived source package in Trusty:
  Confirmed
Status in keepalived source package in Vivid:
  Confirmed
Status in keepalived source package in Wily:
  Confirmed

Bug description:
  The version in 14.04 is 1:1.2.7-1ubuntu1

  There's an issue with reloading the configuration file, where the
  state of VRRP gets 'confused' (in lack of a better description),
  resulting in a floating IP being available on more than one host.

  This issue seems to be fixed in 1.2.17. Since the package hasn't had
  an update in over 2 years, I kindly request it to be updated to at
  least 1.2.17

  For more information, see http://www.keepalived.org/changelog.html

  Extra info:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or System ->
About Ubuntu

  Description:  Ubuntu 14.04.2 LTS
  Release:  14.04

  2) The version of the package you are using, via 'apt-cache policy
  pkgname' or by checking in Software Center

  1:1.2.7-1ubuntu1

  3) What you expected to happen

  Keepalived config reloaded without interruption of services. VRRP
  should notice VIP being present on one host and do nothing.

  4) What happened instead

  Keepalived config was reloaded, but VRRP decided to activate the VIP on
  the host that didn't have it previously, resulting in routing errors,
  etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1481337/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1472713] Re: HAProxy 1.5.3 requires security updates

2015-07-10 Thread Robie Basak
** Package changed: haproxy (Ubuntu) => trusty-backports

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1472713

Title:
  HAProxy 1.5.3 requires security updates

Status in trusty-backports:
  New

Bug description:
  Per this bug:

  https://bugs.launchpad.net/trusty-backports/+bug/1336628

  HAProxy 1.5.3 was backported from U to T. However, it was not flagged
  in the recent USN:

  http://www.ubuntu.com/usn/usn-2668-1/

  Will that fix be applied to 1.5.3?

To manage notifications about this bug go to:
https://bugs.launchpad.net/trusty-backports/+bug/1472713/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1325847] Re: Improvement: initscript enhancement with support for conf.d and configtest on startup

2015-05-07 Thread Robie Basak
Following Debian as I don't think it's appropriate for Ubuntu to diverge
from Debian on this point. We'll follow whatever Debian does.

** Changed in: haproxy (Ubuntu)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1325847

Title:
  Improvement: initscript enhancement with support for conf.d and
  configtest on startup

Status in haproxy package in Ubuntu:
  Won't Fix
Status in haproxy package in Debian:
  Won't Fix

Bug description:
  The haproxy initscript lacks a configtest option, which haproxy natively
supports. It also does not warn the user if haproxy has been disabled in the
default file, but exits silently.
  Also, it has become a de-facto standard for daemons to include conf.d
configuration file support.
  I attached a patch for the current init script, which remedies all these
issues and should be forward/backward compatible with haproxy 1.3/1.4/1.5.
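
  As a rough illustration of the configtest idea, a stanza along these lines
  could be added to the init script (a sketch only, assuming the $HAPROXY and
  $CONFIG variables used by the packaged script):

  haproxy_configtest() {
      # haproxy -c only parses and validates the configuration; it does not start.
      if $HAPROXY -c -f "$CONFIG"; then
          echo "Configuration check passed."
      else
          echo "Configuration check failed." >&2
          return 1
      fi
  }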

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1325847/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1382842] Re: Security update breaks pacemaker in 14.04

2014-10-20 Thread Robie Basak
Thanks Ante. Though this is a regular update, not a security one.

** Summary changed:

- Security update breaks pacemaker in 14.04
+ SRU breaks pacemaker in 14.04

** Tags added: regression-update

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1382842

Title:
  SRU breaks pacemaker in 14.04

Status in “pacemaker” package in Ubuntu:
  New

Bug description:
  Ubuntu 14.04

  If system is running with pacemaker from the archive:

  $ dpkg -l | grep 1.1.10+git20130802-1ubuntu2
  ii  libcib3              1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - CIB
  ii  libcrmcluster4       1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - CRM
  ii  libcrmcommon3        1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - common CRM
  ii  libcrmservice1       1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - crmservice
  ii  liblrmd1             1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - lrmd
  ii  libpe-rules2         1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - rules for P-Engine
  ii  libpe-status4        1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - status for P-Engine
  ii  libpengine4          1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - P-Engine
  ii  libstonithd2         1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - stonith
  ii  libtransitioner2     1.1.10+git20130802-1ubuntu2  amd64  Pacemaker libraries - transitioner
  ii  pacemaker            1.1.10+git20130802-1ubuntu2  amd64  HA cluster resource manager
  ii  pacemaker-cli-utils  1.1.10+git20130802-1ubuntu2  amd64  Command line interface utilities for Pacemaker

  $ sudo crm status
  Last updated: Sat Oct 18 20:52:32 2014
  Last change: Sat Oct 18 20:51:28 2014 via crmd on saturn
  Stack: corosync
  Current DC: saturn (2130706433) - partition with quorum
  Version: 1.1.10-42f2063
  1 Nodes configured
  0 Resources configured

  Online: [ saturn ]

  And then one installs pacemaker (which pulls in pacemaker from
  -security):

  $ sudo apt-get install pacemaker
  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  The following packages were automatically installed and are no longer 
required:
    libccrtp0 libdbus-c++-1-0 libucommon6 libyate5.0.0 libzrtpcpp2
  Use 'apt-get autoremove' to remove them.
  The following packages will be upgraded:
    pacemaker
  1 upgraded, 0 newly installed, 0 to remove and 59 not upgraded.
  Need to get 364 kB of archives.
  After this operation, 0 B of additional disk space will be used.
  Get:1 http://hr.archive.ubuntu.com/ubuntu/ trusty-updates/main pacemaker 
amd64 1.1.10+git20130802-1ubuntu2.1 [364 kB]
  Fetched 364 kB in 1s (197 kB/s)
  (Reading database ... 638230 files and directories currently installed.)
  Preparing to unpack .../pacemaker_1.1.10+git20130802-1ubuntu2.1_amd64.deb ...
  Unpacking pacemaker (1.1.10+git20130802-1ubuntu2.1) over 
(1.1.10+git20130802-1ubuntu2) ...
  Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
  Processing triggers for ureadahead (0.100.0-16) ...
  Setting up pacemaker (1.1.10+git20130802-1ubuntu2.1) ...
  addgroup: The group `haclient' already exists as a system group. Exiting.
  Warning: The home dir /var/lib/heartbeat you specified already exists.
  The system user `hacluster' already exists. Exiting.

  Restarting pacemaker results in havoc:

  $ sudo /etc/init.d/pacemaker stop
  Signaling Pacemaker Cluster Manager to terminate: [  OK  ]
  Waiting for cluster services to unload:^[[A.[  OK  ]
  $ sudo /etc/init.d/pacemaker start
  Starting Pacemaker Cluster Manager: [  OK  ]

  $ sudo crm status
  Last updated: Sat Oct 18 20:54:03 2014
  Last change: Sat Oct 18 20:51:28 2014 via crmd on saturn
  Stack: corosync
  Current DC: NONE
  1 Nodes configured
  0 Resources configured

  Node saturn (2130706433): UNCLEAN (offline)

  From the syslog:

  Oct 18 20:54:16 saturn crmd[23424]:  warning: do_lrm_control: Failed to sign 
on to the LRM 2 (30 max) times
  Oct 18 20:54:16 saturn crmd[23424]:  warning: do_lrm_control: Failed to sign 
on to the LRM 3 (30 max) times

[Ubuntu-ha] [Bug 1353421] Re: haproxy: component mismatch (depends on twitter-bootstrap and friends)

2014-08-06 Thread Robie Basak
I posted
https://lists.ubuntu.com/archives/ubuntu-release/2014-July/002967.html

12:37 rbasak doko: while you're looking, there was also
       https://lists.ubuntu.com/archives/ubuntu-release/2014-July/002967.html
12:37 rbasak New dependencies on haproxy-doc, pulling in nodejs of all things.
12:38 rbasak I suggest demoting just the haproxy-doc binary to universe, if
       that's acceptable?

If so, then there's nothing more to do at this end, right? Just demote
haproxy-doc?

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1353421

Title:
  haproxy: component mismatch (depends on twitter-bootstrap and friends)

Status in “haproxy” package in Ubuntu:
  Confirmed
Status in “haproxy” source package in Utopic:
  Confirmed

Bug description:
  haproxy: component mismatch (depends on twitter-bootstrap and friends)

  see
  http://people.canonical.com/~ubuntu-archive/component-mismatches.svg

  either file MIR's for the twitter-bootstrap tree, or avoid the
  dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1353421/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1325847] Re: Improvement: initscript enhancement with support for conf.d and configtest on startup

2014-06-03 Thread Robie Basak
Also, I didn't check if the init.d script is provided by packaging or
comes directly from upstream. If it comes directly from upstream, then
sending the patch upstream directly would be best.

** Changed in: haproxy (Ubuntu)
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1325847

Title:
  Improvement: initscript enhancement with support for conf.d and
  configtest on startup

Status in “haproxy” package in Ubuntu:
  New

Bug description:
  The haproxy initscript lacks a configtest option, which haproxy natively
supports. It also does not warn the user if haproxy has been disabled in the
default file, but exits silently.
  Also, it has become a de-facto standard for daemons to include conf.d
configuration file support.
  I attached a patch for the current init script, which remedies all these
issues and should be forward/backward compatible with haproxy 1.3/1.4/1.5.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1325847/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1325847] Re: Improvement: initscript enhancement with support for conf.d and configtest on startup

2014-06-03 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

haproxy in Ubuntu is currently identical to the Debian haproxy package
from which it is derived, so it seems likely to me that Debian will
benefit from this patch, too. I would also like to avoid the additional
maintenance burden of forking Debian's packaging if possible, and this
is better community etiquette anyway.

Please could you verify if this patch is suitable for Debian also, and
if so, forward it to the Debian bug tracker? Then Ubuntu will
automatically sync the patch after it is uploaded to Debian.

** Tags added: needs-upstream-report

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1325847

Title:
  Improvement: initscript enhancement with support for conf.d and
  configtest on startup

Status in “haproxy” package in Ubuntu:
  New

Bug description:
  The haproxy initscript lacks a configtest option, which haproxy natively
supports. It also does not warn the user if haproxy has been disabled in the
default file, but exits silently.
  Also, it has become a de-facto standard for daemons to include conf.d
configuration file support.
  I attached a patch for the current init script, which remedies all these
issues and should be forward/backward compatible with haproxy 1.3/1.4/1.5.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1325847/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1325084] Re: Please update package keepalived on ubuntu 12.04 LTS (precise)

2014-06-02 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

Please see the stable release update policy, rationale and procedure
documented at: https://wiki.ubuntu.com/StableReleaseUpdates

In particular, I see changelog entries such as "some cosmetics", which
do not appear suitable for update in Precise under the existing policy,
and I do not believe that keepalived has an exception. However, specific
bugs can be cherry-picked - see the policy for details.

Since a straight update does not appear compliant with policy as
upstream do not have a bugfix-only branch, this bug is effectively
reduced to "please fix bugs", so I'm marking it as Invalid as there is
no specific action that can be done here. If there are specific bugs
that have fixes that should be backported to Precise, then please file
bugs on those and follow the SRU procedure as documented. If you do so,
then feel free to reopen this bug to track progress.

** Changed in: keepalived (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1325084

Title:
  Please update package keepalived on ubuntu 12.04 LTS (precise)

Status in “keepalived” package in Ubuntu:
  Invalid

Bug description:
  There are a number of bugs that have been addressed in newer versions of
  keepalived. Please update this package on precise. I believe it is
  currently at 1.2.2, whereas trusty has 1.2.7.

  http://www.keepalived.org/changelog.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1325084/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1322899] Re: pacemaker init script links aren't created on upgrade

2014-05-29 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

I guess that update-rc.d can be run by hand as a workaround? In that
case, setting Importance: Medium since a workaround is available.
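
If so, a possible manual workaround (untested here; the runlevels and sequence
numbers the package postinst would normally use may differ) would be roughly:

  # Re-create the init script links, then start pacemaker through them.
  sudo update-rc.d pacemaker defaults
  sudo service pacemaker start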

** Changed in: pacemaker (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1322899

Title:
  pacemaker init script links aren't created on upgrade

Status in “pacemaker” package in Ubuntu:
  New

Bug description:
  When upgrading my pacemaker/corosync machines from Ubuntu 12.04 to
  14.04, update-rc.d for pacemaker is not run, so the new pacemaker
  init.d script is never executed on system startup, causing the
  corosync/pacemaker HA system not to start.

  When adding the new pacemaker init.d script, update-rc.d should be
  run, so pacemaker init.d is run to start pacemaker as needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1322899/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1204459] Re: Please package drbd tools v8.4 for use with Raring LTS Enablement Stacks

2013-07-24 Thread Robie Basak
** Tags added: hwe-dkms

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to drbd8 in Ubuntu.
https://bugs.launchpad.net/bugs/1204459

Title:
  Please package drbd tools v8.4 for use with Raring LTS Enablement
  Stacks

Status in “drbd8” package in Ubuntu:
  New

Bug description:
  [Impact] DRBD will not work (hang) on a fresh install using Ubuntu 12.04.3
media, and will stop working on sites where the Raring Enablement Stack is
manually installed.
  [Test Case] Install the Raring kernel in Precise, install/configure DRBD: you
get "No response from the DRBD driver! Is the module loaded?".
  [Regression Potential] The current drbd8-utils must not be upgraded (it's
needed for Precise & Quantal kernels); a new package must be created for the
DRBD 8.4 utils.

  Ubuntu 12.04.3 is due in August and, as part of the LTS Enablement Stack
program, it will ship with Raring's kernel 3.8.
  This new kernel includes DRBD v8.4.2, but the DRBD utils shipped in Precise 
are version 8.3.11. They will not work with the new kernel: please check Bug 
#1132302 for the expected symptoms.

  Would you please upload a package for DRBD utils v8.4.x for use with the
backported kernel?
  The v8.3 tools must remain available too, so the new package should be named
something else, maybe drbd8-utils-lts-raring.

  If this cannot be done in time for 12.04.3 (or not at all), I suggest
  it should be clearly stated in the release notes that DRBD users using
  the 12.04.3 CD/DVD should revert to the Quantal LTS Enablement Stack.
  The Quantal kernel has DRBD 8.3.13 which should work fine with the
  current utils.

  Lionel Sausin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/drbd8/+bug/1204459/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1183300] Re: [needs-packaging] keepalived

2013-05-23 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

keepalived is already packaged, but I can see that upstream version
1.2.7 is available. Ubuntu and Debian have 1.2.2; 1.2.6 is available in
experimental.

Please specify which upstream release you are requesting, and consider
filing a bug with Debian to have Debian update to the latest upstream
version.

** Tags removed: needs-packaging
** Tags added: upgrade-software-version

** Summary changed:

- [needs-packaging] keepalived
+ Please upgrade keepalived to the latest upstream version

** Changed in: keepalived (Ubuntu)
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1183300

Title:
  Please upgrade keepalived to the latest upstream version

Status in “keepalived” package in Ubuntu:
  New

Bug description:
  keepalived provides simple and robust facilities for loadbalancing and
  high-availability to Linux system and Linux based infrastructures.

  URL: http://www.keepalived.org/
  License: GNU General Public License
  Notes: 
  The newest version contains new functions and very important bug fixes.
  For example:
  New function: Add SNMP support to checker and VRRP frameworks. 
  Fixed bug: When the real servers have the same port number for loadbalancing, 
changing settings of the real server is not reflected in actual loadbalancing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1183300/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1132302] Re: drbd8-utils are not compatible with kernel 3.8

2013-02-26 Thread Robie Basak
Lionel,

I've uploaded 2:8.4.3-0ubuntu1~ppa2 to my PPA
(https://launchpad.net/~racb/+archive/experimental). This merges changes
in Debian's 8.3.13-2, which includes turning off the kernel version
check. Please could you check that this works as you expect; then we can
get it uploaded? Thanks!

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to drbd8 in Ubuntu.
https://bugs.launchpad.net/bugs/1132302

Title:
  drbd8-utils are not compatible with kernel 3.8

Status in “drbd8” package in Ubuntu:
  Triaged
Status in “drbd8” source package in Raring:
  Triaged

Bug description:
  Ubuntu Raring is going to ship with kernel 3.8, which contains DRBD v8.4.2
  Unfortunately, the userland tools in drbd8-utils are still v8.3.13, which is 
1 major version behind: they are not compatible with the kernel module.

  From my testing (both in a vmware environment and a live-usb), loading
  the module and then using drbdadm to bring up a resource blocks for a
  long time and eventually leads to the message "No response from the
  DRBD driver! Is the module loaded?".

  Replacing the packaged tools with a locally compiled v8.4.2 totally
  fixes the problem.

  Lionel Sausin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/drbd8/+bug/1132302/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1132302] Re: drbd8-utils are not compatible with kernel 3.8

2013-02-26 Thread Robie Basak
PS. the package is still in the build queue; it should be done in about
an hour.

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to drbd8 in Ubuntu.
https://bugs.launchpad.net/bugs/1132302

Title:
  drbd8-utils are not compatible with kernel 3.8

Status in “drbd8” package in Ubuntu:
  Triaged
Status in “drbd8” source package in Raring:
  Triaged

Bug description:
  Ubuntu Raring is going to ship with kernel 3.8, which contains DRBD v8.4.2
  Unfortunately, the userland tools in drbd8-utils are still v8.3.13, which is 
1 major version behind: they are not compatible with the kernel module.

  From my testing (both in a vmware environment and a live-usb), loading
  the module and then using drbdadm to bring up a resource blocks for a
  long time and eventually leads to the message "No response from the
  DRBD driver! Is the module loaded?".

  Replacing the packaged tools with a locally compiled v8.4.2 totally
  fixes the problem.

  Lionel Sausin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/drbd8/+bug/1132302/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1132302] Re: drbd8-utils are not compatible with kernel 3.8

2013-02-25 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

I have reproduced the problem on raring and prepared an updated package.

I first hoped that "Now builds on Linux-3.7" from the 8.3.15 changelog
entry would fix this problem, but I still saw the same failure.

Then I tried 8.4.3 (latest upstream), and this seems to work fine
(except for the warning that the kernel module is at 8.4.2).

I've uploaded my test 8.4.3 package build to my experimental PPA
(https://launchpad.net/~racb/+archive/experimental/). Please could you
test the package from here and check that it is fully functional?

Should I be using 8.4.2 instead of 8.4.3 to get the version to match the
raring kernel? If so, do we know that the raring kernel's version won't
end up bumped before raring's release?
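
For anyone testing from the PPA, a quick way to compare the two versions (a
sketch; package and module names as used elsewhere in this bug) is:

  # Kernel module version:
  modinfo drbd | grep ^version
  # Userland tools version:
  dpkg-query -W drbd8-utils

This shows whether the module (8.4.2 in the raring kernel) and the tools
(8.4.3 from the PPA) line up as expected.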

** Changed in: drbd8 (Ubuntu Raring)
   Status: In Progress => Triaged

** Changed in: drbd8 (Ubuntu Raring)
 Assignee: Robie Basak (racb) => (unassigned)

** Tags added: upgrade-software-version

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to drbd8 in Ubuntu.
https://bugs.launchpad.net/bugs/1132302

Title:
  drbd8-utils are not compatible with kernel 3.8

Status in “drbd8” package in Ubuntu:
  Triaged
Status in “drbd8” source package in Raring:
  Triaged

Bug description:
  Ubuntu Raring is going to ship with kernel 3.8, which contains DRBD v8.4.2
  Unfortunately, the userland tools in drbd8-utils are still v8.3.13, which is 
1 major version behind: they are not compatible with the kernel module.

  From my testing (both in a vmware environment and a live-usb), loading
  the module and then using drbdadm to bring up a resource blocks for a
  long time and eventually leads to the message "No response from the
  DRBD driver! Is the module loaded?".

  Replacing the packaged tools with a locally compiled v8.4.2 totally
  fixes the problem.

  Lionel Sausin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/drbd8/+bug/1132302/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1103656] Re: Update Precise drbd8-utils to 8.3.13 for the 12.04.2 release

2013-01-24 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

I think this needs attention, so setting Importance to High. SRU
policy generally frowns on this kind of update, but it may make sense
given that the kernel is being bumped. I appreciate you bringing it up.

I believe it will be possible to run the old kernel. Will the new drbd
work with both?

** Changed in: drbd8 (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to drbd8 in Ubuntu.
https://bugs.launchpad.net/bugs/1103656

Title:
  Update Precise drbd8-utils to 8.3.13 for the 12.04.2 release

Status in “drbd8” package in Ubuntu:
  New

Bug description:
  According to https://lists.ubuntu.com/archives/ubuntu-devel-
  announce/2013-January/001003.html, Precise 12.04.2 will ship with
  kernel 3.5.0. This new kernel comes with DRBD 8.3.13 but the
  drbd8-utils package in Precise are still is at version 8.3.11.

  Running a different version of the userspace utils and the kernel
  module is not recommended by upstream and gives such warning:

  # /etc/init.d/drbd start
   * Starting DRBD resources

  DRBD module version: 8.3.13
 userland version: 8.3.11
  you should upgrade your drbd tools!

  More information on the test VM (installed with the daily image):

  simon@ubuntu:~$ modinfo drbd
  filename:   
/lib/modules/3.5.0-22-generic/kernel/drivers/block/drbd/drbd.ko
  alias:  block-major-147-*
  license:GPL
  version:8.3.13
  description:drbd - Distributed Replicated Block Device v8.3.13
  author: Philipp Reisner p...@linbit.com, Lars Ellenberg 
l...@linbit.com
  srcversion: 697DE8B1973B1D8914F04DB
  depends:lru_cache
  intree: Y
  vermagic:   3.5.0-22-generic SMP mod_unload modversions 
  parm:   minor_count:Maximum number of drbd devices (1-256) (uint)
  parm:   disable_sendpage:bool
  parm:   allow_oos:DONT USE! (bool)
  parm:   cn_idx:uint
  parm:   proc_details:int
  parm:   usermode_helper:string

  simon@ubuntu:~$ apt-cache policy linux-image-$(uname -r) drbd8-utils
  linux-image-3.5.0-22-generic:
Installed: 3.5.0-22.34~precise1
Candidate: 3.5.0-22.34~precise1
Version table:
   *** 3.5.0-22.34~precise1 0
  500 http://ca.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
  100 /var/lib/dpkg/status
  drbd8-utils:
Installed: 2:8.3.11-0ubuntu1
Candidate: 2:8.3.11-0ubuntu1
Version table:
   *** 2:8.3.11-0ubuntu1 0
  500 http://ca.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
  100 /var/lib/dpkg/status

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/drbd8/+bug/1103656/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1055892] Re: keepalived does not honor use_vmac directive

2012-11-01 Thread Robie Basak
** Changed in: keepalived (Ubuntu)
   Status: Incomplete => New

** Changed in: keepalived (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to keepalived in Ubuntu.
https://bugs.launchpad.net/bugs/1055892

Title:
  keepalived does not honor use_vmac directive

Status in “keepalived” package in Ubuntu:
  New

Bug description:
  As of precise/quantal, keepalived does not honor the use_vmac
  directive.  Expected behavior is for keepalived to synthesize a
  virtual mac address to go along with the virtual IP address.  This is
  important in situations where systems do not accept gratuitous ARP as
  a means to fail over a virtual IP address.

  I have built a version against the latest upstream (1.2.7), which does
  honor the use_vmac directive as expected. Below follows an example of
  how it should look (see 6: below):

  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 08:00:27:36:14:4f brd ff:ff:ff:ff:ff:ff
      inet 192.168.4.105/24 brd 192.168.4.255 scope global eth0
      inet6 fe80::a00:27ff:fe36:144f/64 scope link
         valid_lft forever preferred_lft forever
  3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
      link/ether a2:20:3b:c0:e0:1a brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
  6: vrrp.90@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
      link/ether 00:00:5e:00:01:5a brd ff:ff:ff:ff:ff:ff
      inet 192.168.4.200/32 scope global vrrp.90
      inet6 fe80::200:5eff:fe00:15a/64 scope link
         valid_lft forever preferred_lft forever
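
  For context, an interface like the vrrp.90 one above would come from a
  vrrp_instance along these lines (a sketch consistent with the output shown,
  not the reporter's actual configuration):

  vrrp_instance VI_90 {
      state MASTER
      interface eth0
      virtual_router_id 90     # yields the vrrp.90 macvlan and the 00:00:5e:00:01:5a VMAC
      priority 150
      use_vmac                 # the directive that precise/quantal builds ignore
      virtual_ipaddress {
          192.168.4.200/32
      }
  }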

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1055892/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp