[Ubuntu-ha] [Bug 912588] Re: mount.ocfs2 doesn't accept mount option "uhelper=udisks"

2019-07-04 Thread Rafael David Tinoco
That change would have to happen in Nautilus, since the "external
helpers" feature is implemented by the "umount" utility, which
redirects umount requests to a wrapper (helper) after first removing
the uhelper flag from the request.

I'm marking this as "Won't Fix" because the request was made against
ocfs2-tools and it is pretty old. If you still want to pursue this, I
would recommend opening a new bug targeting the Nautilus and udisks
utilities; it will probably be treated as a "wishlist" item if upstream
has not implemented it by now.

** Changed in: ocfs2-tools (Ubuntu)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/912588

Title:
  mount.ocfs2 doesn't accept mount option "uhelper=udisks"

Status in ocfs2-tools package in Ubuntu:
  Won't Fix

Bug description:
  Scenario:

  3-node Ubuntu AoE*/OCFS2 Cluster on Ubuntu Server sharing disks with
  Ubuntu Desktops

  Node 1: Oneiric ubuntu-server amd64 with disk shared using AoE and formatted and clustered with OCFS2.
  Node 2: Oneiric mint-desktop i386
  Node 3: Oneiric ubuntu-desktop amd64

  Background:

  1. Initially implemented AOE to share disks in node 1 (SAN) formatted
  XFS

  2. Needed to access FS on disks from more than one Ubuntu desktop
  client

  3. Implemented OCFS2 cluster and formatted one disk (label "Backup")
  in node 1 with OCFS2 to test

  Problem:

  4. Mounting the disk using Nautilus fails with the following error thrown by Nautilus:

  Unable to mount Backup
  Error mounting: mount exited with exit code 1: mount.ocfs2: Invalid argument while mounting /dev/etherd/e2.0p1 on /media/Backup_. Check 'dmesg' for more information on this error.

  dmesg includes the following:
  Jan  6 02:06:52 anthracite kernel: [ 8672.067816] (mount.ocfs2,27208,1):ocfs2_parse_options:1512 ERROR: Unrecognized mount option "uhelper=udisks" or missing value
  Jan  6 02:06:52 anthracite kernel: [ 8672.067824] (mount.ocfs2,27208,1):ocfs2_fill_super:1233 ERROR: status = -22

  As I understand it, the uhelper=udisks mount option relates to the disk
  showing up in the "Devices" section of Nautilus.

  Further:

  5. Nautilus mounts the AoE shared XFS devices without error, even on
  multiple devices simultaneously, posing risk to FS integrity
  (intentional AoE design)

  6. Creating a mount point and mounting the disk (AoE shared OCFS2
  "Backup") using the mount command works fine and it still shows up in
  the "Devices" section of Nautilus:

  # mount -L "Backup" /media/Backup/

  Synopsis:

  7. It would thus appear that mount.ocfs2 does not support this mount
  option.  This is understandable given that OCFS2 is used heavily in
  server-only environments rather than for sharing with Linux desktop
  machines.

  8. However, such use cases will surely increase and it should be
  trivial to add support for this mount option.
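As an illustration of how small such a change could be — the function name and option list below are invented for this sketch, not taken from ocfs2-tools — a mount wrapper could simply strip userspace-only options like uhelper= out of the -o string before handing the rest to the kernel:

```shell
# Hypothetical sketch (not ocfs2-tools code): drop userspace-only mount
# options such as uhelper=... from a comma-separated -o option list,
# keeping everything the kernel filesystem actually understands.
strip_userspace_opts() {
    printf '%s\n' "$1" | tr ',' '\n' \
        | grep -v -E '^(uhelper=|user$|nouser$)' \
        | paste -sd, -
}

# Example: strip_userspace_opts "rw,uhelper=udisks,noatime" prints "rw,noatime"
```

The real fix would live in mount.ocfs2's option parser, but the filtering idea is the same: accept the option, act on it in userspace if needed, and never pass it to the kernel.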

  Thanks very much.

  *ATA over Ethernet

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/912588/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1018671] Re: mismatch between ocfs2-tools version and kernel

2019-07-04 Thread Rafael David Tinoco
Even though it is old, I'm flagging this case as "ubuntu-ha" for
consideration in the Ubuntu HA effort for Eoan. I'll close this case
after reviewing debugfs.ocfs2 capabilities and the kernel in question.

** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1018671

Title:
  mismatch between ocfs2-tools version and kernel

Status in ocfs2-tools package in Ubuntu:
  Confirmed

Bug description:
  In at least 11.10 (and likely 12.04 as well), the ocfs2-tools package
  is behind what the 3.0 kernel is creating on disk.  This can be seen
  trivially by attempting to examine the locks on an ocfs2 filesystem:

  # debugfs.ocfs2 -n -R "fs_locks" /dev/dm-1
  Debug string proto 3 found, but 2 is the highest I understand.

  The 1.8 tree in the ocfs2-tools git repo includes support for proto 3.
  That set of tools should probably be distributed with 11.10 and up, so
  that the tools match what the kernel is providing.

  Not having the tools match the kernel makes things like debugging
  locking issues very difficult.
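For scripting around the mismatch, one could parse the two protocol numbers out of that error message — `proto_gap` below is an invented helper for illustration, not a debugfs.ocfs2 feature:

```shell
# Illustrative helper (not part of ocfs2-tools): extract the on-disk
# protocol version and the highest version the installed tools
# understand from debugfs.ocfs2 error text such as
#   "Debug string proto 3 found, but 2 is the highest I understand."
proto_gap() {
    printf '%s\n' "$1" | sed -n \
        's/.*proto \([0-9][0-9]*\) found, but \([0-9][0-9]*\) is the highest.*/\1 \2/p'
}
```

Here `proto_gap` prints "3 2" for the message above: the kernel writes lock-protocol v3, while the installed tools only parse v2, so a tools upgrade is needed.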

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1018671/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1172042] Re: ocfs2 readlink hangs on

2019-07-04 Thread Rafael David Tinoco
inaddy@workstation:~/work/sources/kernel/linux$ git log -1 -p 30b9c9e6ba289ba3bb67cc292efcc4122ea37ae5
commit 30b9c9e6ba289ba3bb67cc292efcc4122ea37ae5
Author: Sunil Mushran 
Date:   Fri Aug 3 13:36:17 2012

ocfs2: Fix oops in ocfs2_fast_symlink_readpage() code path

Commit ea022dfb3c2a4680483b00eb2fecc9fc4f6091d1 was missing a var
init.

Reported-and-Tested-by: Vincent Etienne 
Signed-off-by: Sunil Mushran 
Signed-off-by: Joel Becker 
Signed-off-by: Al Viro 

diff --git a/fs/ocfs2/symlink.c b/fs/ocfs2/symlink.c
index f1fbb4b552ad..66edce7ecfd7 100644
--- a/fs/ocfs2/symlink.c
+++ b/fs/ocfs2/symlink.c
@@ -57,7 +57,7 @@
 static int ocfs2_fast_symlink_readpage(struct file *unused, struct page *page)
 {
struct inode *inode = page->mapping->host;
-   struct buffer_head *bh;
+   struct buffer_head *bh = NULL;
int status = ocfs2_read_inode_block(inode, &bh);
struct ocfs2_dinode *fe;
const char *link;

AND

inaddy@workstation:~/work/sources/kernel/linux$ git log -1 -p ea022dfb3c2a4680483b00eb2fecc9fc4f6091d1
commit ea022dfb3c2a4680483b00eb2fecc9fc4f6091d1
Author: Al Viro 
Date:   Thu May 3 11:14:29 2012

ocfs: simplify symlink handling

seeing that "fast" symlinks still get allocation + copy, we might as
well simply switch them to pagecache-based variant of ->follow_link();
just need an appropriate ->readpage() for them...

Signed-off-by: Al Viro 

** Changed in: ocfs2-tools (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1172042

Title:
  ocfs2 readlink hangs on

Status in ocfs2-tools package in Ubuntu:
  Fix Released

Bug description:
  On Ubuntu 13.04, trying to read a symlink on an ocfs2 filesystem
  hangs. This is a known bug in kernels newer than 3.5, reported at:

  https://bugzilla.kernel.org/show_bug.cgi?id=49561

  Link to patch:

  https://lkml.org/lkml/2013/3/2/40

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1172042/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1412438] Re: OCF:pacemaker:o2cb broken in 14.04

2019-07-04 Thread Rafael David Tinoco
** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1412438

Title:
  OCF:pacemaker:o2cb broken in 14.04

Status in ocfs2-tools package in Ubuntu:
  Confirmed

Bug description:
  The pacemaker resource agent ocf:pacemaker:o2cb requires

  /usr/sbin/ocfs2_controld.pcmk

  in order to execute correctly. This binary used to be in
  ocfs2-tools-pacemaker in earlier releases (e.g. 12.04). However, it is
  omitted from 14.04. As a result, the resource agent is broken and
  pacemaker cannot be used to manage ocfs2 file systems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1412438/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1584629] Re: Failed to start LSB: Load O2CB cluster services at system boot.

2019-07-04 Thread Rafael David Tinoco
** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1584629

Title:
  Failed to start LSB: Load O2CB cluster services at system boot.

Status in ocfs2-tools package in Ubuntu:
  Triaged
Status in ocfs2-tools source package in Trusty:
  New
Status in ocfs2-tools source package in Xenial:
  New
Status in ocfs2-tools source package in Yakkety:
  New

Bug description:
  Ubuntu 16.04.

  Sometimes (not on every boot) o2cb fails to start:

  systemctl status o2cb
  ● o2cb.service - LSB: Load O2CB cluster services at system boot.
     Loaded: loaded (/etc/init.d/o2cb; bad; vendor preset: enabled)
     Active: failed (Result: exit-code) since Mon 2016-05-23 11:46:43 SAMT; 2min 12s ago
       Docs: man:systemd-sysv-generator(8)
    Process: 1526 ExecStart=/etc/init.d/o2cb start (code=exited, status=1/FAILURE)

  May 23 11:46:43 inetgw1 systemd[1]: Starting LSB: Load O2CB cluster services at system boot
  May 23 11:46:43 inetgw1 o2cb[1526]: Loading filesystem "configfs": OK
  May 23 11:46:43 inetgw1 o2cb[1526]: Mounting configfs filesystem at /sys/kernel/config: mount: configfs is already
  May 23 11:46:43 inetgw1 o2cb[1526]:    configfs is already mounted on /sys/kernel/config
  May 23 11:46:43 inetgw1 o2cb[1526]: Unable to mount configfs filesystem
  May 23 11:46:43 inetgw1 o2cb[1526]: Failed
  May 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Control process exited, code=exited status=1
  May 23 11:46:43 inetgw1 systemd[1]: Failed to start LSB: Load O2CB cluster services at system boot..
  May 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Unit entered failed state.
  May 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Failed with result 'exit-code'.

  The next try is successful:
  systemctl status o2cb
  ● o2cb.service - LSB: Load O2CB cluster services at system boot.
     Loaded: loaded (/etc/init.d/o2cb; bad; vendor preset: enabled)
     Active: active (exited) since Mon 2016-05-23 11:49:07 SAMT; 1s ago
       Docs: man:systemd-sysv-generator(8)
    Process: 2101 ExecStart=/etc/init.d/o2cb start (code=exited, status=0/SUCCESS)

  May 23 11:49:07 inetgw1 systemd[1]: Starting LSB: Load O2CB cluster services at system boot
  May 23 11:49:07 inetgw1 o2cb[2101]: Loading stack plugin "o2cb": OK
  May 23 11:49:07 inetgw1 o2cb[2101]: Loading filesystem "ocfs2_dlmfs": OK
  May 23 11:49:07 inetgw1 o2cb[2101]: Mounting ocfs2_dlmfs filesystem at /dlm: OK
  May 23 11:49:07 inetgw1 o2cb[2101]: Setting cluster stack "o2cb": OK
  May 23 11:49:07 inetgw1 o2cb[2101]: Starting O2CB cluster inetgw: OK
  May 23 11:49:07 inetgw1 systemd[1]: Started LSB: Load O2CB cluster services at system boot..

  I guess this is a startup dependency problem.
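The failing path above treats an already-mounted configfs as fatal. A hedged sketch of an idempotent guard — this is an assumption about a possible fix, not the actual o2cb init script code — would skip the mount when the mount point is already present:

```shell
# Sketch only (not the shipped o2cb init script): consult
# /proc/self/mounts before mounting, so a configfs mounted earlier by
# systemd or another unit is not reported as a failure.
is_mounted() {
    # field 2 of /proc/self/mounts is the mount point
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/self/mounts
}

mount_configfs() {
    is_mounted /sys/kernel/config \
        || mount -t configfs configfs /sys/kernel/config
}
```

With a guard like this, the race with whatever mounts configfs first at boot becomes harmless, whichever unit wins.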

  Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1584629/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1745155] Re: o2image fails on s390x

2019-07-04 Thread Rafael David Tinoco
** Tags added: ubuntu-ha

** Changed in: ocfs2-tools (Ubuntu)
   Status: New => Incomplete

** Changed in: ocfs2-tools (Ubuntu)
   Status: Incomplete => Confirmed

** Changed in: ocfs2-tools (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to ocfs2-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1745155

Title:
  o2image fails on s390x

Status in OCFS2 Tools:
  New
Status in ocfs2-tools package in Ubuntu:
  Confirmed

Bug description:
  o2image fails on s390x:

  dd if=/dev/zero of=/tmp/disk bs=1M count=200
  losetup --find --show /tmp/disk
  mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=ocfs2 /dev/loop0 # loop dev found in prev step

  Then this command:
  o2image /dev/loop0 /tmp/disk.image

  Results in:
  Segmentation fault (core dumped)

  dmesg:
  [  862.642556] ocfs2: Registered cluster interface o2cb
  [  870.880635] User process fault: interruption code 003b ilc:3 in o2image[10c18+2e000]
  [  870.880643] Failing address:  TEID: 0800
  [  870.880644] Fault in primary space mode while using user ASCE.
  [  870.880646] AS:3d8f81c7 R3:0024
  [  870.880650] CPU: 0 PID: 1484 Comm: o2image Not tainted 4.13.0-30-generic #33-Ubuntu
  [  870.880651] Hardware name: IBM 2964 N63 400 (KVM/Linux)
  [  870.880652] task: 3cb81200 task.stack: 3d50c000
  [  870.880653] User PSW : 070500018000 00010c184212
  [  870.880654]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:1 AS:0 CC:0 PM:0 RI:0 EA:3
  [  870.880655] User GPRS: 000144f0cc10 0001 0001
  [  870.880655] 000144ef6090 000144f13cc0 0001
  [  870.880656]000144ef6000 000144ef3280 000144f13cd8 00037ee8
  [  870.880656]03ff965a6000 03ffe5e7e410 00010c183bc6 03ffe5e7e370
  [  870.880663] User Code: 00010c184202: b9080034  agr  %r3,%r4
                            00010c184206: c02b0007  nilf %r2,7
                           #00010c18420c: eb2120df  sllk %r2,%r1,0(%r2)
                           >00010c184212: e3103090  llgc %r1,0(%r3)
                            00010c184218: b9f61042  ork  %r4,%r2,%r1
                            00010c18421c: 1421      nr   %r2,%r1
                            00010c18421e: 42403000  stc  %r4,0(%r3)
                            00010c184222: 1322      lcr  %r2,%r2
  [  870.880672] Last Breaking-Event-Address:
  [  870.880675]  [<00010c18e4ca>] 0x10c18e4ca

  Upstream issue:
  https://github.com/markfasheh/ocfs2-tools/issues/22

  This was triggered by our ocfs2-tools dep8 tests:
  http://autopkgtest.ubuntu.com/packages/o/ocfs2-tools/bionic/s390x

To manage notifications about this bug go to:
https://bugs.launchpad.net/ocfs2-tools/+bug/1745155/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1825992] Re: Upgrade to version 2.0 to satisfy pcs dependency

2019-07-04 Thread Rafael David Tinoco
I'm assigning this case to myself to work on during our Ubuntu HA
effort, together with all the other HA bugs. I'll address this case
when reviewing pacemaker and pcs (shortly). I'll let you know our
decision for Disco as soon as we fully check Eoan.

Thanks a lot for reporting this!

** Tags added: ubuntu-ha

** Changed in: pacemaker (Ubuntu Disco)
   Status: New => Confirmed

** Changed in: pacemaker (Ubuntu Eoan)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Changed in: pacemaker (Ubuntu Disco)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Changed in: pacemaker (Ubuntu Eoan)
   Importance: Undecided => Medium

** Changed in: pacemaker (Ubuntu Disco)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1825992

Title:
  Upgrade to version 2.0 to satisfy pcs dependency

Status in pacemaker package in Ubuntu:
  Confirmed
Status in pacemaker source package in Disco:
  Confirmed
Status in pacemaker source package in Eoan:
  Confirmed

Bug description:
  Is there a way we could upgrade the version to 2.0? The pcs package
  requires a version of pacemaker greater than or equal to 2.0, and
  there is already a Debian package for v2. Installing the Ubuntu
  package for pcs will remove the pacemaker package, as there is no
  version of pacemaker greater than 2.0.

  I can help with build/testing and packaging for 2.0 based on the
  existing debian deb if needed.

  https://pkgs.org/download/pacemaker

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1825992/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1627083] Re: ipt_CLUSTERIP is deprecated and it will removed soon, use xt_cluster instead

2019-07-04 Thread Rafael David Tinoco
** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1627083

Title:
  ipt_CLUSTERIP is deprecated and it will removed soon, use xt_cluster
  instead

Status in pacemaker package in Ubuntu:
  Triaged
Status in strongswan package in Ubuntu:
  Triaged

Bug description:
  pacemaker still uses iptables' "CLUSTERIP" target -- and dmesg shows a
  deprecation warning:

  [   15.027333] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
  [   15.027464] ipt_CLUSTERIP: ipt_CLUSTERIP is deprecated and it will removed soon, use xt_cluster instead

  ~# iptables -L
  Chain INPUT (policy ACCEPT)
  target prot opt source   destination
  CLUSTERIP  all  --  anywhere proxy.charite.de CLUSTERIP hashmode=sourceip-sourceport clustermac=EF:EE:6B:F9:7B:67 total_nodes=4 local_node=2 hash_init=0

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: pacemaker 1.1.14-2ubuntu1.1
  ProcVersionSignature: Ubuntu 4.4.0-38.57-generic 4.4.19
  Uname: Linux 4.4.0-38-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.1
  Architecture: amd64
  Date: Fri Sep 23 17:26:01 2016
  InstallationDate: Installed on 2014-08-19 (766 days ago)
  InstallationMedia: Ubuntu-Server 14.04.1 LTS "Trusty Tahr" - Release amd64 (20140722.3)
  SourcePackage: pacemaker
  UpgradeStatus: Upgraded to xenial on 2016-09-22 (1 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1627083/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828223] Re: pacemaker v2 appears to be missing log directory

2019-07-04 Thread Rafael David Tinoco
** Changed in: pacemaker (Ubuntu)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Changed in: pacemaker (Ubuntu)
   Importance: Undecided => Medium

** Changed in: pacemaker (Ubuntu)
   Status: Triaged => Confirmed

** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1828223

Title:
  pacemaker v2 appears to be missing log directory

Status in pacemaker package in Ubuntu:
  Confirmed
Status in pacemaker package in Debian:
  Fix Released

Bug description:
  # journalctl -u pacemaker | grep 'logging to'
  May 08 12:30:49 nice-mako pacemakerd[325]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
  May 08 12:30:49 nice-mako pacemaker-fenced[329]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
  May 08 12:30:49 nice-mako pacemaker-based[328]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
  May 08 12:30:49 nice-mako pacemaker-schedulerd[332]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
  May 08 12:30:49 nice-mako pacemaker-attrd[331]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
  May 08 12:30:49 nice-mako pacemaker-execd[330]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
  May 08 12:30:49 nice-mako pacemaker-controld[333]:  error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled

  I guess pacemaker v2 should either be configured to log to the journal
  by default, or that directory needs to be created, or the configuration
  changed to log to /var/log/pacemaker.log?
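Whatever route the packaging takes, the daemons only need the directory to exist with sane permissions before they start. A generic sketch — the helper name is invented and this is not the actual Debian packaging change:

```shell
# Sketch (not the actual Debian fix): make sure pacemaker's log
# directory exists with restrictive permissions before the daemons
# start logging to it.
ensure_log_dir() {
    # $1: directory to create; defaults to the path pacemaker expects
    install -d -m 0750 "${1:-/var/log/pacemaker}"
    # A real package would also chown it to the cluster user, e.g.
    #   chown hacluster:haclient "$1"
}
```

A maintainer script or a systemd-tmpfiles entry could call the same logic at install or boot time.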

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1828223/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1828228] Re: corosync fails to start in container (armhf) bump some limits

2019-07-04 Thread Rafael David Tinoco
** Changed in: corosync (Ubuntu)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: pacemaker (Ubuntu)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Tags removed: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1828228

Title:
  corosync fails to start in container (armhf) bump some limits

Status in Auto Package Testing:
  New
Status in corosync package in Ubuntu:
  Triaged
Status in pacemaker package in Ubuntu:
  Triaged

Bug description:
  Currently pacemaker v2 fails to start in armhf containers (and by
  extension corosync too).

  I found that it is reproducible locally, and that I had to bump a few
  limits to get it going.

  Specifically I did:

  1) bump memlock limits
  2) bump rmem_max limits

  = 1) Bump memlock limits =

  I have no idea which one of these finally worked, and/or is
  sufficient. A bit of a whack-a-mole.

  cat >>/etc/security/limits.conf 

[Ubuntu-ha] [Bug 1052449] Re: corosync hangs due to missing pacemaker shutdown scripts

2019-07-04 Thread Rafael David Tinoco
** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1052449

Title:
  corosync hangs due to missing pacemaker shutdown scripts

Status in pacemaker package in Ubuntu:
  Triaged

Bug description:
  The pacemaker package installs the right init script but doesn't link
  it to the according runlevels. If corosync is activated and started on
  the system this leads to a hanging shutdown / reboot because corosync
  only ends if pacemaker is stopped beforehand. In addition to this the
  pacemaker daemon has to start after corosync but has to stop before
  corosync.

  A possible solution would be to link the init script accordingly an
  enable it throug /etc/default/
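The ordering described above maps directly onto LSB init dependency headers. A sketch of what the header could declare — illustrative only, not the header actually shipped in the package:

```shell
# Illustrative LSB header for /etc/init.d/pacemaker (not the shipped
# one): Required-Start/Required-Stop make insserv/update-rc.d order
# pacemaker after corosync at boot and before it at shutdown.
### BEGIN INIT INFO
# Provides:          pacemaker
# Required-Start:    corosync $remote_fs $syslog
# Required-Stop:     corosync $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Pacemaker cluster resource manager
### END INIT INFO
```

With the links created (e.g. via update-rc.d), shutdown would stop pacemaker first, letting corosync exit cleanly.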

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1052449/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1471056] Re: external/vcenter records many "Smartmatch is experimental" log

2019-07-04 Thread Rafael David Tinoco
Since Trusty is EOL, and the current upstream version contains the
pointed-out fix:

inaddy@workstation:~/work/sources/upstream/cluster-glue$ git log --grep "replace experimental smart"
commit a182a0dd9fa41f0b1c0ceb50dc97a9b3e379564c
Author: Dejan Muhamedagic 
Date:   Mon Nov 3 13:33:57 2014

Medium: stonith: external/vcenter: replace experimental smartmatch
(bnc#900353)

I'm closing this case as Fix Released. Feel free to open a new case,
against a newer Ubuntu release, if something like this happens again.

Thank you.

** Changed in: cluster-glue (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to cluster-glue in Ubuntu.
https://bugs.launchpad.net/bugs/1471056

Title:
  external/vcenter records many "Smartmatch is experimental" log

Status in cluster-glue package in Ubuntu:
  Fix Released

Bug description:
  I'm using Ubuntu 14.04.2 LTS.
  external/vcenter records many "Smartmatch is experimental" log.

  ---
  Jul 03 10:24:39 [930] node3 stonith-ng: info: stonith_command: Processed st_execute from lrmd.931: Operation now in progress (-115)
  Jul 03 10:24:39 [930] node3 stonith-ng: info: stonith_action_create: Initiating action monitor for agent fence_legacy (target=(null))
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Performing: stonith -t external/vcenter -S ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 34. ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 115. ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 152. ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 34. ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 115. ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 152. ]
  Jul 03 10:24:52 [930] node3 stonith-ng: info: log_operation: res_fence_node1:2512 [ success:  0 ]
  ---

  I wish to apply the upstream patch (bnc#900353):
  http://hg.linux-ha.org/glue/rev/5f9e5c5bf64f

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cluster-glue/+bug/1471056/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1251298] Re: Failed to sign on to LRMd with Heartbeat/Pacemaker

2019-07-04 Thread Rafael David Tinoco
** Changed in: cluster-glue (Ubuntu)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to cluster-glue in Ubuntu.
https://bugs.launchpad.net/bugs/1251298

Title:
  Failed to sign on to LRMd with Heartbeat/Pacemaker

Status in cluster-glue package in Ubuntu:
  Confirmed

Bug description:
  I'm running a 2-node heartbeat/pacemaker cluster, which was working fine with Ubuntu 13.04.
  After upgrading from Ubuntu 13.04 to Ubuntu 13.10, Heartbeat/Pacemaker keeps restarting the system due to sign-on errors of lrmd, and heartbeat tries to recover.

  As one system is already on Ubuntu 13.10 and one system is still
  running 13.04, I've tried it without the second node, which leads to
  the same behavior, occurring before any cluster communication happens.

  Syslog:
  Nov 14 15:53:06 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 1 (30 max) times
  Nov 14 15:53:06 wolverine crmd[2464]:   notice: crmd_client_status_callback: Status update: Client wolverine.domain.tld/crmd now has status [join] (DC=false)
  Nov 14 15:53:06 wolverine crmd[2464]:   notice: crmd_client_status_callback: Status update: Client wolverine.domain.tld/crmd now has status [online] (DC=false)
  Nov 14 15:53:06 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 2 (30 max) times
  Nov 14 15:53:06 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 3 (30 max) times
  Nov 14 15:53:07 wolverine stonith-ng[2462]:   notice: setup_cib: Watching for stonith topology changes
  Nov 14 15:53:07 wolverine stonith-ng[2462]:   notice: unpack_config: On loss of CCM Quorum: Ignore
  Nov 14 15:53:08 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 4 (30 max) times
  Nov 14 15:53:10 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 5 (30 max) times
  Nov 14 15:53:12 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 6 (30 max) times
  Nov 14 15:53:14 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 7 (30 max) times
  Nov 14 15:53:16 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 8 (30 max) times
  Nov 14 15:53:18 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 9 (30 max) times
  Nov 14 15:53:20 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 10 (30 max) times
  Nov 14 15:53:22 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 11 (30 max) times
  Nov 14 15:53:24 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 12 (30 max) times
  Nov 14 15:53:26 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 13 (30 max) times
  Nov 14 15:53:28 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 14 (30 max) times
  Nov 14 15:53:30 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 15 (30 max) times
  Nov 14 15:53:32 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 16 (30 max) times
  Nov 14 15:53:34 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 17 (30 max) times
  Nov 14 15:53:36 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 18 (30 max) times
  Nov 14 15:53:38 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 19 (30 max) times
  Nov 14 15:53:40 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 20 (30 max) times
  Nov 14 15:53:42 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 21 (30 max) times
  Nov 14 15:53:44 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 22 (30 max) times
  Nov 14 15:53:46 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 23 (30 max) times
  Nov 14 15:53:48 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 24 (30 max) times
  Nov 14 15:53:50 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 25 (30 max) times
  Nov 14 15:53:52 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 26 (30 max) times
  Nov 14 15:53:54 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 27 (30 max) times
  Nov 14 15:53:56 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 28 (30 max) times
  Nov 14 15:53:58 wolverine crmd[2464]:  warning: do_lrm_control: Failed to sign on to the LRM 29 (30 max) times
  Nov 14 15:54:00 wolverine crmd[2464]:    error: do_lrm_control: Failed to sign on to the LRM 30 (max) times
  Nov 14 15:54:00 wolverine crmd[2464]:    error: do_log: FSA: Input I_ERROR from do_lrm_control() received in state

[Ubuntu-ha] [Bug 939327] Re: lrmd ignores timeouts for start|stop|monitor when managing upstart jobs

2019-07-04 Thread Rafael David Tinoco
This was upstreamed a long time ago:

commit faa022c14609d74b39498970f9a444a3d05ec080
Author: Ante Karamatić 
Date:   Fri Feb 17 05:25:46 2012

Medium: LRM: lrmd: use the resource timeout as an override to the
default dbus timeout for upstart RA

and this bug can be closed as Fix Released.


** Changed in: cluster-glue (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to cluster-glue in Ubuntu.
https://bugs.launchpad.net/bugs/939327

Title:
  lrmd ignores timeouts for start|stop|monitor when managing upstart
  jobs

Status in cluster-glue package in Ubuntu:
  Fix Released

Bug description:
  If the primitive has a timeout set to 26 seconds or higher, lrmd will
  declare start|stop|monitor as failed after 25 seconds, ignoring the
  configured timeout. The reason for this is the default D-Bus timeout,
  which is set to 25 seconds.

  There's a patch accepted by upstream: http://hg.linux-
  ha.org/glue/rev/6f8d63c39207

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cluster-glue/+bug/939327/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : ubuntu-ha@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help   : https://help.launchpad.net/ListHelp


[Ubuntu-ha] [Bug 1015602] Re: Monitor on Master resource stops working - pls apply patch

2019-07-04 Thread Rafael David Tinoco
It is not clear which bug was pointed out. An upstream patch was given
and, right now, all I can do is confirm that this patch is still
contained in the upstream cluster-glue version:

} else {
	if (HA_OK != ha_msg_mod_int(op->msg, F_LRM_OPSTATUS, (int)LRM_OP_CANCELLED)) {
		LOG_FAILED_TO_ADD_FIELD(F_LRM_OPSTATUS);
		return HA_FAIL;
	}
	op_status = LRM_OP_CANCELLED;
	remove_op_history(op);
}
 

With that, I'm marking this bug as Fix Released. Feel free to re-open it
with any other information, or (better) open a new one related to newer
Ubuntu releases.

Thank you!

** Changed in: cluster-glue (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to cluster-glue in Ubuntu.
https://bugs.launchpad.net/bugs/1015602

Title:
  Monitor on Master resource stops working - pls apply patch

Status in cluster-glue package in Ubuntu:
  Fix Released

Bug description:
  Ubuntu-precise is missing this patch http://hg.linux-
  ha.org/glue/diff/29c2eb9fa966/lrm/lrmd/lrmd.c

  http://hg.linux-ha.org/glue/rev/29c2eb9fa966

  Because of this, the master/slave resource is missing the "monitor"
  function. Please apply this patch to cluster-glue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cluster-glue/+bug/1015602/+subscriptions



[Ubuntu-ha] [Bug 1677776] Re: Missing dep8 tests

2019-07-04 Thread Rafael David Tinoco
** Changed in: cluster-glue (Ubuntu)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Tags added: ubuntu-ha

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to cluster-glue in Ubuntu.
https://bugs.launchpad.net/bugs/1677776

Title:
  Missing dep8 tests

Status in cluster-glue package in Ubuntu:
  New

Bug description:

  As of March 29, 2017, this source package did not contain dep8 tests in
  the current development release of Ubuntu, named Zesty. This was
  determined by running `pull-lp-source cluster-glue zesty` and then
  checking for the existence of 'debian/tests/' and
  'debian/tests/control'.

  Test automation is essential to higher levels of quality and confidence
  in updates to packages. dep8 tests [1] specify how automatic testing can
  be integrated into packages and then run by package maintainers before
  new uploads.

  This defect is to report the absence of these tests and to report the
  opportunity as a potential item for development by both new and
  experienced contributors.

  [1] http://packaging.ubuntu.com/html/auto-pkg-test.html
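  For reference, a minimal dep8 setup consists of a 'debian/tests/control'
  file naming the tests plus one executable per test. The sketch below is
  purely illustrative; the test name "smoke" and its contents are
  assumptions, not something present in the cluster-glue package:

```
# debian/tests/control (hypothetical)
Tests: smoke
Depends: @
Restrictions: isolation-container
```

  The matching 'debian/tests/smoke' script would exercise the installed
  package; autopkgtest marks the test failed if the script exits non-zero.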

   affects ubuntu/cluster-glue
   status new
   importance wishlist
   tag needs-dep8

  ---
  Joshua Powers
  Ubuntu Server
  Canonical Ltd


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cluster-glue/+bug/1677776/+subscriptions
