Re: [ClusterLabs] changing pacemaker.log location

2016-08-12 Thread Ken Gaillot
On 08/12/2016 10:19 AM, Christopher Harvey wrote:
> I'm surprised I'm having such a hard time figuring this out on my own.
> I'm running pacemaker 1.1.13 and corosync-2.3.4 and want to change the
> location of pacemaker.log.
> 
> By default it is located in /var/log.
> 
> I looked in corosync.c and found the following lines in mcp_read_config():
> 
> get_config_opt(config, local_handle, KEY_PREFIX "to_logfile",
>&logfile_enabled, "on");
> get_config_opt(config, local_handle, KEY_PREFIX "logfile",
>&logfile, "/var/log/pacemaker.log");
> 
> I can't find any other documentation.
> 
> Here is my corosync.conf file.
> 
> totem {
>   version: 2
>   # Need a cluster name for now:
>   #   https://github.com/corosync/corosync/issues/137
>   cluster_name: temp
>   crypto_cipher: aes256
>   crypto_hash: sha512
> 
>   interface {
> ringnumber: 0
> bindnetaddr: 192.168.132.10
> mcastport: 5405
>   }
>   transport: udpu
>   heartbeat_failures_allowed: 3
> }
> 
> nodelist {
>   node {
> ring0_addr: 192.168.132.25
> nodeid: 1
> name: a
>   }
> 
>   node {
> ring0_addr: 192.168.132.21
> nodeid: 2
> name: b
>   }
> 
>   node {
> ring0_addr: 192.168.132.10
> nodeid: 3
> name: c
>   }
> }
> 
> logging {
>   # Log the source file and line where messages are being
>   # generated. When in doubt, leave off. Potentially useful for
>   # debugging.
>   fileline: on
>   # Log to standard error. When in doubt, set to no. Useful when
>   # running in the foreground (when invoking 'corosync -f')
>   to_stderr: no
>   # Log to a log file. When set to 'no', the 'logfile' option
>   # must not be set.
>   to_logfile: yes
>   logfile: /my/new/location/corosync.log

By default, pacemaker will use the same log file as corosync, so this
should be sufficient.

Alternatively, you can explicitly tell Pacemaker what detail log file to
use with the environment variable PCMK_logfile (typically set in a
distro-specific location such as /etc/sysconfig/pacemaker or
/etc/default/pacemaker).
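As a minimal sketch (the exact file path is distro-specific, as noted above), the setting would look like this:

```shell
# /etc/sysconfig/pacemaker (or /etc/default/pacemaker on Debian-based distros)
# Redirect Pacemaker's detail log; the directory must exist and be
# writable by the cluster user (typically hacluster).
PCMK_logfile=/my/new/location/pacemaker.log
```

Pacemaker reads this file at daemon startup, so the cluster must be restarted for the change to take effect.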

>   # Log to the system log daemon. When in doubt, set to yes.
>   to_syslog: yes
>   # Log debug messages (very verbose). When in doubt, leave off.
>   debug: off
>   # Log messages with time stamps. When in doubt, set to on
>   # (unless you are only logging to syslog, where double
>   # timestamps can be annoying).
>   timestamp: on
>   logger_subsys {
> subsys: QUORUM
> debug: off
>   }
> }
> quorum {
>   provider: corosync_votequorum
>   expected_votes: 3
> }
> 
> Thanks,
> Chris

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] changing pacemaker.log location

2016-08-12 Thread Christopher Harvey
I'm surprised I'm having such a hard time figuring this out on my own.
I'm running pacemaker 1.1.13 and corosync-2.3.4 and want to change the
location of pacemaker.log.

By default it is located in /var/log.

I looked in corosync.c and found the following lines in mcp_read_config():

get_config_opt(config, local_handle, KEY_PREFIX "to_logfile",
   &logfile_enabled, "on");
get_config_opt(config, local_handle, KEY_PREFIX "logfile",
   &logfile, "/var/log/pacemaker.log");

I can't find any other documentation.

Here is my corosync.conf file.

totem {
  version: 2
  # Need a cluster name for now:
  #   https://github.com/corosync/corosync/issues/137
  cluster_name: temp
  crypto_cipher: aes256
  crypto_hash: sha512

  interface {
ringnumber: 0
bindnetaddr: 192.168.132.10
mcastport: 5405
  }
  transport: udpu
  heartbeat_failures_allowed: 3
}

nodelist {
  node {
ring0_addr: 192.168.132.25
nodeid: 1
name: a
  }

  node {
ring0_addr: 192.168.132.21
nodeid: 2
name: b
  }

  node {
ring0_addr: 192.168.132.10
nodeid: 3
name: c
  }
}

logging {
  # Log the source file and line where messages are being
  # generated. When in doubt, leave off. Potentially useful for
  # debugging.
  fileline: on
  # Log to standard error. When in doubt, set to no. Useful when
  # running in the foreground (when invoking 'corosync -f')
  to_stderr: no
  # Log to a log file. When set to 'no', the 'logfile' option
  # must not be set.
  to_logfile: yes
  logfile: /my/new/location/corosync.log
  # Log to the system log daemon. When in doubt, set to yes.
  to_syslog: yes
  # Log debug messages (very verbose). When in doubt, leave off.
  debug: off
  # Log messages with time stamps. When in doubt, set to on
  # (unless you are only logging to syslog, where double
  # timestamps can be annoying).
  timestamp: on
  logger_subsys {
subsys: QUORUM
debug: off
  }
}
quorum {
  provider: corosync_votequorum
  expected_votes: 3
}

Thanks,
Chris



[ClusterLabs] Releasing crmsh version 2.3.0

2016-08-12 Thread Kristoffer Grönlund

Hello everyone!

I am proud to present crmsh version 2.3.0, the latest stable
release. I would recommend all users to upgrade to 2.3.0 if they
can.

For this release, I would like to begin by highlighting the new
contributors to crmsh since 2.2.0 was released in January:

* Marc A. Smith added the new subcommand "configure load push", which
  removes any configuration lines that aren't included in the cib
  provided when pushing.

* Andrei Maruha added an optional name parameter to the "corosync
  add-node" command, and made the add-node command recycle old node
  IDs if possible.

* Kai Kang fixed a build system bug in the removal of generated docs
  that caused issues with parallel make.

* Daniel Hoffend contributed various fixes improving support for
  building crmsh for Debian and Ubuntu.

* Pedro Salgado fixed a bug in the graph rendering code in crmsh,
  added a tox configuration file to make testing with multiple
  versions of Python easy, and updated the Travis CI configuration to
  use tox.

* Nate Clark fixed a bug in the parser for fencing hierarchies.
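The new "configure load push" subcommand mentioned above can be sketched as follows (the file name is illustrative, and this assumes a running cluster with crmsh 2.3.0 installed):

```shell
# Replace the live configuration with the contents of new.cib;
# unlike "load replace", any configuration lines not present in the
# file are removed from the CIB.
crm configure load push new.cib
```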

I would also like to thank all the other contributors, testers and
users who have helped in making this release as stable and reliable as
possible.

Some of the other major features in 2.3.0 include:

* Support for the new event-based alerts feature in Pacemaker 1.1.15

* Greatly improved timezone handling in crm report and the history
  explorer

* Improvements to the cluster scripts / wizards, as well as new
  wizards for LVM on DRBD, NFS on LVM and DRBD, and VMware/vCenter

* Better support for fencing remote nodes

The source code can be downloaded from Github:

* https://github.com/ClusterLabs/crmsh/releases/tag/2.3.0

Packages for several popular Linux distributions can be downloaded
from the Stable repository at the OBS:

* http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/

Archives of the tagged release:

* https://github.com/ClusterLabs/crmsh/archive/2.3.0.tar.gz
* https://github.com/ClusterLabs/crmsh/archive/2.3.0.zip

For the full list of changes since version 2.2.0, see the ChangeLog,
available at:

* https://github.com/ClusterLabs/crmsh/blob/2.3.0/ChangeLog


As usual, a huge thank you to all contributors and users of crmsh!

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com




[ClusterLabs] Antw: ocf:heartbeat:pgsql not starting

2016-08-12 Thread Ulrich Windl
Two tips:

1) Did you stop the configured postgres in the cluster and put it into 
maintenance mode while trying ocf-tester?
2) When testing my RAs I temporarily replace "#!/bin/sh" with "#!/bin/sh -x". 
It produces a lot of output, but sometimes you'll find the problem.
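A minimal sketch of tip 2, applied to a scratch copy so the installed agent stays untouched (the file path here is illustrative; always revert the change when done):

```shell
# Create a scratch stand-in for a resource agent
printf '#!/bin/sh\necho started\n' > /tmp/demo-agent

# Switch the interpreter line to trace mode (GNU sed in-place edit)
sed -i '1s|^#!/bin/sh$|#!/bin/sh -x|' /tmp/demo-agent

# The shebang now enables command tracing when the agent runs
head -n1 /tmp/demo-agent
```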

Regards,
Ulrich

>>> Darren Kinley  wrote on 11.08.2016 at 23:44 in
message <0c9f39fd10c20e49bdfe9c5b09c5e7d83f955...@exbermd01.ds.mda.ca>:
> Hi,
> 
> I have PostgreSQL 9.3 replicated and I'm trying to put it under Pacemaker 
> control
> using ocf:heartbeat:pgsql provided by SLES12SP1.
> 
> This is the crmsh script that I used to configure Pacemaker.
> 
> configure cib new pgsql_cfg --force
> configure primitive res-ars-pgsql ocf:heartbeat:pgsql \
>pgctl="/usr/lib/postgresql93/bin/pg_ctl" \
>psql="/usr/lib/postgresql93/bin/psql" \
>pgdata="/var/lib/pgsql/data/" \
>rep_mode="sync" \
>node_list="ars1 ars2" \
>restore_command="cp /var/lib/pgsql/pg_archive/%f %p" \
>primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 
> keepalives_count=5" \
>master_ip="192.168.244.223" \
>restart_on_promote='true' \
>pghost="191.168.244.223" \
>repuser="postgres" \
>check_wal_receiver='true' \
>monitor_user='postgres' \
>monitor_password='xxx' \
>op start   timeout="120s" interval="0s"  on-fail="restart" \
>op monitor timeout="120s" interval="4s" on-fail="restart" \
>op monitor timeout="120s" interval="3s"  on-fail="restart" 
> role="Master" \
>op promote timeout="120s" interval="0s"  on-fail="restart" \
>op demote  timeout="120s" interval="0s"  on-fail="stop" \
>op stoptimeout="120s" interval="0s"  on-fail="block" \
>op notify  timeout="90s" interval="0s"
> configure ms ms-ars-pgsql res-ars-pgsql \
>meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 
> notify=true
> configure colocation col-ars-pgsql-with-drbd inf: ms-ars-pgsql:Master 
> ms-ars-drbd:Master
> configure cib commit pgsql_cfg
> 
> I have a ~postgres/.pgpass
> 
> 
> My nodes remain stopped and only once during the 12 hours I've been working 
> on this
> did both nodes try to bring up PG (both in recovery mode) before shutting 
> them both down.
> 
> When running ocf-tester, I think I'm supposed to name the master/slave resource.
> 
> ars2:/usr/lib/ocf/resource.d/heartbeat # ocf-tester -v -n ms-ars-pgsql `pwd`/pgsql
> Beginning tests for /usr/lib/ocf/resource.d/heartbeat/pgsql...
> Testing permissions with uid nobody
> Testing: meta-data
> Testing: meta-data
> ...
> 
> ...
> Testing: validate-all
> Checking current state
> Testing: stop
> INFO: waiting for server to shut down... done; server stopped
> INFO: PostgreSQL is down
> Testing: monitor
> INFO: PostgreSQL is down
> Testing: monitor
> ocf-exit-reason:Setup problem: couldn't find command: /usr/bin/pg_ctl
> Testing: start
> INFO: server starting
> INFO: PostgreSQL start command sent.
> INFO: PostgreSQL is started.
> Testing: monitor
> Testing: monitor
> INFO: Don't check /var/lib/pgsql/data during probe
> Testing: notify
> Checking for demote action
> ocf-exit-reason:Not in a replication mode.
> Checking for promote action
> ocf-exit-reason:Not in a replication mode.
> Testing: demotion of started resource
> ocf-exit-reason:Not in a replication mode.
> * rc=6: Demoting a start resource should not fail
> Testing: promote
> ocf-exit-reason:Not in a replication mode.
> * rc=6: Promote failed
> Testing: demote
> ocf-exit-reason:Not in a replication mode.
> * rc=6: Demote failed
> Aborting tests
> 
> 
> 'Not in a replication mode' disagrees with the res-ars-pgsql above.
> I'm not sure that the pacemaker.log for CIB changes is needed.
> 
> Aug 11 09:19:53 [2757] ars2 pengine: info: clone_print:   Master/Slave Set: ms-ars-pgsql [res-ars-pgsql]
> Aug 11 09:19:53 [2757] ars2 pengine: info: short_print:   Stopped: [ ars1 ars2 ]
> Aug 11 09:19:53 [2757] ars2 pengine: info: get_failcount_full:   res-ars-pgsql:0 has failed INFINITY times on ars1
> Aug 11 09:19:53 [2757] ars2 pengine:  warning: common_apply_stickiness:  Forcing ms-ars-pgsql away from ars1 after 100 failures (max=100)
> Aug 11 09:19:53 [2757] ars2 pengine: info: get_failcount_full:   ms-ars-pgsql has failed INFINITY times on ars1
> Aug 11 09:19:53 [2757] ars2 pengine:  warning: common_apply_stickiness:  Forcing ms-ars-pgsql away