[Pacemaker] Corosync fails to start when NIC is absent

2015-01-09 Thread Kostiantyn Ponomarenko
Hi guys,

Corosync fails to start if a network interface referenced in its configuration
is not present in the system.
Even with rrp_mode: passive, the problem is the same when at least one of the
configured ring interfaces is absent from the system.

Is this the expected behavior?
I thought that when you use redundant rings, it is enough to have at least
one NIC configured in the system. Am I wrong?
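
For reference, a passive-RRP setup declares both rings in corosync.conf roughly like this (the addresses and ports below are placeholders I made up, not values from this thread); the behavior described above means corosync still expects both bindnetaddr networks to exist on the node at startup:

```
totem {
    version: 2
    rrp_mode: passive

    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.0.0
        mcastport: 5407
    }
}
```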

Thank you,
Kostya
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] BUG: crm_mon prints status of clone instances being started as 'Started'

2015-01-09 Thread Vladislav Bogdanov

Hi all,

It seems like lib/pengine/clone.c/clone_print() doesn't respect pending 
state of clone/ms resource instances in the cumulative output:


short_print(list_text, child_text,
            rsc->variant == pe_master ? "Slaves" : "Started",
            options, print_data);


In the by-node output, crm_mon prints the correct "Starting".

Best,
Vladislav



Re: [Pacemaker] Clustermon issue

2015-01-09 Thread Marco Querci

I tried to insert some check code in my script:

#!/bin/bash

echo "$(date)" >> /tmp/check

monitorfile=/tmp/clustermonitor.html
hostname=$(hostname)

echo "Cluster state changes detected" | mail -r "$hostname@domain" -s "Cluster Monitor" -a "$monitorfile" mquerc...@gmail.com


to check whether the script is being called or not.
In the /tmp directory there is a clustermonitor.html file, which is created by 
the ClusterMon resource, but no check file is present.
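
One way to narrow this down is a debug-only version of the notification script that logs every invocation along with the environment ClusterMon passes in (the log path here is an arbitrary choice of mine, and this is a sketch, not a drop-in replacement for the script above):

```shell
#!/bin/sh
# Debug sketch: append a timestamp and any CRM_notify_* variables on
# every invocation, so we can tell whether ClusterMon runs the script
# at all, and with what context.
LOG="${CLUSTERMON_DEBUG_LOG:-/tmp/clustermon_debug.log}"
{
    printf '%s invoked\n' "$(date)"
    env | grep '^CRM_' || true   # ClusterMon exports CRM_notify_* vars
} >> "$LOG"
```

If the log stays empty after a cluster state change, the agent is not executing the script; if it fills up but no mail arrives, the problem is in the mail command instead.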


Thanks.



Il 08/01/2015 03:31, Andrew Beekhof ha scritto:

And there is no indication this is being called?


On 7 Jan 2015, at 6:21 pm, Marco Querci mquerc...@gmail.com wrote:

#!/bin/bash

monitorfile=/tmp/clustermonitor.html
hostname=$(hostname)

echo "Cluster state changes detected" | mail -r "$hostname@domain" -s "Cluster Monitor" -a "$monitorfile" mquerc...@gmail.com


Thanks.


Il 06/01/2015 01:21, Andrew Beekhof ha scritto:

On 6 Jan 2015, at 3:37 am, Marco Querci mquerc...@gmail.com wrote:

Hi All.
Any news for my problem?

Maybe post your /home/administrator/clustermonitor_notification.sh script?


Many thanks.


Il 19/12/2014 12:13, Marco Querci ha scritto:

Many thanks for your reply.
Here is my configuration:

<cib epoch="167" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" cib-last-written="Thu Dec 18 20:04:43 2014" update-origin="langate1" update-client="crmd" crm_feature_set="3.0.9" have-quorum="1" dc-uuid="langate1">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-97629de"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="classic openais (with plugin)"/>
        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1418929320"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="langate2" uname="langate2">
        <instance_attributes id="nodes-langate2"/>
      </node>
      <node id="langate1" uname="langate1">
        <instance_attributes id="nodes-langate1"/>
      </node>
    </nodes>
    <resources>
      <group id="Gateway">
        <primitive class="ocf" id="ClusterIP_int" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="ClusterIP_int-instance_attributes">
            <nvpair id="ClusterIP_int-instance_attributes-ip" name="ip" value="192.168.0.254"/>
            <nvpair id="ClusterIP_int-instance_attributes-cidr_netmask" name="cidr_netmask" value="32"/>
            <nvpair id="ClusterIP_int-instance_attributes-nic" name="nic" value="eth3"/>
          </instance_attributes>
          <operations>
            <op id="ClusterIP_int-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
        </primitive>
        <primitive class="lsb" id="WanFailover" type="wanfailover">
          <instance_attributes id="WanFailover-instance_attributes"/>
          <operations>
            <op id="WanFailover-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
          <meta_attributes id="WanFailover-meta_attributes"/>
        </primitive>
      </group>
      <clone id="Shorewall-clone">
        <primitive class="lsb" id="Shorewall" type="shorewall">
          <instance_attributes id="Shorewall-instance_attributes"/>
          <operations>
            <op id="Shorewall-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
        </primitive>
        <meta_attributes id="Shorewall-clone-meta"/>
      </clone>
      <group id="External">
        <primitive class="ocf" id="ClusterIP_ext1" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="ClusterIP_ext1-instance_attributes">
            <nvpair id="ClusterIP_ext1-instance_attributes-ip" name="ip" value="10.10.10.2"/>
            <nvpair id="ClusterIP_ext1-instance_attributes-cidr_netmask" name="cidr_netmask" value="32"/>
            <nvpair id="ClusterIP_ext1-instance_attributes-nic" name="nic" value="eth0"/>
          </instance_attributes>
          <operations>
            <op id="ClusterIP_ext1-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
        </primitive>
        <primitive class="ocf" id="ClusterIP_ext2" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="ClusterIP_ext2-instance_attributes">
            <nvpair id="ClusterIP_ext2-instance_attributes-ip" name="ip" value="172.16.0.2"/>
            <nvpair id="ClusterIP_ext2-instance_attributes-cidr_netmask" name="cidr_netmask" value="32"/>
            <nvpair id="ClusterIP_ext2-instance_attributes-nic" name="nic" value="eth1"/>
          </instance_attributes>
          <operations>
            <op id="ClusterIP_ext2-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
        </primitive>
        <primitive class="lsb" id="Fail2ban" type="fail2ban">
          <instance_attributes id="Fail2ban-instance_attributes"/>
          <operations>
            <op