>>> Sriram <sriram...@gmail.com> wrote on 08.08.2017 at 09:30 in message
<CAMvdjurcQc6t=ZfGr=crl25xq0je9h9f_tvzxyxvan3n+dv...@mail.gmail.com>:
> Hi Ken & Jan,
> 
> In the cluster we have, there is only one resource running. It's an OPT-IN
> cluster with resource-stickiness set to INFINITY.
> 
> Just to clarify my question, let's take a scenario where there are four
> nodes N1, N2, N3, N4:
> a. N1 comes up first, starts the cluster.

The cluster will start once it has a quorum.

> b. N1 checks that there is no resource running, so it will add the
> resource (R) with a location constraint (let's say score 100).
> c. So resource (R) now runs on N1.
> d. N2 comes up next, checks that resource (R) is already running on N1, so
> it will update the location constraint (let's say score 200).
> e. N3 comes up next, checks that resource (R) is already running on N1, so
> it will update the location constraint (let's say score 300).

See my remark on quorum above.

> f. N4 comes up next, checks that resource (R) is already running on N1, so
> it will update the location constraint (let's say score 400).
> g. If for some reason N1 goes down, resource (R) shifts to N4 (as its
> score is higher than anyone else's).
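The per-node constraint updates in steps b through f could be issued roughly like this (a sketch assuming the pcs shell; resource and node names are the ones from the scenario, and in an opt-in cluster each node needs such a constraint before it may run R at all):

```shell
# b. On N1: no copy of R is running yet, so create it with an initial
#    location preference (opt-in cluster: no constraint means "never run here").
pcs constraint location R prefers N1=100

# d./e./f. Each node that joins later adds a higher preference for itself:
pcs constraint location R prefers N2=200   # run on N2
pcs constraint location R prefers N3=300   # run on N3
pcs constraint location R prefers N4=400   # run on N4

# g. With resource-stickiness=INFINITY, R stays on N1 despite the higher
#    scores elsewhere; only when N1 fails does placement fall to the
#    highest-scoring surviving node, here N4.
```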
> 
> In this case, is it possible to notify the nodes N2, N3 that the newly
> elected active node is N4?

What type of notification, and what would the node do with it?
Any node in the cluster always has up-to-date configuration information, so it
also knows the status of the other nodes.
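For example, a standby node can ask its local cluster stack where R is active at any time (an illustrative fragment; these commands need a live Pacemaker cluster and assume the resource name R from the scenario):

```shell
# Ask the cluster where resource R currently runs:
crm_resource --resource R --locate

# One-shot status snapshot, listing node and resource state:
crm_mon -1
```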

> 
> I went through clone notifications and master/slave; it looks like they
> either require identical (anonymous), unique, or stateful resources to be
> running on all the nodes of the cluster, whereas in our case there is only
> one resource running in the whole cluster.

Maybe the main reason for not having notifications is that if a node fails 
hard, it won't be able to send out much status information to the other nodes.

Regards,
Ulrich

> 
> Regards,
> Sriram.
> 
> 
> 
> 
> On Mon, Aug 7, 2017 at 11:28 AM, Sriram <sriram...@gmail.com> wrote:
> 
>>
>> Thanks Ken, Jan. Will look into the clone notifications.
>>
>> Regards,
>> Sriram.
>>
>> On Sat, Aug 5, 2017 at 1:25 AM, Ken Gaillot <kgail...@redhat.com> wrote:
>>
>>> On Thu, 2017-08-03 at 12:31 +0530, Sriram wrote:
>>> >
>>> > Hi Team,
>>> >
>>> >
>>> > We have a four node cluster (1 active : 3 standby) in our lab for a
>>> > particular service. If the active node goes down, one of the three
>>> > standby nodes becomes active. Now there will be (1 active : 2
>>> > standby : 1 offline).
>>> >
>>> >
>>> > Is there any way where this newly elected node sends notification to
>>> > the remaining 2 standby nodes about its new status ?
>>>
>>> Hi Sriram,
>>>
>>> This depends on how your service is configured in the cluster.
>>>
>>> If you have a clone or master/slave resource, then clone notifications
>>> are probably what you want (not alerts, which is the path you were going
>>> down -- alerts are designed to, e.g., email a system administrator after
>>> an important event).
>>>
>>> For details about clone notifications, see:
>>>
>>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_clone_resource_agent_requirements
>>>
>>> The RA must support the "notify" action, which will be called when a
>>> clone instance is started or stopped. See the similar section later for
>>> master/slave resources for additional information. See the mysql or
>>> pgsql resource agents for examples of notify implementations.
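As a rough illustration, the notify entry point of such an agent could look like this (a minimal sketch, not taken from the mysql or pgsql agents; the OCF_RESKEY_CRM_meta_notify_* variable names are those Pacemaker exports for clone notifications):

```shell
# Minimal sketch of clone-notification handling in an OCF resource agent.
# Pacemaker invokes "<agent> notify" on every active clone instance before
# and after a peer instance starts or stops, passing the event context in
# OCF_RESKEY_CRM_meta_notify_* environment variables.
handle_notify() {
    case "${OCF_RESKEY_CRM_meta_notify_type}-${OCF_RESKEY_CRM_meta_notify_operation}" in
        post-start) echo "peer instance(s) started on: ${OCF_RESKEY_CRM_meta_notify_start_uname}" ;;
        post-stop)  echo "peer instance(s) stopped on: ${OCF_RESKEY_CRM_meta_notify_stop_uname}" ;;
        *)          echo "notify ignored" ;;
    esac
    return 0  # OCF_SUCCESS
}

# Simulate the environment Pacemaker would set for a post-start event:
OCF_RESKEY_CRM_meta_notify_type=post
OCF_RESKEY_CRM_meta_notify_operation=start
OCF_RESKEY_CRM_meta_notify_start_uname="N4"
handle_notify
```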
>>>
>>> > I was exploring "notification agent" and "notification recipient"
>>> > features, but that doesn't seem to work. /etc/sysconfig/notify.sh
>>> > doesn't get invoked even in the newly elected active node.
>>>
>>> Yep, that's something different altogether -- it's only enabled on RHEL
>>> systems, and solely for backward compatibility with an early
>>> implementation of the alerts interface. The new alerts interface is more
>>> flexible, but it's not designed to send information between cluster
>>> nodes -- it's designed to send information to something external to the
>>> cluster, such as a human, or an SNMP server, or a monitoring system.
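For contrast, an alert agent for the newer interface is simply an executable that reads the CRM_alert_* environment variables Pacemaker sets and forwards the event to an external audience (a minimal sketch; the log-file destination mentioned in the comment is only an example):

```shell
# Sketch of a Pacemaker (1.1.15+) alert agent. Pacemaker runs it with
# event context in CRM_alert_* environment variables; it is meant for
# external consumers (admins, SNMP, monitoring), not for peer nodes.
alert_to_line() {
    echo "${CRM_alert_kind}: node=${CRM_alert_node} desc=${CRM_alert_desc}"
}

# Simulated node-event environment, as Pacemaker would set it:
CRM_alert_kind=node
CRM_alert_node=N1
CRM_alert_desc=lost
alert_to_line    # a real agent might append this to e.g. /var/log/notify.log
```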
>>>
>>>
>>> > Cluster Properties:
>>> >  cluster-infrastructure: corosync
>>> >  dc-version: 1.1.17-e2e6cdce80
>>> >  default-action-timeout: 240
>>> >  have-watchdog: false
>>> >  no-quorum-policy: ignore
>>> >  notification-agent: /etc/sysconfig/notify.sh
>>> >  notification-recipient: /var/log/notify.log
>>> >  placement-strategy: balanced
>>> >  stonith-enabled: false
>>> >  symmetric-cluster: false
>>> >
>>> >
>>> >
>>> >
>>> > I m using the following versions of pacemaker and corosync.
>>> >
>>> >
>>> > /usr/sbin # ./pacemakerd --version
>>> > Pacemaker 1.1.17
>>> > Written by Andrew Beekhof
>>> > /usr/sbin # ./corosync -v
>>> > Corosync Cluster Engine, version '2.3.5'
>>> > Copyright (c) 2006-2009 Red Hat, Inc.
>>> >
>>> >
>>> > Can you please suggest if I m doing anything wrong or if there any
>>> > other mechanisms to achieve this ?
>>> >
>>> >
>>> > Regards,
>>> > Sriram.
>>> >
>>> >
>>> > _______________________________________________
>>> > Users mailing list: Users@clusterlabs.org 
>>> > http://lists.clusterlabs.org/mailman/listinfo/users 
>>> >
>>> > Project Home: http://www.clusterlabs.org 
>>> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> > Bugs: http://bugs.clusterlabs.org 
>>>
>>> --
>>> Ken Gaillot <kgail...@redhat.com>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>



