Re: [ClusterLabs] Knowing where a resource is running

2018-06-04 Thread Ken Gaillot
On Wed, 2018-05-16 at 14:37 -0500, Ryan Thomas wrote:
> I’m attempting to implement a resource that is “master” on only one
> node, but the other slave nodes know where the resource is running so
> they can forward requests to the “master” node.  It seems like this
> can be accomplished by creating a multi-state resource configured
> with 1 master and the ‘notify’ action enabled, and then having the
> slaves listen for the pre-promote or post-promote
> notifications.  However, if node A is the master, and node B is
> rebooted… when node B starts back up, I don’t think it will see a
> pre/post-promote notification because the promote has already
> occurred.  Is there a way for the resource on node B to be informed
> that node A is already the master in this case?  I know that I could
> run ‘pcs resource’, ‘pcs status’, etc. to see the local status of
> the resources and parse the output, but I’d prefer a cleaner and less
> fragile solution.  Any suggestions?
> Thanks!

You're right node B won't get notifications in that case, but you can
check the value of OCF_RESKEY_CRM_meta_notify_master_uname in a start
action when notify is true.
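For example, the start action could record the current master
somewhere the service can read it when deciding where to forward
requests. A rough sketch (hypothetical agent function; the file path
is a placeholder):

    myapp_start() {
        # Only populated when the clone has notify=true and a master
        # has already been promoted somewhere in the cluster.
        master="${OCF_RESKEY_CRM_meta_notify_master_uname:-}"
        if [ -n "$master" ]; then
            echo "$master" > /run/myapp/current_master
        fi
        # ... start the service itself here ...
        return "$OCF_SUCCESS"   # defined by ocf-shellfuncs
    }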
-- 
Ken Gaillot 
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] PAF not starting resource successfully after node reboot (was: How to set up fencing/stonith)

2018-06-04 Thread Andrei Borzenkov
On 04.06.2018 18:53, Casey & Gina wrote:
>> There are different code paths when the RA is called automatically by
>> the resource manager and when it is called manually by crm_resource.
>> The latter did not export this environment variable until 1.1.17, so
>> the documentation is correct in that you do not need 1.1.17 to use the
>> RA normally, as part of the Pacemaker configuration.
> 
> Okay, got it.
> 
>> You should be able to work around it (should you ever need to
>> manually trigger actions with crm_resource) by exporting this
>> environment variable before calling crm_resource.
> 
> Awesome, thanks!  How would I know what to set the variable value to?
> 

That's the value of crm_feature_set CIB attribute; like:


...

You can use cibadmin -Q to display current CIB.
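If you ever need it, a rough sketch of the workaround (assuming, from
the earlier discussion, that the variable in question is
OCF_RESKEY_crm_feature_set, and using "pgsqld" as a placeholder
resource name):

    # Pull crm_feature_set off the CIB root element and export it so a
    # manual crm_resource invocation sees it (pre-1.1.17 workaround):
    fs=$(cibadmin -Q | sed -n 's/.*crm_feature_set="\([^"]*\)".*/\1/p' | head -n1)
    export OCF_RESKEY_crm_feature_set="$fs"
    crm_resource --resource pgsqld --force-start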


Re: [ClusterLabs] Resource-stickiness is not working

2018-06-04 Thread Ken Gaillot
On Sat, 2018-06-02 at 22:14 +0800, Confidential Company wrote:
> On Fri, 2018-06-01 at 22:58 +0800, Confidential Company wrote:
> > Hi,
> > 
> > I have a two-node active/passive setup. My goal is to fail over a
> > resource with as little downtime as possible when a node goes down.
> > Based on my testing, when Node1 goes down, the resource fails over
> > to Node2. When Node1 comes back up (after reconnecting the physical
> > cable), the resource fails back to Node1 even though I configured
> > resource-stickiness. Is there something wrong with the
> > configuration below?
> > 
> > #service firewalld stop
> > #vi /etc/hosts --> 192.168.10.121 (Node1) / 192.168.10.122 (Node2) ----- Private Network (Direct connect)
> > #systemctl start pcsd.service
> > #systemctl enable pcsd.service
> > #passwd hacluster --> define pw
> > #pcs cluster auth Node1 Node2
> > #pcs cluster setup --name Cluster Node1 Node2
> > #pcs cluster start --all
> > #pcs property set stonith-enabled=false
> > #pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.10.123 cidr_netmask=32 op monitor interval=30s
> > #pcs resource defaults resource-stickiness=100
> > 
> > Regards,
> > imnotarobot
> 
> Your configuration is correct, but keep in mind that scores of all
> kinds are added together to determine the final placement.
> 
> In this case, I'd check that you don't have any constraints with a
> higher score preferring the other node. For example, if you
> previously 
> did a "move" or "ban" from the command line, that adds a constraint
> that has to be removed manually if you no longer want it.
> -- 
> Ken Gaillot 
> 
> 
> I'm confused. A constraint, from what I understand, means there's a
> preferred node. But if I want my resources not to have a preferred
> node, is that possible?
> 
> Regards,
> imnotarobot

Yes, that's one type of constraint -- but you may not have realized you
added one if you ran something like "pcs resource move", which is a way
of saying there's a preferred node.
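You can see this for yourself (illustrative commands, using the
ClusterIP resource from your configuration):

    # "move" works by adding an INFINITY location constraint behind
    # the scenes, which outweighs a resource-stickiness of 100:
    pcs resource move ClusterIP Node1
    # the constraint it silently created shows up here:
    pcs constraint show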

There are a variety of other constraints. For example, as you add more
resources, you might say that resource A can't run on the same node as
resource B, and if that constraint's score is higher than the
stickiness, A might move if B starts on its node.
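A hypothetical example of such a constraint (A and B are placeholder
resource names):

    # never place A on the same node as B; -INFINITY outweighs any
    # stickiness score
    pcs constraint colocation add A with B -INFINITY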

To see your existing constraints using pcs, run "pcs constraint show".
If there are any you don't want, you can remove them with various pcs
commands.
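For example (constraint IDs will vary; check them with --full):

    # list constraints along with their IDs and scores
    pcs constraint show --full
    # drop anything left behind by a "move" or "ban" on a resource
    pcs resource clear ClusterIP
    # or remove a specific constraint by its ID
    pcs constraint remove <constraint-id>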
-- 
Ken Gaillot 