Re: [ClusterLabs] Resource switchover taking more time upon shutting off one of the node in a 2 node cluster

2018-03-26 Thread Ken Gaillot
On Sat, 2018-02-24 at 15:02 +0530, avinash sharma wrote:
> Hi Ken,
> 
> Thanks for the reply. 
> Here the resources in question are RoutingManager and floatingips, which
> have no dependency on the stateful_consul resource, so I think we can
> ignore the stateful_consul_promote failures.
> RoutingManager (MS) and aaaip, nataccessgwip, accessip,
> natcpcoregwip, cpcoreip from the floatingips resource group are the
> resources for which the switchover action by crmd got delayed.

The delay happens around this:

Feb 21 21:42:26 [24021] IVM-1   lrmd:  warning: operation_finished:
stateful_wildfly_promote_0:869 - timed out after 30ms

So it's waiting on that, whether due to a constraint or for some other
reason. For example, if the transition that the promote was started in
got aborted, the cluster has to wait for that result before starting a
new transition.
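
If it turns out the promote legitimately needs more time than the
configured timeout, one option (the resource name is taken from the log
above; the timeout value is only an example) would be something like:

  pcs resource update stateful_wildfly op promote timeout=120s
  pcs resource cleanup stateful_wildfly

That only helps if the agent can eventually succeed, of course -- if the
promote is actually hanging, the agent itself needs investigating.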

> Thanks,
> Avinash Sharma
> 
> On Fri, Feb 23, 2018 at 8:57 PM, Ken Gaillot 
> wrote:
> > On Fri, 2018-02-23 at 16:15 +0530, avinash sharma wrote:
> > > Subject: Switchover of resource (MS) 'RoutingManager' and resource
> > > group 'floatingips', which have 'colocation' and 'after' constraints
> > > on each other, is taking around 5 minutes (to get promoted) when the
> > > node running the master instance goes down.
> > 
> > 
> > 
> > When Pacemaker runs the resource agent, it will log any error
> > messages that the agent prints. I didn't look at the entire log,
> > but I suspect this is the cause: the promote action didn't succeed
> > during that time:
> > 
> > > Feb 21 21:37:40 [24021] IVM-1       lrmd:   notice:
> > > operation_finished:   stateful_consul_promote_0:864:stderr [
> > > ssh_exchange_identification: Connection closed by remote host
> > >  ]
> > > Feb 21 21:37:40 [24021] IVM-1       lrmd:   notice:
> > > operation_finished:   stateful_consul_promote_0:864:stderr [
> > > rsync: connection unexpectedly closed (0 bytes received so far)
> > > [sender] ]
> > > Feb 21 21:37:40 [24021] IVM-1       lrmd:   notice:
> > > operation_finished:   stateful_consul_promote_0:864:stderr [
> > > rsync error: unexplained error (code 255) at io.c(226)
> > > [sender=3.1.2] ]
-- 
Ken Gaillot 


Re: [ClusterLabs] Dependency loop

2018-03-26 Thread Ken Gaillot
On Fri, 2018-03-16 at 13:00 +0200, George Kourvoulis wrote:
> Hi,
> 
> my logs keep being flooded by "Breaking dependency loop at
> " but I cannot figure out why. I haven't spotted such a
> loop.
> 
> redhat-release CENTOS 7.2.1511
> pcs --version 0.9.143
> 
> Here's an excerpt from the logs:
> 
> Mar 16 07:29:54 [3670] filesrv12.localdomain    pengine:     info:
> rsc_merge_weights:lvm_data-bak: Breaking dependency loop at
> cluster1_vip
> Mar 16 07:29:54 [3670] filesrv12.localdomain    pengine:     info:
> rsc_merge_weights:lvm_data-bak: Breaking dependency loop at
> lvm_titanas-bak
> (the same two messages repeat many more times within the same second)
> 
> Here's the output of "pcs constraint 

Re: [ClusterLabs] copy file

2018-03-26 Thread Ken Gaillot
On Fri, 2018-03-09 at 10:35 +0100, Mevo Govo wrote:
> Hi, 
> Thanks for the advice, I'm thinking about an optimal config for us.
> While the DB is working, it would do native DB replication. But Oracle
> needs synchronized controlfiles when it starts normally. I can save the
> file before overwriting it. Currently I mean this (c1, c2, c3, c4, c5,
> c6 are control files):
> 
> c1: on node A, local file system
> c2: on node A, on DRBD device1
> c3: on node A, on DRBD device2 (FRA)
> c4: on node B, on DRBD device2 (FRA)
> c5: on node B, on DRBD device1
> c6: on node B, local file system
> 
> c2+c3 is a "standard" Oracle config. c2 is replicated into the FRA
> (fast recovery area of Oracle). c1 (and c6) is just in case all data
> in DRBD is lost.
> c1, c2, c3, c4, c5 (but not c6) are in sync while the DB runs on node
> A. (c1, c2, c3: native DB replication; c2-c5, c3-c4: DRBD replication,
> protocol C)
> When I switch from node A to node B, c6 is out of sync (an older
> version). I can (and will) save it before it is overwritten by c5. But
> if c5 is corrupt, manual repair is needed, and there are other
> replicas to repair it from (c4, c3, c2, c1).
> If c1 and c6 were the same file on an NFS filesystem, there would be
> replication outside of DRBD without this "copy sync" problem. But in
> that case, the failure of just one component (the NFS) would make the
> Oracle DB unavailable on both nodes. (The Oracle DB stops if either
> controlfile is lost or corrupted; no automatic repair happens.)
> I think the same consideration applies to a 3-node setup.
> If we trusted DRBD, no c1 and c6 would be needed, but we are new users
> of DRBD.
> Thanks: lados.

I'm not familiar with Oracle, so I'm making some guesses here ...

If I follow you correctly, you have an Oracle configuration consisting
of two parts (c2+c3), and you're using DRBD to mirror these to another
node (c4+c5, which are really synchronized instances of c2+c3).

Somehow you are saving a copy of each file outside DRBD (c1+c6). Before
starting the db, you want a resource that checks whether the original
config needs repair and, if so, copies it from the backup outside DRBD.

It sounds like you should make a copy of the oracle agent, and modify
its start action to do what you want.
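
As a very rough sketch only (the paths below are invented -- substitute
your real controlfile locations), the copied agent's start action could
begin with something like:

  # controlfile on DRBD (c2) and local backup outside DRBD (c1);
  # hypothetical paths, adjust to your layout
  CTL=/oradata/ORCL/control01.ctl
  CTL_BACKUP=/var/local/oracle/control01.ctl

  if [ ! -s "$CTL" ]; then
      # controlfile on DRBD is missing or empty -- restore the local copy
      cp -a "$CTL_BACKUP" "$CTL"
  else
      # otherwise refresh the local copy before starting the DB
      cp -a "$CTL" "$CTL_BACKUP"
  fi

and then continue with the agent's normal start logic.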

> 2018-03-08 20:12 GMT+01:00 Ken Gaillot :
> > On Thu, 2018-03-08 at 18:49 +0100, Mevo Govo wrote:
> > > Hi, 
> > > thanks for the advice and your interest.
> > > We would use an Oracle database over DRBD. Datafiles (and control
> > > and redo files) will be on DRBD. The FRA too (on another DRBD
> > > device). But we are new to DRBD, and DRBD is also a component that
> > > can fail. We plan a scenario to recover the database without DRBD
> > > (without data loss, if possible). We would use NFS or a local
> > > filesystem for this. If we use a local FS, the control file is out
> > > of sync on the B node when switching over (from A to B). We would
> > > copy the controlfile (and redo files) from DRBD to the local FS.
> > > After this, Oracle can start, and it keeps the controlfiles
> > > synchronized. If other backup-related files (archlog, backup) are
> > > also available on the local FS of either node, we can recover the
> > > DB without DRBD (without data loss).
> > > (I know it is a worst-case scenario, because if DRBD fails, the FS
> > > on it should be available on at least one node.)
> > > Thanks: lados.
> > 
> > Why not use native database replication instead of copying files?
> > 
> > Any method getting files from a DRBD cluster to a non-DRBD node
> > will
> > have some inherent problems: it would have to be periodic, losing
> > some
> > data since the last run; it would still fail if some DRBD issue
> > corrupted the on-disk data, because you would be copying the
> > corrupted
> > data; and databases generally have in-memory state information that
> > makes files copied from a live server insufficient for data
> > integrity.
> > 
> > Native replication would avoid all that.
> > 
> > > 2018-03-07 10:20 GMT+01:00 Klaus Wenninger :
> > > > On 03/07/2018 10:03 AM, Mevo Govo wrote:
> > > > > Thanks for advices, I will try!
> > > > > lados.
> > > > >
> > > > > 2018-03-05 23:29 GMT+01:00 Ken Gaillot  > > > > >:
> > > > >
> > > > >     On Mon, 2018-03-05 at 15:09 +0100, Mevo Govo wrote:
> > > > >     > Hi,
> > > > >     > I am new to Pacemaker. I think I should use DRBD
> > > > >     > instead of copying a file. But in this case, I would
> > > > >     > copy a file from a DRBD device to an external device.
> > > > >     > Is there a built-in way to copy a file before a
> > > > >     > resource is started (and after the DRBD is promoted)?
> > > > >     > For example, a "copy" resource? I did not find it. 
> > > > >     > Thanks: lados.
> > > > >     >
> > > > >     >
> > > > >
> > > > >     There's no stock way of doing that, but you could easily
> > > > >     write an agent that simply 

Re: [ClusterLabs] symmetric-cluster=false doesn't work

2018-03-26 Thread Ken Gaillot
On Tue, 2018-03-20 at 22:03 +0300, George Melikov wrote:
> Hello,
>  
> I tried to create an asymmetric cluster via property symmetric-
> cluster=false , but my resources try to start on any node, though I
> have set locations for them.
>  
> What did I miss?
>  
> cib: https://pastebin.com/AhYqgUdw
>  
> Thank you for any help!
> 
> Sincerely,
> George Melikov

That output looks fine -- the resources are started only on nodes where
they are allowed. What are you expecting to be different?

Note that resources will be *probed* on every node (a one-time monitor
action to check whether they are already running there), but they
should only be *started* on allowed nodes.
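
For reference, an opt-in setup is just the cluster property plus an
enabling location score for each resource (the resource and node names
below are placeholders):

  pcs property set symmetric-cluster=false
  pcs constraint location my_resource prefers node1=100

Without such a location constraint, a resource in an opt-in cluster is
allowed to run nowhere.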
-- 
Ken Gaillot 


Re: [ClusterLabs] Colocation constraint for grouping all master-mode stateful resources with important stateless resources

2018-03-26 Thread Ken Gaillot
On Mon, 2018-03-26 at 15:42 +, Sam Gardner wrote:
> Thanks, Andrei and Alberto.
> 
> Alberto, I will look into the node-constraint parameters, though I
> suspect Andrei is correct - my "base" resource is DRBDFS in this
> case, and the issue I'm seeing is that a failure in my secondary
> resources does not cause the other secondary resources nor the "base"
> resource to move to the other node.
> 
> Andrei, I have no restrictions on the particulars of the rules that
> I'm putting in place - I can completely discard the rules that I have
> implemented already.
> 
> Here's a simple diagram:
> https://imgur.com/a/5LTmJ
> 
> These are my restrictions:
> 1) If any of DRBD-Master, DRBDFS, INIF-Master, or OUTIF-Master moves
> to D2, all other resources should move to D2.
> 2) If DRBDFS or DRBD-Master cannot run on either D1 or D2, all other
> resources should be stopped.
> 3) If INIF-Master or OUTIF-Master cannot run on either D1 or D2, no
> other resources should be stopped.

4) Keep INIF-Master with a working IN interface and OUTIF-Master with a
working OUT interface.

One problem is that 4) conflicts with 1). It's possible for INIF to be
working on only one node and OUTIF to be working only on the other
node.

I'm thinking you want something like this:

* Don't constrain DRBD-Master

* Colocate DRBDFS with DRBD-Master, +INFINITY

* Colocate INIF-Master and OUTIF-Master with DRBDFS, +INFINITY (using
separate individual constraints)

* Keep your INIF/OUTIF rules, but use a finite negative score. That
way, they *must* stay with DRBDFS and DRBD-Master (due to the previous
constraints), but will *prefer* a node where their interface is up. If
one of them is more important than the other, give it a stronger score,
to break the tie when one interface is up on each node.

* Order DRBDFS start after DRBD-Master promote

No order is necessary on INIF/OUTIF, since an IP can work regardless of
the file system.

That should meet your intentions.
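
In pcs terms that would look roughly like the following (resource names
are as in your diagram; "ethmonitor-inif"/"ethmonitor-outif" stand in
for whatever node attributes your ethmonitor resources actually set, and
the scores are just examples):

  pcs constraint colocation add DRBDFS with master DRBD-Master INFINITY
  pcs constraint colocation add master INIF-Master with DRBDFS INFINITY
  pcs constraint colocation add master OUTIF-Master with DRBDFS INFINITY
  pcs constraint order promote DRBD-Master then start DRBDFS
  # finite scores: prefer, but don't require, a node with the link up
  pcs constraint location INIF-Master rule role=master score=-2000 ethmonitor-inif ne 1
  pcs constraint location OUTIF-Master rule role=master score=-1000 ethmonitor-outif ne 1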

> This sounds like a particular constraint that may not be possible to
> do per our discussions in this thread.
> 
> I can get pretty close with a workaround - I'm using ethmonitor on
> the Master/Slave resources as you can see in the config, so if I
> create new "heartbeat:Dummy" active resources with the same
> ethmonitor location constraint, unplugging the interface will move
> everything over.
> 
> However, a failure of a different type on the master/slave VIPs that
> would not also be apparent on the dummy base resource would not cause
> a failover of the entire group, which isn't ideal (though admittedly
> unlikely in this particular use case).
> 
> Thanks much for all of the help,
> -- 
> Sam Gardner
> Trustwave | SMART SECURITY ON DEMAND
> 
> On 3/25/18, 6:06 AM, "Users on behalf of Andrei Borzenkov"  n...@clusterlabs.org on behalf of arvidj...@gmail.com> wrote:
> 
> > 25.03.2018 10:21, Alberto Mijares wrote:
> > > On Sat, Mar 24, 2018 at 2:16 PM, Andrei Borzenkov  > > l.com> wrote:
> > > > 23.03.2018 20:42, Sam Gardner wrote:
> > > > > Thanks, Ken.
> > > > > 
> > > > > I just want all master-mode resources to be running wherever
> > > > > DRBDFS is running (essentially). If the cluster detects that
> > > > > any of the master-mode resources can't run on the current
> > > > > node (but can run on the other per ethmon), all other master-
> > > > > mode resources as well as DRBDFS should move over to the
> > > > > other node.
> > > > > 
> > > > > The current set of constraints I have will let DRBDFS move to
> > > > > the standby node and "take" the Master mode resources with
> > > > > it, but the Master mode resources failing over to the other
> > > > > node won't take the other Master resources or DRBDFS.
> > > > > 
> > > > 
> > > > I do not think it is possible. There is no way to express a
> > > > symmetrical colocation rule like "always run A and B together".
> > > > You start with A and place B relative to A; but then A is not
> > > > affected by B's state. Attempting now to place A relative to B
> > > > will result in a loop and is ignored. See also old discussion:
> > > > 
> > > 
> > > 
> > > It is possible. Check this thread
> > > https://lists.clusterlabs.org/pipermail/users/2017-November/006788.html
> > > 
> > 
> > I do not see how it answers the question. It explains how to use
> > other criteria than node name for colocating resources, but it does
> > not change the basic fact that colocation is asymmetrical. Actually
> > this thread explicitly suggests "Pick one resource as your base
> > resource that everything else should go along with".
> > 
> > If you actually have a configuration that somehow implements
> > symmetrical colocation between resources, I would appreciate it if
> > you could post your configuration.
> > 
> > Regarding the original problem, the root cause is slightly different though.

Re: [ClusterLabs] Colocation constraint for grouping all master-mode stateful resources with important stateless resources

2018-03-26 Thread Alberto Mijares
>>
>> It is possible. Check this thread
>> https://lists.clusterlabs.org/pipermail/users/2017-November/006788.html
>>
>
> I do not see how it answers the question. It explains how to use other
> criteria than node name for colocating resources, but it does not change
> the basic fact that colocation is asymmetrical. Actually this thread
> explicitly suggests "Pick one resource as your base resource that
> everything else should go along with".
>


To be honest, I didn't fully read the OP. I just read a fragment and
it looked to me like the grouping constraint could work.

Sorry, guys. Please ignore my post.

Best regards,


Alberto Mijares


Re: [ClusterLabs] Colocation constraint for grouping all master-mode stateful resources with important stateless resources

2018-03-26 Thread Sam Gardner
Thanks, Andrei and Alberto.

Alberto, I will look into the node-constraint parameters, though I suspect 
Andrei is correct - my "base" resource is DRBDFS in this case, and the issue 
I'm seeing is that a failure in my secondary resources does not cause the other 
secondary resources nor the "base" resource to move to the other node.

Andrei, I have no restrictions on the particulars of the rules that I'm putting 
in place - I can completely discard the rules that I have implemented already.

Here's a simple diagram:
https://imgur.com/a/5LTmJ

These are my restrictions:
1) If any of DRBD-Master, DRBDFS, INIF-Master, or OUTIF-Master moves to D2, all 
other resources should move to D2.
2) If DRBDFS or DRBD-Master cannot run on either D1 or D2, all other resources 
should be stopped.
3) If INIF-Master or OUTIF-Master cannot run on either D1 or D2, no other 
resources should be stopped.


This sounds like a particular constraint that may not be possible to do per our 
discussions in this thread.

I can get pretty close with a workaround - I'm using ethmonitor on the 
Master/Slave resources as you can see in the config, so if I create new 
"heartbeat:Dummy" active resources with the same ethmonitor location 
constraint, unplugging the interface will move everything over.

However, a failure of a different type on the master/slave VIPs that would not 
also be apparent on the dummy base resource would not cause a failover of the 
entire group, which isn't ideal (though admittedly unlikely in this particular 
use case).

Thanks much for all of the help,
-- 
Sam Gardner
Trustwave | SMART SECURITY ON DEMAND

On 3/25/18, 6:06 AM, "Users on behalf of Andrei Borzenkov" 
 wrote:

>25.03.2018 10:21, Alberto Mijares wrote:
>> On Sat, Mar 24, 2018 at 2:16 PM, Andrei Borzenkov  
>> wrote:
>>> 23.03.2018 20:42, Sam Gardner wrote:
 Thanks, Ken.

 I just want all master-mode resources to be running wherever DRBDFS is 
 running (essentially). If the cluster detects that any of the master-mode 
 resources can't run on the current node (but can run on the other per 
 ethmon), all other master-mode resources as well as DRBDFS should move 
 over to the other node.

 The current set of constraints I have will let DRBDFS move to the standby 
 node and "take" the Master mode resources with it, but the Master mode 
 resources failing over to the other node won't take the other Master 
 resources or DRBDFS.

>>>
>>> I do not think it is possible. There is no way to express a symmetrical
>>> colocation rule like "always run A and B together". You start with A and
>>> place B relative to A; but then A is not affected by B's state.
>>> Attempting now to place A relative to B will result in a loop and is
>>> ignored. See also old discussion:
>>>
>> 
>> 
>> It is possible. Check this thread
>> https://lists.clusterlabs.org/pipermail/users/2017-November/006788.html
>> 
>
>I do not see how it answers the question. It explains how to use other
>criteria than node name for colocating resources, but it does not change
>the basic fact that colocation is asymmetrical. Actually this thread
>explicitly suggests "Pick one resource as your base resource that
>everything else should go along with".
>
>If you actually have a configuration that somehow implements
>symmetrical colocation between resources, I would appreciate it if you
>could post your configuration.
>
>Regarding the original problem, the root cause is slightly different though.
>
>@Sam, the behavior you describe is correct for the constraints that you
>show. When colocating with a resource set, all resources in the set must
>be active on the same node. It means that in your case of
>
> <rsc_colocation
> id="pcs_rsc_colocation_set_drbdfs_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master"
> score="INFINITY">
>   <resource_set id="pcs_rsc_set_drbdfs">
>     <resource_ref id="drbdfs"/>
>   </resource_set>
>   <resource_set
> id="pcs_rsc_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master"
> role="Master" sequential="false">
>     <resource_ref id="drbd.master"/>
>     <resource_ref id="inside-interface-sameip.master"/>
>     <resource_ref id="outside-interface-sameip.master"/>
>   </resource_set>
> </rsc_colocation>
>
>if one IP resource (master) is moved to another node, dependent resource
>(drbdfs) simply cannot run anywhere.
>
>Before discussing low-level Pacemaker implementation, you really need to
>have a high-level model of the resources' relationships. On one hand you
>apparently intend to always run everything on the same node - on the
>other hand you have two rules that independently decide where to place
>two resources. That does not fit together.

Re: [ClusterLabs] corosync 2+ version on centos6.9

2018-03-26 Thread Jan Friesse

Jing,


Hi,

Is there a way to install corosync 2.0 or above on CentOS 6.9? I need it to
work with the pgsqlms module, which supports PostgreSQL 10. Thanks,


Yes, but no official RPMs are provided. You can compile it yourself
and it will work (I still use some RHEL 6 boxes for Needle development
;) ). Just make sure to install an older libqb (the stable one) instead
of git master (master doesn't compile on CentOS 6).
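
A rough outline of such a build on CentOS 6 (the package list and the
corosync version below are examples from memory -- adjust as needed; the
important part is building against the distro's stable libqb):

  yum install -y gcc make autoconf automake libtool pkgconfig \
      libqb-devel nss-devel
  curl -LO https://github.com/corosync/corosync/archive/v2.4.4.tar.gz
  tar xf v2.4.4.tar.gz && cd corosync-*
  ./autogen.sh && ./configure && make && make install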


Regards,
  Honza



-Jing





