Hi,
On a freshly rebooted cluster node (after crm_mon reports it as
'online'), I get the following:
wferi@vhbl08:~$ sudo crm_resource -r vm-cedar --cleanup
Cleaning up vm-cedar on vhbl03, removing fail-count-vm-cedar
Cleaning up vm-cedar on vhbl04, removing fail-count-vm-cedar
Cleaning up
pacemaker 1.1.12-11.12
openais 1.1.4-5.24.5
corosync 1.4.7-0.23.5
It's a two-node active/passive cluster and we just upgraded SLES 11 SP3
to SLES 11 SP4 (nothing else), but when we try to start the cluster
service we get the following error:
"Totem is unable to form a cluster because of
>> Agree. So far I haven't created any ordering constraints because it
>> isn't important to me, YET, the order for starting services. However I
>> have a question... if I don't have any ordering constraints at all, am
>> I still able to activate resources no matter the order?
>
> Sort of, but not
On 04/07/2016 10:30 AM, Jason Voorhees wrote:
>> FYI, commands that "move" a resource do so by adding location
>> constraints. The ID of these constraints will start with "cli-". They
>> override the normal behavior of the cluster, and stay in effect until
>> you explicitly remove them. (With pcs,
> FYI, commands that "move" a resource do so by adding location
> constraints. The ID of these constraints will start with "cli-". They
> override the normal behavior of the cluster, and stay in effect until
> you explicitly remove them. (With pcs, you can remove them with "pcs
> resource clear".)
>> At least you've got interactive debugging ability then. So try to find
>> out why the Corosync membership broke down. The output of
>> corosync-quorumtool and corosync-cpgtool might help. Also try pinging
>> the Corosync ring0 addresses between the nodes.
Dear Feri and all,
Just to come
On 04/06/2016 09:29 PM, Jason Voorhees wrote:
> Hey guys:
>
> I've been reading a little bit more about rules but there are certain
> things that are not so clear to me yet. First, I've created 3 normal
> resources and one master/slave resource (clusterdataClone). My
> resources and constraints
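As background for threads like this one, constraints tying a resource to
the master role of a master/slave clone typically look like the sketch
below; myresource is hypothetical, clusterdataClone is the clone named above:

pcs constraint colocation add myresource with master clusterdataClone INFINITY
pcs constraint order promote clusterdataClone then start myresource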
I have discovered the problem: lrmd started the resource as postgres:haclient
instead of postgres:postgres.
I don't know whether this bug has been fixed or not, because my Pacemaker is
a bit old.
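For anyone hitting something similar, a quick way to confirm which user
and group the database processes actually run as (a sketch; the process
name postgres is an assumption):

ps -o user,group,args -C postgres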
Hello,
As we have clusvcadm -Z and clusvcadm -U to freeze and unfreeze a resource
in CMAN, I would really appreciate it if someone could give some pointers
on freezing/unfreezing a resource in Pacemaker (pcs) as well.
Thanks,
Jaspal Singla
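For what it's worth, the closest Pacemaker analogue I know of is
unmanaging a resource; a sketch, assuming pcs and a hypothetical resource
name:

pcs resource unmanage myresource   # 'freeze': the cluster will not start or stop it
pcs resource manage myresource     # 'unfreeze': the cluster resumes management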
Hi!
I'm trying to use the PAF resource agent and have hit a strange problem:
Apr 07 11:37:35 [1681] a.server lrmd: debug: operation_finished:
pgsqld_start_0:1827:stdout [ waiting for server to start
2016-04-07 11:37:30 MSK FATAL: could not access private key file
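That FATAL line usually comes down to ownership/permissions on the server
key, which fits the postgres:haclient vs. postgres:postgres observation
above. A hedged check, assuming the default data directory path:

ls -l /var/lib/pgsql/data/server.key
# PostgreSQL refuses a key readable by group/others; it must be owned by
# the user the server runs as:
chown postgres:postgres /var/lib/pgsql/data/server.key
chmod 600 /var/lib/pgsql/data/server.key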
Andrei Maruha writes:
> Hi,
> Attached patch contains a little bit reworked function next_nodeid,
> because some times it is not clear why node gets new id during rejoin
> operation. Ex:
> 1. cluster with one node, id is 1;
> 2. join another node, joined node gets id
>>> Ken Gaillot schrieb am 07.04.2016 um 00:04 in
Nachricht
<57058805.8050...@redhat.com>:
> On 03/30/2016 12:18 PM, Sam Gardner wrote:
>> I'll check about the cluster-recheck-interval. Attached is a crm_report.
>>
>> In the meantime, what is all performed on that interval?