On October 1, 2018 8:01:36 PM UTC, Patrick Whitney
wrote:
[...]
>so we were lucky enough our test environment is a KVM/libvirt
>environment,
>so I used fence_virsh. Again, I had the same problem... when the "bad"
>node was fenced, dlm_controld would issue (what appears to be) a
>fence_all,
Hi Ulrich,
When I first encountered this issue, I posted this:
https://lists.clusterlabs.org/pipermail/users/2018-September/015637.html
... I was using resource fencing in this example, but, as I've mentioned
before, the issue would come about, not when fencing occurred, but when the
fenced
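For reference, a fence_virsh stonith resource for a KVM/libvirt test cluster can be declared roughly like this (a sketch in crm shell syntax; the hypervisor address, login, and host map are placeholders, not Patrick's actual values):

```
primitive st-virsh stonith:fence_virsh \
    params ipaddr=kvm-host.example.com login=root \
        identity_file=/root/.ssh/id_rsa \
        pcmk_host_map="node1:vm-node1;node2:vm-node2" \
    op monitor interval=60s
```

fence_virsh talks to the hypervisor over SSH and runs virsh against the libvirt domain mapped for the fenced node, so power fencing works even in an all-virtual test setup.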
Hi!
Maybe explain how your "node failed": Usually when a node has failed, fencing
is just to make sure that the node is really dead, so in most cases it won't
actually do a thing unless a false "node dead" had been detected. Fencing makes
sure that the fenced node has absolutely no chance to
Hi!
It would be much more helpful, if you could provide logs around the problem
events. Personally I think you _must_ implement proper fencing. In addition,
DLM seems to do its own fencing when there is a communication problem.
Regards,
Ulrich
>>> Patrick Whitney 01.10.18 16:25 >>>
Hi
On October 1, 2018 5:44:20 PM UTC, Patrick Whitney
wrote:
>We tested with both, and experienced the same behavior using both
>fencing
>strategies: an abandoned DLM lockspace. More than once, within this
>forum, I've heard that DLM only supports power fencing, but without
>explanation. Can
We tested with both, and experienced the same behavior using both fencing
strategies: an abandoned DLM lockspace. More than once, within this
forum, I've heard that DLM only supports power fencing, but without
explanation. Can you explain why DLM requires power fencing?
Best,
-Pat
On Mon,
On October 1, 2018 4:55:07 PM UTC, Patrick Whitney
wrote:
>>
>> Fencing in clustering is always required, but unlike pacemaker that
>lets
>> you turn it off and take your chances, DLM doesn't.
>
>
>As a matter of fact, DLM has a setting "enable_fencing=0|1" for what
>that's
>worth.
>
>
>> You
On 2018-10-01 03:06 AM, Ulrich Windl wrote:
digimer wrote on 28.09.2018 at 19:11 in message
> <968d00cd-fad5-8f17-edfd-7787a9964...@alteeve.ca>:
>> On 2018-09-04 8:49 p.m., Ken Gaillot wrote:
>>> On Tue, 2018-08-21 at 10:23 -0500, Ryan Thomas wrote:
I’m seeing unexpected behavior
On 2018-10-01 12:55 PM, Patrick Whitney wrote:
> Fencing in clustering is always required, but unlike pacemaker that lets
> you turn it off and take your chances, DLM doesn't.
>
>
> As a matter of fact, DLM has a setting "enable_fencing=0|1" for what
> that's worth.
I did not know
Probably you need to set enable_startup_fencing = 0 instead of enable_fencing = 0.
Best regards, Kristián Feldsam
Tel.: +420 773 303 353, +421 944 137 535
E-mail.: supp...@feldhost.cz
www.feldhost.cz - FeldHost™ – We tailor hosting services to you. Do you have
specific requirements? We can handle them.
>
> Fencing in clustering is always required, but unlike pacemaker that lets
> you turn it off and take your chances, DLM doesn't.
As a matter of fact, DLM has a setting "enable_fencing=0|1" for what that's
worth.
> You must have
> working fencing for DLM (and anything using it) to function
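For what it's worth, both of the knobs mentioned in this thread live in dlm.conf (a sketch, assuming the /etc/dlm/dlm.conf location used by recent dlm_controld packages; disabling fencing is only sane in a throwaway test cluster):

```
# /etc/dlm/dlm.conf
enable_fencing=0          # do not fence at all (risky: lockspaces can hang)
enable_startup_fencing=0  # skip fencing nodes that were absent at startup
```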
On 2018-10-01 12:04 PM, Ferenc Wágner wrote:
> Patrick Whitney writes:
>
>> I have a two node (test) cluster running corosync/pacemaker with DLM
>> and CLVM.
>>
>> I was running into an issue where, when one node failed, the remaining node
>> would appear to do the right thing, from the pcmk
Patrick Whitney writes:
> I have a two node (test) cluster running corosync/pacemaker with DLM
> and CLVM.
>
> I was running into an issue where, when one node failed, the remaining node
> would appear to do the right thing, from the pcmk perspective, that is.
> It would create a new cluster (of
On Mon, 2018-10-01 at 11:09 -0400, Marc Smith wrote:
> Hi,
>
> I'm looking for the correct constraint setup to use for the following
> resource configuration:
> --snip--
> node 1: tgtnode2.parodyne.com
> node 2: tgtnode1.parodyne.com
> primitive p_iscsi_tgtnode1 iscsi \
> params
Hi,
I'm looking for the correct constraint setup to use for the following
resource configuration:
--snip--
node 1: tgtnode2.parodyne.com
node 2: tgtnode1.parodyne.com
primitive p_iscsi_tgtnode1 iscsi \
params portal=172.16.0.12 target=tgtnode2_redirect udev=no
try_recovery=true \
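Without seeing the rest of the config it is hard to say, but the common pattern for "run the initiator away from its own target, and only after the target is up" is an anti-colocation plus an ordering constraint, roughly (crm syntax; the p_target_tgtnode1 name is hypothetical):

```
colocation col_iscsi_tgtnode1 -inf: p_iscsi_tgtnode1 p_target_tgtnode1
order ord_iscsi_tgtnode1 Mandatory: p_target_tgtnode1:start p_iscsi_tgtnode1:start
```

Ordering in pacemaker does not imply colocation, so the two constraints can coexist: the initiator starts only after the target is up, but never on the same node.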
On Sat, 2018-09-29 at 22:42 +0800, lkxjtu wrote:
>
> Version information
> [root@paas-controller-172-167-40-24:~]$ rpm -q corosync
> corosync-2.4.0-9.el7_4.2.x86_64
> [root@paas-controller-172-167-40-24:~]$ rpm -q pacemaker
> pacemaker-1.1.16-12.el7_4.2.x86_64
>
> The crmd process exited with
On Fri, 2018-09-28 at 19:41 +, Brian Vagnini wrote:
> Greetings,
> We are implementing an HA cluster solution and as a part of it, are
> using crm_mon. Part of my job is to document training materials for
> certain things. I am running into a problem in defining some of the
> information that
Hi Everyone,
I wanted to solicit input on my configuration.
I have a two node (test) cluster running corosync/pacemaker with DLM and
CLVM.
I was running into an issue where, when one node failed, the remaining node
would appear to do the right thing, from the pcmk perspective, that is.
It would
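For context, the usual DLM + CLVM stack under pacemaker looks roughly like this (crm syntax; a sketch under common defaults, not necessarily Patrick's actual configuration):

```
primitive p_dlm ocf:pacemaker:controld \
    op monitor interval=60s on-fail=fence
primitive p_clvmd ocf:heartbeat:clvm \
    op monitor interval=60s on-fail=fence
group g_locking p_dlm p_clvmd
clone cl_locking g_locking meta interleave=true
```

The on-fail=fence setting matters here: if the controld monitor fails, the node is fenced rather than being left holding a stale lockspace.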
On 01/10/18 07:45, Ulrich Windl wrote:
Ferenc Wágner wrote on 27.09.2018 at 21:16
> in
> message <87zhw23g5p@lant.ki.iif.hu>:
>> Christine Caulfield writes:
>>
>>> I'm also looking into high-res timestamps for logfiles too.
>>
>> Wouldn't that be a useful option for the syslog
>>> digimer wrote on 28.09.2018 at 19:11 in message
<968d00cd-fad5-8f17-edfd-7787a9964...@alteeve.ca>:
> On 2018-09-04 8:49 p.m., Ken Gaillot wrote:
>> On Tue, 2018-08-21 at 10:23 -0500, Ryan Thomas wrote:
>>> I’m seeing unexpected behavior when using “unfencing” – I don’t think
>>> I’m
>>> Ken Gaillot wrote on 28.09.2018 at 15:50 in
>>> message
<1538142642.4679.1.ca...@redhat.com>:
> On Fri, 2018-09-28 at 15:26 +0530, Prasad Nagaraj wrote:
>> Hi Ken - Only if I turn off corosync on the node [ where I crashed
>> pacemaker] other nodes are able to detect and put the node as
>>> Ferenc Wágner wrote on 27.09.2018 at 21:16
in
message <87zhw23g5p@lant.ki.iif.hu>:
> Christine Caulfield writes:
>
>> I'm also looking into high-res timestamps for logfiles too.
>
> Wouldn't that be a useful option for the syslog output as well? I'm
> sometimes concerned by the
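For reference, corosync already supports coarse per-message timestamps in its own logfile via corosync.conf (a sketch; the high-res variant discussed here was still in development at the time):

```
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    timestamp: on
}
```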
lkxjtu,
corosync.log has kept printing the following messages for several days. What's
wrong with the corosync cluster? The CPU load is not high now.
Interesting messages from logs you've sent are:
Sep 30 01:23:28 [127667] paas-controller-172-21-0-2 corosync warning
[MAIN ]