Hi list, I need your help.
[root@voipserver ~]# pcs status
Cluster name: ClusterKrusher
Stack: corosync
Current DC: voipserver.backup (version 1.1.16-12.el7_4.2-94ff4df) - partition with quorum
Last updated: Tue Oct 17 19:46:05 2017
Last change: Tue Oct 17 19:28:22 2017 by root via cibadmin on
That makes sense. I've tried copying the anything resource and changing its
name and id (which I guess should be enough to make Pacemaker treat them as
different resources), but I still have the same problem.
After more debugging I have reduced the problem to this:
* First cloned resource running fine
*
On Fri, 2017-09-22 at 18:30 +0200, Ferenc Wágner wrote:
> Ken Gaillot writes:
>
> > Hmm, stop+reload is definitely a bug. Can you attach (or email it to me
> > privately, or file a bz with it attached) the above pe-input file with
> > any sensitive info removed?
>
>
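For anyone following along: a saved pe-input file like the one requested above can be replayed locally with crm_simulate to see what transition the policy engine would compute from it. The file name below is a placeholder.

```shell
# Replay a saved policy-engine input (path/name is a placeholder) and
# print the transition Pacemaker would schedule from that cluster state:
crm_simulate --simulate --xml-file /var/lib/pacemaker/pengine/pe-input-123.bz2
```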
On Tue, 2017-10-17 at 15:30 +0600, Sergey Korobitsin wrote:
> Ken Gaillot ☫ → To Cluster Labs - All topics related to open-source
> clustering welcomed @ Thu, Oct 12, 2017 09:47 -0500
>
> Thanks for the answer, Ken,
>
> > > I found several ways to achieve that:
> > >
> > > 1. Put cluster in
On Tue, 2017-10-17 at 11:47 +0200, Gerard Garcia wrote:
> Thanks Ken. Yes, inspecting the logs, it seems that the failcount of the
> correctly running resource reaches the maximum number of allowed
> failures, and it gets banned on all nodes.
>
> What is weird is that I only see the failcount for the
Hi Lars,
On Mon, Oct 16, 2017 at 08:52:04PM +0200, Lars Ellenberg wrote:
> On Mon, Oct 16, 2017 at 08:09:21PM +0200, Dejan Muhamedagic wrote:
> > Hi,
> >
> > On Thu, Oct 12, 2017 at 03:30:30PM +0900, Christian Balzer wrote:
> > >
> > > Hello,
> > >
> > > 2nd post in 10 years, let's see if this
Thanks Ken. Yes, inspecting the logs, it seems that the failcount of the
correctly running resource reaches the maximum number of allowed failures,
and it gets banned on all nodes.
What is weird is that I only see the failcount for the first resource
getting updated; it is as if the failcounts are being
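A quick way to check whether the two resources really share a failcount is to query it per resource and per node. The resource and node names below are placeholders; the commands assume a live cluster.

```shell
# Query the current failcount of a resource on a given node
# (resource and node names are placeholders):
crm_failcount --resource myclone --node node1 --query

# The same via pcs:
pcs resource failcount show myclone

# Clear the failcount so the ban is lifted after the cause is fixed:
pcs resource cleanup myclone
```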
Ken Gaillot ☫ → To Cluster Labs - All topics related to open-source clustering
welcomed @ Thu, Oct 12, 2017 09:47 -0500
Thanks for the answer, Ken,
> > I found several ways to achieve that:
> >
> > 1. Put cluster in maintenance mode (as described here:
> >
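As a reference for step 1 above: maintenance mode can be toggled cluster-wide with a single property. The commands assume pcs; the crm shell has an equivalent.

```shell
# Put the whole cluster in maintenance mode: Pacemaker stops managing
# (starting/stopping/monitoring) resources but leaves them running:
pcs property set maintenance-mode=true

# ... perform the manual intervention ...

# Hand control back to the cluster:
pcs property set maintenance-mode=false
```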
>>> Ken Gaillot wrote on 16.10.2017 at 22:57 in message
<1508187437.6286.7.ca...@redhat.com>:
> On Mon, 2017-10-16 at 21:49 +0200, Lars Ellenberg wrote:
>> On Mon, Oct 16, 2017 at 09:20:52PM +0200, Lentes, Bernd wrote:
>> > - On Oct 16, 2017, at 7:38 PM, Digimer