On Mon, 2018-02-12 at 23:25 +0800, lkxjtu wrote:
> Both of these logs print when the system is abnormal, and I am very
> confused about what they mean. Does anyone know what they mean? Thank you
> very much.
> corosync version 2.4.0
> pacemaker version 1.1.16
>
> 1)
> Feb 01 10:57:58 [18927] paas-controller-192-167-0-2 crmd: warning: find_xml_node:Could
lkxjtu,

I will just comment on the corosync log.
On 2018-02-12 07:10 AM, Eric Robinson wrote:
> General question. I tried to set up a cman + corosync + pacemaker
> cluster using two corosync rings. When I start the cluster, everything
> works fine, except when I do a 'corosync-cfgtool -s' it only shows one
> ring. I tried manually editing the /etc/cluster/cluster.conf file,
> adding two sections,

Eric,
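For reference, in a cman-based stack the second corosync ring is normally declared in /etc/cluster/cluster.conf with an <altname> element under each <clusternode>, rather than by editing corosync.conf (which cman regenerates at startup). A minimal sketch, with hypothetical hostnames whose -alt names resolve to addresses on the second network:

```xml
<clusternodes>
  <clusternode name="node1" nodeid="1">
    <!-- address used for ring 1; must resolve on every node -->
    <altname name="node1-alt"/>
  </clusternode>
  <clusternode name="node2" nodeid="2">
    <altname name="node2-alt"/>
  </clusternode>
</clusternodes>
```

After restarting cman on all nodes, 'corosync-cfgtool -s' should then report the status of both ring 0 and ring 1.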
On 2018-02-12 08:15 AM, Klaus Wenninger wrote:
> On 02/12/2018 01:02 PM, Maxim wrote:
>> Hello,
>>
>> [Sorry for the message duplication. The web mail client ruined the
>> formatting of the previous e-mail =( ]
>>
>> There is a simple configuration of two cluster nodes (built via the RHEL 6
>> pcs interface) with multiple master/slave resources, disabled fencing,
>> and a single sync interface.
On 02/12/2018 04:34 PM, Maxim wrote:
> 12.02.2018 16:15, Klaus Wenninger writes:
>> On 02/12/2018 01:02 PM, Maxim wrote:
>> fencing-disabled is probably due to it being a test-setup ... RHEL 6
>> pcs being made for configuring a cman-pacemaker setup, I'm not sure if
>> it is advisable to do a setup for a corosync-2 pacemaker setup with
Hello,

[Sorry for the message duplication. The web mail client ruined the
formatting of the previous e-mail =( ]

There is a simple configuration of two cluster nodes (built via the RHEL 6
pcs interface) with multiple master/slave resources, disabled fencing,
and a single sync interface.

All is ok, mainly. But there is some problem with the cluster's
performance when the master node is powered off
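With fencing disabled, pacemaker cannot confirm that a powered-off master is really dead, which typically delays or blocks promotion of the surviving node. A sketch of enabling fencing with pcs (the device name, agent, and parameters below are hypothetical, not taken from the thread):

```shell
# Define a fence device so the cluster can confirm node death (values hypothetical)
pcs stonith create fence-node1 fence_ipmilan \
    pcmk_host_list="node1" ipaddr="10.0.0.101" login="admin" passwd="secret"

# Re-enable fencing cluster-wide
pcs property set stonith-enabled=true
```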
Jan Pokorný writes:
> I guess you are linking your python extension with one of the
> pacemaker libraries (directly or indirectly to libcrmcommon), and in
> that case, you need to rebuild pacemaker with the patched libqb[*] for
> the whole arrangement to work. Likewise in
[let's move this to developers list]
On 12/02/18 07:22 +0100, Kristoffer Grönlund wrote:
> (and especially the libqb developers)
>
> I started hacking on a python library written in C which links to
> pacemaker, and so to libqb as well, but I'm encountering a strange
> problem which I don't know
Thanks, Ondrej, for the response. I also figured out the same, reduced the
HADR_TIMEOUT, and increased the promote timeout, which helped resolve
the issue.

Regards,
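The tuning described above can be sketched as follows, assuming a hypothetical DB2 HADR master/slave resource named db2_hadr and a database named sample; note that HADR_TIMEOUT is a DB2 database configuration parameter, changed on the DB2 side rather than in pacemaker:

```shell
# Raise the promote operation timeout on the pacemaker resource (names hypothetical)
pcs resource update db2_hadr op promote timeout=900s

# Lower HADR_TIMEOUT (seconds) in the DB2 database configuration
db2 "UPDATE DB CFG FOR sample USING HADR_TIMEOUT 60"
```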