Hi,
In my environment I have deployed 5 VirtualDomains as one can see below:
[root@vdicnode01 ~]# pcs status
Cluster name: vdic-cluster
Stack: corosync
Current DC: vdicnode01-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Thu Feb 16 09:02:53 2017 Last
Hi all,
We are currently setting up a MySQL cluster (Master-Slave) over this
platform:
- Two nodes, on RHEL 7.0
- pacemaker-1.1.10-29.el7.x86_64
- corosync-2.3.3-2.el7.x86_64
- pcs-0.9.115-32.el7.x86_64
There is an IP address resource to be used as a "virtual IP".
This is the configuration of
On 16/02/17 03:51, cys wrote:
> At 2017-02-15 23:13:08, "Christine Caulfield" wrote:
>>
>> Yes, it seems that some corosync SEGVs trigger this obscure bug in
>> libqb. I've chased a few possible causes and none have been fruitful.
>>
>> If you get this then corosync has
Adding a "sleep 5" before the return in the stop function fixed the issue, so I
suspect there must be a concurrency bug somewhere in the code. Just FYI.
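The thread does not include the agent code itself; below is a minimal, hypothetical sketch of the kind of OCF-style stop action the poster describes, with the reported workaround delay in place (the function and service names are illustrative, not from the thread):

```shell
#!/bin/sh
# Hypothetical sketch of an OCF resource agent stop action showing the
# workaround reported in the thread: sleep before returning so the
# cluster stack has time to settle during teardown.
OCF_SUCCESS=0

demo_stop() {
    # ... stop the managed service here and wait for it to exit ...
    echo "service stopped"
    sleep 5   # workaround from the thread: suspected teardown race
    return $OCF_SUCCESS
}

demo_stop
rc=$?
echo "stop returned rc=$rc"
```

Note this only papers over the suspected race; the poster flags it as a diagnosis aid, not a fix.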
Original message
From: <kgail...@redhat.com>
To: 何海龙10164561
Cc: <users@clusterlabs.org>
Date: 2017-02-15 23:22
Subject: Re: Reply: Re: Reply: Re: [ClusterLabs]
Hi Everyone,
I have developed several Ansible modules for interacting with a Pacemaker
cluster using the 'pcs' utility. The modules cover enough to create a cluster,
authorize nodes, and add/delete/update resources in it (idempotency
included: for example, resources are updated only if they differ).
>>> Oscar Segarra wrote on 16.02.2017 at 13:55 in message:
> Hi Klaus,
>
> Thanks a lot, I will try to delete the stop monitor.
>
> Nevertheless, I have 6 domains configured exactly the same... Is
Suppose I have an N-node cluster where N > 2, running m*N resources. Resources
don't have preferred nodes, but since resources take RAM and CPU it is
important to distribute them equally among the nodes.
Will pacemaker do the equal distribution, e.g. m resources per node?
If a node fails, will
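The snippet cuts off before any answer, but the usual Pacemaker mechanism for RAM/CPU-aware balancing is a placement strategy plus per-node and per-resource utilization attributes. A hedged sketch with pcs (node and resource names are made up, and the `pcs node utilization` syntax assumes a newer pcs than the 0.9.x versions mentioned in this digest):

```shell
# Sketch: utilization-based placement (hypothetical names/values).
pcs property set placement-strategy=balanced

# Declare each node's capacity...
pcs node utilization node1 cpu=8 memory=16384
pcs node utilization node2 cpu=8 memory=16384

# ...and each resource's demand, so the scheduler can spread the load.
pcs resource utilization my-vm cpu=2 memory=4096
```

Without utilization attributes, Pacemaker balances only by resource count, not by actual RAM/CPU load.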
Hi,
On 02/16/2017 08:16 PM, Ulrich Windl wrote:
[snip]
Any other advice? Is ocf:heartbeat:LVM or ocf:lvm2:VolumeGroup the
more popular RA for managing LVM VGs? Any comments from other users
on experiences using either (good, bad)?
I have a little bit of experience with "ocf:heartbeat:LVM". Each
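For context, a typical ocf:heartbeat:LVM resource definition with pcs looks roughly like this (the resource and volume-group names are hypothetical, not from the thread):

```shell
# Sketch: managing an LVM volume group with ocf:heartbeat:LVM.
# "exclusive=true" keeps the VG active on only one node at a time.
pcs resource create my-vg ocf:heartbeat:LVM \
    volgrpname=vg_data exclusive=true \
    op monitor interval=30s timeout=30s
```

ocf:lvm2:VolumeGroup covers similar ground but ships with the LVM2 package rather than resource-agents, which is part of what the thread is weighing.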
On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
>> I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
>> using Virtualbox.
>>
>> I began with this.
>>
On 16/02/17 09:31, cys wrote:
> The attachment includes coredump and logs just before corosync went wrong.
>
> The packages we use:
> corosync-2.3.4-7.el7_2.1.x86_64
> corosynclib-2.3.4-7.el7_2.1.x86_64
> libqb-0.17.1-2.el7.1.x86_64
>
> But they are not available any more at mirror.centos.org.
Hi Ulrich!
On 02/16/2017 03:31 PM, Ulrich Windl wrote:
Eric Ren wrote on 16.02.2017 at 04:50 in message:
Hi,
On 11/09/2016 12:37 AM, Marc Smith wrote:
Hi,
First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package
Hi Klaus,
What is your proposal to fix this behavior?
Thanks a lot!
On 16 Feb 2017, 10:57 a.m., "Klaus Wenninger" wrote:
On 02/16/2017 09:05 AM, Oscar Segarra wrote:
> Hi,
>
> In my environment I have deployed 5 VirtualDomains as one can see below:
>
On 02/16/2017 11:02 AM, Oscar Segarra wrote:
> Hi Klaus,
>
> What is your proposal to fix this behavior?
First you can try to remove the monitor op for role=Stopped.
The startup probe will probably still fail, but the behaviour
in that case is different.
Startup probing can be disabled
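The digest does not show the exact commands, but the two steps described here would look roughly like the following with pcs (resource name taken from the thread; the op properties and the cluster property are assumptions about this setup):

```shell
# Remove the recurring monitor for the Stopped role from one of the
# VirtualDomain resources discussed in the thread.
pcs resource op remove vm-vdicone01 monitor role=Stopped

# Disable startup probing cluster-wide. Note this affects all
# resources: Pacemaker will trust its recorded state instead of
# probing each resource on startup.
pcs property set enable-startup-probes=false
```

Disabling `enable-startup-probes` is a blunt instrument; removing only the role=Stopped monitor is the narrower first step suggested here.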
Sorry, on the other node I get exactly the same log entries:
VirtualDomain(vm-vdicone01)[125890]: 2017/02/16_17:11:45 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
VirtualDomain(vm-vdicone01)[125912]: 2017/02/16_17:11:45 INFO: Configuration
On 02/16/2017 02:26 AM, Félix Díaz de Rada wrote:
>
> Hi all,
>
> We are currently setting up a MySQL cluster (Master-Slave) over this
> platform:
> - Two nodes, on RHEL 7.0
> - pacemaker-1.1.10-29.el7.x86_64
> - corosync-2.3.3-2.el7.x86_64
> - pcs-0.9.115-32.el7.x86_64
> There is an IP address
Hi Klaus,
I have deleted the stop op:
pcs resource op remove vm-vdicone01 stop interval=0s timeout=90
pcs resource op remove vm-vdicdb01 stop interval=0s timeout=90
pcs resource op remove vm-vdicsunstone01 stop interval=0s timeout=90
pcs resource op remove vm-vdicudsserver stop interval=0s
On 02/16/2017 05:42 PM, dur...@mgtsciences.com wrote:
> Klaus Wenninger wrote on 02/16/2017 03:27:07 AM:
>
> > From: Klaus Wenninger
> > To: kgail...@redhat.com, Cluster Labs - All topics related to open-
> > source clustering welcomed
On 02/16/2017 05:12 PM, Oscar Segarra wrote:
> Sorry, on the other node I get exactly the same log entries:
>
> VirtualDomain(vm-vdicone01)[125890]: 2017/02/16_17:11:45 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
>
Klaus Wenninger wrote on 02/16/2017 10:43:19 AM:
> From: Klaus Wenninger
> To: dur...@mgtsciences.com, Cluster Labs - All topics related to
> open-source clustering welcomed
> Cc: kgail...@redhat.com
> Date: 02/16/2017 10:43 AM
Klaus Wenninger wrote on 02/16/2017 03:27:07 AM:
> From: Klaus Wenninger
> To: kgail...@redhat.com, Cluster Labs - All topics related to open-
> source clustering welcomed
> Date: 02/16/2017 03:27 AM
> Subject: Re: [ClusterLabs]
>>> Ilia Sokolinski wrote on 17.02.2017 at 07:30 in
message <28de945e-894f-41b0-b191-53ce90542...@clearskydata.com>:
> Suppose I have an N-node cluster where N > 2, running m*N resources. Resources
> don't have preferred nodes, but since resources take RAM and CPU it is