>>> Eric Ren wrote on 16.02.2017 at 04:50 in message
:
> Hi,
>
> On 11/09/2016 12:37 AM, Marc Smith wrote:
>> Hi,
>>
>> First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
>> not resource-agents, but I'm hoping
>>> Jan Pokorný wrote on 15.02.2017 at 18:04 in message
<20170215170435.gk18...@redhat.com>:
> On 15/02/17 15:13 +, Christine Caulfield wrote:
>> On 15/02/17 14:50, Jan Friesse wrote:
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c)
Hi,
On 11/09/2016 12:37 AM, Marc Smith wrote:
Hi,
First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
not resource-agents, but I'm hoping someone on this list is familiar
with this RA and can provide some insight.
In my cluster configuration, I'm using ocf:lvm2:VolumeGroup
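Since the original message is truncated here, a minimal sketch of how such a primitive is commonly defined with the crm shell may help; the resource id `p_vg1` and volume group name `vg1` are placeholders, and the parameter name should be verified against the agent's metadata:

```shell
# Hypothetical example only: define a VolumeGroup resource via the crm shell.
# "p_vg1" and "vg1" are placeholders; confirm the parameter name with:
#   crm ra info ocf:lvm2:VolumeGroup
crm configure primitive p_vg1 ocf:lvm2:VolumeGroup \
    params volgrpname="vg1" \
    op monitor interval="30s" timeout="60s"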
Please note that everything works fine when there is only one clone resource
configured: the resource will get restarted and the vip will get moved.
anyway, I will check my ocfs again.
Original message
From: <kgail...@redhat.com>
To: 何海龙10164561
Cc: <users@clusterlabs.org>
Date: 2017-02-15
On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
> I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
> using Virtualbox.
>
> I began with this.
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
>
>
> When it came to fencing, I
On 15/02/17 18:04 +0100, Jan Pokorný wrote:
> On 15/02/17 15:13 +, Christine Caulfield wrote:
>> On 15/02/17 14:50, Jan Friesse wrote:
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c) 2006-2009 Red Hat, Inc.
Today I found corosync consuming 100%
Hi,
I have configured two virtualIP resources but one of them does not start
and I'm not able to find the reason.
Nothing appear in logs:
[root@vdicnode01 ~]# crm_mon -1 -r
Stack: corosync
Current DC: vdicnode02-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Wed
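When one IPaddr2 resource stays stopped with nothing in the logs, a few hedged diagnostic steps can narrow it down; the resource id `vip2` below is a placeholder for whichever resource is affected:

```shell
# Placeholder resource id "vip2" - substitute the id shown by crm_mon.
crm_mon -1 -rf                               # -f adds per-resource fail counts
crm_resource --resource vip2 --force-start   # run the RA start action by hand
grep -i vip2 /var/log/messages               # RA errors usually land in syslog
```

If the force-start succeeds by hand, the problem is more likely a constraint or score issue than the agent itself.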
I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
using Virtualbox.
I began with this.
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
When it came to fencing, I referred to this.
http://www.linux-ha.org/wiki/SBD_Fencing
To the file
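The SBD setup on that wiki page boils down to initializing a small shared LUN and pointing the daemon at it; a sketch under the assumption of a shared device visible to both nodes (the device path is a placeholder):

```shell
# Placeholder device path - use a small LUN shared by all cluster nodes.
sbd -d /dev/disk/by-id/SHARED-LUN create   # initialize the SBD header and slots
sbd -d /dev/disk/by-id/SHARED-LUN list     # verify a slot exists per node
# /etc/sysconfig/sbd then points the daemon at the device:
#   SBD_DEVICE="/dev/disk/by-id/SHARED-LUN"
```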
Hi folks,
I'm running some test scenarios with an ocf_heartbeat_iscsi pacemaker
resource,
using the following XIV multipath'ed configuration:
I created a single XIV iscsi host definition containing all the pacemaker
host (cluster node) 'Initiator's:
XIV 7812475>>host_list_ports
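For context, a hypothetical crm-shell primitive for the iscsi agent mentioned above; the portal address and target IQN are invented placeholders and should be checked against `crm ra info ocf:heartbeat:iscsi`:

```shell
# Hypothetical values throughout - substitute the real portal and target IQN.
crm configure primitive p_iscsi ocf:heartbeat:iscsi \
    params portal="192.0.2.10:3260" \
           target="iqn.2005-10.org.example:storage.xiv" \
    op monitor interval="30s"
```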
On 15/02/17 15:13 +, Christine Caulfield wrote:
> On 15/02/17 14:50, Jan Friesse wrote:
>>> Hi all,
>>>
>>> Corosync Cluster Engine, version '2.3.4'
>>> Copyright (c) 2006-2009 Red Hat, Inc.
>>>
>>> Today I found corosync consuming 100% cpu. Strace showed following:
>>>
>>> write(7,
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c) 2006-2009 Red Hat, Inc.
Today I found corosync consuming 100% cpu. Strace showed following:
write(7, "\v\0\0\0", 4) = -1 EAGAIN (Resource temporarily unavailable)
write(7, "\v\0\0\0", 4) = -1 EAGAIN
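The repeated EAGAIN suggests a non-blocking descriptor whose peer is not draining its side, so corosync spins retrying the write. One way to see what fd 7 actually is on a live system (standard tools, no corosync-specific assumptions):

```shell
# Map fd 7 of the running corosync process to its endpoint.
pid=$(pidof corosync)
ls -l /proc/$pid/fd/7              # shows the socket/pipe inode behind fd 7
lsof -p $pid | awk '$4 ~ /^7[a-z]/'  # lsof FD column maps the fd to its peer
```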
I switched back to "location" tonight to continue with the testing; at least
sometimes the vip is moving.
As I mentioned earlier, with "location", only one clone resource would get
restarted and the other two would not, but just now all 3 clone resources got
restarted and the vips got moved as
I just tried using colocation; it doesn't work.
I failed the node paas-controller-3, but sdclient_vip didn't get moved:
Online: [ paas-controller-1 paas-controller-2 paas-controller-3 ]
router_vip (ocf::heartbeat:IPaddr2): Started paas-controller-1
sdclient_vip
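For reference, a hedged sketch of how such a colocation is usually expressed with pcs; the clone id "sdclient-clone" is a placeholder and must match whatever `pcs resource` actually shows:

```shell
# Placeholder clone id - tie the vip to wherever its clone is running.
pcs constraint colocation add sdclient_vip with sdclient-clone INFINITY
pcs constraint   # verify the constraint was recorded
```

With an INFINITY score the vip is only allowed on nodes where the clone instance is active, which is usually the intent when a "location" rule proves too weak.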
13 matches