I use the same corosync / pacemaker on three hosts, but:
Host A & B have the same kernel, C is different:
A & B: 3.7.10-1.45-desktop
C: 4.1.15-8-default
> This works fine with:
> - 3.11.10-25-default / 3.11.10-29-default
> - 3.7.10-1.1-desktop / 3.11.10-29-default
>>> I use the same corosync / pacemaker on three hosts, but:
>>> Host A & B have the same kernel, C is different:
>>> A & B: 3.7.10-1.45-desktop
>>> C: 4.1.15-8-default
>>
>> I don't know why you do that, but I'd either put C into standby, or
>> put A and B into standby and upgrade them one by one to
> I'm fairly new to Pacemaker and have a few questions about
>
> The following log event and why resources were removed from my cluster:
> right before the resources were killed with SIGTERM, I noticed the
> following message.
> Dec 18 19:18:18 clusternode38.mf stonith-ng[10739]: notice: On loss of
>
I changed my setting from clusterip_hash="sourceip-sourceport" to
clusterip_hash="sourceip" and tried to ping.
From one host (not a node) on the network, I get no answer.
From another host (not a node) on the network, I get:
PING 10.0.0.97 (10.0.0.97) 56(84) bytes of data.
64 bytes from 10.0.0.97:
> ... For me, this works with multicast ARP, which gives the same
> "virtual" ARP entry to the different hosts.
Every host in the cluster gets the request, and a modulo chooses which
one answers. That's just how I understand this shared IP.
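The behavior described above can be sketched roughly like this (a toy model, not the real implementation: the kernel's CLUSTERIP target uses jhash rather than SHA-256, and node numbering comes from the iptables rule, but the bucket idea is the same):

```python
import hashlib

def bucket(src_ip, src_port=None, total_nodes=3):
    # Stand-in for the kernel hash: every node computes the same bucket
    # for an incoming packet, and only the node owning that bucket replies.
    key = src_ip if src_port is None else f"{src_ip}:{src_port}"
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % total_nodes

# clusterip_hash="sourceip": a given client always hashes to the same
# node, so a ping from a host whose bucket belongs to a node that is
# not answering gets nothing, while another host works fine.
assert bucket("10.0.0.50") == bucket("10.0.0.50")

# clusterip_hash="sourceip-sourceport": the same client can be spread
# over several nodes, one bucket per connection.
buckets = {bucket("10.0.0.50", p) for p in range(1024, 1056)}
```

This would explain why one host on the network gets replies and another gets none: it depends only on which bucket the client's source address hashes into.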
___
Users mailing list:
> Maybe I'm missing something here, and if so, my apologies, but to me it
> looks like you are trying to put the same IP address on three different
> machines SIMULTANEOUSLY.
Yes, that's what I do. But it seems normal to me; I just followed guides like
> 3 Nodes A B C.
> If the resource runs on:
> A + B => ok
> Only A => ok
> Only B => ok
> Only C => ok
> A + C => random fail
> B + C => random fail
> A + B + C => random fail
Hi,
My problem is still here. I searched but found nothing. I tried changing
the network cabling to put the 3 hosts together on the same switch, but
the problem is the same.
So with this:
primitive ip_apache_localnet ocf:heartbeat:IPaddr2 \
    params ip="10.0.0.99" cidr_netmask="32" \
    op monitor interval="30s"
clone
Hi,
I have been in trouble for a week and can't find a solution by myself.
Any help will be really appreciated!
I have used corosync / pacemaker for 3 or 4 years and everything worked
well, for failover and load-balancing.
I have a shared IP between 3 servers, and needed to remove one for an
upgrade. But after I
> On 12/15/2016 02:02 PM, al...@amisw.com wrote:
>> primitive ip_apache_localnet ocf:heartbeat:IPaddr2 params ip="10.0.0.99"
>> cidr_netmask="32" op monitor interval="30s"
>> clone cl_ip_apache_localnet ip_apache_localnet \
>> meta globally-unique="true" clone-max="3" clone-node-max="1"
>
>
> Seeing your configuration might help. Did you set globally-unique=true
> and clone-node-max=3 on the clone? If not, the other nodes can't pick up
> the lost node's share of requests.
Yes to both: I have globally-unique=true, and I changed clone-node-max=3
to clone-node-max=2, and now, as I
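A quick way to see why clone-node-max matters here (a toy model, under the assumption that each of the clone-max hash buckets must be hosted by some running clone instance to be answered):

```python
def unanswered_buckets(clone_max, live_nodes, clone_node_max):
    # Each clone instance owns one hash bucket; a node may host at most
    # clone_node_max instances, so any bucket beyond the surviving
    # capacity is simply never answered.
    capacity = live_nodes * clone_node_max
    return max(0, clone_max - capacity)

# All 3 nodes up, one instance each: every bucket is answered.
assert unanswered_buckets(3, 3, 1) == 0
# One node removed with clone-node-max=1: about a third of the clients
# hash into a bucket that nobody owns and get no reply.
assert unanswered_buckets(3, 2, 1) == 1
# clone-node-max >= 2 lets a survivor pick up the orphaned instance.
assert unanswered_buckets(3, 2, 2) == 0
```

This is why the suggestion above is to raise clone-node-max: with clone-node-max=1, taking a node out of a 3-way globally-unique clone leaves its share of requests unanswered.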