On Fri, Aug 6, 2021 at 3:47 PM Ulrich Windl
wrote:
>
> >>> Antony Stone wrote on 06.08.2021 at
> 14:41 in
> message <202108061441.59936.antony.st...@ha.open.source.it>:
> ...
> > location pref_A GroupA rule -inf: site ne cityA
> > location pref_B GroupB rule -inf: site ne cityB
>
On Friday 06 August 2021 at 14:47:03, Ulrich Windl wrote:
> Antony Stone wrote on 06.08.2021 at 14:41
>
> > location pref_A GroupA rule -inf: site ne cityA
> > location pref_B GroupB rule -inf: site ne cityB
>
> I'm wondering whether the first is equivalent to
> location pref_A GroupA r
>>> Antony Stone wrote on 06.08.2021 at
14:41 in
message <202108061441.59936.antony.st...@ha.open.source.it>:
...
> location pref_A GroupA rule -inf: site ne cityA
> location pref_B GroupB rule -inf: site ne cityB
I'm wondering whether the first is equivalent to
location pref_A Gro
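For reference, the pattern under discussion as a complete crm shell sketch; node names and the way the site attribute gets set are assumptions, while the two location rules are as quoted above:

# tag each node with its site (node names are hypothetical)
crm_attribute --type nodes --node nodeA1 --name site --update cityA
crm_attribute --type nodes --node nodeB1 --name site --update cityB
# "-inf: site ne cityX" makes every node outside that city ineligible
crm configure location pref_A GroupA rule -inf: site ne cityA
crm configure location pref_B GroupB rule -inf: site ne cityB

Note the semantic difference the question touches on: a -inf rule on "site ne cityA" forbids GroupA outside cityA outright, whereas a finite positive score on "site eq cityA" would merely prefer cityA and still allow failover elsewhere.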
Hi!
Nice to hear. What could be "interesting" is how stable corosync
communication is over a WAN-type link.
If it's not that stable, the cluster could try to fence nodes rather
frequently. OK, you disabled fencing; maybe it works without.
Did you tune the parameters?
Regards,
Ulrich
>>> Antony Sto
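The parameters in question would be corosync's totem timings; a minimal corosync.conf sketch, with values that are illustrative assumptions for a high-latency WAN rather than recommendations:

totem {
    version: 2
    # token timeout in ms; larger values ride out WAN latency spikes,
    # at the price of slower failure detection (value is an assumption)
    token: 10000
    # number of token retransmits before a node is declared lost
    token_retransmits_before_loss_const: 10
}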
On 05/08/2021 00:11, Frank D. Engel, Jr. wrote:
> In theory if you could have an independent voting infrastructure among
> the three clusters which serves to effectively create a second cluster
> infrastructure interconnecting them to support resource D, you could
Yes. It's called booth.
> have D running on one of the clusters so long as at least two of them
> can communicate
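Booth arbitrates tickets between otherwise independent clusters, with an arbitrator at a third site to break ties. A minimal /etc/booth/booth.conf sketch (addresses and the ticket name are assumptions):

transport = UDP
port = 9929
# one line per cluster site, plus a tie-breaking arbitrator
site = 192.168.1.10
site = 192.168.2.10
arbitrator = 192.168.3.10
# resource D would be tied to this ticket
ticket = "ticket-D"

The resource is then bound to the ticket on each cluster, e.g. with pcs: pcs constraint ticket add ticket-D ResourceD (resource name assumed); only the cluster currently granted the ticket may run it.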
On Thursday 05 August 2021 at 07:43:30, Andrei Borzenkov wrote:
> On 05.08.2021 00:01, Antony Stone wrote:
> >
> > Requirements 1, 2 and 3 are easy to achieve - don't connect the clusters.
> >
> > Requirement 4 is the one I'm stuck with how to implement.
>
> You either have a single cluster and d
On 05.08.2021 00:01, Antony Stone wrote:
> On Wednesday 04 August 2021 at 22:06:39, Frank D. Engel, Jr. wrote:
>
>> There is no safe way to do what you are trying to do.
>>
>> If the resource is on cluster A and contact is lost between clusters A
>> and B due to a network failure, how does cluster B know if the resource
>> is still running on cluster A or not?
I still can't understand why the whole cluster will fail when only 3 nodes are
down and a qdisk is used.
CityA -> 3 nodes to run packageA -> 3 votes
CityB -> 3 nodes to run packageB -> 3 votes
CityC -> 1 node which cannot run any package (qdisk) -> 1 vote
Max votes: 7
Quorum: 4
As long as one city is
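The arithmetic follows the usual votequorum rule, quorum = floor(expected_votes / 2) + 1, so floor(7 / 2) + 1 = 4: either city's three votes plus the qdisk vote keep a partition quorate. A sketch of the matching quorum section in corosync.conf, assuming a qdevice host at CityC (address is an assumption):

quorum {
    provider: corosync_votequorum
    device {
        # the quorum device adds exactly one vote
        votes: 1
        model: net
        net {
            host: 192.168.3.10
            # ffsplit grants the extra vote to exactly one partition
            algorithm: ffsplit
        }
    }
}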
In theory if you could have an independent voting infrastructure among
the three clusters which serves to effectively create a second cluster
infrastructure interconnecting them to support resource D, you could
have D running on one of the clusters so long as at least two of them
can communicate
On Wednesday 04 August 2021 at 22:06:39, Frank D. Engel, Jr. wrote:
> There is no safe way to do what you are trying to do.
>
> If the resource is on cluster A and contact is lost between clusters A
> and B due to a network failure, how does cluster B know if the resource
> is still running on cluster A or not?
There is no safe way to do what you are trying to do.
If the resource is on cluster A and contact is lost between clusters A
and B due to a network failure, how does cluster B know if the resource
is still running on cluster A or not?
It has no way of knowing if cluster A is even up and running.
On Wednesday 04 August 2021 at 20:57:49, Strahil Nikolov wrote:
> That's why you need a qdisk at a 3rd location, so you will have 7 votes in
> total. When 3 nodes in cityA die, all resources will be started on the
> remaining 3 nodes.
I think I have not explained this properly.
I have three node
That's why you need a qdisk at a 3rd location, so you will have 7 votes in
total. When 3 nodes in cityA die, all resources will be started on the remaining
3 nodes.
Best Regards,
Strahil Nikolov
On Wed, Aug 4, 2021 at 17:23, Antony Stone
wrote: On Wednesday 04 August 2021 at 16:07:39, Andrei Borzenkov wrote:
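If the qdevice route is taken, the setup via pcs would look roughly like this (the qnetd hostname is an assumption):

# on the third-site host: install and start the vote daemon
pcs qdevice setup model net --enable --start
# on one cluster node: register the device with corosync
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit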
On Wednesday 04 August 2021 at 16:07:39, Andrei Borzenkov wrote:
> On Wed, Aug 4, 2021 at 5:03 PM Antony Stone wrote:
> > On Wednesday 04 August 2021 at 13:31:12, Andrei Borzenkov wrote:
> > > On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
> > > > On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
On Wed, Aug 4, 2021 at 5:03 PM Antony Stone
wrote:
>
> On Wednesday 04 August 2021 at 13:31:12, Andrei Borzenkov wrote:
>
> > On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
> > > On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
> > > > Won't something like this work ? Each node in LA will have same score
> > > > of 5000, while other cities will be -5000.
On Wednesday 04 August 2021 at 13:31:12, Andrei Borzenkov wrote:
> On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
> > On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
> > > Won't something like this work ? Each node in LA will have same score
> > > of 5000, while other cities will be -5000.
On Wed, Aug 4, 2021 at 1:48 PM Antony Stone
wrote:
>
> On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
>
> > Won't something like this work ? Each node in LA will have same score of
> > 5000, while other cities will be -5000.
> >
> > pcs constraint location DummyRes1 rule score=5000 city eq LA
On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
> Won't something like this work ? Each node in LA will have same score of
> 5000, while other cities will be -5000.
>
> pcs constraint location DummyRes1 rule score=5000 city eq LA
> pcs constraint location DummyRes1 rule score=-5000 city ne LA
Won't something like this work ? Each node in LA will have same score of 5000,
while other cities will be -5000.
pcs constraint location DummyRes1 rule score=5000 city eq LA
pcs constraint location DummyRes1 rule score=-5000 city ne LA
stickiness -> 1
Best Regards,
Strahil Nikolov
Out of c
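For completeness, the "stickiness -> 1" line above as an actual command; this is the newer pcs form, and older pcs takes the same value via "pcs resource defaults resource-stickiness=1":

# keep a started resource where it is unless a score difference exceeds 1
pcs resource defaults update resource-stickiness=1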
>>> Antony Stone wrote on 03.08.2021 at
10:40 in
message <202108031040.28312.antony.st...@ha.open.source.it>:
> On Tuesday 11 May 2021 at 12:56:01, Strahil Nikolov wrote:
>
>> Here is the example I had promised:
>>
>> pcs node attribute server1 city=LA
>> pcs node attribute server2 city=NY
>>
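Putting the promised example together, the node attributes feed the location rules quoted earlier in the thread (node names, cities and scores all as given there):

# tag each node with its city
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY
# the rules then match on that attribute
pcs constraint location DummyRes1 rule score=5000 city eq LA
pcs constraint location DummyRes1 rule score=-5000 city ne LA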