Pro Turm wrote 01.11.2021 15:21:
My goal is indeed replication without a single point of failure. If the above
had been possible (replication from two masters), then the following would
also have been possible:
H would be a replica for the case when nothing else works anymore, i.e. A, B,
and D have failed.
D->H is not necessarily needed in this case.
If your goal is a high-availability system, then this schema is
overcomplicated (and has a problem with duplicated data flow).
Let's consider the architecture of a high-availability system from the beginning:
A system with a single server A is dangerous. Its destruction leads to system
unavailability and large data loss.
A system of two servers with replication A->B is safe. After the destruction
(failure) of a single server, the system is still available, though reduced to
the dangerous mode. If the failed server is A, there is a small data loss*. You
must rebuild the system ASAP.
A system of three servers A->B->C (or A->B + A->C) is safe. After a single
failure it remains safe in 2 cases out of 3, and you have more time to rebuild it.
A system of four servers A->B->C + A->D is safe after any single failure, and
even a double failure may still leave it safe (for example, if the failed
servers are D and C).
A system of five servers A->B->C + A->D->E can be in danger only after one
specific double failure (B and D).
A system of six servers A->B->C + A->D->E + A->F is safe after any double
failure.
The bigger this snowflake grows, the safer it is.
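The enumeration above can be checked mechanically. Here is a minimal sketch (the helper names and the "safe" criterion are my own modelling assumptions, not anything from Firebird itself): a configuration is treated as safe when, after removing the failed servers, at least one connected component of the remaining replication graph still contains two or more servers that replicate among themselves.

```python
from itertools import combinations

def components(nodes, edges, failed):
    """Connected components of the replication graph after removing failed servers."""
    alive = set(nodes) - set(failed)
    live_edges = [(a, b) for a, b in edges if a in alive and b in alive]
    parent = {n: n for n in alive}          # simple union-find
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for a, b in live_edges:
        parent[find(a)] = find(b)
    groups = {}
    for n in alive:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

def is_safe(nodes, edges, failed):
    """Safe: some surviving group of >= 2 servers still replicates among itself."""
    return any(len(c) >= 2 for c in components(nodes, edges, failed))

# Five-server snowflake: A->B->C + A->D->E
nodes = "ABCDE"
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "E")]

# Which double failures leave the system in danger?
dangerous = [f for f in combinations(nodes, 2)
             if not is_safe(nodes, edges, f)]
print(dangerous)   # -> [('B', 'D')]
```

Running this over all ten double failures confirms the claim in the text: only the loss of both B and D leaves the master alone with current data.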
Extremely high availability can be provided by a fully meshed network, which
stays available and safe after the failure of all but two servers. Unfortunately,
this scheme is impossible with the current built-in replication out of the box
(a sophisticated log delivery system would be required).
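The full-mesh claim can be illustrated the same way (a sketch; the six-server topology and helper are my own, not part of Firebird): with a replication link between every pair of servers, any two survivors are still directly connected, so the system only becomes dangerous once a single server remains.

```python
from itertools import combinations

# Hypothetical full mesh of six servers: every pair replicates to each other.
nodes = "ABCDEF"
mesh = list(combinations(nodes, 2))

def safe_after(failed):
    """In a full mesh any two survivors are directly linked, so the system
    is safe as long as some link still has both endpoints alive."""
    alive = set(nodes) - set(failed)
    return any(a in alive and b in alive for a, b in mesh)

# Safe after every failure of up to four servers (all but two)...
assert all(safe_after(f) for k in range(5) for f in combinations(nodes, k))
# ...but dangerous once five of the six are gone.
assert not safe_after("ABCDE")
```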
*A small data loss is unavoidable with asynchronous replication, which always
has a non-zero replication lag.
--
WBR, SD.
Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel