Hi
 
Could you please point out the pitfalls of the solution below?
 
We have:
- 3 ZooKeeper nodes in 3 data centers
- 6 Kafka brokers in 2 data centers
- replication.factor=4
- min.insync.replicas=3
- unclean.leader.election.enable=false
- rack awareness (broker.rack) to distribute partition replicas across the data centers
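The setup above might be created roughly like this (the topic name, partition count, and bootstrap address are placeholders, not part of the original question):

```shell
# Hypothetical topic creation matching the described settings
kafka-topics.sh --bootstrap-server broker1:9092 \
  --create --topic my-topic \
  --partitions 12 \
  --replication-factor 4 \
  --config min.insync.replicas=3

# In each broker's server.properties, broker.rack identifies its data center,
# e.g. broker.rack=dc1, so replicas are spread across racks/DCs:
# broker.rack=dc1
# unclean.leader.election.enable=false
```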
 
We want the cluster to stay alive after losing one data center. When a data center goes down, each partition is left with at least 1 in-sync replica and 1 replica that is not in sync.
 
In that state the cluster is available for reading but not for writing: with only 2 surviving replicas, the ISR is below min.insync.replicas=3, so producers using acks=all fail.
 
To restore write availability, we temporarily lower min.insync.replicas to 2 (using an automatic service); once the failed data center is back and its replicas have caught up, we set min.insync.replicas back to 3.
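The automated step could be a sketch along these lines, using the stock kafka-configs.sh tool (topic name and bootstrap address are placeholders; this assumes min.insync.replicas is managed as a dynamic topic-level config):

```shell
# After a data center failure: allow writes with only 2 in-sync replicas
kafka-configs.sh --bootstrap-server broker1:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config min.insync.replicas=2

# After the data center recovers and replicas catch up: restore the stricter setting
kafka-configs.sh --bootstrap-server broker1:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config min.insync.replicas=3
```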
 
Best regards,
Evgeny
