tisonkun opened a new pull request #15675: URL: https://github.com/apache/flink/pull/15675
## What is the purpose of the change

When the ZooKeeper client experiences a `ConnectionLossException`, it does not mean that the latch on the ensemble is lost, which implies that the current contender is possibly still the leader. A common case is that there is actually only one contender in the leader election, but the network between the client and the ensemble is unstable. In this case a contender such as the Dispatcher goes through a "revoke" / "re-grant" loop that cleans up and recovers internal state. This has two shortcomings:

1. The cycles are wasted, since from ZooKeeper's (i.e., the global) perspective the contender never lost its leadership.
2. Flink already suffers from several HA problems, such as FLINK-10333, which get worse when this leadership flip-flop happens frequently.

Although the general solution is to properly fix these problems in our high-availability implementation, this PR already eases the user experience quite a bit. Follow-up thoughts:

1. A previous comment inside `ZooKeeperLeaderElectionService` says `// Maybe we have to throw an exception here to terminate the JobManager`. I agree with this statement, and it would be better to use a JobManager (child) → Dispatcher (supervisor) and Dispatcher/RM (child) → ClusterEntrypoint (supervisor) model to properly handle failures, as the Actor Model does. This fully regenerates the child instead of trying to reconcile state across different leader epochs, which is error-prone. And since we tolerate temporarily suspended leadership, actually losing leadership becomes a rare event.
2. We internally implemented a LeaderStore-based high-availability abstraction, which is described in [this thread](https://lists.apache.org/x/thread.html/0839a4fb972ffdf65c8f301b94509bfbba2c3ed41c4ad32c9d3e87d2@%3Cuser.curator.apache.org%3E). It has been verified online for a while, and I am working out a way to contribute it back to our community. The interfaces and architecture have changed, so it may take some time to find a good way to roll it up.
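The core idea above — treating a temporary connection loss (SUSPENDED) differently from an actual loss of the latch (LOST) — can be sketched as a minimal state machine. This is an illustrative sketch only, not Flink's actual implementation; the class `LeadershipTracker` and its method names are hypothetical:

```java
// Hypothetical sketch: only a confirmed session loss (LOST) revokes
// leadership. A mere connection loss (SUSPENDED) keeps the contender's
// leadership, avoiding the revoke/re-grant cleanup-and-recovery loop.
public class LeadershipTracker {

    public enum ConnectionEvent { SUSPENDED, RECONNECTED, LOST }

    private boolean hasLeadership;
    private boolean suspended;

    public void grantLeadership() {
        hasLeadership = true;
        suspended = false;
    }

    public void onConnectionEvent(ConnectionEvent event) {
        switch (event) {
            case SUSPENDED:
                // Connection loss != latch loss: keep leadership, mark suspended.
                suspended = true;
                break;
            case RECONNECTED:
                // The session survived; there is no state to clean up or recover.
                suspended = false;
                break;
            case LOST:
                // The session expired: the latch is really gone, revoke now.
                hasLeadership = false;
                suspended = false;
                break;
        }
    }

    public boolean hasLeadership() {
        return hasLeadership;
    }

    public boolean isSuspended() {
        return suspended;
    }
}
```

Under this model, an unstable network produces SUSPENDED/RECONNECTED pairs that never touch the contender's internal state; only a genuine session expiry triggers the expensive revoke path.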
## Verifying this change

This change added tests and can be verified as follows:

- ZooKeeperLeaderElectionConnectionLossTest

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes)
- The S3 file system connector: (no)

## Documentation

- Does this pull request introduce a new feature? (yes)
- If yes, how is the feature documented? (docs)
