hangc0276 opened a new pull request, #4057: URL: https://github.com/apache/bookkeeper/pull/4057
### Motivation

The rack-aware placement policy prefers bookies that reside in the same rack as the BookKeeper client.

- The local node is initialized by resolving its rack information:
https://github.com/apache/bookkeeper/blob/5b5c05331757e7356579076970e61f119f5d34ae/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/RackawareEnsemblePlacementPolicyImpl.java#L207-L215
- When generating a new ensemble for a ledger, the selection algorithm sets the local node's rack as `curRack` and picks one bookie from `curRack` first:
https://github.com/apache/bookkeeper/blob/5b5c05331757e7356579076970e61f119f5d34ae/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/RackawareEnsemblePlacementPolicyImpl.java#L420-L426

However, when resolving the local node's rack information, we use the IP address to resolve the rack name, which is unfriendly to Kubernetes deployments:
https://github.com/apache/bookkeeper/blob/5b5c05331757e7356579076970e61f119f5d34ae/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/RackawareEnsemblePlacementPolicyImpl.java#L209

In Kubernetes deployments, we usually use the hostname as the bookie ID and the Pulsar broker name instead of the IP address, because the IP changes when a pod is migrated to another node.

### Modification

To avoid breaking the current behavior, this PR introduces a flag `useHostnameResolveLocalNodePlacementPolicy` in the BookKeeper client configuration to control whether the hostname is used to resolve the client's local node rack information. The flag defaults to `false`, which preserves the existing behavior.

Since this PR doesn't introduce any breaking changes, I think we can cherry-pick it back to the patch release branches (branch-4.14, branch-4.15, and branch-4.16).
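For illustration only (not the actual patch), here is a minimal sketch of how local-node rack resolution could branch on the new flag. The method name `resolveLocalNodeRack` and the `rackResolver` function are hypothetical stand-ins for the policy's DNS-to-switch mapping:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;

public class LocalNodeRackResolution {

    /**
     * Resolve the rack of the local (client-side) node. When the flag is
     * enabled, the hostname is passed to the rack resolver instead of the
     * IP address; hostnames stay stable across pod rescheduling in
     * Kubernetes, while pod IPs do not.
     */
    static String resolveLocalNodeRack(boolean useHostnameResolveLocalNodePlacementPolicy,
                                       Function<List<String>, List<String>> rackResolver)
            throws UnknownHostException {
        InetAddress localAddress = InetAddress.getLocalHost();
        // IP-based resolution is the current default; hostname-based when the flag is set.
        String name = useHostnameResolveLocalNodePlacementPolicy
                ? localAddress.getCanonicalHostName()
                : localAddress.getHostAddress();
        List<String> racks = rackResolver.apply(Collections.singletonList(name));
        return racks.isEmpty() ? "/default-rack" : racks.get(0);
    }

    public static void main(String[] args) throws Exception {
        // Toy resolver that maps any name to a fixed rack, for demonstration only.
        Function<List<String>, List<String>> resolver =
                names -> Collections.singletonList("/rack-1");
        System.out.println(resolveLocalNodeRack(true, resolver));
    }
}
```

In the real policy, the resolver would be the configured DNS-to-switch mapping. Since BookKeeper's client configuration is backed by Apache Commons Configuration, the flag should also be settable generically, e.g. `clientConf.setProperty("useHostnameResolveLocalNodePlacementPolicy", true)`, independent of whatever dedicated setter the patch adds.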
