massakam opened a new issue #2490:
URL: https://github.com/apache/bookkeeper/issues/2490


   **QUESTION**
   
   Strange behavior of ensemble placement policies
   
   We use BookKeeper with Pulsar. When we enabled an ensemble placement policy on Pulsar's broker, we observed some behavior that looks strange to us. Please tell us whether the following behaviors are expected.
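
   For context, here is a minimal sketch of how we understand the policy gets selected on a plain BookKeeper client, which we believe is what the broker does internally for its own client when `bookkeeperClientRackawarePolicyEnabled=true` is set in `broker.conf`. The metadata service URI is a placeholder:

   ```
   import org.apache.bookkeeper.client.BookKeeper;
   import org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy;
   import org.apache.bookkeeper.conf.ClientConfiguration;

   public class RackawareClientSketch {
       public static void main(String[] args) throws Exception {
           // Placeholder metadata service URI; replace with the real ZooKeeper quorum.
           ClientConfiguration conf = new ClientConfiguration()
                   .setMetadataServiceUri("zk+hierarchical://zk1:2181/ledgers");

           // Select the rack-aware policy. We assume Pulsar's broker does the
           // equivalent when bookkeeperClientRackawarePolicyEnabled=true.
           conf.setEnsemblePlacementPolicy(RackawareEnsemblePlacementPolicy.class);

           BookKeeper bk = new BookKeeper(conf);
           // ... create ledgers, write entries ...
           bk.close();
       }
   }
   ```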
   
   1. If the policy is `RackawareEnsemblePlacementPolicy`, bookies with rack information and bookies without rack information cannot coexist in the same cluster. If they coexist, we get the following error (a sketch of the apparent depth mismatch follows the log):
   ```
   11:36:13.177 [BookKeeperClientScheduler-OrderedScheduler-0-0] WARN  
o.a.b.c.RackawareEnsemblePlacementPolicyImpl - Failed to resolve network 
location for dev-bookie101.pulsar.xxx.xxx.yahoo.co.jp, using default rack for 
it : /default-rack.
   11:36:13.177 [BookKeeperClientScheduler-OrderedScheduler-0-0] ERROR 
o.a.b.net.NetworkTopologyImpl        - Error: can't add leaf node 
<Bookie:dev-bookie101.pulsar.xxx.xxx.yahoo.co.jp:3181> at depth 2 to topology:
   Number of racks: 2
   Expected number of leaves:2
   /region1/rack1/dev-bookie100.pulsar.xxx.xxx.yahoo.co.jp:3181
   /region1/rack2/dev-bookie102.pulsar.xxx.xxx.yahoo.co.jp:3181
   ```
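
   Our reading of the error (we may be wrong) is a path-depth mismatch: bookies with rack info resolve to two-segment paths like /region1/rack1, while unresolved bookies fall back to the single-segment /default-rack, and the topology requires all leaves at the same depth. A self-contained sketch of that check follows; the class and helper names are made up for illustration, and the hostnames are the ones from the log above:

   ```
   import java.util.Map;

   public class RackDepthSketch {
       // Hypothetical helper: depth is the number of path segments,
       // e.g. "/region1/rack1" -> 2, "/default-rack" -> 1.
       static int depth(String rackPath) {
           return rackPath.split("/").length - 1;
       }

       public static void main(String[] args) {
           Map<String, String> rackByBookie = Map.of(
                   "dev-bookie100.pulsar.xxx.xxx.yahoo.co.jp:3181", "/region1/rack1",
                   "dev-bookie102.pulsar.xxx.xxx.yahoo.co.jp:3181", "/region1/rack2",
                   // No rack info configured: the client falls back to /default-rack.
                   "dev-bookie101.pulsar.xxx.xxx.yahoo.co.jp:3181", "/default-rack");

           // All leaves must end up at the same depth; otherwise the topology
           // rejects the node ("Error: can't add leaf node ... at depth 2").
           long distinctDepths = rackByBookie.values().stream()
                   .map(RackDepthSketch::depth)
                   .distinct()
                   .count();
           System.out.println(distinctDepths == 1
                   ? "rack depths are consistent"
                   : "inconsistent rack depths: " + rackByBookie);
       }
   }
   ```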
   
   2. If the policy is `RackawareEnsemblePlacementPolicy`, adding or removing a bookie's rack information does not take effect dynamically (until the broker or the bookie is restarted). For example, even if we add rack information to a bookie that previously had none, that bookie is not selected as an ensemble member.
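
   For reference, this is a sketch of one way to add rack info to a bookie that previously had none, via the Pulsar admin API. The admin URL, group name, and rack path are placeholders, and the exact `updateBookieRackInfo` / `BookieInfo` signatures may differ between Pulsar versions:

   ```
   import org.apache.pulsar.client.admin.PulsarAdmin;
   import org.apache.pulsar.common.policies.data.BookieInfo;

   public class AddRackInfoSketch {
       public static void main(String[] args) throws Exception {
           // Placeholder admin endpoint.
           PulsarAdmin admin = PulsarAdmin.builder()
                   .serviceHttpUrl("http://broker.example.com:8080")
                   .build();

           // Assign a rack to a bookie that previously had none. The method and
           // the BookieInfo builder are assumptions; check your Pulsar version.
           admin.bookies().updateBookieRackInfo(
                   "dev-bookie101.pulsar.xxx.xxx.yahoo.co.jp:3181",
                   "default",
                   BookieInfo.builder()
                           .rack("/region1/rack3")
                           .hostname("dev-bookie101.pulsar.xxx.xxx.yahoo.co.jp")
                           .build());

           admin.close();
       }
   }
   ```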
   
   3. If the policy is `RegionAwareEnsemblePlacementPolicy`, region information updates do not take effect dynamically (until the broker or the bookie is restarted).
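
   The region-aware case is the same question on the client side; our understanding (an assumption on our part) is that the region is simply the first segment of the rack path, e.g. /region1/rack1, and that the policy is selected the same way, which we believe is what the broker does when `bookkeeperClientRegionawarePolicyEnabled=true` is set:

   ```
   import org.apache.bookkeeper.client.RegionAwareEnsemblePlacementPolicy;
   import org.apache.bookkeeper.conf.ClientConfiguration;

   public class RegionAwareClientSketch {
       public static void main(String[] args) {
           ClientConfiguration conf = new ClientConfiguration();

           // Region-aware placement expects paths of the form /<region>/<rack>,
           // e.g. /region1/rack1; the region is the first path segment.
           conf.setEnsemblePlacementPolicy(RegionAwareEnsemblePlacementPolicy.class);
       }
   }
   ```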
   

