equanz commented on PR #2931:
URL: https://github.com/apache/bookkeeper/pull/2931#issuecomment-1205955943

   @dlg99 https://github.com/apache/bookkeeper/pull/2931#issuecomment-1190559164
   
   Sorry for the late reply. I've checked
https://github.com/apache/bookkeeper/pull/3359#pullrequestreview-1061219674 .
   The approach (using autorecovery) sounds better. If it is merged and the
above issue still remains, I'll rebase this PR to follow the
https://github.com/apache/bookkeeper/pull/3359 interfaces.
   
   > As I understand, there is enforceMinNumRacksPerWriteQuorum option to 
prevent that (and enforceMinNumFaultDomainsForWrite, 
enforceStrictZoneawarePlacement for other placement policies)
   
   In my understanding, if we use `enforceMinNumRacksPerWriteQuorum`, we can't
temporarily re-form a write quorum for redundancy while a rack is down. I want
to avoid that.
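   For reference, a minimal `bk_server.conf` sketch of the option being
discussed (these key names exist in BookKeeper's server configuration; the
values here are only illustrative):

```
# Require each write quorum to span at least this many racks.
minNumRacksPerWriteQuorum=2
# When true, ensemble creation fails if the rack requirement cannot be met
# (e.g. while a whole rack is down), instead of falling back to a
# best-effort placement.
enforceMinNumRacksPerWriteQuorum=true
```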
   
   > As I understand, Auditor's placementPolicyCheck just detects the problem. 
Maybe it makes sense to make Auditor (optionally) put the ledgers with bad 
placement for re-replication and make AutoRecovery handle that?
   > In this case, the CLI command `triggeraudit` already exists.
   
   I think so too.
   I thought the next step would be asynchronous relocation, similar to the
auto-recovery feature (mentioned
[here](https://lists.apache.org/thread/vqwmff7rmsn54fxhcojnm2zqvz3s80lj)).
However, that has since been implemented in
https://github.com/apache/bookkeeper/pull/3359 .
   
   > Another note is that the test only covers rack-aware policy. What happens 
in the case of the region-aware or zone-aware policies?
   
   When a policy other than the rack-aware one is used, it throws an
exception. I'll add a test for that.
   
https://github.com/equanz/bookkeeper/blob/62970b3122fa1f8266763f0d3f6115a066fcabe9/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/EnsemblePlacementPolicy.java#L460-L467
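   For context, policies that don't override the method in the linked range
inherit a default implementation that throws. A self-contained sketch of that
pattern (the interface and method names below are illustrative, not the actual
BookKeeper signatures):

```java
// Sketch of the "default interface method throws" pattern: only policies
// that support the operation override it; everything else inherits the
// throwing default. PlacementPolicy/replaceToAdherePolicy are illustrative
// names, not the real BookKeeper API.
public class DefaultThrowsDemo {
    interface PlacementPolicy {
        default void replaceToAdherePolicy() {
            throw new UnsupportedOperationException(
                    "this policy does not support placement-adherence replacement");
        }
    }

    // Stand-in for a non-rack-aware policy: it does not override the default.
    static class ZoneAwareLikePolicy implements PlacementPolicy {
    }

    public static void main(String[] args) {
        PlacementPolicy policy = new ZoneAwareLikePolicy();
        try {
            policy.replaceToAdherePolicy();
            System.out.println("no exception");
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException");
        }
    }
}
```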


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
