[
https://issues.apache.org/jira/browse/BOOKKEEPER-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497061#comment-13497061
]
Rakesh R commented on BOOKKEEPER-249:
-------------------------------------
bq. after updating failed bookie's ledgers in '/ledgers/deleted/Bi' when
changing ensemble
In general, I agree with your suggestion to first record the deletion and then
proceed to rereplication. But please consider a corner case in the
ZKLedgerManager impl that can cause data loss: suppose we successfully update
the failed bookie's ledgers in '/ledgers/deleted/Bi', but ZK gets disconnected
while changing the ensemble for auto-rereplication. Now, if Bi rejoins, it will
go ahead with ledger deletion, while the rereplication process sees the bookie
is live and stops rereplicating. I feel this is a very plausible scenario with
ZKLedgerManager, and nothing comes to my mind other than using the
multi-transaction api :(
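To illustrate why the two metadata updates must be atomic, here is a minimal
in-memory sketch of the race described above. The map stands in for ZooKeeper,
and the paths ('/ledgers/deleted/Bi', a hypothetical '/ledgers/L1/ensemble')
and ledger/bookie names are illustrative only, not actual BookKeeper layout; a
real fix would use ZooKeeper's multi() transaction API so both writes commit
together or not at all.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the ZKLedgerManager race. The store is a stand-in for
// ZooKeeper; paths and names are hypothetical, chosen to match the comment.
public class GcRaceSketch {
    static Map<String, String> store = new HashMap<>();

    // Non-atomic variant: two separate writes. If ZK disconnects after the
    // first, the deletion marker exists but the ensemble still points at Bi.
    static void updateNonAtomic(boolean disconnectAfterFirst) {
        store.put("/ledgers/deleted/Bi", "L1");   // step 1: mark for deletion
        if (disconnectAfterFirst) {
            return;                               // zk disconnects here
        }
        store.put("/ledgers/L1/ensemble", "B2");  // step 2: move ledger off Bi
    }

    // Atomic variant: both writes applied together or not at all, as a
    // ZooKeeper multi() transaction would guarantee.
    static void updateAtomic(boolean disconnect) {
        if (disconnect) {
            return;                               // whole transaction aborted
        }
        store.put("/ledgers/deleted/Bi", "L1");
        store.put("/ledgers/L1/ensemble", "B2");
    }

    public static void main(String[] args) {
        updateNonAtomic(true);
        // Inconsistent state: Bi will delete L1 on rejoin, yet the ensemble
        // still lists Bi, so rereplication sees it live and stops. Data loss.
        System.out.println("deleted marker: "
                + store.containsKey("/ledgers/deleted/Bi")
                + ", ensemble moved: "
                + store.containsKey("/ledgers/L1/ensemble"));

        store.clear();
        updateAtomic(true);
        // Aborted transaction leaves no partial state behind.
        System.out.println("after aborted multi, store empty: "
                + store.isEmpty());
    }
}
```

With the real API this would be zk.multi(...) over Op.create and Op.setData
ops, so a disconnect can never leave the deletion marker without the ensemble
change.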
> Revisit garbage collection algorithm in Bookie server
> -----------------------------------------------------
>
> Key: BOOKKEEPER-249
> URL: https://issues.apache.org/jira/browse/BOOKKEEPER-249
> Project: Bookkeeper
> Issue Type: Improvement
> Components: bookkeeper-server
> Reporter: Sijie Guo
> Fix For: 4.2.0
>
> Attachments: gc_revisit.pdf
>
>
> Per discussion in BOOKKEEPER-181, it would be better to revisit the garbage
> collection algorithm in the bookie server, so this subtask is created to focus on it.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira