kevinrr888 commented on PR #4524: URL: https://github.com/apache/accumulo/pull/4524#issuecomment-2273932737
The `TODO 4131` comments have now been removed and I believe this PR is ready to be merged :tada:

I was planning to create the follow-on issues as a batch after this is merged. As suggested by @ctubbsii in our meeting today, I will post the list of all outstanding changes below. For the trivial ones, I will just open a PR without creating an issue; the more involved follow-ons will each be tracked in an issue. A hedged sketch of the conditional-write idea (the RESERVATION_COLUMN item) follows the list.

**_Here is the list of changes to be made:_**

_To be tracked by issue:_
- [ ] Refactor MultipleStoresIT to function more similarly to other Fate tests like FateIT
- [ ] Deprecate AbstractFateStore.createDummyLockID(): this includes creating a path in ZK where utilities can get a ZK lock, changing the `admin fate fail` and `admin fate delete` commands to get a LockID at this path so they no longer require the Manager to be down, and only using a LockID for a store when write operations are expected: the store should fail on write operations when writes are not expected.
- [ ] Use a single ZK node for all fate data for each fate id
- [ ] Replace AFS.verifyReserved with a condition on the RESERVATION_COLUMN to verify that it is reserved
- [ ] Make the WorkFinder and the TransactionRunners critical to the Manager
- [ ] Refactor how the RESERVATION_COLUMN works for UserFateStore: create/delete the column on reserve/unreserve

_Will do in one or a few small PRs:_
- [ ] Add a toString() to FateKey
- [ ] Move MetaFateStore to org.apache.accumulo.core.fate.zookeeper
- [ ] Increase the interval for the periodic cleanup of dead reservations from every 30 seconds to every few minutes
- [ ] Add the additional fate test case suggested in https://github.com/apache/accumulo/pull/4524#discussion_r1702301996
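For concreteness, here is a minimal, hypothetical sketch of what replacing verifyReserved with a condition could look like, using Accumulo's public ConditionalWriter API. The table name, row layout, column family/qualifier, and the `writeIfReserved` helper are all placeholders for illustration, not the actual UserFateStore schema:

```java
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.ConditionalWriter;
import org.apache.accumulo.core.client.ConditionalWriter.Status;
import org.apache.accumulo.core.client.ConditionalWriterConfig;
import org.apache.accumulo.core.data.Condition;
import org.apache.accumulo.core.data.ConditionalMutation;

public class ReservationConditionSketch {

  // Hypothetical helper: the fate table name, row, and "TX"/"RESERVATION"
  // column names are placeholders, not the real UserFateStore layout.
  public static boolean writeIfReserved(AccumuloClient client, String fateTable,
      String fateIdRow, String expectedReservation) throws Exception {
    try (ConditionalWriter writer =
        client.createConditionalWriter(fateTable, new ConditionalWriterConfig())) {
      // The mutation is only applied if the RESERVATION column still holds
      // the reservation we think we own, so no separate read-then-verify
      // step (like verifyReserved) is needed.
      ConditionalMutation cm = new ConditionalMutation(fateIdRow,
          new Condition("TX", "RESERVATION").setValue(expectedReservation));
      cm.put("TX", "STATUS", "IN_PROGRESS"); // example write guarded by the condition
      return writer.write(cm).getStatus() == Status.ACCEPTED;
    }
  }
}
```

The appeal of this approach is that the reservation check and the write happen atomically server-side, so there is no window between verifying the reservation and acting on it.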
