kevinrr888 opened a new issue, #5036:
URL: https://github.com/apache/accumulo/issues/5036

   **Is your feature request related to a problem? Please describe.**
   A question was posed about where FATE op data is stored for ops on the 
Accumulo FATE table, and I wasn't sure of the answer. After looking into it, I 
found that `USER` transactions (stored in `UserFateStore`) and `META` 
transactions (stored in `MetaFateStore`) are defined as follows:
   
   `META` : for operations on tables in the Accumulo table namespace (`ROOT`, 
`METADATA`, `FATE`, and `SCAN_REF` tables)
   `USER` : for operations on all other tables (user tables)
   
   
https://github.com/apache/accumulo/blob/7da7ac89cc54a4c8e660eb713b0afd5c525c0ec9/core/src/main/java/org/apache/accumulo/core/fate/FateInstanceType.java#L29-L32
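   For reference, the distinction can be sketched as follows. This is an illustrative sketch only, not the actual Accumulo source; the `fromTableName` helper and the table-name strings are assumptions for this example:
   
   ```java
   // Illustrative sketch of the META vs. USER split. The real definition lives in
   // org.apache.accumulo.core.fate.FateInstanceType (linked above); the classifier
   // below is a hypothetical stand-in, not the actual implementation.
   public class FateInstanceTypeSketch {

       enum FateInstanceType {
           META, // ops on tables in the accumulo namespace (ROOT, METADATA, FATE, SCAN_REF)
           USER; // ops on all other (user) tables

           // Hypothetical classifier: system-namespace tables map to META,
           // everything else to USER.
           static FateInstanceType fromTableName(String table) {
               java.util.Set<String> systemTables = java.util.Set.of(
                   "accumulo.root", "accumulo.metadata", "accumulo.fate", "accumulo.scanref");
               return systemTables.contains(table) ? META : USER;
           }
       }

       public static void main(String[] args) {
           System.out.println(FateInstanceType.fromTableName("accumulo.fate")); // prints META
           System.out.println(FateInstanceType.fromTableName("mytable"));       // prints USER
       }
   }
   ```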
   
   **Describe the solution you'd like**
   The answer to this question might have been more obvious if the `meta` 
prefix were something else. @dlmarion suggested `system`, which seems like a 
good replacement. This would mean changing
   
   `FateInstanceType.META` --> `FateInstanceType.SYSTEM` 
   and
   `MetaFateStore` --> `SystemFateStore`
   
   **Describe alternatives you've considered**
   An alternative would be to split `MetaFateStore` into two separate stores. 
Perhaps `MetaFateStore` would store FATE data for the `ROOT` and `METADATA` 
tables (which is in line with its name), while a new store would hold FATE ops 
for all other system tables. This is certainly much more involved than the 
rename suggested above, and would probably (at least right now) be unnecessary. 
It may become necessary depending on how often ops are performed on the system 
tables (which may change with multiple managers and the distribution of FATE). 
Just throwing this out there for discussion.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
