snazy opened a new issue, #766:
URL: https://github.com/apache/polaris/issues/766

   ### Describe the bug
   
   Although `PolarisMetaStoreSession` [clearly says](https://github.com/apache/polaris/blob/88994f495a844f420187b9404511504601081a1a/polaris-core/src/main/java/org/apache/polaris/core/persistence/PolarisMetaStoreSession.java#L45-L46) that `it [is] really easy to back this using [...] simpler KV store.`, the architecture of everything around persistence in Polaris in fact requires nothing less than (relational) transactions with strong consistency across multiple rows in multiple tables.
   
   For example 
`org.apache.polaris.core.persistence.PolarisMetaStoreManagerImpl#writeEntity`:
   ```java
     private void writeEntity(
         @Nonnull PolarisMetaStoreSession ms,
         @Nonnull PolarisBaseEntity entity,
         boolean writeToActive) {
       ms.writeToEntities(entity);
       ms.writeToEntitiesChangeTracking(entity);
   
       if (writeToActive) {
         ms.writeToEntitiesActive(entity);
       }
     }
   ```
   writes 2-3 different _rows_ in 2-3 different _tables_ and requires that either all or none of these changes succeed; that requirement is multiplied by the number of entities written in the same operation. On top of that, committing the transaction requires verifying that the entities read beforehand (both existing and non-existing ones) have not changed in the meantime.
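   To make the atomicity requirement concrete, here is a minimal sketch (not Polaris code; the class and method names beyond the three `writeTo*` counterparts are hypothetical) of what happens when the three per-table writes from `writeEntity` are executed without a transaction and a failure occurs between them:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Toy stand-in for the three tables that writeEntity touches.
   class ToyMetaStore {
     final Map<Long, String> entities = new HashMap<>();
     final Map<Long, Integer> changeTracking = new HashMap<>();
     final Map<Long, Boolean> entitiesActive = new HashMap<>();

     // Naive, non-transactional write: mirrors writeEntity's three calls.
     // failAfterFirstWrite simulates a crash between the table writes.
     void writeEntityNaive(long id, String payload, boolean failAfterFirstWrite) {
       entities.put(id, payload);
       if (failAfterFirstWrite) {
         throw new RuntimeException("simulated crash between table writes");
       }
       changeTracking.merge(id, 1, Integer::sum);
       entitiesActive.put(id, true);
     }

     // The invariant a transactional backend guarantees: either all three
     // tables see the entity, or none of them does.
     boolean consistent(long id) {
       boolean inEntities = entities.containsKey(id);
       boolean inTracking = changeTracking.containsKey(id);
       boolean inActive = entitiesActive.containsKey(id);
       return inEntities == inTracking && inTracking == inActive;
     }
   }
   ```

   A crash between the first and second write leaves `entities` populated while the other two tables are empty, which is exactly the state that a transactional relational backend rules out and that a plain KV store cannot prevent.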
   
   The "pattern" of spreading state across these different tables leaks into many places. It effectively makes it impossible to use anything other than a relational database with isolation level `SERIALIZABLE`.
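   For contrast, the kind of shape a KV backend could actually support is one where the per-table rows for an entity collapse into a single value updated with a single-key compare-and-swap. The sketch below illustrates that general technique only; it is not a proposed Polaris design, and all names in it are hypothetical:

   ```java
   import java.util.concurrent.atomic.AtomicReference;

   // One combined record replacing the three per-table rows.
   class KvEntityRecord {
     final String payload;   // stands in for the "entities" row
     final int version;      // stands in for the change-tracking row
     final boolean active;   // stands in for the "entities_active" row

     KvEntityRecord(String payload, int version, boolean active) {
       this.payload = payload;
       this.version = version;
       this.active = active;
     }
   }

   class SingleKeyStore {
     // AtomicReference models a KV store's single-key compare-and-swap.
     private final AtomicReference<KvEntityRecord> slot = new AtomicReference<>();

     // All three logical updates land in one atomic swap. A concurrent
     // writer that changed the record since we read it makes the CAS fail,
     // the KV analogue of a serialization conflict.
     boolean writeEntity(KvEntityRecord expected, String payload, boolean active) {
       int nextVersion = expected == null ? 1 : expected.version + 1;
       return slot.compareAndSet(expected, new KvEntityRecord(payload, nextVersion, active));
     }

     KvEntityRecord read() {
       return slot.get();
     }
   }
   ```

   Getting Polaris to that point would mean restructuring the persistence layer so that an entity's state lives behind one key instead of 2-3 rows in 2-3 tables, which is precisely what the current architecture prevents.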
   
   ### To Reproduce
   
   _No response_
   
   ### Actual Behavior
   
   _No response_
   
   ### Expected Behavior
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### System information
   
   _No response_

