GitHub user unknowntpo added a comment to the discussion: Proposal for Integrating Redis Distributed Cache alongside Caffeine for Enhanced Scalability and Consistency
I suggest using the transactional outbox pattern to sync the local caches across Gravitino nodes.

<img width="523" height="400" alt="image" src="https://github.com/user-attachments/assets/0146b3ce-139f-4adb-bfb4-90677faa0e62" />

Say we have two Gravitino nodes, `nodeA` and `nodeB`. When a node starts, it records a `startTimestamp` in memory. When `nodeA` updates an entity `entity1`, a record is inserted into the `entity_change_event` table **within** the same DB transaction. Each node also runs an `EntityEventSyncer`, which periodically polls change events from `entity_change_event` starting from `startTimestamp` and invalidates the keys mentioned by those events. Note that cache data is lazily loaded, so there is no need to insert the updated entity into the cache.

## Pros:

- An entity change event is guaranteed to be received by every Gravitino node.
- No additional dependencies added; easy to debug.

## Cons:

- Cache-miss amplification: with `N` Gravitino nodes, an entity key that is absent from every local cache requires, in the worst case, `N` DB queries before the key is present in each node's local cache.

----

Although `Redis` is a common caching tool, a few concerns make me hesitant to suggest it:

- If one Gravitino instance fails to update the cache, other nodes will read stale data.
- Per the documentation of [Redis Cluster](https://redis.io/docs/latest/operate/oss_and_stack/management/scaling/#redis-cluster-consistency-guarantees), Redis Cluster does not guarantee strong consistency, and data loss would be hard to detect.
- Additional complexity: users would need to maintain Gravitino nodes, an RDBMS such as `MySQL`, *and* a Redis Cluster.

GitHub link: https://github.com/apache/gravitino/discussions/8480#discussioncomment-14375611
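To make the outbox flow above concrete, here is a minimal in-memory sketch of the poll-and-invalidate loop. It is not Gravitino code: the `ChangeEvent` record, the `Node` class, and the in-memory list standing in for the `entity_change_event` table are all hypothetical, and timestamps are a simple monotonic counter rather than DB time. In the real design, the entity write and the change-event insert would share one DB transaction.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class OutboxCacheSyncSketch {
    // One row of the hypothetical entity_change_event table.
    record ChangeEvent(long timestamp, String entityKey) {}

    // Stands in for the shared entity_change_event table in the DB.
    static final List<ChangeEvent> changeEventTable =
            Collections.synchronizedList(new ArrayList<>());
    // Stands in for DB-assigned, monotonically increasing event timestamps.
    static final AtomicLong clock = new AtomicLong();

    /** One Gravitino node: a local cache plus an EntityEventSyncer-style poller. */
    static class Node {
        final Map<String, String> localCache = new ConcurrentHashMap<>();
        long lastSeen; // startTimestamp at boot, advanced on each poll

        Node(long startTimestamp) { this.lastSeen = startTimestamp; }

        // Update an entity and append a change event. In the proposal both
        // writes happen within the same DB transaction; here they are just
        // sequential in-memory operations.
        void updateEntity(String key, String value) {
            long ts = clock.incrementAndGet();
            // ... entity table write would go here ...
            changeEventTable.add(new ChangeEvent(ts, key));
            localCache.remove(key); // cache is lazily reloaded on next read
        }

        // The periodic poll: invalidate every key changed since lastSeen.
        void poll() {
            List<ChangeEvent> snapshot;
            synchronized (changeEventTable) {
                snapshot = new ArrayList<>(changeEventTable);
            }
            for (ChangeEvent e : snapshot) {
                if (e.timestamp() > lastSeen) {
                    localCache.remove(e.entityKey());
                    lastSeen = Math.max(lastSeen, e.timestamp());
                }
            }
        }
    }

    public static void main(String[] args) {
        Node nodeA = new Node(clock.get());
        Node nodeB = new Node(clock.get());
        nodeB.localCache.put("entity1", "v1"); // nodeB holds a soon-stale copy

        nodeA.updateEntity("entity1", "v2");   // writes the change event
        nodeB.poll();                          // nodeB drops its stale entry

        System.out.println(nodeB.localCache.containsKey("entity1")); // prints false
    }
}
```

A production version would run `poll()` on a scheduled executor and track `lastSeen` durably enough to survive restarts (re-reading from the node's new `startTimestamp` is safe precisely because invalidation is idempotent).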
