bdeggleston commented on code in PR #2144:
URL: https://github.com/apache/cassandra/pull/2144#discussion_r1128353425
##########
src/java/org/apache/cassandra/service/accord/AccordStateCache.java:
##########
@@ -149,17 +145,12 @@ public NamedMap(String name)
}
}
- public final Map<Object, Node<?, ?>> active = new HashMap<>();
private final Map<Object, Node<?, ?>> cache = new HashMap<>();
- private final Map<Object, WriteOnlyGroup<?, ?>> pendingWriteOnly = new HashMap<>();
- private final Set<Instance<?, ?>> instances = new HashSet<>();
-
- private final NamedMap<Object, Future<?>> loadFutures = new NamedMap<>("loadFutures");
- private final NamedMap<Object, Future<?>> saveFutures = new NamedMap<>("saveFutures");
+ private final Set<Instance<?, ?, ?>> instances = new HashSet<>();
- private final NamedMap<Object, Future<Data>> readFutures = new NamedMap<>("readFutures");
- private final NamedMap<Object, Future<?>> writeFutures = new NamedMap<>("writeFutures");
+ private final NamedMap<Object, AsyncResult<Void>> saveResults = new NamedMap<>("saveResults");
+ private int linked = 0;
Review Comment:
I think I get the gist of what you’re proposing here. A few things I noticed:
First, we can’t start evicting from the tail, since the tail may not be
evictable. So we’d need to either maintain a separate eviction tail, or scan
from the tail to find the first evictable node. Scanning from the tail has the
same efficiency problem, at least until we find the first evictable node.
Maintaining a separate tail would be possible, but we’d have to store
additional state on each node indicating when it was pushed onto the queue, so
that older nodes that become evictable know they should replace the current
eviction tail. A rough sketch of the scanning approach is below.
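To make that concrete, here’s a minimal sketch of the tail scan, assuming an
intrusive doubly linked eviction queue; the accessor names are made up for
illustration and aren’t the real AccordStateCache API:

```java
// Hypothetical sketch only: prev(), references() and hasPendingSave()
// are assumed accessors, not the actual AccordStateCache API.
private Node<?, ?> findEvictableFromTail(Node<?, ?> tail)
{
    // Walk from the tail toward the head until we find a node that is
    // unreferenced and has no pending save. In the worst case this visits
    // every node, which is the efficiency problem described above.
    for (Node<?, ?> node = tail; node != null; node = node.prev())
    {
        if (node.references() == 0 && !node.hasPendingSave())
            return node;
    }
    return null; // nothing is currently evictable
}
```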
Second, this is less resilient against failures and bugs. The lazy evaluation
approach currently used only requires that a node be unreferenced, and it will
evict on its own as the save results complete, without additional input from
AsyncOperation. That simplifies failure handling and serves as a built-in
escape hatch for some classes of leak bugs. Without some additional
maintenance, evictable nodes may never make their way into the eviction queue
and could be kept in memory forever.
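For contrast, the lazy path (as I understand the current code) only needs a
hook on save completion; again, the method names here are hypothetical:

```java
// Hypothetical sketch of the lazy path: eviction is driven by the save
// result completing rather than by AsyncOperation. isReferenced() and
// maybeEvict() are assumed names for illustration.
private void onSaveComplete(Node<?, ?> node)
{
    // Once the save has finished, the only remaining requirement is that
    // nothing holds a reference to the node. No signal from AsyncOperation
    // is needed, so a failed or leaked operation can't pin the node forever.
    if (!node.isReferenced())
        maybeEvict(node);
}
```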
Third, since this relies on the AsyncOperation completing to mark nodes as
evictable, nothing from a large txn becomes evictable until all of its write
operations have completed. With lazy evaluation, nodes are evictable as soon as
their own writes complete.
Also, a disclaimer: I’m reading and responding in the time between two
appointments, so I may have missed something important.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]