[ https://issues.apache.org/jira/browse/BEAM-11707?focusedWorklogId=556831&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-556831 ]
ASF GitHub Bot logged work on BEAM-11707:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 24/Feb/21 09:56
Start Date: 24/Feb/21 09:56
Worklog Time Spent: 10m
Work Description: scwhittle commented on a change in pull request #13862:
URL: https://github.com/apache/beam/pull/13862#discussion_r581816892
##########
File path:
runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/WindmillStateCache.java
##########
@@ -50,55 +50,68 @@
* StreamingDataflowWorker} ensures that a single computation * processing key is executing on one
* thread at a time, so this is safe.
*/
-@SuppressWarnings({
- "nullness" // TODO(https://issues.apache.org/jira/browse/BEAM-10402)
-})
public class WindmillStateCache implements StatusDataProvider {
// Convert Megabytes to bytes
private static final long MEGABYTES = 1024 * 1024;
// Estimate of overhead per StateId.
- private static final int PER_STATE_ID_OVERHEAD = 20;
+ private static final long PER_STATE_ID_OVERHEAD = 28;
// Initial size of hash tables per entry.
private static final int INITIAL_HASH_MAP_CAPACITY = 4;
// Overhead of each hash map entry.
private static final int HASH_MAP_ENTRY_OVERHEAD = 16;
- // Overhead of each cache entry. Three longs, plus a hash table.
+ // Overhead of each StateCacheEntry. One long, plus a hash table.
private static final int PER_CACHE_ENTRY_OVERHEAD =
- 24 + HASH_MAP_ENTRY_OVERHEAD * INITIAL_HASH_MAP_CAPACITY;
+ 8 + HASH_MAP_ENTRY_OVERHEAD * INITIAL_HASH_MAP_CAPACITY;
private Cache<StateId, StateCacheEntry> stateCache;
- private HashMultimap<WindmillComputationKey, StateId> keyIndex =
- HashMultimap.<WindmillComputationKey, StateId>create();
- private long displayedWeight = 0; // Only used for status pages and unit tests.
+ // Contains the current valid ForKey object. Entries in the cache are keyed by ForKey with pointer
+ // equality so entries may be invalidated by creating a new key object, rendering the previous
+ // entries inaccessible. They will be evicted through normal cache operation.
+ private ConcurrentMap<WindmillComputationKey, ForKey> keyIndex =
+ new MapMaker().weakValues().concurrencyLevel(4).makeMap();
private long workerCacheBytes; // Copy workerCacheMb and convert to bytes.
- public WindmillStateCache(Integer workerCacheMb) {
+ public WindmillStateCache(long workerCacheMb) {
final Weigher<Weighted, Weighted> weigher = Weighers.weightedKeysAndValues();
workerCacheBytes = workerCacheMb * MEGABYTES;
stateCache =
CacheBuilder.newBuilder()
.maximumWeight(workerCacheBytes)
.recordStats()
.weigher(weigher)
- .removalListener(
- removal -> {
- if (removal.getCause() != RemovalCause.REPLACED) {
- synchronized (this) {
- StateId id = (StateId) removal.getKey();
- if (removal.getCause() != RemovalCause.EXPLICIT) {
- // When we invalidate a key explicitly, we'll also update the keyIndex, so
- // no need to do it here.
- keyIndex.remove(id.getWindmillComputationKey(), id);
- }
- displayedWeight -= weigher.weigh(id, removal.getValue());
- }
- }
- })
+ .concurrencyLevel(4)
.build();
}
+ private static class EntryStats {
+ long entries;
+ long idWeight;
+ long entryWeight;
+ long entryValues;
+ long maxEntryValues;
+ }
+
+ private EntryStats calculateEntryStats() {
+ class CacheConsumer implements BiConsumer<StateId, StateCacheEntry> {
+ public EntryStats stats = new EntryStats();
+
+ @Override
+ public void accept(StateId stateId, StateCacheEntry stateCacheEntry) {
+ stats.entries++;
+ stats.idWeight += stateId.getWeight();
+ stats.entryWeight += stateCacheEntry.getWeight();
+ stats.entryValues += stateCacheEntry.values.size();
+ stats.maxEntryValues = Math.max(stats.maxEntryValues, stateCacheEntry.values.size());
+ }
+ }
+ CacheConsumer consumer = new CacheConsumer();
Review comment:
Done, thanks
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 556831)
Time Spent: 50m (was: 40m)
> Optimize WindmillStateCache CPU usage
> -------------------------------------
>
> Key: BEAM-11707
> URL: https://issues.apache.org/jira/browse/BEAM-11707
> Project: Beam
> Issue Type: Bug
> Components: runner-dataflow
> Reporter: Sam Whittle
> Assignee: Sam Whittle
> Priority: P2
> Time Spent: 50m
> Remaining Estimate: 0h
>
From profiling nexmark Query11, which has many unique tags per key, I observed
that WindmillStateCache accounted for 6% of CPU usage.
The usage appears to be due to invalidation-set maintenance as well as the
large number of reads/inserts.
The invalidation set is maintained so that if a key encounters an error during
processing, or the cache token changes, we can invalidate all the entries for
that key. Currently this is done by removing all entries for the key from the
cache. An alternative which appears much more CPU-efficient is to instead
leave the entries in the cache but make them unreachable. This can be done by
having a per-key object that uses object equality as part of the cache lookup;
to discard the entries for a key, we then start using a new per-key object.
Cleanup of per-key objects can be done with a weak-reference map.
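
A minimal, self-contained sketch of this approach (editor's illustration:
KeyToken and EntryKey are hypothetical names standing in for the ForKey and
per-key/StateId classes used by the actual change, and Guava's MapMaker and
CacheBuilder are assumed, matching the PR):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.collect.MapMaker;
import java.util.concurrent.ConcurrentMap;

class PerKeyInvalidationSketch {
  // Per-key token; default Object equality means identity scopes cache entries.
  static final class KeyToken {}

  // Composite cache key: only reachable while the same token object is still
  // the current one for its user key.
  static final class EntryKey {
    final KeyToken token;
    final String tag;

    EntryKey(KeyToken token, String tag) {
      this.token = token;
      this.tag = tag;
    }

    @Override
    public boolean equals(Object o) {
      return o instanceof EntryKey
          && ((EntryKey) o).token == this.token // pointer equality on the token
          && ((EntryKey) o).tag.equals(this.tag);
    }

    @Override
    public int hashCode() {
      return System.identityHashCode(token) * 31 + tag.hashCode();
    }
  }

  // Weak values: once a token has been replaced and its entries evicted,
  // nothing references it and the map entry is collected automatically.
  private final ConcurrentMap<String, KeyToken> currentTokens =
      new MapMaker().weakValues().makeMap();

  private final Cache<EntryKey, Object> cache =
      CacheBuilder.newBuilder().maximumSize(10_000).build();

  Object get(String userKey, String tag) {
    KeyToken token = currentTokens.computeIfAbsent(userKey, k -> new KeyToken());
    return cache.getIfPresent(new EntryKey(token, tag));
  }

  void put(String userKey, String tag, Object value) {
    KeyToken token = currentTokens.computeIfAbsent(userKey, k -> new KeyToken());
    cache.put(new EntryKey(token, tag), value);
  }

  // O(1) invalidation: swap in a new token; old entries become unreachable and
  // are later evicted through normal cache operation.
  void invalidate(String userKey) {
    currentTokens.put(userKey, new KeyToken());
  }
}

Compared with eagerly removing every entry for the key, invalidation becomes a
single map write; the stale entries occupy cache weight only until normal
eviction reclaims them.
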
Another cost to the cache is that objects are grouped by window so that they
are kept/evicted all at once. However, currently when reading items into the
cache we fetch the window set and then look up each tag in it; the window set
could be cached for the key to avoid the repeated cache lookups. Similarly,
when putting objects we look up and insert each tag separately and then update
the cache to refresh the weight of the per-window set; this could be done once
after all updates for the window have been made.
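
As a rough illustration of this batching idea (hypothetical names, not the
actual Beam code), a per-key view could fetch the window's tag map once, apply
all reads and writes locally, and commit a single weight update:

import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical sketch: batch tag reads/writes against one locally held
// per-window map, then perform a single cache update for the window's weight.
class WindowViewSketch {
  private final Map<String, Object> tags; // fetched from the cache once
  private long weightDelta = 0;

  WindowViewSketch(Map<String, Object> cachedTags) {
    this.tags = new HashMap<>(cachedTags);
  }

  Object read(String tag) {
    return tags.get(tag); // no additional cache lookup per tag
  }

  void write(String tag, Object value, long valueWeight) {
    tags.put(tag, value);
    weightDelta += valueWeight; // accumulate the weight change locally
  }

  // Called once after all updates for the window have been made; the caller
  // performs a single cache write that re-weighs the per-window set.
  void commit(BiConsumer<Map<String, Object>, Long> singleCacheUpdate) {
    singleCacheUpdate.accept(tags, weightDelta);
  }
}
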
--
This message was sent by Atlassian Jira
(v8.3.4#803005)