gianm commented on a change in pull request #8116: remove unnecessary lock in ForegroundCachePopulator leading to a lot of contention
URL: https://github.com/apache/incubator-druid/pull/8116#discussion_r305607757
 
 

 ##########
 File path: server/src/main/java/org/apache/druid/client/cache/ForegroundCachePopulator.java
 ##########
 @@ -22,16 +22,22 @@
 import com.fasterxml.jackson.core.JsonGenerator;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.google.common.base.Preconditions;
+import org.apache.commons.lang.mutable.MutableBoolean;
 import org.apache.druid.java.util.common.guava.Sequence;
 import org.apache.druid.java.util.common.guava.SequenceWrapper;
 import org.apache.druid.java.util.common.guava.Sequences;
 import org.apache.druid.java.util.common.logger.Logger;
 
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
-import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.Function;
 
 +/**
 + * {@link CachePopulator} implementation that populates a cache on the same thread that is processing the
 + * {@link Sequence}. Used if config "druid.*.cache.numBackgroundThreads" is 0 (the default). This {@link CachePopulator}
 + * should be more efficient than {@link BackgroundCachePopulator} if maximum cache entry size, specified by config
 + * "druid.*.cache.maxEntrySize", is exceeded.
 
 Review comment:
   I'd add that, typically, the thread processing this sequence (and hence populating the cache) is either:
   
   - a processing thread (if on a historical or task)
   - an HTTP thread (if on a broker)

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.