leventov commented on a change in pull request #8116: remove unnecessary lock in ForegroundCachePopulator leading to a lot of contention
URL: https://github.com/apache/incubator-druid/pull/8116#discussion_r305973396
 
 

 ##########
 File path: docs/content/configuration/index.md
 ##########
 @@ -1176,6 +1176,8 @@ You can optionally configure caching to be enabled on the peons by setting cachi
 |`druid.realtime.cache.useCache`|true, false|Enable the cache on the realtime.|false|
 |`druid.realtime.cache.populateCache`|true, false|Populate the cache on the realtime.|false|
 |`druid.realtime.cache.unCacheable`|All druid query types|All query types to not cache.|`["groupBy", "select"]`|
+|`druid.realtime.cache.numBackgroundThreads`|Non-negative integer|If greater than 0, the cache is populated by a background thread pool of the configured size. By default the cache is populated in the foreground, which can handle reaching `maxEntrySize` more efficiently than background population. Note that there is no load shedding for background cache population, so it can also lead to out-of-memory scenarios depending on background thread pool utilization.|0|
+|`druid.realtime.cache.maxEntrySize`|Positive integer|Maximum cache entry size in bytes.|1_000_000|
 
 Review comment:
   This description (or the more general description on the common page) should explain what happens when the serialized form of a query result is bigger than this size. I suppose it is "the result is not recorded in the cache; the `XXX/put/oversized` metric is incremented" (I don't know what should go into XXX; that needs to be researched.)
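  For context, a hypothetical `runtime.properties` fragment combining the two options under review (the property names come from the diff above; the values are illustrative, not recommendations, and the oversized-entry behavior noted in the comment is exactly what this review asks to have documented):

  ```properties
  # Enable realtime cache population on a background pool of 4 threads
  # (illustrative values; the documented default is 0, i.e. foreground population).
  druid.realtime.cache.useCache=true
  druid.realtime.cache.populateCache=true
  druid.realtime.cache.numBackgroundThreads=4
  # Results whose serialized form exceeds this many bytes are presumably
  # skipped rather than cached -- the behavior this comment asks to document.
  druid.realtime.cache.maxEntrySize=1000000
  ```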

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]