techdocsmith commented on a change in pull request #10848:
URL: https://github.com/apache/druid/pull/10848#discussion_r588589809
##########
File path: docs/querying/caching.md
##########
@@ -22,63 +22,75 @@ title: "Query caching"
~ under the License.
-->
+You can enable caching in Apache Druid to improve query times for frequently
accessed data. This topic defines the different types of caching for Druid. It
describes the default caching behavior and provides guidance and examples to
help you hone your caching strategy.
-Apache Druid supports query result caching at both the segment and whole-query
result level. Cache data can be stored in the
-local JVM heap or in an external distributed key/value store. In all cases,
the Druid cache is a query result cache.
-The only difference is whether the result is a _partial result_ for a
particular segment, or the result for an entire
-query. In both cases, the cache is invalidated as soon as any underlying data
changes; it will never return a stale
-result.
+If you're unfamiliar with Druid architecture, review the following topics
before proceeding with caching:
+- [Druid Design](../design/architecture.md)
+- [Segments](../design/segments.md)
+- [Query execution](./query-execution.md)
-Segment-level caching allows the cache to be leveraged even when some of the
underling segments are mutable and
-undergoing real-time ingestion. In this case, Druid will potentially cache
query results for immutable historical
-segments, while re-computing results for the real-time segments on each query.
Whole-query result level caching is not
-useful in this scenario, since it would be continuously invalidated.
+For instructions on how to configure caching, see [Using query
caching](./using-caching.md).
-Segment-level caching does require Druid to merge the per-segment results on
each query, even when they are served
-from the cache. For this reason, whole-query result level caching can be more
efficient if invalidation due to real-time
-ingestion is not an issue.
+## Cache types
+Druid supports segment caching, which stores _partial results_ of a query for
a specific segment, and whole-query caching, which stores the complete results
of a query. To avoid returning stale results, Druid invalidates both types of
cache the moment any underlying data changes.
-## Using and populating cache
+Druid can store cache data in the local JVM heap or in an external distributed
key/value store. See [Cache
configuration](../configuration/index.md#cache-configuration) for information
on how to configure cache storage.
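As a rough sketch of what the cache-storage choice looks like in practice, a
local on-heap cache might be configured in a service's `runtime.properties` as
follows. The property names come from Druid's cache configuration reference;
the size value is illustrative, not a recommendation:

```properties
# Use the local on-heap Caffeine cache implementation.
druid.cache.type=caffeine
# Maximum cache size in bytes; illustrative value (512 MiB).
druid.cache.sizeInBytes=536870912
```

To use an external distributed store instead, `druid.cache.type` can be set to
`memcached`, paired with the corresponding `druid.cache.hosts` setting.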
-All caches have a pair of parameters that control the behavior of how
individual queries interact with the cache, a 'use' cache parameter, and a
'populate' cache parameter. These settings must be enabled at the service level
via [runtime properties](../configuration/index.md) to utilize cache, but can
be controlled on a per query basis by setting them on the [query
context](../querying/query-context.md). The 'use' parameter obviously controls
if a query will utilize cached results. The 'populate' parameter controls if a
query will update cached results. These are separate parameters to allow
queries on uncommon data to utilize cached results without polluting the cache
with results that are unlikely to be re-used by other queries, for example
large reports or very old data.
+### Segment caching
-## Query caching on Brokers
+The primary form of caching in Druid is the **segment cache**. The segment
cache stores query results on a per-segment basis. It is enabled on Historical
services by default.
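A hedged sketch of what explicitly enabling the segment cache on a Historical
service might look like in `runtime.properties`. The property names are from
Druid's configuration reference; check the defaults for your Druid version
before relying on them:

```properties
# Allow this Historical to serve results from the segment cache.
druid.historical.cache.useCache=true
# Allow this Historical to write per-segment results into the cache.
druid.historical.cache.populateCache=true
```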
-Brokers support both segment-level and whole-query result level caching.
Segment-level caching is controlled by the
-parameters `useCache` and `populateCache`. Whole-query result level caching is
controlled by the parameters
-`useResultLevelCache` and `populateResultLevelCache` and [runtime
properties](../configuration/index.md)
-`druid.broker.cache.*`.
+When your queries include data from segments that are mutable and undergoing
real-time ingestion, use segment caching. In this case, Druid caches query
results for immutable historical segments when possible. It re-computes results
for the real-time segments at query time.
-Enabling segment-level caching on the Broker can yield faster results than if
query caches were enabled on Historicals for small
-clusters. This is the recommended setup for smaller production clusters (< 5
servers). Populating segment-level caches on
-the Broker is _not_ recommended for large production clusters, since when the
property `druid.broker.cache.populateCache` is
-set to `true` (and query context parameter `populateCache` is _not_ set to
`false`), results from Historicals are returned
-on a per segment basis, and Historicals will not be able to do any local
result merging. This impairs the ability of the
-Druid cluster to scale well.
+For example, you have queries that frequently include incoming data from a
Kafka or Kinesis stream alongside unchanging segments. Whole-query caching is
not helpful in this scenario because the new data from real-time ingestion
continually invalidates the cache. Segment caching lets Druid cache results
from older immutable segments and merge them with updated data.
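Per-query control over the segment cache uses the `useCache` and
`populateCache` query context parameters. A hedged sketch of a native query
that opts in to both, where the datasource, interval, and aggregator are
placeholders:

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "granularity": "day",
  "intervals": ["2013-01-01/2013-01-02"],
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": {
    "useCache": true,
    "populateCache": true
  }
}
```

The corresponding service-level cache properties must also be enabled for
these context flags to have any effect.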
Review comment:
I reversed the sentences to put the emphasis on per-segment caching and to
show how this scenario is a negative example for whole-query caching.