clintropolis commented on a change in pull request #10288:
URL: https://github.com/apache/druid/pull/10288#discussion_r494180211
##########
File path: docs/querying/query-context.md
##########
@@ -58,7 +58,7 @@ These parameters apply to all query types.
|parallelMergeInitialYieldRows|`druid.processing.merge.task.initialYieldNumRows`|Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See [Broker configuration](../configuration/index.html#broker) for more details.|
|parallelMergeSmallBatchRows|`druid.processing.merge.task.smallBatchNumRows`|Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See [Broker configuration](../configuration/index.html#broker) for more details.|
|useFilterCNF|`false`|If true, Druid will attempt to convert the query filter to Conjunctive Normal Form (CNF). During query processing, columns can be pre-filtered by intersecting the bitmap indexes of all values that match the eligible filters, often greatly reducing the raw number of rows which need to be scanned. But this effect only happens for the top level filter, or individual clauses of a top level 'and' filter. As such, filters in CNF potentially have a higher chance to utilize a large amount of bitmap indexes on string columns during pre-filtering. However, this setting should be used with great caution, as it can sometimes have a negative effect on performance, and in some cases, the act of computing CNF of a filter can be expensive. We recommend hand tuning your filters to produce an optimal form if possible, or at least verifying through experimentation that using this parameter actually improves your query performance with no ill-effects.|
-|segmentPruning|`true`|Enable segment pruning on the Broker. Segment pruning can be applied to only the segments partitioned by hash or range.|
+|secondaryPartitionPruning|`true`|Enable secondary partition pruning on the Broker. The broker can basically prune segments unnecessary for queries based on a filter on time intervals. If the datasource is further partitioned based on hash or range partitioning, this query context will enable secondary partition pruning so that the broker can eliminate unnecessary segments from the input scan based on a filter on secondary partition dimensions.|
Review comment:
> The broker can basically prune segments unnecessary for queries based on a filter on time intervals.
Sorry to nitpick, but this reads sort of funny; also, I think 'Broker' should be consistently capitalized. How about:
```suggestion
|secondaryPartitionPruning|`true`|Enable secondary partition pruning on the Broker. The Broker will always prune unnecessary segments from the input scan based on a filter on time intervals, but if the data is further partitioned with hash or range partitioning, this option will enable additional pruning based on a filter on secondary partition dimensions.|
```
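For readers following the discussion, a minimal sketch of where these context flags would actually be supplied in a native query payload. The datasource name, interval, and filter below are purely illustrative; only the `context` keys (`secondaryPartitionPruning` and `useFilterCNF`, shown with the defaults from the table above) come from the documentation under review.

```python
import json

# Hypothetical timeseries query illustrating the "context" object where
# query-context parameters are passed. Datasource and filter are made up.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",
    "intervals": ["2020-01-01/2020-01-02"],
    "granularity": "hour",
    "aggregations": [{"type": "count", "name": "rows"}],
    "filter": {"type": "selector", "dimension": "countryName", "value": "France"},
    "context": {
        # Defaults per the documentation table above.
        "secondaryPartitionPruning": True,
        "useFilterCNF": False,
    },
}

# Serialize to the JSON body that would be POSTed to the Broker.
payload = json.dumps(query)
print(payload)
```

A payload like this would typically be POSTed to the Broker's query endpoint; the time-interval pruning happens regardless, and `secondaryPartitionPruning` only adds further pruning when the datasource uses hash or range partitioning.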
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]