This is an automated email from the ASF dual-hosted git repository.

brile pushed a commit to branch 34.0.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/34.0.0 by this push:
     new a9159d8f96d docs: link fixes (#18308) (#18309)
a9159d8f96d is described below

commit a9159d8f96dc66f052b0b19fc3e6e9c3596ee061
Author: Victoria Lim <[email protected]>
AuthorDate: Tue Jul 22 18:45:19 2025 -0700

    docs: link fixes (#18308) (#18309)
---
 docs/development/extensions-core/datasketches-kll.md       |  2 +-
 docs/development/extensions-core/datasketches-quantiles.md |  4 ++--
 docs/multi-stage-query/concepts.md                         |  4 ++--
 docs/multi-stage-query/reference.md                        |  2 +-
 docs/querying/sql-functions.md                             | 10 +++++-----
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/development/extensions-core/datasketches-kll.md b/docs/development/extensions-core/datasketches-kll.md
index 5245f816d88..b8e372dc942 100644
--- a/docs/development/extensions-core/datasketches-kll.md
+++ b/docs/development/extensions-core/datasketches-kll.md
@@ -23,7 +23,7 @@ title: "DataSketches KLL Sketch module"
   -->
 
 
-This module provides Apache Druid aggregators based on numeric quantiles KllFloatsSketch and KllDoublesSketch from [Apache DataSketches](https://datasketches.apache.org/) library. KLL quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cumulative distribution function (CDF), and quantiles (median, min, max, 95th percentile a [...]
+This module provides Apache Druid aggregators based on numeric quantiles KllFloatsSketch and KllDoublesSketch from [Apache DataSketches](https://datasketches.apache.org/) library. KLL quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cumulative distribution function (CDF), and quantiles (median, min, max, 95th percentile a [...]
 
 There are three major modes of operation:
 
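The sketch description in the hunk above (rank, PMF/CDF, and quantile queries over a mergeable summary) can be illustrated with an exact toy baseline. This is not the KLL algorithm, which keeps a small mergeable summary rather than every value; it only sketches the query semantics, and all names here are invented:

```python
import bisect

class ToyQuantiles:
    """Exact baseline for the queries a KLL sketch answers approximately:
    rank and quantiles over a stream. NOT the KLL algorithm -- KLL keeps a
    compact mergeable summary instead of every value."""

    def __init__(self):
        self.values = []          # kept sorted; a real sketch keeps far less

    def update(self, x):
        bisect.insort(self.values, x)

    def merge(self, other):
        # mergeability: combining two summaries answers queries on the union
        for x in other.values:
            self.update(x)

    def rank(self, x):
        # normalized rank: fraction of stream values strictly below x
        return bisect.bisect_left(self.values, x) / len(self.values)

    def quantile(self, q):
        # value at normalized rank q, 0 <= q <= 1
        idx = min(int(q * len(self.values)), len(self.values) - 1)
        return self.values[idx]

s = ToyQuantiles()
for v in [3.0, 1.0, 4.0, 1.0, 5.0]:
    s.update(v)
print(s.quantile(0.5))   # median of [1, 1, 3, 4, 5] -> 3.0
print(s.rank(4.0))       # 3 of 5 values are below 4.0 -> 0.6

other = ToyQuantiles()
other.update(2.0)
s.merge(other)
print(s.quantile(0.5))   # now over [1, 1, 2, 3, 4, 5] -> 3.0
```

A real KLL sketch trades the exact sorted list for bounded memory and a provable rank-error guarantee, which is what makes it usable inside streaming aggregators.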
diff --git a/docs/development/extensions-core/datasketches-quantiles.md b/docs/development/extensions-core/datasketches-quantiles.md
index 6f1962dd5e3..2b2b83a47a9 100644
--- a/docs/development/extensions-core/datasketches-quantiles.md
+++ b/docs/development/extensions-core/datasketches-quantiles.md
@@ -23,7 +23,7 @@ title: "DataSketches Quantiles Sketch module"
   -->
 
 
-This module provides Apache Druid aggregators based on numeric quantiles DoublesSketch from [Apache DataSketches](https://datasketches.apache.org/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cumulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quantiles Sk [...]
+This module provides Apache Druid aggregators based on numeric quantiles DoublesSketch from [Apache DataSketches](https://datasketches.apache.org/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cumulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quantiles Sk [...]
 
 There are three major modes of operation:
 
@@ -57,7 +57,7 @@ The result of the aggregation is a DoublesSketch that is the union of all sketch
 |`type`|This string should always be "quantilesDoublesSketch"|yes|
 |`name`|String representing the output column to store sketch values.|yes|
 |`fieldName`|A string for the name of the input field (can contain sketches or raw numeric values).|yes|
-|`k`|Parameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Must be a power of 2 from 2 to 32768. See [accuracy information](https://datasketches.apache.org/docs/Quantiles/OrigQuantilesSketch) in the DataSketches documentation for details.|no, defaults to 128|
+|`k`|Parameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Must be a power of 2 from 2 to 32768. See [accuracy information](https://datasketches.apache.org/docs/Quantiles/ClassicQuantilesSketch.html#accuracy-and-size) in the DataSketches documentation for details.|no, defaults to 128|
 |`maxStreamLength`|This parameter defines the number of items that can be presented to each sketch before it may need to move from off-heap to on-heap memory. This is relevant to query types that use off-heap memory, including [TopN](../../querying/topnquery.md) and [GroupBy](../../querying/groupbyquery.md). Ideally, should be set high enough such that most sketches can stay off-heap.|no, defaults to 1000000000|
 |`shouldFinalize`|Return the final double type representing the estimate rather than the intermediate sketch type itself. In addition to controlling the finalization of this aggregator, you can control whether all aggregators are finalized with the query context parameters [`finalize`](../../querying/query-context.md) and [`sqlFinalizeOuterSketches`](../../querying/sql-query-context.md).|no, defaults to `true`|
 
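As a reading aid for the aggregator table in the hunk above, a minimal spec using those fields might look like the following. The column names are illustrative, and `k` is shown at its documented default of 128:

```json
{
  "type": "quantilesDoublesSketch",
  "name": "value_sketch",
  "fieldName": "raw_value",
  "k": 128
}
```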
diff --git a/docs/multi-stage-query/concepts.md b/docs/multi-stage-query/concepts.md
index 0042467a9df..dee6f80bdd4 100644
--- a/docs/multi-stage-query/concepts.md
+++ b/docs/multi-stage-query/concepts.md
@@ -189,8 +189,8 @@ available in the **Segments** view under the **Partitioning** column.
 For more information about syntax, see [`CLUSTERED BY`](./reference.md#clustered-by).
 
 For more information about the mechanics of clustering, refer to
-[Secondary partitioning](../ingestion/partitioning#secondary-partitioning) and
-[Sorting](../ingestion/partitioning#sorting).
+[Secondary partitioning](../ingestion/partitioning.md#secondary-partitioning) and
+[Sorting](../ingestion/partitioning.md#sorting).
 
 ### Rollup
 
diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md
index 5e4e3a5a309..1bd82f00efe 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -403,7 +403,7 @@ The following table lists the context parameters for the MSQ task engine:
 | `sqlJoinAlgorithm` | SELECT, INSERT, REPLACE<br /><br />Algorithm to use for JOIN. Use `broadcast` (the default) for broadcast hash join or `sortMerge` for sort-merge join. Affects all JOIN operations in the query. This is a hint to the MSQ engine and the actual joins in the query may proceed in a different way than specified. See [Joins](#joins) for more details. | `broadcast` |
 | `rowsInMemory` | INSERT or REPLACE<br /><br />Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the [known issues](./known-issues.md) around memory usage. | 100,000 |
 | `segmentSortOrder` | INSERT or REPLACE<br /><br />Normally, Druid sorts rows in individual segments using `__time` first, followed by the [CLUSTERED BY](#clustered-by) clause. When you set `segmentSortOrder`, Druid uses the order from this context parameter instead. Provide the column list as comma-separated values or as a JSON array in string form.<br />< br/>For example, consider an INSERT query that uses `CLUSTERED BY country` and has `segmentSortOrder` set to `__time,city,country`. [...]
-| `forceSegmentSortByTime` | INSERT or REPLACE<br /><br />When set to `true` (the default), Druid prepends `__time` to [CLUSTERED BY](#clustered-by) when determining the sort order for individual segments. Druid also requires that `segmentSortOrder`, if provided, starts with `__time`.<br /><br />When set to `false`, Druid uses the [CLUSTERED BY](#clustered-by) alone to determine the sort order for individual segments, and does not require that `segmentSortOrder` begin with `__time`. Sett [...]
+| `forceSegmentSortByTime` | INSERT or REPLACE<br /><br />When set to `true` (the default), Druid prepends `__time` to [CLUSTERED BY](#clustered-by) when determining the sort order for individual segments. Druid also requires that `segmentSortOrder`, if provided, starts with `__time`.<br /><br />When set to `false`, Druid uses the [CLUSTERED BY](#clustered-by) alone to determine the sort order for individual segments, and does not require that `segmentSortOrder` begin with `__time`. Sett [...]
 | `maxParseExceptions`| SELECT, INSERT, REPLACE<br /><br />Maximum number of parse exceptions that are ignored while executing the query before it stops with `TooManyWarningsFault`. To ignore all the parse exceptions, set the value to -1. | 0 |
 | `rowsPerSegment` | INSERT or REPLACE<br /><br />The number of rows per segment to target. The actual number of rows per segment may be somewhat higher or lower than this number. In most cases, use the default. For general information about sizing rows per segment, see [Segment Size Optimization](../operations/segment-optimization.md). | 3,000,000 |
 | `indexSpec` | INSERT or REPLACE<br /><br />An [`indexSpec`](../ingestion/ingestion-spec.md#indexspec) to use when generating segments. May be a JSON string or object. See [Front coding](../ingestion/ingestion-spec.md#front-coding) for details on configuring an `indexSpec` with front coding. | See [`indexSpec`](../ingestion/ingestion-spec.md#indexspec). |
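As an illustration of the MSQ context parameters documented in this hunk, a query context for an INSERT job might look like the following. The values simply restate the documented defaults, except `maxParseExceptions`, which is set to -1 to ignore all parse exceptions:

```json
{
  "forceSegmentSortByTime": true,
  "rowsInMemory": 100000,
  "rowsPerSegment": 3000000,
  "maxParseExceptions": -1
}
```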
diff --git a/docs/querying/sql-functions.md b/docs/querying/sql-functions.md
index ce0ce53460f..9f75a96ce79 100644
--- a/docs/querying/sql-functions.md
+++ b/docs/querying/sql-functions.md
@@ -285,7 +285,7 @@ Returns the following:
 
 ## APPROX_COUNT_DISTINCT_DS_THETA
 
-Returns the approximate number of distinct values in a Theta sketch column or a regular column. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta#aggregator) for a description of optional parameters.
+Returns the approximate number of distinct values in a Theta sketch column or a regular column. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta.md#aggregator) for a description of optional parameters.
 
 * **Syntax:** `APPROX_COUNT_DISTINCT_DS_THETA(expr, [size])`
 * **Function type:** Aggregation
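Following the syntax line above, a minimal call might look like this (the table and column names are hypothetical):

```sql
SELECT APPROX_COUNT_DISTINCT_DS_THETA(user_id) AS unique_users
FROM visits
```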
@@ -2305,7 +2305,7 @@ Returns a result similar to the following:
 
 ## DS_THETA
 
-Creates a Theta sketch on a column containing Theta sketches or a regular column. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta#aggregator) for a description of optional parameters.
+Creates a Theta sketch on a column containing Theta sketches or a regular column. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta.md#aggregator) for a description of optional parameters.
 
 * **Syntax:** `DS_THETA(expr, [size])`
 * **Function type:** Aggregation
@@ -5551,7 +5551,7 @@ Returns the following:
 
 ## THETA_SKETCH_INTERSECT
 
-Returns an intersection of Theta sketches. Each input expression must return a Theta sketch. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta#aggregator) for a description of optional parameters.
+Returns an intersection of Theta sketches. Each input expression must return a Theta sketch. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta.md#aggregator) for a description of optional parameters.
 
 * **Syntax:** `THETA_SKETCH_INTERSECT([size], expr0, expr1, ...)`
 * **Function type:** Scalar, sketch
@@ -5585,7 +5585,7 @@ Returns the following:
 
 ## THETA_SKETCH_NOT
 
-Returns a set difference of Theta sketches. Each input expression must return a Theta sketch. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta#aggregator) for a description of optional parameters.
+Returns a set difference of Theta sketches. Each input expression must return a Theta sketch. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta.md#aggregator) for a description of optional parameters.
 
 * **Syntax:** `THETA_SKETCH_NOT([size], expr0, expr1, ...)`
 * **Function type:** Scalar, sketch
@@ -5620,7 +5620,7 @@ Returns the following:
 
 ## THETA_SKETCH_UNION
 
-Returns a union of Theta sketches. Each input expression must return a Theta sketch. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta#aggregator) for a description of optional parameters.
+Returns a union of Theta sketches. Each input expression must return a Theta sketch. See [DataSketches Theta Sketch module](../development/extensions-core/datasketches-theta.md#aggregator) for a description of optional parameters.
 
 * **Syntax:**`THETA_SKETCH_UNION([size], expr0, expr1, ...)`
 * **Function type:** Scalar, sketch
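Tying the Theta sketch functions touched in this file together, a query estimating the overlap between two groups might look like the following. The table and column names are hypothetical, and `THETA_SKETCH_ESTIMATE` is a related function from the same docs page, assumed available here:

```sql
SELECT THETA_SKETCH_ESTIMATE(
  THETA_SKETCH_INTERSECT(
    DS_THETA(user_id) FILTER (WHERE channel = 'web'),
    DS_THETA(user_id) FILTER (WHERE channel = 'mobile')
  )
) AS users_on_both_channels
FROM visits
```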


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
