This is an automated email from the ASF dual-hosted git repository.

victoria pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 6254658f61 docs: fix links (#14111)
6254658f61 is described below

commit 6254658f619a42c4ab0af1d7e61e861bd67d2083
Author: 317brian <[email protected]>
AuthorDate: Fri May 12 09:59:16 2023 -0700

    docs: fix links (#14111)
---
 docs/design/historical.md                                     |  2 +-
 docs/development/extensions-contrib/compressed-big-decimal.md |  2 +-
 docs/development/extensions-core/datasketches-hll.md          |  2 +-
 docs/development/extensions-core/druid-basic-security.md      |  2 +-
 docs/development/extensions-core/s3.md                        |  6 +++---
 docs/ingestion/ingestion-spec.md                              |  2 +-
 docs/multi-stage-query/reference.md                           | 10 +++++-----
 docs/querying/caching.md                                      |  6 +++---
 docs/tutorials/docker.md                                      |  2 +-
 9 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/docs/design/historical.md b/docs/design/historical.md
index 25a957404c..f3580f5dac 100644
--- a/docs/design/historical.md
+++ b/docs/design/historical.md
@@ -45,7 +45,7 @@ Each Historical process copies or "pulls" segment files from Deep Storage to loc
 
 See the [Tuning Guide](../operations/basic-cluster-tuning.md#segment-cache-size) for more information.
 
-The [Coordinator](../design/coordinator.html) controls the assignment of segments to Historicals and the balance of segments between Historicals. Historical processes do not communicate directly with each other, nor do they communicate directly with the Coordinator.  Instead, the Coordinator creates ephemeral entries in Zookeeper in a [load queue path](../configuration/index.md#path-configuration). Each Historical process maintains a connection to Zookeeper, watching those paths for segm [...]
+The [Coordinator](../design/coordinator.md) controls the assignment of segments to Historicals and the balance of segments between Historicals. Historical processes do not communicate directly with each other, nor do they communicate directly with the Coordinator.  Instead, the Coordinator creates ephemeral entries in Zookeeper in a [load queue path](../configuration/index.md#path-configuration). Each Historical process maintains a connection to Zookeeper, watching those paths for segmen [...]
 
 For more information about how the Coordinator assigns segments to Historical processes, see [Coordinator](../design/coordinator.md).
 
diff --git a/docs/development/extensions-contrib/compressed-big-decimal.md b/docs/development/extensions-contrib/compressed-big-decimal.md
index dc3c9651fe..5c96527493 100644
--- a/docs/development/extensions-contrib/compressed-big-decimal.md
+++ b/docs/development/extensions-contrib/compressed-big-decimal.md
@@ -61,7 +61,7 @@ There are currently no configuration properties specific to Compressed Big Decim
 |dimensions|A JSON list of [DimensionSpec](../../querying/dimensionspecs.md) (Notice that property is optional)|no|
 |limitSpec|See [LimitSpec](../../querying/limitspec.md)|no|
 |having|See [Having](../../querying/having.md)|no|
-|granularity|A period granularity; See [Period Granularities](../../querying/granularities.html#period-granularities)|yes|
+|granularity|A period granularity; See [Period Granularities](../../querying/granularities.md#period-granularities)|yes|
 |filter|See [Filters](../../querying/filters.md)|no|
 |aggregations|Aggregations forms the input to Averagers; See [Aggregations](../../querying/aggregations.md). The Aggregations must specify type, scale and size as follows for compressedBigDecimal Type ```"aggregations": [{"type": "compressedBigDecimal","name": "..","fieldName": "..","scale": [Numeric],"size": [Numeric]}```.  Please refer query example in Examples section.  |Yes|
 |postAggregations|Supports only aggregations as input; See [Post Aggregations](../../querying/post-aggregations.md)|no|
diff --git a/docs/development/extensions-core/datasketches-hll.md b/docs/development/extensions-core/datasketches-hll.md
index 8a56c982d1..0fec926993 100644
--- a/docs/development/extensions-core/datasketches-hll.md
+++ b/docs/development/extensions-core/datasketches-hll.md
@@ -65,7 +65,7 @@ For additional sketch types supported in Druid, see [DataSketches extension](dat
 The `HLLSketchBuild` aggregator builds an HLL sketch object from the specified input column. When used during ingestion, Druid stores pre-generated HLL sketch objects in the datasource instead of the raw data from the input column.
 When applied at query time on an existing dimension, you can use the resulting column as an intermediate dimension by the [post-aggregators](#post-aggregators).
 
-> It is very common to use `HLLSketchBuild` in combination with [rollup](../../ingestion/rollup.md) to create a [metric](../../ingestion/ingestion-spec.html#metricsspec) on high-cardinality columns.  In this example, a metric called `userid_hll` is included in the `metricsSpec`.  This will perform a HLL sketch on the `userid` field at ingestion time, allowing for highly-performant approximate `COUNT DISTINCT` query operations and improving roll-up ratios when `userid` is then left out of [...]
+> It is very common to use `HLLSketchBuild` in combination with [rollup](../../ingestion/rollup.md) to create a [metric](../../ingestion/ingestion-spec.md#metricsspec) on high-cardinality columns.  In this example, a metric called `userid_hll` is included in the `metricsSpec`.  This will perform a HLL sketch on the `userid` field at ingestion time, allowing for highly-performant approximate `COUNT DISTINCT` query operations and improving roll-up ratios when `userid` is then left out of t [...]
 >
 > ```
 > "metricsSpec": [
diff --git a/docs/development/extensions-core/druid-basic-security.md b/docs/development/extensions-core/druid-basic-security.md
index c11a43a024..1732ba19ac 100644
--- a/docs/development/extensions-core/druid-basic-security.md
+++ b/docs/development/extensions-core/druid-basic-security.md
@@ -35,7 +35,7 @@ druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage",
 ```
 
 To enable basic auth, configure the basic Authenticator, Escalator, and Authorizer in `common.runtime.properties`.
-See [Security overview](../../operations/security-overview.html#enable-an-authenticator) for an example configuration for HTTP basic authentication.
+See [Security overview](../../operations/security-overview.md#enable-an-authenticator) for an example configuration for HTTP basic authentication.
 
 Visit [Authentication and Authorization](../../design/auth.md) for more information on the implemented extension interfaces and for an example configuration.
 
diff --git a/docs/development/extensions-core/s3.md b/docs/development/extensions-core/s3.md
index 35a17eea89..c8fa755dfb 100644
--- a/docs/development/extensions-core/s3.md
+++ b/docs/development/extensions-core/s3.md
@@ -32,7 +32,7 @@ To use this Apache Druid extension, [include](../../development/extensions.md#lo
 
 ### Reading data from S3
 
-Use a native batch [Parallel task](../../ingestion/native-batch.md) with an [S3 input source](../../ingestion/native-batch-input-sources.html#s3-input-source) to read objects directly from S3.
+Use a native batch [Parallel task](../../ingestion/native-batch.md) with an [S3 input source](../../ingestion/native-batch-input-source.md#s3-input-source) to read objects directly from S3.
 
 Alternatively, use a [Hadoop task](../../ingestion/hadoop.md),
 and specify S3 paths in your [`inputSpec`](../../ingestion/hadoop.md#inputspec).
@@ -79,9 +79,9 @@ The configuration options are listed in order of precedence.  For example, if yo
 
 For more information, refer to the [Amazon Developer Guide](https://docs.aws.amazon.com/fr_fr/sdk-for-java/v1/developer-guide/credentials).
 
-Alternatively, you can bypass this chain by specifying an access key and secret key using a [Properties Object](../../ingestion/native-batch-input-sources.html#s3-input-source) inside your ingestion specification.
+Alternatively, you can bypass this chain by specifying an access key and secret key using a [Properties Object](../../ingestion/native-batch-input-source.md#s3-input-source) inside your ingestion specification.
 
-Use the property [`druid.startup.logging.maskProperties`](../../configuration/index.html#startup-logging) to mask credentials information in Druid logs.  For example, `["password", "secretKey", "awsSecretAccessKey"]`.
+Use the property [`druid.startup.logging.maskProperties`](../../configuration/index.md#startup-logging) to mask credentials information in Druid logs.  For example, `["password", "secretKey", "awsSecretAccessKey"]`.
 
 ### S3 permissions settings
 
diff --git a/docs/ingestion/ingestion-spec.md b/docs/ingestion/ingestion-spec.md
index d2e9a52a80..079f8e9f19 100644
--- a/docs/ingestion/ingestion-spec.md
+++ b/docs/ingestion/ingestion-spec.md
@@ -212,7 +212,7 @@ A `dimensionsSpec` can have the following components:
 | `dimensions`           | A list of [dimension names or objects](#dimension-objects). You cannot include the same column in both `dimensions` and `dimensionExclusions`.<br /><br />If `dimensions` and `spatialDimensions` are both null or empty arrays, Druid treats all columns other than timestamp or metrics that do not appear in `dimensionExclusions` as String-typed dimension columns. See [inclusions and exclusions](#inclusions-and-exclusions) for details.<br /><br />As a best practice,  [...]
 | `dimensionExclusions`  | The names of dimensions to exclude from ingestion. Only names are supported here, not objects.<br /><br />This list is only used if the `dimensions` and `spatialDimensions` lists are both null or empty arrays; otherwise it is ignored. See [inclusions and exclusions](#inclusions-and-exclusions) below for details. [...]
 | `spatialDimensions`    | An array of [spatial dimensions](../development/geo.md). [...]
-| `includeAllDimensions` | You can set `includeAllDimensions` to true to ingest both explicit dimensions in the `dimensions` field and other dimensions that the ingestion task discovers from input data. In this case, the explicit dimensions will appear first in order that you specify them and the dimensions dynamically discovered will come after. This flag can be useful especially with auto schema discovery using [`flattenSpec`](./data-formats.html#flattenspec). If this is not set and th [...]
+| `includeAllDimensions` | You can set `includeAllDimensions` to true to ingest both explicit dimensions in the `dimensions` field and other dimensions that the ingestion task discovers from input data. In this case, the explicit dimensions will appear first in order that you specify them and the dimensions dynamically discovered will come after. This flag can be useful especially with auto schema discovery using [`flattenSpec`](./data-formats.md#flattenspec). If this is not set and the  [...]
 
 #### Dimension objects
 
diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md
index b1055fb4c3..171d842e7e 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -86,7 +86,7 @@ The input source and format are as above. The columns are expressed as in a SQL
 Example: `(timestamp VARCHAR, metricType VARCHAR, value BIGINT)`. The optional `EXTEND` keyword
 can precede the column list: `EXTEND (timestamp VARCHAR...)`.
 
-For more information, see [Read external data with EXTERN](concepts.md#extern).
+For more information, see [Read external data with EXTERN](concepts.md#read-external-data-with-extern).
 
 ### `INSERT`
 
@@ -114,7 +114,7 @@ INSERT consists of the following parts:
 4. A [PARTITIONED BY](#partitioned-by) clause, such as `PARTITIONED BY DAY`.
 5. An optional [CLUSTERED BY](#clustered-by) clause.
 
-For more information, see [Load data with INSERT](concepts.md#insert).
+For more information, see [Load data with INSERT](concepts.md#load-data-with-insert).
 
 ### `REPLACE`
 
@@ -197,7 +197,7 @@ The following ISO 8601 periods are supported for `TIME_FLOOR` and the string con
 - P3M
 - P1Y
 
-For more information about partitioning, see [Partitioning](concepts.md#partitioning).
+For more information about partitioning, see [Partitioning](concepts.md#partitioning-by-time).
 
 ### `CLUSTERED BY`
 
@@ -267,7 +267,7 @@ inputs are all connected as broadcast inputs to the "base" stage.
 Together, all of these non-base leaf inputs must not exceed the [limit on broadcast table footprint](#limits). There
 is no limit on the size of the base (leftmost) input.
 
-Only LEFT JOIN, INNER JOIN, and CROSS JOIN are supported with with `broadcast`.
+Only LEFT JOIN, INNER JOIN, and CROSS JOIN are supported with `broadcast`.
 
 Join conditions, if present, must be equalities. It is not necessary to include a join condition; for example,
 `CROSS JOIN` and comma join do not require join conditions.
@@ -336,7 +336,7 @@ The context parameter that sets `sqlJoinAlgorithm` to `sortMerge` is not shown i
 
 ## Durable Storage
 
-Using durable storage with your SQL-based ingestions can improve their reliability by writing intermediate files to a storage location temporarily. 
+Using durable storage with your SQL-based ingestion can improve their reliability by writing intermediate files to a storage location temporarily. 
 
 To prevent durable storage from getting filled up with temporary files in case the tasks fail to clean them up, a periodic
 cleaner can be scheduled to clean the directories corresponding to which there isn't a controller task running. It utilizes
diff --git a/docs/querying/caching.md b/docs/querying/caching.md
index e3ded14f8a..e8f3fcaedf 100644
--- a/docs/querying/caching.md
+++ b/docs/querying/caching.md
@@ -32,7 +32,7 @@ If you're unfamiliar with Druid architecture, review the following topics before
 
 For instructions to configure query caching see [Using query caching](./using-caching.md).
 
-Cache monitoring, including the hit rate and number of evictions, is available in [Druid metrics](../operations/metrics.html#cache).
+Cache monitoring, including the hit rate and number of evictions, is available in [Druid metrics](../operations/metrics.md#cache).
 
 Query-level caching is in addition to [data-level caching](../design/historical.md) on Historicals.
 
@@ -53,7 +53,7 @@ Druid invalidates any cache the moment any underlying data change to avoid retur
 
 The primary form of caching in Druid is a *per-segment results cache*.  This cache stores partial query results on a per-segment basis and is enabled on Historical services by default.
 
-The per-segment results cache allows Druid to maintain a low-eviction-rate cache for segments that do not change, especially important for those segments that [historical](../design/historical.html) processes pull into their local _segment cache_ from [deep storage](../dependencies/deep-storage.html). Real-time segments, on the other hand, continue to have results computed at query time.
+The per-segment results cache allows Druid to maintain a low-eviction-rate cache for segments that do not change, especially important for those segments that [historical](../design/historical.md) processes pull into their local _segment cache_ from [deep storage](../dependencies/deep-storage.md). Real-time segments, on the other hand, continue to have results computed at query time.
 
 Druid may potentially merge per-segment cached results with the results of later queries that use a similar basic shape with similar filters, aggregations, etc. For example, if the query is identical except that it covers a different time period.
 
@@ -79,7 +79,7 @@ Use *whole-query caching* on the Broker to increase query efficiency when there
 
 - On Brokers for small production clusters with less than five servers. 
 
-Avoid using per-segment cache at the Broker for large production clusters. When the Broker cache is enabled (`druid.broker.cache.populateCache` is `true`) and `populateCache` _is not_ `false` in the [query context](../querying/query-context.html), individual Historicals will _not_ merge individual segment-level results, and instead pass these back to the lead Broker.  The Broker must then carry out a large merge from _all_ segments on its own.
+Avoid using per-segment cache at the Broker for large production clusters. When the Broker cache is enabled (`druid.broker.cache.populateCache` is `true`) and `populateCache` _is not_ `false` in the [query context](../querying/query-context.md), individual Historicals will _not_ merge individual segment-level results, and instead pass these back to the lead Broker.  The Broker must then carry out a large merge from _all_ segments on its own.
 
 **Whole-query cache** is available exclusively on Brokers.
 
diff --git a/docs/tutorials/docker.md b/docs/tutorials/docker.md
index 955a6e2f7b..5b9c2351a0 100644
--- a/docs/tutorials/docker.md
+++ b/docs/tutorials/docker.md
@@ -35,7 +35,7 @@ This tutorial assumes you will download the required files from GitHub. The file
 
 ### Docker memory requirements
 
-The default `docker-compose.yml` launches eight containers: Zookeeper, PostgreSQL, and six Druid containers based upon the [micro quickstart configuration](../operations/single-server.html#single-server-reference-configurations-deprecated).
+The default `docker-compose.yml` launches eight containers: Zookeeper, PostgreSQL, and six Druid containers based upon the [micro quickstart configuration](../operations/single-server.md#single-server-reference-configurations-deprecated).
 Each Druid service is configured to use up to 7 GiB of memory (6 GiB direct memory and 1 GiB heap). However, the quickstart will not use all the available memory.
 
 For this setup, Docker needs at least 6 GiB of memory available for the Druid cluster. For Docker Desktop on Mac OS, adjust the memory settings in the [Docker Desktop preferences](https://docs.docker.com/desktop/mac/). If you experience a crash with a 137 error code you likely don't have enough memory allocated to Docker.


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
