This is an automated email from the ASF dual-hosted git repository.

techdocsmith pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 7a1e1f88bb Remove experimental note from stable features (#12973)
7a1e1f88bb is described below

commit 7a1e1f88bb16876d231da8d57b6bb121738c8e16
Author: Jill Osborne <[email protected]>
AuthorDate: Thu Aug 25 17:26:46 2022 +0100

    Remove experimental note from stable features (#12973)
    
    * Removed experimental note for features that are no longer experimental
    
    * Updated native batch doc
---
 docs/design/router.md                                          | 4 ----
 docs/development/extensions-core/approximate-histograms.md     | 8 +-------
 docs/development/extensions-core/kafka-extraction-namespace.md | 2 --
 docs/development/extensions-core/kafka-ingestion.md            | 2 --
 docs/development/extensions-core/lookups-cached-global.md      | 3 ---
 docs/ingestion/data-formats.md                                 | 2 --
 docs/ingestion/native-batch.md                                 | 4 +---
 docs/querying/lookups.md                                       | 3 ---
 8 files changed, 2 insertions(+), 26 deletions(-)

diff --git a/docs/design/router.md b/docs/design/router.md
index 436a8b7b21..d493a9a76a 100644
--- a/docs/design/router.md
+++ b/docs/design/router.md
@@ -22,10 +22,6 @@ title: "Router Process"
   ~ under the License.
   -->
 
-
-> The Router is an optional and [experimental](../development/experimental.md) feature due to the fact that its recommended place in the Druid cluster architecture is still evolving.
-> However, it has been battle-tested in production, and it hosts the powerful [Druid console](../operations/druid-console.md), so you should feel safe deploying it.
-
 The Apache Druid Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.md) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set up provides query isolation such that queries for more impor [...]
 
 For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range.
diff --git a/docs/development/extensions-core/approximate-histograms.md b/docs/development/extensions-core/approximate-histograms.md
index 7c4450ee84..08dd753353 100644
--- a/docs/development/extensions-core/approximate-histograms.md
+++ b/docs/development/extensions-core/approximate-histograms.md
@@ -43,13 +43,7 @@ to compute approximate histograms, with the following modifications:
   increasing accuracy when there are few data points, or when dealing with
   discrete data points. You can find some of the details in [this post](https://metamarkets.com/2013/histograms/).
 
-Approximate histogram sketches are still experimental for a reason, and you
-should understand the limitations of the current implementation before using
-them. The approximation is heavily data-dependent, which makes it difficult to
-give good general guidelines, so you should experiment and see what parameters
-work well for your data.
-
-Here are a few things to note before using them:
+Here are a few things to note before using approximate histograms:
 
 - As indicated in the original paper, there are no formal error bounds on the
   approximation. In practice, the approximation gets worse if the distribution
diff --git a/docs/development/extensions-core/kafka-extraction-namespace.md b/docs/development/extensions-core/kafka-extraction-namespace.md
index 93e6858ca4..0efbf7b815 100644
--- a/docs/development/extensions-core/kafka-extraction-namespace.md
+++ b/docs/development/extensions-core/kafka-extraction-namespace.md
@@ -22,8 +22,6 @@ title: "Apache Kafka Lookups"
   ~ under the License.
   -->
 
-> Lookups are an [experimental](../experimental.md) feature.
-
 To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-lookups-cached-global` and `druid-kafka-extraction-namespace` in the extensions load list.
 
 If you need updates to populate as promptly as possible, it is possible to plug into a Kafka topic whose key is the old value and message is the desired new value (both in UTF-8) as a LookupExtractorFactory.
diff --git a/docs/development/extensions-core/kafka-ingestion.md b/docs/development/extensions-core/kafka-ingestion.md
index c3b592f37a..f6f1b00a79 100644
--- a/docs/development/extensions-core/kafka-ingestion.md
+++ b/docs/development/extensions-core/kafka-ingestion.md
@@ -134,8 +134,6 @@ If you want to ingest data from other fields in addition to the Kafka message co
 - the Kafka event timestamp
 - the Kafka event value that stores the payload.
 
-> The Kafka inputFormat is currently designated as experimental.
-
 For example, consider the following structure for a message that represents a fictitious wiki edit in a development environment:
 - **Event headers**: {"environment": "development"}
 - **Event key**: {"key: "wiki-edit"}
diff --git a/docs/development/extensions-core/lookups-cached-global.md b/docs/development/extensions-core/lookups-cached-global.md
index 6a8bcbcb5c..5842d3dea0 100644
--- a/docs/development/extensions-core/lookups-cached-global.md
+++ b/docs/development/extensions-core/lookups-cached-global.md
@@ -22,9 +22,6 @@ title: "Globally Cached Lookups"
   ~ under the License.
   -->
 
-
-> Lookups are an [experimental](../experimental.md) feature.
-
 To use this Apache Druid extension, [include](../extensions.md#loading-extensions) `druid-lookups-cached-global` in the extensions load list.
 
 ## Configuration
diff --git a/docs/ingestion/data-formats.md b/docs/ingestion/data-formats.md
index dbce610c21..751408d5a4 100644
--- a/docs/ingestion/data-formats.md
+++ b/docs/ingestion/data-formats.md
@@ -155,8 +155,6 @@ Be sure to change the `delimiter` to the appropriate delimiter for your data. Li
 
 Configure the Kafka `inputFormat` to load complete kafka records including header, key, and value. 
 
-> That Kafka `inputFormat` is currently designated as experimental.
-
 | Field | Type | Description | Required |
 |-------|------|-------------|----------|
 | type | String | Set value to `kafka`. | yes |
diff --git a/docs/ingestion/native-batch.md b/docs/ingestion/native-batch.md
index aba390b228..2cfe39afb2 100644
--- a/docs/ingestion/native-batch.md
+++ b/docs/ingestion/native-batch.md
@@ -393,9 +393,7 @@ them to create the final segments. Finally, they push the final segments to the
 
 #### Multi-dimension range partitioning
 
-> Multiple dimension (multi-dimension) range partitioning is an experimental feature.
-> Multi-dimension range partitioning is not supported in the sequential mode of the
-> `index_parallel` task type.
+> Multi-dimension range partitioning is not supported in the sequential mode of the `index_parallel` task type.
 
 Range partitioning has [several benefits](#benefits-of-range-partitioning) related to storage footprint and query
 performance. Multi-dimension range partitioning improves over single-dimension range partitioning by allowing
diff --git a/docs/querying/lookups.md b/docs/querying/lookups.md
index 9fe104b4ad..786d43faec 100644
--- a/docs/querying/lookups.md
+++ b/docs/querying/lookups.md
@@ -22,9 +22,6 @@ title: "Lookups"
   ~ under the License.
   -->
 
-
-> Lookups are an [experimental](../development/experimental.md) feature.
-
 Lookups are a concept in Apache Druid where dimension values are (optionally) replaced with new values, allowing join-like
 functionality. Applying lookups in Druid is similar to joining a dimension table in a data warehouse. See
 [dimension specs](../querying/dimensionspecs.md) for more information. For the purpose of these documents, a "key"


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
