This is an automated email from the ASF dual-hosted git repository.
jonwei pushed a commit to branch 0.14.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git
The following commit(s) were added to refs/heads/0.14.0-incubating by this push:
new c1f0d67 Add more Apache branding to docs (#7515) (#7517)
c1f0d67 is described below
commit c1f0d6705b8694f41fe89fb7fbd0ce8d4eefee97
Author: Jonathan Wei <[email protected]>
AuthorDate: Fri Apr 19 18:41:46 2019 -0700
Add more Apache branding to docs (#7515) (#7517)
---
docs/content/comparisons/druid-vs-elasticsearch.md | 2 +-
docs/content/comparisons/druid-vs-key-value.md | 2 +-
docs/content/comparisons/druid-vs-kudu.md | 2 +-
docs/content/comparisons/druid-vs-redshift.md | 2 +-
docs/content/comparisons/druid-vs-spark.md | 2 +-
docs/content/comparisons/druid-vs-sql-on-hadoop.md | 2 +-
docs/content/configuration/index.md | 2 +-
docs/content/configuration/logging.md | 2 +-
docs/content/configuration/realtime.md | 2 +-
docs/content/dependencies/cassandra-deep-storage.md | 2 +-
docs/content/dependencies/deep-storage.md | 2 +-
docs/content/dependencies/metadata-storage.md | 2 +-
docs/content/dependencies/zookeeper.md | 2 +-
docs/content/design/auth.md | 2 ++
docs/content/design/broker.md | 4 ++--
docs/content/design/coordinator.md | 2 +-
docs/content/design/historical.md | 2 +-
docs/content/design/index.md | 4 ++--
docs/content/design/indexing-service.md | 2 +-
docs/content/design/middlemanager.md | 2 +-
docs/content/design/overlord.md | 2 +-
docs/content/design/peons.md | 2 +-
docs/content/design/plumber.md | 2 +-
docs/content/design/processes.md | 2 +-
docs/content/design/realtime.md | 2 +-
docs/content/design/segments.md | 2 +-
docs/content/development/build.md | 2 +-
docs/content/development/experimental.md | 2 +-
.../development/extensions-contrib/ambari-metrics-emitter.md | 4 ++--
docs/content/development/extensions-contrib/azure.md | 4 ++--
docs/content/development/extensions-contrib/cassandra.md | 4 ++--
docs/content/development/extensions-contrib/cloudfiles.md | 4 +++-
docs/content/development/extensions-contrib/distinctcount.md | 2 +-
docs/content/development/extensions-contrib/google.md | 2 +-
docs/content/development/extensions-contrib/graphite.md | 2 +-
docs/content/development/extensions-contrib/influx.md | 2 +-
docs/content/development/extensions-contrib/kafka-emitter.md | 4 ++--
docs/content/development/extensions-contrib/kafka-simple.md | 4 ++--
docs/content/development/extensions-contrib/materialized-view.md | 2 +-
docs/content/development/extensions-contrib/opentsdb-emitter.md | 2 +-
docs/content/development/extensions-contrib/rabbitmq.md | 2 +-
docs/content/development/extensions-contrib/redis-cache.md | 2 ++
docs/content/development/extensions-contrib/rocketmq.md | 2 +-
docs/content/development/extensions-contrib/sqlserver.md | 2 +-
docs/content/development/extensions-contrib/statsd.md | 2 +-
docs/content/development/extensions-contrib/thrift.md | 2 +-
docs/content/development/extensions-contrib/time-min-max.md | 2 +-
docs/content/development/extensions-core/approximate-histograms.md | 2 +-
docs/content/development/extensions-core/avro.md | 2 +-
docs/content/development/extensions-core/bloom-filter.md | 2 +-
docs/content/development/extensions-core/datasketches-extension.md | 2 +-
docs/content/development/extensions-core/datasketches-hll.md | 2 +-
docs/content/development/extensions-core/datasketches-quantiles.md | 2 +-
docs/content/development/extensions-core/datasketches-theta.md | 2 +-
docs/content/development/extensions-core/datasketches-tuple.md | 2 +-
docs/content/development/extensions-core/druid-basic-security.md | 2 +-
docs/content/development/extensions-core/druid-kerberos.md | 2 +-
docs/content/development/extensions-core/druid-lookups.md | 2 +-
docs/content/development/extensions-core/hdfs.md | 2 +-
docs/content/development/extensions-core/kafka-eight-firehose.md | 4 ++--
.../development/extensions-core/kafka-extraction-namespace.md | 4 ++--
docs/content/development/extensions-core/kafka-ingestion.md | 4 ++--
docs/content/development/extensions-core/kinesis-ingestion.md | 4 ++--
docs/content/development/extensions-core/lookups-cached-global.md | 2 +-
docs/content/development/extensions-core/mysql.md | 2 +-
docs/content/development/extensions-core/parquet.md | 6 +++---
docs/content/development/extensions-core/postgresql.md | 2 +-
docs/content/development/extensions-core/protobuf.md | 2 +-
docs/content/development/extensions-core/s3.md | 2 +-
.../content/development/extensions-core/simple-client-sslcontext.md | 2 +-
docs/content/development/extensions-core/stats.md | 2 +-
docs/content/development/extensions-core/test-stats.md | 2 +-
docs/content/development/extensions.md | 2 +-
docs/content/development/geo.md | 2 +-
.../development/integrating-druid-with-other-technologies.md | 2 +-
docs/content/development/javascript.md | 2 +-
docs/content/development/modules.md | 2 +-
docs/content/development/overview.md | 2 +-
docs/content/development/router.md | 2 +-
docs/content/development/versioning.md | 2 +-
docs/content/ingestion/batch-ingestion.md | 2 +-
docs/content/ingestion/command-line-hadoop-indexer.md | 2 +-
docs/content/ingestion/compaction.md | 2 +-
docs/content/ingestion/data-formats.md | 2 +-
docs/content/ingestion/delete-data.md | 2 +-
docs/content/ingestion/faq.md | 2 +-
docs/content/ingestion/firehose.md | 2 +-
docs/content/ingestion/hadoop-vs-native-batch.md | 2 +-
docs/content/ingestion/hadoop.md | 2 +-
docs/content/ingestion/index.md | 2 +-
docs/content/ingestion/ingestion-spec.md | 2 +-
docs/content/ingestion/locking-and-priority.md | 2 +-
docs/content/ingestion/native_tasks.md | 2 +-
docs/content/ingestion/schema-changes.md | 2 +-
docs/content/ingestion/schema-design.md | 2 +-
docs/content/ingestion/stream-ingestion.md | 2 +-
docs/content/ingestion/stream-pull.md | 2 +-
docs/content/ingestion/stream-push.md | 2 +-
docs/content/ingestion/tasks.md | 2 +-
docs/content/ingestion/transform-spec.md | 2 +-
docs/content/ingestion/update-existing-data.md | 2 +-
docs/content/misc/math-expr.md | 2 +-
docs/content/misc/papers-and-talks.md | 2 +-
docs/content/operations/alerts.md | 2 +-
docs/content/operations/api-reference.md | 2 +-
docs/content/operations/druid-console.md | 2 +-
docs/content/operations/dump-segment.md | 2 +-
docs/content/operations/http-compression.md | 2 +-
docs/content/operations/including-extensions.md | 2 +-
docs/content/operations/management-uis.md | 2 +-
docs/content/operations/metrics.md | 2 +-
docs/content/operations/other-hadoop.md | 4 ++--
docs/content/operations/password-provider.md | 2 +-
docs/content/operations/performance-faq.md | 2 +-
docs/content/operations/pull-deps.md | 4 ++--
docs/content/operations/recommendations.md | 2 +-
docs/content/operations/reset-cluster.md | 2 +-
docs/content/operations/rolling-updates.md | 2 +-
docs/content/operations/rule-configuration.md | 2 +-
docs/content/operations/segment-optimization.md | 2 +-
docs/content/operations/tls-support.md | 2 +-
docs/content/querying/aggregations.md | 2 +-
docs/content/querying/caching.md | 2 +-
docs/content/querying/datasource.md | 2 +-
docs/content/querying/datasourcemetadataquery.md | 2 +-
docs/content/querying/dimensionspecs.md | 2 +-
docs/content/querying/filters.md | 2 +-
docs/content/querying/granularities.md | 2 +-
docs/content/querying/groupbyquery.md | 2 +-
docs/content/querying/having.md | 2 +-
docs/content/querying/hll-old.md | 2 +-
docs/content/querying/joins.md | 2 +-
docs/content/querying/lookups.md | 2 +-
docs/content/querying/multi-value-dimensions.md | 2 +-
docs/content/querying/multitenancy.md | 2 +-
docs/content/querying/post-aggregations.md | 2 +-
docs/content/querying/query-context.md | 2 +-
docs/content/querying/querying.md | 2 +-
docs/content/querying/scan-query.md | 2 +-
docs/content/querying/searchquery.md | 2 +-
docs/content/querying/segmentmetadataquery.md | 2 +-
docs/content/querying/select-query.md | 2 +-
docs/content/querying/sql.md | 2 +-
docs/content/querying/timeboundaryquery.md | 2 +-
docs/content/querying/timeseriesquery.md | 2 +-
docs/content/querying/topnmetricspec.md | 2 +-
docs/content/querying/topnquery.md | 2 +-
docs/content/querying/virtual-columns.md | 2 +-
docs/content/toc.md | 4 ++--
docs/content/tutorials/cluster.md | 2 +-
docs/content/tutorials/index.md | 6 +++---
docs/content/tutorials/tutorial-batch-hadoop.md | 4 ++--
docs/content/tutorials/tutorial-batch.md | 2 +-
docs/content/tutorials/tutorial-compaction.md | 2 +-
docs/content/tutorials/tutorial-delete-data.md | 2 +-
docs/content/tutorials/tutorial-ingestion-spec.md | 2 +-
docs/content/tutorials/tutorial-kafka.md | 4 ++--
docs/content/tutorials/tutorial-query.md | 2 +-
docs/content/tutorials/tutorial-retention.md | 2 +-
docs/content/tutorials/tutorial-rollup.md | 2 +-
docs/content/tutorials/tutorial-tranquility.md | 2 +-
docs/content/tutorials/tutorial-transform-spec.md | 2 +-
docs/content/tutorials/tutorial-update-data.md | 2 +-
163 files changed, 187 insertions(+), 181 deletions(-)
diff --git a/docs/content/comparisons/druid-vs-elasticsearch.md b/docs/content/comparisons/druid-vs-elasticsearch.md
index da0a0c7..451160f 100644
--- a/docs/content/comparisons/druid-vs-elasticsearch.md
+++ b/docs/content/comparisons/druid-vs-elasticsearch.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid vs Elasticsearch"
+title: "Apache Druid (incubating) vs Elasticsearch"
---
<!--
diff --git a/docs/content/comparisons/druid-vs-key-value.md b/docs/content/comparisons/druid-vs-key-value.md
index 49cb75b..252e16c 100644
--- a/docs/content/comparisons/druid-vs-key-value.md
+++ b/docs/content/comparisons/druid-vs-key-value.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)"
+title: "Apache Druid (incubating) vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)"
---
<!--
diff --git a/docs/content/comparisons/druid-vs-kudu.md b/docs/content/comparisons/druid-vs-kudu.md
index 8d8b70a..37d11c8 100644
--- a/docs/content/comparisons/druid-vs-kudu.md
+++ b/docs/content/comparisons/druid-vs-kudu.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid vs Kudu"
+title: "Apache Druid (incubating) vs Kudu"
---
<!--
diff --git a/docs/content/comparisons/druid-vs-redshift.md b/docs/content/comparisons/druid-vs-redshift.md
index fc4fe57..009f0eb 100644
--- a/docs/content/comparisons/druid-vs-redshift.md
+++ b/docs/content/comparisons/druid-vs-redshift.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid vs Redshift"
+title: "Apache Druid (incubating) vs Redshift"
---
<!--
diff --git a/docs/content/comparisons/druid-vs-spark.md b/docs/content/comparisons/druid-vs-spark.md
index 5df91d9..7b8c263 100644
--- a/docs/content/comparisons/druid-vs-spark.md
+++ b/docs/content/comparisons/druid-vs-spark.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid vs Spark"
+title: "Apache Druid (incubating) vs Spark"
---
<!--
diff --git a/docs/content/comparisons/druid-vs-sql-on-hadoop.md b/docs/content/comparisons/druid-vs-sql-on-hadoop.md
index e75261d..e0c1fba 100644
--- a/docs/content/comparisons/druid-vs-sql-on-hadoop.md
+++ b/docs/content/comparisons/druid-vs-sql-on-hadoop.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid vs SQL-on-Hadoop"
+title: "Apache Druid (incubating) vs SQL-on-Hadoop"
---
<!--
diff --git a/docs/content/configuration/index.md b/docs/content/configuration/index.md
index ac703c8..58d0a5b 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -24,7 +24,7 @@ title: "Configuration Reference"
# Configuration Reference
-This page documents all of the configuration properties for each Druid service type.
+This page documents all of the configuration properties for each Apache Druid (incubating) service type.
## Table of Contents
* [Recommended Configuration File Organization](#recommended-configuration-file-organization)
diff --git a/docs/content/configuration/logging.md b/docs/content/configuration/logging.md
index 131bc87..1c89b7d 100644
--- a/docs/content/configuration/logging.md
+++ b/docs/content/configuration/logging.md
@@ -24,7 +24,7 @@ title: "Logging"
# Logging
-Druid processes will emit logs that are useful for debugging to the console. Druid processes also emit periodic metrics about their state. For more about metrics, see [Configuration](../configuration/index.html#enabling-metrics). Metric logs are printed to the console by default, and can be disabled with `-Ddruid.emitter.logging.logLevel=debug`.
+Apache Druid (incubating) processes will emit logs that are useful for debugging to the console. Druid processes also emit periodic metrics about their state. For more about metrics, see [Configuration](../configuration/index.html#enabling-metrics). Metric logs are printed to the console by default, and can be disabled with `-Ddruid.emitter.logging.logLevel=debug`.
Druid uses [log4j2](http://logging.apache.org/log4j/2.x/) for logging. Logging can be configured with a log4j2.xml file. Add the path to the directory containing the log4j2.xml file (e.g. the _common/ dir) to your classpath if you want to override default Druid log configuration. Note that this directory should be earlier in the classpath than the druid jars. The easiest way to do this is to prefix the classpath with the config dir.
diff --git a/docs/content/configuration/realtime.md b/docs/content/configuration/realtime.md
index 4e806da..49cc934 100644
--- a/docs/content/configuration/realtime.md
+++ b/docs/content/configuration/realtime.md
@@ -24,7 +24,7 @@ title: "Realtime Process Configuration"
# Realtime Process Configuration
-For general Realtime Process information, see [here](../design/realtime.html).
+For general Apache Druid (incubating) Realtime Process information, see [here](../design/realtime.html).
Runtime Configuration
---------------------
diff --git a/docs/content/dependencies/cassandra-deep-storage.md b/docs/content/dependencies/cassandra-deep-storage.md
index 4137f04..6cb42d2 100644
--- a/docs/content/dependencies/cassandra-deep-storage.md
+++ b/docs/content/dependencies/cassandra-deep-storage.md
@@ -26,7 +26,7 @@ title: "Cassandra Deep Storage"
## Introduction
-Druid can use Cassandra as a deep storage mechanism. Segments and their metadata are stored in Cassandra in two tables:
+Apache Druid (incubating) can use Apache Cassandra as a deep storage mechanism. Segments and their metadata are stored in Cassandra in two tables:
`index_storage` and `descriptor_storage`. Underneath the hood, the Cassandra integration leverages Astyanax. The
index storage table is a [Chunked Object](https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store) repository. It contains
compressed segments for distribution to Historical processes. Since segments can be large, the Chunked Object storage allows the integration to multi-thread
diff --git a/docs/content/dependencies/deep-storage.md b/docs/content/dependencies/deep-storage.md
index eace739..c9c8eff 100644
--- a/docs/content/dependencies/deep-storage.md
+++ b/docs/content/dependencies/deep-storage.md
@@ -24,7 +24,7 @@ title: "Deep Storage"
# Deep Storage
-Deep storage is where segments are stored. It is a storage mechanism that Druid does not provide. This deep storage infrastructure defines the level of durability of your data, as long as Druid processes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.
+Deep storage is where segments are stored. It is a storage mechanism that Apache Druid (incubating) does not provide. This deep storage infrastructure defines the level of durability of your data, as long as Druid processes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.
## Local Mount
diff --git a/docs/content/dependencies/metadata-storage.md b/docs/content/dependencies/metadata-storage.md
index 5135eca1..e76eb2f 100644
--- a/docs/content/dependencies/metadata-storage.md
+++ b/docs/content/dependencies/metadata-storage.md
@@ -24,7 +24,7 @@ title: "Metadata Storage"
# Metadata Storage
-The Metadata Storage is an external dependency of Druid. Druid uses it to store
+The Metadata Storage is an external dependency of Apache Druid (incubating). Druid uses it to store
various metadata about the system, but not to store the actual data. There are
a number of tables used for various purposes described below.
diff --git a/docs/content/dependencies/zookeeper.md b/docs/content/dependencies/zookeeper.md
index e944441..a41e815 100644
--- a/docs/content/dependencies/zookeeper.md
+++ b/docs/content/dependencies/zookeeper.md
@@ -24,7 +24,7 @@ title: "ZooKeeper"
# ZooKeeper
-Druid uses [ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are
+Apache Druid (incubating) uses [Apache ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are
1. [Coordinator](../design/coordinator.html) leader election
2. Segment "publishing" protocol from [Historical](../design/historical.html) and [Realtime](../design/realtime.html)
diff --git a/docs/content/design/auth.md b/docs/content/design/auth.md
index 8063889..c46c83f 100644
--- a/docs/content/design/auth.md
+++ b/docs/content/design/auth.md
@@ -24,6 +24,8 @@ title: "Authentication and Authorization"
# Authentication and Authorization
+This document describes non-extension specific Apache Druid (incubating) authentication and authorization configurations.
+
|Property|Type|Description|Default|Required|
|--------|-----------|--------|--------|--------|
|`druid.auth.authenticatorChain`|JSON List of Strings|List of Authenticator type names|["allowAll"]|no|
diff --git a/docs/content/design/broker.md b/docs/content/design/broker.md
index 29ea42e..9f11551 100644
--- a/docs/content/design/broker.md
+++ b/docs/content/design/broker.md
@@ -26,7 +26,7 @@ title: "Broker"
### Configuration
-For Broker Process Configuration, see [Broker Configuration](../configuration/index.html#broker).
+For Apache Druid (incubating) Broker Process Configuration, see [Broker Configuration](../configuration/index.html#broker).
### HTTP endpoints
@@ -45,7 +45,7 @@ org.apache.druid.cli.Main server broker
### Forwarding Queries
-Most druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid [Segments](../design/segments.html) are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely [...]
+Most Druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid [Segments](../design/segments.html) are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely [...]
To determine which processes to forward queries to, the Broker process first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Historical](../design/historical.html) and streaming ingestion [Peon](../design/peons.html) processes and the segments they are serving. For every datasource in Zookeeper, the Broker process builds a timeline of segments and the processes that serve them. When queries are received for a specific datasource and interv [...]
diff --git a/docs/content/design/coordinator.md b/docs/content/design/coordinator.md
index 810f212..49d8a51 100644
--- a/docs/content/design/coordinator.md
+++ b/docs/content/design/coordinator.md
@@ -26,7 +26,7 @@ title: "Coordinator Process"
### Configuration
-For Coordinator Process Configuration, see [Coordinator Configuration](../configuration/index.html#coordinator).
+For Apache Druid (incubating) Coordinator Process Configuration, see [Coordinator Configuration](../configuration/index.html#coordinator).
### HTTP endpoints
diff --git a/docs/content/design/historical.md b/docs/content/design/historical.md
index c44d3fb..098950c 100644
--- a/docs/content/design/historical.md
+++ b/docs/content/design/historical.md
@@ -26,7 +26,7 @@ title: "Historical Process"
### Configuration
-For Historical Process Configuration, see [Historical Configuration](../configuration/index.html#historical).
+For Apache Druid (incubating) Historical Process Configuration, see [Historical Configuration](../configuration/index.html#historical).
### HTTP Endpoints
diff --git a/docs/content/design/index.md b/docs/content/design/index.md
index 60a5a2b..ec7e38a 100644
--- a/docs/content/design/index.md
+++ b/docs/content/design/index.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Design"
+title: "Apache Druid (incubating) Design"
---
<!--
@@ -24,7 +24,7 @@ title: "Design"
# What is Druid?<a id="what-is-druid"></a>
-Druid is a data store designed for high-performance slice-and-dice analytics
+Apache Druid (incubating) is a data store designed for high-performance slice-and-dice analytics
("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)"-style) on large data sets. Druid is most often
used as a data store for powering GUI analytical applications, or as a backend for highly-concurrent APIs that need
fast aggregations. Common application areas for Druid include:
fast aggregations. Common application areas for Druid include:
diff --git a/docs/content/design/indexing-service.md b/docs/content/design/indexing-service.md
index 19d788b..3c66bc1 100644
--- a/docs/content/design/indexing-service.md
+++ b/docs/content/design/indexing-service.md
@@ -24,7 +24,7 @@ title: "Indexing Service"
# Indexing Service
-The indexing service is a highly-available, distributed service that runs indexing related tasks.
+The Apache Druid (incubating) indexing service is a highly-available, distributed service that runs indexing related tasks.
Indexing [tasks](../ingestion/tasks.html) create (and sometimes destroy) Druid [segments](../design/segments.html). The indexing service has a master/slave like architecture.
diff --git a/docs/content/design/middlemanager.md b/docs/content/design/middlemanager.md
index 60d1d0e..52b193f 100644
--- a/docs/content/design/middlemanager.md
+++ b/docs/content/design/middlemanager.md
@@ -26,7 +26,7 @@ title: "MiddleManager Process"
### Configuration
-For Middlemanager Process Configuration, see [Indexing Service Configuration](../configuration/index.html#middlemanager-and-peons).
+For Apache Druid (incubating) Middlemanager Process Configuration, see [Indexing Service Configuration](../configuration/index.html#middlemanager-and-peons).
### HTTP Endpoints
diff --git a/docs/content/design/overlord.md b/docs/content/design/overlord.md
index 9fb2465..139c91e 100644
--- a/docs/content/design/overlord.md
+++ b/docs/content/design/overlord.md
@@ -26,7 +26,7 @@ title: "Overlord Process"
### Configuration
-For Overlord Process Configuration, see [Overlord Configuration](../configuration/index.html#overlord).
+For Apache Druid (incubating) Overlord Process Configuration, see [Overlord Configuration](../configuration/index.html#overlord).
### HTTP Endpoints
diff --git a/docs/content/design/peons.md b/docs/content/design/peons.md
index 5af3fd8..668a26a 100644
--- a/docs/content/design/peons.md
+++ b/docs/content/design/peons.md
@@ -26,7 +26,7 @@ title: "Peons"
### Configuration
-For Peon Configuration, see [Peon Query Configuration](../configuration/index.html#peon-query-configuration) and [Additional Peon Configuration](../configuration/index.html#additional-peon-configuration).
+For Apache Druid (incubating) Peon Configuration, see [Peon Query Configuration](../configuration/index.html#peon-query-configuration) and [Additional Peon Configuration](../configuration/index.html#additional-peon-configuration).
### HTTP Endpoints
diff --git a/docs/content/design/plumber.md b/docs/content/design/plumber.md
index a4dd6ee..e6e4ac8 100644
--- a/docs/content/design/plumber.md
+++ b/docs/content/design/plumber.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Plumbers"
+title: "Apache Druid (incubating) Plumbers"
---
<!--
diff --git a/docs/content/design/processes.md b/docs/content/design/processes.md
index 3cab6ec..d010d6e 100644
--- a/docs/content/design/processes.md
+++ b/docs/content/design/processes.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Processes and Servers"
+title: "Apache Druid (incubating) Processes and Servers"
---
<!--
diff --git a/docs/content/design/realtime.md b/docs/content/design/realtime.md
index f3b618a..df6b4e0 100644
--- a/docs/content/design/realtime.md
+++ b/docs/content/design/realtime.md
@@ -28,7 +28,7 @@ title: "Real-time Process"
NOTE: Realtime processes are deprecated. Please use the <a href="../development/extensions-core/kafka-ingestion.html">Kafka Indexing Service</a> for stream pull use cases instead.
</div>
-For Real-time Process Configuration, see [Realtime Configuration](../configuration/realtime.html).
+For Apache Druid (incubating) Real-time Process Configuration, see [Realtime Configuration](../configuration/realtime.html).
For Real-time Ingestion, see [Realtime Ingestion](../ingestion/stream-ingestion.html).
diff --git a/docs/content/design/segments.md b/docs/content/design/segments.md
index 7fe7e6e..d8d69c1 100644
--- a/docs/content/design/segments.md
+++ b/docs/content/design/segments.md
@@ -24,7 +24,7 @@ title: "Segments"
# Segments
-Druid stores its index in *segment files*, which are partitioned by
+Apache Druid (incubating) stores its index in *segment files*, which are partitioned by
time. In a basic setup, one segment file is created for each time
interval, where the time interval is configurable in the
`segmentGranularity` parameter of the `granularitySpec`, which is
diff --git a/docs/content/development/build.md b/docs/content/development/build.md
index e1affc8..3600406 100644
--- a/docs/content/development/build.md
+++ b/docs/content/development/build.md
@@ -24,7 +24,7 @@ title: "Build from Source"
# Build from Source
-You can build Druid directly from source. Please note that these instructions are for building the latest stable version of Druid.
+You can build Apache Druid (incubating) directly from source. Please note that these instructions are for building the latest stable version of Druid.
For building the latest code in master, follow the instructions [here](https://github.com/apache/incubator-druid/blob/master/docs/content/development/build.md).
diff --git a/docs/content/development/experimental.md b/docs/content/development/experimental.md
index e09d26b..adf4e24 100644
--- a/docs/content/development/experimental.md
+++ b/docs/content/development/experimental.md
@@ -36,4 +36,4 @@ To enable experimental features, include their artifacts in the configuration ru
druid.extensions.loadList=["druid-histogram"]
```
-The configuration files for all the Druid processes need to be updated with this.
+The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
diff --git a/docs/content/development/extensions-contrib/ambari-metrics-emitter.md b/docs/content/development/extensions-contrib/ambari-metrics-emitter.md
index a498cb8..d8c3833 100644
--- a/docs/content/development/extensions-contrib/ambari-metrics-emitter.md
+++ b/docs/content/development/extensions-contrib/ambari-metrics-emitter.md
@@ -24,11 +24,11 @@ title: "Ambari Metrics Emitter"
# Ambari Metrics Emitter
-To use this extension, make sure to [include](../../operations/including-extensions.html) `ambari-metrics-emitter` extension.
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `ambari-metrics-emitter` extension.
## Introduction
-This extension emits druid metrics to a ambari-metrics carbon server.
+This extension emits Druid metrics to a ambari-metrics carbon server.
Events are sent after been [pickled](http://ambari-metrics.readthedocs.org/en/latest/feeding-carbon.html#the-pickle-protocol); the size of the batch is configurable.
## Configuration
diff --git a/docs/content/development/extensions-contrib/azure.md b/docs/content/development/extensions-contrib/azure.md
index 5ebd42c..6bdb020 100644
--- a/docs/content/development/extensions-contrib/azure.md
+++ b/docs/content/development/extensions-contrib/azure.md
@@ -24,11 +24,11 @@ title: "Microsoft Azure"
# Microsoft Azure
-To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-azure-extensions` extension.
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-azure-extensions` extension.
## Deep Storage
-[Microsoft Azure Storage](http://azure.microsoft.com/en-us/services/storage/) is another option for deep storage. This requires some additional druid configuration.
+[Microsoft Azure Storage](http://azure.microsoft.com/en-us/services/storage/) is another option for deep storage. This requires some additional Druid configuration.
|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
diff --git a/docs/content/development/extensions-contrib/cassandra.md b/docs/content/development/extensions-contrib/cassandra.md
index 0f5d57e..2bbf641 100644
--- a/docs/content/development/extensions-contrib/cassandra.md
+++ b/docs/content/development/extensions-contrib/cassandra.md
@@ -24,8 +24,8 @@ title: "Apache Cassandra"
# Apache Cassandra
-To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-cassandra-storage` extension.
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-cassandra-storage` extension.
[Apache Cassandra](http://www.datastax.com/what-we-offer/products-services/datastax-enterprise/apache-cassandra) can also
-be leveraged for deep storage. This requires some additional druid configuration as well as setting up the necessary
+be leveraged for deep storage. This requires some additional Druid configuration as well as setting up the necessary
schema within a Cassandra keystore.
diff --git a/docs/content/development/extensions-contrib/cloudfiles.md b/docs/content/development/extensions-contrib/cloudfiles.md
index b04b5d9..ad11caa 100644
--- a/docs/content/development/extensions-contrib/cloudfiles.md
+++ b/docs/content/development/extensions-contrib/cloudfiles.md
@@ -24,9 +24,11 @@ title: "Rackspace Cloud Files"
# Rackspace Cloud Files
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-cloudfiles-extensions` extension.
+
## Deep Storage
-[Rackspace Cloud Files](http://www.rackspace.com/cloud/files/) is another option for deep storage. This requires some additional druid configuration.
+[Rackspace Cloud Files](http://www.rackspace.com/cloud/files/) is another option for deep storage. This requires some additional Druid configuration.
|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
diff --git a/docs/content/development/extensions-contrib/distinctcount.md b/docs/content/development/extensions-contrib/distinctcount.md
index 350007d..a392360 100644
--- a/docs/content/development/extensions-contrib/distinctcount.md
+++ b/docs/content/development/extensions-contrib/distinctcount.md
@@ -24,7 +24,7 @@ title: "DistinctCount Aggregator"
# DistinctCount Aggregator
-To use this extension, make sure to
[include](../../operations/including-extensions.html) the `druid-distinctcount`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) the `druid-distinctcount`
extension.
Additionally, follow these steps:
diff --git a/docs/content/development/extensions-contrib/google.md
b/docs/content/development/extensions-contrib/google.md
index d3225be..ac49ff1 100644
--- a/docs/content/development/extensions-contrib/google.md
+++ b/docs/content/development/extensions-contrib/google.md
@@ -24,7 +24,7 @@ title: "Google Cloud Storage"
# Google Cloud Storage
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `druid-google-extensions`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-google-extensions`
extension.
## Deep Storage
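A minimal deep storage configuration for this extension might look like the
following sketch (the property names follow the standard Druid deep storage
pattern; the bucket and prefix values are placeholders):

```properties
# Use Google Cloud Storage for deep storage (illustrative values)
druid.storage.type=google
druid.google.bucket=your-bucket
druid.google.prefix=druid/segments
```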
diff --git a/docs/content/development/extensions-contrib/graphite.md
b/docs/content/development/extensions-contrib/graphite.md
index 569e1ce..deac93a 100644
--- a/docs/content/development/extensions-contrib/graphite.md
+++ b/docs/content/development/extensions-contrib/graphite.md
@@ -24,7 +24,7 @@ title: "Graphite Emitter"
# Graphite Emitter
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `graphite-emitter`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `graphite-emitter`
extension.
## Introduction
diff --git a/docs/content/development/extensions-contrib/influx.md
b/docs/content/development/extensions-contrib/influx.md
index 65e5e8c..c5c071b 100644
--- a/docs/content/development/extensions-contrib/influx.md
+++ b/docs/content/development/extensions-contrib/influx.md
@@ -24,7 +24,7 @@ title: "InfluxDB Line Protocol Parser"
# InfluxDB Line Protocol Parser
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `druid-influx-extensions`.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-influx-extensions`.
This extension enables Druid to parse the [InfluxDB Line
Protocol](https://docs.influxdata.com/influxdb/v1.5/write_protocols/line_protocol_tutorial/),
a popular text-based timeseries metric serialization format.
diff --git a/docs/content/development/extensions-contrib/kafka-emitter.md
b/docs/content/development/extensions-contrib/kafka-emitter.md
index 52026f1..a059306 100644
--- a/docs/content/development/extensions-contrib/kafka-emitter.md
+++ b/docs/content/development/extensions-contrib/kafka-emitter.md
@@ -24,11 +24,11 @@ title: "Kafka Emitter"
# Kafka Emitter
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `kafka-emitter` extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `kafka-emitter` extension.
## Introduction
-This extension emits Druid metrics to a [Kafka](https://kafka.apache.org)
directly with JSON format.<br>
+This extension emits Druid metrics to [Apache Kafka](https://kafka.apache.org)
directly with JSON format.<br>
Kafka has a rich ecosystem, with consumer APIs readily
available.
So, if you already use Kafka, it's easy to integrate various tools or UIs
to monitor the status of your Druid cluster with this extension.
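As a sketch of how the emitter might be wired up (the property names below
are drawn from the kafka-emitter configuration; the broker addresses and
topic names are placeholders):

```properties
# Emit Druid metrics and alerts to Kafka topics (illustrative values)
druid.emitter=kafka
druid.emitter.kafka.bootstrap.servers=kafka01:9092,kafka02:9092
druid.emitter.kafka.metric.topic=druid-metrics
druid.emitter.kafka.alert.topic=druid-alerts
```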
diff --git a/docs/content/development/extensions-contrib/kafka-simple.md
b/docs/content/development/extensions-contrib/kafka-simple.md
index 998d12b..3211efe 100644
--- a/docs/content/development/extensions-contrib/kafka-simple.md
+++ b/docs/content/development/extensions-contrib/kafka-simple.md
@@ -24,11 +24,11 @@ title: "Kafka Simple Consumer"
# Kafka Simple Consumer
-To use this extension, make sure to
[include](../../operations/including-extensions.html)
`druid-kafka-eight-simpleConsumer` extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html)
`druid-kafka-eight-simpleConsumer` extension.
## Firehose
-This is an experimental firehose to ingest data from kafka using kafka simple
consumer api. Currently, this firehose would only work inside standalone
realtime processes.
+This is an experimental firehose that ingests data from Apache Kafka using the
Kafka simple consumer API. Currently, this firehose only works inside
standalone realtime processes.
The configuration for KafkaSimpleConsumerFirehose is similar to the Kafka
Eight Firehose, except `firehose` should be replaced with `firehoseV2` like
this:
```json
diff --git a/docs/content/development/extensions-contrib/materialized-view.md
b/docs/content/development/extensions-contrib/materialized-view.md
index 96d8ffb..95bfde9 100644
--- a/docs/content/development/extensions-contrib/materialized-view.md
+++ b/docs/content/development/extensions-contrib/materialized-view.md
@@ -24,7 +24,7 @@ title: "Materialized View"
# Materialized View
-To use this feature, make sure to only load `materialized-view-selection` on
Broker and load `materialized-view-maintenance` on Overlord. In addtion, this
feature currently requires a Hadoop cluster.
+To use this Apache Druid (incubating) feature, make sure to only load
`materialized-view-selection` on Broker and load
`materialized-view-maintenance` on Overlord. In addition, this feature currently
requires a Hadoop cluster.
This feature enables Druid to greatly improve query performance,
especially when the query dataSource has a very large number of dimensions but
the query only requires several dimensions. This feature includes two parts.
One is `materialized-view-maintenance`, and the other is
`materialized-view-selection`.
diff --git a/docs/content/development/extensions-contrib/opentsdb-emitter.md
b/docs/content/development/extensions-contrib/opentsdb-emitter.md
index dc49dff..fc18717 100644
--- a/docs/content/development/extensions-contrib/opentsdb-emitter.md
+++ b/docs/content/development/extensions-contrib/opentsdb-emitter.md
@@ -24,7 +24,7 @@ title: "OpenTSDB Emitter"
# OpenTSDB Emitter
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `opentsdb-emitter`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `opentsdb-emitter`
extension.
## Introduction
diff --git a/docs/content/development/extensions-contrib/rabbitmq.md
b/docs/content/development/extensions-contrib/rabbitmq.md
index 2d55be1..e9eefc5 100644
--- a/docs/content/development/extensions-contrib/rabbitmq.md
+++ b/docs/content/development/extensions-contrib/rabbitmq.md
@@ -24,7 +24,7 @@ title: "RabbitMQ"
# RabbitMQ
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `druid-rabbitmq`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-rabbitmq`
extension.
## Firehose
diff --git a/docs/content/development/extensions-contrib/redis-cache.md
b/docs/content/development/extensions-contrib/redis-cache.md
index e32ea99..4dd8276 100644
--- a/docs/content/development/extensions-contrib/redis-cache.md
+++ b/docs/content/development/extensions-contrib/redis-cache.md
@@ -24,6 +24,8 @@ title: "Druid Redis Cache"
# Druid Redis Cache
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-redis-cache`
extension.
+
A cache implementation for Druid based on
[Redis](https://github.com/antirez/redis).
# Configuration
diff --git a/docs/content/development/extensions-contrib/rocketmq.md
b/docs/content/development/extensions-contrib/rocketmq.md
index 45016cd..4dd0eea 100644
--- a/docs/content/development/extensions-contrib/rocketmq.md
+++ b/docs/content/development/extensions-contrib/rocketmq.md
@@ -24,6 +24,6 @@ title: "RocketMQ"
# RocketMQ
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `druid-rocketmq`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-rocketmq`
extension.
Original author: [https://github.com/lizhanhui](https://github.com/lizhanhui).
diff --git a/docs/content/development/extensions-contrib/sqlserver.md
b/docs/content/development/extensions-contrib/sqlserver.md
index af47868..e14b7e1 100644
--- a/docs/content/development/extensions-contrib/sqlserver.md
+++ b/docs/content/development/extensions-contrib/sqlserver.md
@@ -24,7 +24,7 @@ title: "Microsoft SQLServer"
# Microsoft SQLServer
-Make sure to [include](../../operations/including-extensions.html)
`sqlserver-metadata-storage` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html)
`sqlserver-metadata-storage` as an extension.
## Setting up SQLServer
diff --git a/docs/content/development/extensions-contrib/statsd.md
b/docs/content/development/extensions-contrib/statsd.md
index 0d8e654..b25a113 100644
--- a/docs/content/development/extensions-contrib/statsd.md
+++ b/docs/content/development/extensions-contrib/statsd.md
@@ -24,7 +24,7 @@ title: "StatsD Emitter"
# StatsD Emitter
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `statsd-emitter`
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `statsd-emitter`
extension.
## Introduction
diff --git a/docs/content/development/extensions-contrib/thrift.md
b/docs/content/development/extensions-contrib/thrift.md
index 40ca31b..9b8a54f 100644
--- a/docs/content/development/extensions-contrib/thrift.md
+++ b/docs/content/development/extensions-contrib/thrift.md
@@ -24,7 +24,7 @@ title: "Thrift"
# Thrift
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `druid-thrift-extensions`.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-thrift-extensions`.
This extension enables Druid to ingest Thrift compact data online
(`ByteBuffer`) and offline (SequenceFile of type `<Writable, BytesWritable>` or
LzoThriftBlock File).
diff --git a/docs/content/development/extensions-contrib/time-min-max.md
b/docs/content/development/extensions-contrib/time-min-max.md
index e8143b4..ff9e4d0 100644
--- a/docs/content/development/extensions-contrib/time-min-max.md
+++ b/docs/content/development/extensions-contrib/time-min-max.md
@@ -24,7 +24,7 @@ title: "Timestamp Min/Max aggregators"
# Timestamp Min/Max aggregators
-To use this extension, make sure to
[include](../../operations/including-extensions.html) `druid-time-min-max`.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-time-min-max`.
These aggregators enable more precise calculation of the min and max time of
given events than the `__time` column, whose granularity is sparse (the same as
the query granularity).
To use this feature, a "timeMin" or "timeMax" aggregator must be included at
indexing time.
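At indexing time, such an aggregator can be declared in the metricsSpec; a
minimal sketch (the aggregator name is illustrative):

```json
{ "type": "timeMax", "name": "maxIngestedEventTime", "fieldName": "__time" }
```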
diff --git a/docs/content/development/extensions-core/approximate-histograms.md
b/docs/content/development/extensions-core/approximate-histograms.md
index b60ff13..73a5207 100644
--- a/docs/content/development/extensions-core/approximate-histograms.md
+++ b/docs/content/development/extensions-core/approximate-histograms.md
@@ -24,7 +24,7 @@ title: "Approximate Histogram aggregators"
# Approximate Histogram aggregators
-Make sure to [include](../../operations/including-extensions.html)
`druid-histogram` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-histogram` as an
extension.
The `druid-histogram` extension provides an approximate histogram aggregator
and a fixed buckets histogram aggregator.
diff --git a/docs/content/development/extensions-core/avro.md
b/docs/content/development/extensions-core/avro.md
index 00e8469..156149a 100644
--- a/docs/content/development/extensions-core/avro.md
+++ b/docs/content/development/extensions-core/avro.md
@@ -24,7 +24,7 @@ title: "Avro"
# Avro
-This extension enables Druid to ingest and understand the Apache Avro data
format. Make sure to [include](../../operations/including-extensions.html)
`druid-avro-extensions` as an extension.
+This Apache Druid (incubating) extension enables Druid to ingest and
understand the Apache Avro data format. Make sure to
[include](../../operations/including-extensions.html) `druid-avro-extensions`
as an extension.
### Avro Stream Parser
diff --git a/docs/content/development/extensions-core/bloom-filter.md
b/docs/content/development/extensions-core/bloom-filter.md
index 86c645d..3d6749a 100644
--- a/docs/content/development/extensions-core/bloom-filter.md
+++ b/docs/content/development/extensions-core/bloom-filter.md
@@ -24,7 +24,7 @@ title: "Bloom Filter"
# Bloom Filter
-This extension adds the ability to both construct bloom filters from query
results, and filter query results by testing
+This Apache Druid (incubating) extension adds the ability to both construct
bloom filters from query results, and filter query results by testing
against a bloom filter. Make sure to
[include](../../operations/including-extensions.html) `druid-bloom-filter` as
an
extension.
diff --git a/docs/content/development/extensions-core/datasketches-extension.md
b/docs/content/development/extensions-core/datasketches-extension.md
index 32e45f9..3a5b126 100644
--- a/docs/content/development/extensions-core/datasketches-extension.md
+++ b/docs/content/development/extensions-core/datasketches-extension.md
@@ -24,7 +24,7 @@ title: "DataSketches extension"
# DataSketches extension
-Druid aggregators based on [datasketches](http://datasketches.github.io/)
library. Sketches are data structures implementing approximate streaming
mergeable algorithms. Sketches can be ingested from the outside of Druid or
built from raw data at ingestion time. Sketches can be stored in Druid segments
as additive metrics.
+Apache Druid (incubating) aggregators based on the
[datasketches](http://datasketches.github.io/) library. Sketches are data
structures implementing approximate streaming mergeable algorithms. Sketches
can be ingested from outside of Druid or built from raw data at ingestion
time. Sketches can be stored in Druid segments as additive metrics.
To use the datasketches aggregators, make sure you
[include](../../operations/including-extensions.html) the extension in your
config file:
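One common way to do this (assuming the standard extensions load list
mechanism in `common.runtime.properties`) is a sketch like:

```properties
druid.extensions.loadList=["druid-datasketches"]
```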
diff --git a/docs/content/development/extensions-core/datasketches-hll.md
b/docs/content/development/extensions-core/datasketches-hll.md
index 365ecd2..799cbc0 100644
--- a/docs/content/development/extensions-core/datasketches-hll.md
+++ b/docs/content/development/extensions-core/datasketches-hll.md
@@ -24,7 +24,7 @@ title: "DataSketches HLL Sketch module"
# DataSketches HLL Sketch module
-This module provides Druid aggregators for distinct counting based on HLL
sketch from [datasketches](http://datasketches.github.io/) library. At
ingestion time, this aggregator creates the HLL sketch objects to be stored in
Druid segments. At query time, sketches are read and merged together. In the
end, by default, you receive the estimate of the number of distinct values
presented to the sketch. Also, you can use post aggregator to produce a union
of sketch columns in the same row.
+This module provides Apache Druid (incubating) aggregators for distinct
counting based on the HLL sketch from the
[datasketches](http://datasketches.github.io/) library. At ingestion time, this
aggregator creates the HLL sketch objects to be stored in Druid segments. At
query time, sketches are read and merged together. In the end, by default, you
receive the estimate of the number of distinct values presented to the sketch.
Also, you can use a post aggregator to produce a union of sketch columns [...]
You can use the HLL sketch aggregator on columns containing any type of
identifier. It will return the estimated cardinality of the column.
To use this aggregator, make sure you
[include](../../operations/including-extensions.html) the extension in your
config file:
diff --git a/docs/content/development/extensions-core/datasketches-quantiles.md
b/docs/content/development/extensions-core/datasketches-quantiles.md
index ad495b1..2282de2 100644
--- a/docs/content/development/extensions-core/datasketches-quantiles.md
+++ b/docs/content/development/extensions-core/datasketches-quantiles.md
@@ -24,7 +24,7 @@ title: "DataSketches Quantiles Sketch module"
# DataSketches Quantiles Sketch module
-This module provides Druid aggregators based on numeric quantiles
DoublesSketch from [datasketches](http://datasketches.github.io/) library.
Quantiles sketch is a mergeable streaming algorithm to estimate the
distribution of values, and approximately answer queries about the rank of a
value, probability mass function of the distribution (PMF) or histogram,
cummulative distribution function (CDF), and quantiles (median, min, max, 95th
percentile and such). See [Quantiles Sketch Overview]( [...]
+This module provides Apache Druid (incubating) aggregators based on numeric
quantiles DoublesSketch from [datasketches](http://datasketches.github.io/)
library. Quantiles sketch is a mergeable streaming algorithm to estimate the
distribution of values, and approximately answer queries about the rank of a
value, probability mass function of the distribution (PMF) or histogram,
cumulative distribution function (CDF), and quantiles (median, min, max, 95th
percentile and such). See [Quantil [...]
There are three major modes of operation:
diff --git a/docs/content/development/extensions-core/datasketches-theta.md
b/docs/content/development/extensions-core/datasketches-theta.md
index 60e945f..e248da3 100644
--- a/docs/content/development/extensions-core/datasketches-theta.md
+++ b/docs/content/development/extensions-core/datasketches-theta.md
@@ -24,7 +24,7 @@ title: "DataSketches Theta Sketch module"
# DataSketches Theta Sketch module
-This module provides Druid aggregators based on Theta sketch from
[datasketches](http://datasketches.github.io/) library. Note that sketch
algorithms are approximate; see details in the "Accuracy" section of the
datasketches doc.
+This module provides Apache Druid (incubating) aggregators based on Theta
sketch from [datasketches](http://datasketches.github.io/) library. Note that
sketch algorithms are approximate; see details in the "Accuracy" section of the
datasketches doc.
At ingestion time, this aggregator creates the Theta sketch objects which get
stored in Druid segments. Logically speaking, a Theta sketch object can be
thought of as a Set data structure. At query time, sketches are read and
aggregated (set unioned) together. In the end, by default, you receive the
estimate of the number of unique entries in the sketch object. Also, you can
use post aggregators to do union, intersection or difference on sketch columns
in the same row.
Note that you can use the `thetaSketch` aggregator on columns which were not
ingested using the same aggregator. It will return the estimated cardinality of
the column. It is recommended to use it at ingestion time as well to make
querying faster.
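A query-time aggregator using this sketch might be sketched as follows (the
aggregator and field names are placeholders):

```json
{ "type": "thetaSketch", "name": "unique_users", "fieldName": "user_id" }
```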
diff --git a/docs/content/development/extensions-core/datasketches-tuple.md
b/docs/content/development/extensions-core/datasketches-tuple.md
index 480b67e..69db25a 100644
--- a/docs/content/development/extensions-core/datasketches-tuple.md
+++ b/docs/content/development/extensions-core/datasketches-tuple.md
@@ -24,7 +24,7 @@ title: "DataSketches Tuple Sketch module"
# DataSketches Tuple Sketch module
-This module provides Druid aggregators based on Tuple sketch from
[datasketches](http://datasketches.github.io/) library. ArrayOfDoublesSketch
sketches extend the functionality of the count-distinct Theta sketches by
adding arrays of double values associated with unique keys.
+This module provides Apache Druid (incubating) aggregators based on Tuple
sketch from [datasketches](http://datasketches.github.io/) library.
ArrayOfDoublesSketch sketches extend the functionality of the count-distinct
Theta sketches by adding arrays of double values associated with unique keys.
To use this aggregator, make sure you
[include](../../operations/including-extensions.html) the extension in your
config file:
diff --git a/docs/content/development/extensions-core/druid-basic-security.md
b/docs/content/development/extensions-core/druid-basic-security.md
index 442cad6..adba32b 100644
--- a/docs/content/development/extensions-core/druid-basic-security.md
+++ b/docs/content/development/extensions-core/druid-basic-security.md
@@ -24,7 +24,7 @@ title: "Basic Security"
# Druid Basic Security
-This extension adds:
+This Apache Druid (incubating) extension adds:
- an Authenticator which supports [HTTP Basic
authentication](https://en.wikipedia.org/wiki/Basic_access_authentication)
- an Authorizer which implements basic role-based access control
diff --git a/docs/content/development/extensions-core/druid-kerberos.md
b/docs/content/development/extensions-core/druid-kerberos.md
index 649dc20..46af7f4 100644
--- a/docs/content/development/extensions-core/druid-kerberos.md
+++ b/docs/content/development/extensions-core/druid-kerberos.md
@@ -24,7 +24,7 @@ title: "Kerberos"
# Kerberos
-Druid Extension to enable Authentication for Druid Processes using Kerberos.
+Apache Druid (incubating) extension to enable authentication for Druid
processes using Kerberos.
This extension adds an Authenticator which is used to protect HTTP Endpoints
using the simple and protected GSSAPI negotiation mechanism
[SPNEGO](https://en.wikipedia.org/wiki/SPNEGO).
Make sure to [include](../../operations/including-extensions.html)
`druid-kerberos` as an extension.
diff --git a/docs/content/development/extensions-core/druid-lookups.md
b/docs/content/development/extensions-core/druid-lookups.md
index 3379f94..53476eb 100644
--- a/docs/content/development/extensions-core/druid-lookups.md
+++ b/docs/content/development/extensions-core/druid-lookups.md
@@ -27,7 +27,7 @@ title: "Cached Lookup Module"
<div class="note info">Please note that this is an experimental module and the
development/testing still at early stage. Feel free to try it and give us your
feedback.</div>
## Description
-This module provides a per-lookup caching mechanism for JDBC data sources.
+This Apache Druid (incubating) module provides a per-lookup caching mechanism
for JDBC data sources.
The main goal of this cache is to speed up access to high latency lookup
sources and to provide caching isolation for every lookup source.
Thus users can define various caching strategies and implementations per
lookup, even if the source is the same.
This module can be used side by side with other lookup modules like the global
cached lookup module.
diff --git a/docs/content/development/extensions-core/hdfs.md
b/docs/content/development/extensions-core/hdfs.md
index 870d457..c129698 100644
--- a/docs/content/development/extensions-core/hdfs.md
+++ b/docs/content/development/extensions-core/hdfs.md
@@ -24,7 +24,7 @@ title: "HDFS"
# HDFS
-Make sure to [include](../../operations/including-extensions.html)
`druid-hdfs-storage` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-hdfs-storage` as
an extension.
## Deep Storage
diff --git a/docs/content/development/extensions-core/kafka-eight-firehose.md
b/docs/content/development/extensions-core/kafka-eight-firehose.md
index 83e80e5..740e5fa 100644
--- a/docs/content/development/extensions-core/kafka-eight-firehose.md
+++ b/docs/content/development/extensions-core/kafka-eight-firehose.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Kafka Eight Firehose"
+title: "Apache Kafka Eight Firehose"
---
<!--
@@ -24,7 +24,7 @@ title: "Kafka Eight Firehose"
# Kafka Eight Firehose
-Make sure to [include](../../operations/including-extensions.html)
`druid-kafka-eight` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `druid-kafka-eight` as an
extension.
This firehose acts as a Kafka 0.8.x consumer and ingests data from Kafka.
diff --git
a/docs/content/development/extensions-core/kafka-extraction-namespace.md
b/docs/content/development/extensions-core/kafka-extraction-namespace.md
index 82c4ce4..f28c233 100644
--- a/docs/content/development/extensions-core/kafka-extraction-namespace.md
+++ b/docs/content/development/extensions-core/kafka-extraction-namespace.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Kafka Lookups"
+title: "Apache Kafka Lookups"
---
<!--
@@ -28,7 +28,7 @@ title: "Kafka Lookups"
Lookups are an <a href="../experimental.html">experimental</a> feature.
</div>
-Make sure to [include](../../operations/including-extensions.html)
`druid-lookups-cached-global` and `druid-kafka-extraction-namespace` as an
extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html)
`druid-lookups-cached-global` and `druid-kafka-extraction-namespace` as an
extension.
If you need updates to populate as promptly as possible, it is possible to
plug into a Kafka topic whose key is the old value and message is the desired
new value (both in UTF-8) as a LookupExtractorFactory.
diff --git a/docs/content/development/extensions-core/kafka-ingestion.md
b/docs/content/development/extensions-core/kafka-ingestion.md
index 2816ab0..90db0c9 100644
--- a/docs/content/development/extensions-core/kafka-ingestion.md
+++ b/docs/content/development/extensions-core/kafka-ingestion.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Kafka Indexing Service"
+title: "Apache Kafka Indexing Service"
---
<!--
@@ -31,7 +31,7 @@ able to read non-recent events from Kafka and are not subject
to the window peri
ingestion mechanisms using Tranquility. The supervisor oversees the state of
the indexing tasks to coordinate handoffs, manage failures,
and ensure that the scalability and replication requirements are maintained.
-This service is provided in the `druid-kafka-indexing-service` core extension
(see
+This service is provided in the `druid-kafka-indexing-service` core Apache
Druid (incubating) extension (see
[Including Extensions](../../operations/including-extensions.html)).
<div class="note info">
diff --git a/docs/content/development/extensions-core/kinesis-ingestion.md
b/docs/content/development/extensions-core/kinesis-ingestion.md
index 38e3830..3d406ed 100644
--- a/docs/content/development/extensions-core/kinesis-ingestion.md
+++ b/docs/content/development/extensions-core/kinesis-ingestion.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Kinesis Indexing Service"
+title: "Amazon Kinesis Indexing Service"
---
<!--
@@ -31,7 +31,7 @@ able to read non-recent events from Kinesis and are not
subject to the window pe
ingestion mechanisms using Tranquility. The supervisor oversees the state of
the indexing tasks to coordinate handoffs, manage failures,
and ensure that the scalability and replication requirements are maintained.
-The Kinesis indexing service is provided as the
`druid-kinesis-indexing-service` core extension (see
+The Kinesis indexing service is provided as the
`druid-kinesis-indexing-service` core Apache Druid (incubating) extension (see
[Including Extensions](../../operations/including-extensions.html)). Please
note that this is
currently designated as an *experimental feature* and is subject to the usual
[experimental caveats](../experimental.html).
diff --git a/docs/content/development/extensions-core/lookups-cached-global.md
b/docs/content/development/extensions-core/lookups-cached-global.md
index eed6084..55a2c38 100644
--- a/docs/content/development/extensions-core/lookups-cached-global.md
+++ b/docs/content/development/extensions-core/lookups-cached-global.md
@@ -28,7 +28,7 @@ title: "Globally Cached Lookups"
Lookups are an <a href="../experimental.html">experimental</a> feature.
</div>
-Make sure to [include](../../operations/including-extensions.html)
`druid-lookups-cached-global` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html)
`druid-lookups-cached-global` as an extension.
## Configuration
<div class="note caution">
diff --git a/docs/content/development/extensions-core/mysql.md
b/docs/content/development/extensions-core/mysql.md
index acd05ca..6cdcf3c 100644
--- a/docs/content/development/extensions-core/mysql.md
+++ b/docs/content/development/extensions-core/mysql.md
@@ -24,7 +24,7 @@ title: "MySQL Metadata Store"
# MySQL Metadata Store
-Make sure to [include](../../operations/including-extensions.html)
`mysql-metadata-storage` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html) `mysql-metadata-storage`
as an extension.
<div class="note caution">
The MySQL extension requires the MySQL Connector/J library which is not
included in the Druid distribution.
diff --git a/docs/content/development/extensions-core/parquet.md
b/docs/content/development/extensions-core/parquet.md
index 6f72c51..9b628b9 100644
--- a/docs/content/development/extensions-core/parquet.md
+++ b/docs/content/development/extensions-core/parquet.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Parquet Extension"
+title: "Apache Parquet Extension"
---
<!--
@@ -22,9 +22,9 @@ title: "Druid Parquet Extension"
~ under the License.
-->
-# Druid Parquet Extension
+# Apache Parquet Extension
-This module extends [Druid Hadoop based indexing](../../ingestion/hadoop.html)
to ingest data directly from offline
+This Apache Druid (incubating) module extends [Druid Hadoop based
indexing](../../ingestion/hadoop.html) to ingest data directly from offline
Apache Parquet files.
Note: `druid-parquet-extensions` depends on the `druid-avro-extensions`
module, so be sure to
diff --git a/docs/content/development/extensions-core/postgresql.md
b/docs/content/development/extensions-core/postgresql.md
index 64be9de..07a2a78 100644
--- a/docs/content/development/extensions-core/postgresql.md
+++ b/docs/content/development/extensions-core/postgresql.md
@@ -24,7 +24,7 @@ title: "PostgreSQL Metadata Store"
# PostgreSQL Metadata Store
-Make sure to [include](../../operations/including-extensions.html)
`postgresql-metadata-storage` as an extension.
+To use this Apache Druid (incubating) extension, make sure to
[include](../../operations/including-extensions.html)
`postgresql-metadata-storage` as an extension.
## Setting up PostgreSQL
diff --git a/docs/content/development/extensions-core/protobuf.md
b/docs/content/development/extensions-core/protobuf.md
index 8c25780..655d0c7 100644
--- a/docs/content/development/extensions-core/protobuf.md
+++ b/docs/content/development/extensions-core/protobuf.md
@@ -24,7 +24,7 @@ title: "Protobuf"
# Protobuf
-This extension enables Druid to ingest and understand the Protobuf data
format. Make sure to [include](../../operations/including-extensions.html)
`druid-protobuf-extensions` as an extension.
+This Apache Druid (incubating) extension enables Druid to ingest and
understand the Protobuf data format. Make sure to
[include](../../operations/including-extensions.html)
`druid-protobuf-extensions` as an extension.
## Protobuf Parser
diff --git a/docs/content/development/extensions-core/s3.md b/docs/content/development/extensions-core/s3.md
index cead6cd..e93e5e0 100644
--- a/docs/content/development/extensions-core/s3.md
+++ b/docs/content/development/extensions-core/s3.md
@@ -24,7 +24,7 @@ title: "S3-compatible"
# S3-compatible
-Make sure to [include](../../operations/including-extensions.html) `druid-s3-extensions` as an extension.
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-s3-extensions` as an extension.
## Deep Storage
diff --git a/docs/content/development/extensions-core/simple-client-sslcontext.md b/docs/content/development/extensions-core/simple-client-sslcontext.md
index 9c2638f..7247f26 100644
--- a/docs/content/development/extensions-core/simple-client-sslcontext.md
+++ b/docs/content/development/extensions-core/simple-client-sslcontext.md
@@ -24,7 +24,7 @@ title: "Simple SSLContext Provider Module"
# Simple SSLContext Provider Module
-This module contains a simple implementation of [SSLContext](http://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLContext.html)
+This Apache Druid (incubating) module contains a simple implementation of [SSLContext](http://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLContext.html)
that will be injected to be used with HttpClient that Druid processes use
internally to communicate with each other. To learn more about
Java's SSL support, please refer to
[this](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
guide.
diff --git a/docs/content/development/extensions-core/stats.md b/docs/content/development/extensions-core/stats.md
index 504c79e..0b3b4e3 100644
--- a/docs/content/development/extensions-core/stats.md
+++ b/docs/content/development/extensions-core/stats.md
@@ -24,7 +24,7 @@ title: "Stats aggregator"
# Stats aggregator
-Includes stat-related aggregators, including variance and standard deviations, etc. Make sure to [include](../../operations/including-extensions.html) `druid-stats` as an extension.
+This Apache Druid (incubating) extension includes stat-related aggregators, including variance and standard deviations, etc. Make sure to [include](../../operations/including-extensions.html) `druid-stats` as an extension.
## Variance aggregator
diff --git a/docs/content/development/extensions-core/test-stats.md b/docs/content/development/extensions-core/test-stats.md
index e36eb94..156052f 100644
--- a/docs/content/development/extensions-core/test-stats.md
+++ b/docs/content/development/extensions-core/test-stats.md
@@ -24,7 +24,7 @@ title: "Test Stats Aggregators"
# Test Stats Aggregators
-Incorporates test statistics related aggregators, including z-score and p-value. Please refer to [https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/](https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/) for math background and details.
+This Apache Druid (incubating) extension incorporates test statistics related aggregators, including z-score and p-value. Please refer to [https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/](https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/) for math background and details.
Make sure to include `druid-stats` extension in order to use these aggregrators.
diff --git a/docs/content/development/extensions.md b/docs/content/development/extensions.md
index 91f6add..698337b 100644
--- a/docs/content/development/extensions.md
+++ b/docs/content/development/extensions.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid extensions"
+title: "Apache Druid (incubating) extensions"
---
<!--
diff --git a/docs/content/development/geo.md b/docs/content/development/geo.md
index 7092ef1..b482740 100644
--- a/docs/content/development/geo.md
+++ b/docs/content/development/geo.md
@@ -24,7 +24,7 @@ title: "Geographic Queries"
# Geographic Queries
-Druid supports filtering specially spatially indexed columns based on an origin and a bound.
+Apache Druid (incubating) supports filtering specially spatially indexed columns based on an origin and a bound.
# Spatial Indexing
In any of the data specs, there is the option of providing spatial dimensions.
For example, for a JSON data spec, spatial dimensions can be specified as follows:
diff --git a/docs/content/development/integrating-druid-with-other-technologies.md b/docs/content/development/integrating-druid-with-other-technologies.md
index 075341c..d435aee 100644
--- a/docs/content/development/integrating-druid-with-other-technologies.md
+++ b/docs/content/development/integrating-druid-with-other-technologies.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Integrating Druid With Other Technologies"
+title: "Integrating Apache Druid (incubating) With Other Technologies"
---
<!--
diff --git a/docs/content/development/javascript.md b/docs/content/development/javascript.md
index 087d626..ae0aad4 100644
--- a/docs/content/development/javascript.md
+++ b/docs/content/development/javascript.md
@@ -24,7 +24,7 @@ title: "JavaScript Programming Guide"
# JavaScript Programming Guide
-This page discusses how to use JavaScript to extend Druid.
+This page discusses how to use JavaScript to extend Apache Druid (incubating).
## Examples
diff --git a/docs/content/development/modules.md b/docs/content/development/modules.md
index ef9d168..c665b8e 100644
--- a/docs/content/development/modules.md
+++ b/docs/content/development/modules.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Extending Druid With Custom Modules"
+title: "Extending Apache Druid (incubating) With Custom Modules"
---
<!--
diff --git a/docs/content/development/overview.md b/docs/content/development/overview.md
index 25579c3..56bf134 100644
--- a/docs/content/development/overview.md
+++ b/docs/content/development/overview.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Developing on Druid"
+title: "Developing on Apache Druid (incubating)"
---
<!--
diff --git a/docs/content/development/router.md b/docs/content/development/router.md
index 0aa69c3..3c8f3b7 100644
--- a/docs/content/development/router.md
+++ b/docs/content/development/router.md
@@ -24,7 +24,7 @@ title: "Router Process"
# Router Process
-The Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.html) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set up provides query isolation such that queries for more important data a [...]
+The Apache Druid (incubating) Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.html) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set up provides query isolation such that queries [...]
For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range.
diff --git a/docs/content/development/versioning.md b/docs/content/development/versioning.md
index 59e87e4..b33b6f7 100644
--- a/docs/content/development/versioning.md
+++ b/docs/content/development/versioning.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Versioning Druid"
+title: "Versioning Apache Druid (incubating)"
---
<!--
diff --git a/docs/content/ingestion/batch-ingestion.md b/docs/content/ingestion/batch-ingestion.md
index 8f81710..27c57d8 100644
--- a/docs/content/ingestion/batch-ingestion.md
+++ b/docs/content/ingestion/batch-ingestion.md
@@ -24,7 +24,7 @@ title: "Batch Data Ingestion"
# Batch Data Ingestion
-Druid can load data from static files through a variety of methods described here.
+Apache Druid (incubating) can load data from static files through a variety of methods described here.
## Native Batch Ingestion
diff --git a/docs/content/ingestion/command-line-hadoop-indexer.md b/docs/content/ingestion/command-line-hadoop-indexer.md
index fc0bc5a..231852e 100644
--- a/docs/content/ingestion/command-line-hadoop-indexer.md
+++ b/docs/content/ingestion/command-line-hadoop-indexer.md
@@ -32,7 +32,7 @@ java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:<hadoop
## Options
-- "--coordinate" - provide a version of Hadoop to use. This property will
override the default Hadoop coordinates. Once specified, Druid will look for
those Hadoop dependencies from the location specified by
`druid.extensions.hadoopDependenciesDir`.
+- "--coordinate" - provide a version of Apache Hadoop to use. This property
will override the default Hadoop coordinates. Once specified, Apache Druid
(incubating) will look for those Hadoop dependencies from the location
specified by `druid.extensions.hadoopDependenciesDir`.
- "--no-default-hadoop" - don't pull down the default hadoop version
## Spec file
diff --git a/docs/content/ingestion/compaction.md b/docs/content/ingestion/compaction.md
index 4d1b71b..1c5dfe4 100644
--- a/docs/content/ingestion/compaction.md
+++ b/docs/content/ingestion/compaction.md
@@ -90,7 +90,7 @@ data segments loaded in it (or if the interval you specify is empty).
The output segment can have different metadata from the input segments unless all input segments have the same metadata.
-- Dimensions: since Druid supports schema change, the dimensions can be different across segments even if they are a part of the same dataSource.
+- Dimensions: since Apache Druid (incubating) supports schema change, the dimensions can be different across segments even if they are a part of the same dataSource.
If the input segments have different dimensions, the output segment basically includes all dimensions of the input segments.
However, even if the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be
changed from `string` to primitive types, or the order of dimensions can be changed for better locality.
diff --git a/docs/content/ingestion/data-formats.md b/docs/content/ingestion/data-formats.md
index be7ea3e..73ad2ae 100644
--- a/docs/content/ingestion/data-formats.md
+++ b/docs/content/ingestion/data-formats.md
@@ -24,7 +24,7 @@ title: "Data Formats for Ingestion"
# Data Formats for Ingestion
-Druid can ingest denormalized data in JSON, CSV, or a delimited form such as TSV, or any custom format. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest any other delimited data.
+Apache Druid (incubating) can ingest denormalized data in JSON, CSV, or a delimited form such as TSV, or any custom format. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest any other delimited data.
We welcome any contributions to new formats.
For additional data formats, please see our [extensions list](../development/extensions.html).
diff --git a/docs/content/ingestion/delete-data.md b/docs/content/ingestion/delete-data.md
index 7e28ffc..7e21e99 100644
--- a/docs/content/ingestion/delete-data.md
+++ b/docs/content/ingestion/delete-data.md
@@ -24,7 +24,7 @@ title: "Deleting Data"
# Deleting Data
-Permanent deletion of a Druid segment has two steps:
+Permanent deletion of a segment in Apache Druid (incubating) has two steps:
1. The segment must first be marked as "unused". This occurs when a segment is dropped by retention rules, and when a user manually disables a segment through the Coordinator API.
2. After segments have been marked as "unused", a Kill Task will delete any "unused" segments from Druid's metadata store as well as deep storage.
diff --git a/docs/content/ingestion/faq.md b/docs/content/ingestion/faq.md
index b2c304b..e14425b 100644
--- a/docs/content/ingestion/faq.md
+++ b/docs/content/ingestion/faq.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "My Data isn't being loaded"
+title: "Apache Druid (incubating) FAQ"
---
<!--
diff --git a/docs/content/ingestion/firehose.md b/docs/content/ingestion/firehose.md
index ff10206..d5ae237 100644
--- a/docs/content/ingestion/firehose.md
+++ b/docs/content/ingestion/firehose.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Firehoses"
+title: "Apache Druid (incubating) Firehoses"
---
<!--
diff --git a/docs/content/ingestion/hadoop-vs-native-batch.md b/docs/content/ingestion/hadoop-vs-native-batch.md
index 89a8e02..85373a0 100644
--- a/docs/content/ingestion/hadoop-vs-native-batch.md
+++ b/docs/content/ingestion/hadoop-vs-native-batch.md
@@ -24,7 +24,7 @@ title: "Hadoop-based Batch Ingestion VS Native Batch Ingestion"
# Comparison of Batch Ingestion Methods
-Druid basically supports three types of batch ingestion: Hadoop-based
+Apache Druid (incubating) basically supports three types of batch ingestion: Apache Hadoop-based
batch ingestion, native parallel batch ingestion, and native local batch
ingestion. The below table shows what features are supported by each
ingestion method.
diff --git a/docs/content/ingestion/hadoop.md b/docs/content/ingestion/hadoop.md
index c824fd0..249bd02 100644
--- a/docs/content/ingestion/hadoop.md
+++ b/docs/content/ingestion/hadoop.md
@@ -24,7 +24,7 @@ title: "Hadoop-based Batch Ingestion"
# Hadoop-based Batch Ingestion
-Hadoop-based batch ingestion in Druid is supported via a Hadoop-ingestion task. These tasks can be posted to a running
+Apache Hadoop-based batch ingestion in Apache Druid (incubating) is supported via a Hadoop-ingestion task. These tasks can be posted to a running
instance of a Druid [Overlord](../design/overlord.html).
Please check [Hadoop-based Batch Ingestion VS Native Batch Ingestion](./hadoop-vs-native-batch.html) for differences between native batch ingestion and Hadoop-based ingestion.
diff --git a/docs/content/ingestion/index.md b/docs/content/ingestion/index.md
index a9b12be..9141cb5 100644
--- a/docs/content/ingestion/index.md
+++ b/docs/content/ingestion/index.md
@@ -30,7 +30,7 @@ title: "Ingestion"
### Datasources and segments
-Druid data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is
+Apache Druid (incubating) data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is
partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a "chunk" (for
example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more
"segments". Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
diff --git a/docs/content/ingestion/ingestion-spec.md b/docs/content/ingestion/ingestion-spec.md
index b578b54..6a9e5c6 100644
--- a/docs/content/ingestion/ingestion-spec.md
+++ b/docs/content/ingestion/ingestion-spec.md
@@ -24,7 +24,7 @@ title: "Ingestion Spec"
# Ingestion Spec
-A Druid ingestion spec consists of 3 components:
+An Apache Druid (incubating) ingestion spec consists of 3 components:
```json
{
diff --git a/docs/content/ingestion/locking-and-priority.md b/docs/content/ingestion/locking-and-priority.md
index 6dbe013..e9bbbeb 100644
--- a/docs/content/ingestion/locking-and-priority.md
+++ b/docs/content/ingestion/locking-and-priority.md
@@ -43,7 +43,7 @@ Tasks are also part of a "task group", which is a set of tasks that can share in
## Priority
-Druid's indexing tasks use locks for atomic data ingestion. Each lock is acquired for the combination of a dataSource and an interval. Once a task acquires a lock, it can write data for the dataSource and the interval of the acquired lock unless the lock is released or preempted. Please see [the below Locking section](#locking)
+Apache Druid (incubating)'s indexing tasks use locks for atomic data ingestion. Each lock is acquired for the combination of a dataSource and an interval. Once a task acquires a lock, it can write data for the dataSource and the interval of the acquired lock unless the lock is released or preempted. Please see [the below Locking section](#locking)
Each task has a priority which is used for lock acquisition. The locks of higher-priority tasks can preempt the locks of lower-priority tasks if they try to acquire for the same dataSource and interval. If some locks of a task are preempted, the behavior of the preempted task depends on the task implementation. Usually, most tasks finish as failed if they are preempted.
diff --git a/docs/content/ingestion/native_tasks.md b/docs/content/ingestion/native_tasks.md
index 4ecaccf..d67cf8d 100644
--- a/docs/content/ingestion/native_tasks.md
+++ b/docs/content/ingestion/native_tasks.md
@@ -24,7 +24,7 @@ title: "Native Index Tasks"
# Native Index Tasks
-Druid currently has two types of native batch indexing tasks, `index_parallel` which runs tasks
+Apache Druid (incubating) currently has two types of native batch indexing tasks, `index_parallel` which runs tasks
in parallel on multiple MiddleManager processes, and `index` which will run a single indexing task locally on a single
MiddleManager.
diff --git a/docs/content/ingestion/schema-changes.md b/docs/content/ingestion/schema-changes.md
index a29133a..7ed617d 100644
--- a/docs/content/ingestion/schema-changes.md
+++ b/docs/content/ingestion/schema-changes.md
@@ -24,7 +24,7 @@ title: "Schema Changes"
# Schema Changes
-Schemas for datasources can change at any time and Druid supports different schemas among segments.
+Schemas for datasources can change at any time and Apache Druid (incubating) supports different schemas among segments.
## Replacing Segments
diff --git a/docs/content/ingestion/schema-design.md b/docs/content/ingestion/schema-design.md
index 58f0fae..0f344e9 100644
--- a/docs/content/ingestion/schema-design.md
+++ b/docs/content/ingestion/schema-design.md
@@ -24,7 +24,7 @@ title: "Schema Design"
# Schema Design
-This page is meant to assist users in designing a schema for data to be ingested in Druid. Druid offers a unique data
+This page is meant to assist users in designing a schema for data to be ingested in Apache Druid (incubating). Druid offers a unique data
modeling system that bears similarity to both relational and timeseries models. The key factors are:
* Druid data is stored in [datasources](index.html#datasources), which are similar to tables in a traditional RDBMS.
diff --git a/docs/content/ingestion/stream-ingestion.md b/docs/content/ingestion/stream-ingestion.md
index 4e7b660..cc1cb91 100644
--- a/docs/content/ingestion/stream-ingestion.md
+++ b/docs/content/ingestion/stream-ingestion.md
@@ -24,7 +24,7 @@ title: "Loading Streams"
# Loading Streams
-Streams can be ingested in Druid using either [Tranquility](https://github.com/druid-io/tranquility) (a Druid-aware
+Streams can be ingested in Apache Druid (incubating) using either [Tranquility](https://github.com/druid-io/tranquility) (a Druid-aware
client) or the [Kafka Indexing Service](../development/extensions-core/kafka-ingestion.html).
## Tranquility (Stream Push)
diff --git a/docs/content/ingestion/stream-pull.md b/docs/content/ingestion/stream-pull.md
index ea1aff7..38f6a80 100644
--- a/docs/content/ingestion/stream-pull.md
+++ b/docs/content/ingestion/stream-pull.md
@@ -29,7 +29,7 @@ NOTE: Realtime processes are deprecated. Please use the <a href="../development/
# Stream Pull Ingestion
If you have an external service that you want to pull data from, you have two options. The simplest
-option is to set up a "copying" service that reads from the data source and writes to Druid using
+option is to set up a "copying" service that reads from the data source and writes to Apache Druid (incubating) using
the [stream push method](stream-push.html).
Another option is *stream pull*. With this approach, a Druid Realtime Process ingests data from a
diff --git a/docs/content/ingestion/stream-push.md b/docs/content/ingestion/stream-push.md
index 708ee54..610177d 100644
--- a/docs/content/ingestion/stream-push.md
+++ b/docs/content/ingestion/stream-push.md
@@ -24,7 +24,7 @@ title: "Stream Push"
# Stream Push
-Druid can connect to any streaming data source through
+Apache Druid (incubating) can connect to any streaming data source through
[Tranquility](https://github.com/druid-io/tranquility/blob/master/README.md), a package for pushing
streams to Druid in real-time. Druid does not come bundled with Tranquility, and you will have to download the distribution.
diff --git a/docs/content/ingestion/tasks.md b/docs/content/ingestion/tasks.md
index 4653d6b..59c1cee 100644
--- a/docs/content/ingestion/tasks.md
+++ b/docs/content/ingestion/tasks.md
@@ -24,7 +24,7 @@ title: "Tasks Overview"
# Tasks Overview
-Tasks are run on MiddleManagers and always operate on a single data source.
+Apache Druid (incubating) tasks are run on MiddleManagers and always operate on a single data source.
Tasks are submitted using POST requests to the Overlord. Please see [Overlord Task API](../operations/api-reference.html#overlord-tasks) for API details.
diff --git a/docs/content/ingestion/transform-spec.md b/docs/content/ingestion/transform-spec.md
index 9cb3b06..2a102c7 100644
--- a/docs/content/ingestion/transform-spec.md
+++ b/docs/content/ingestion/transform-spec.md
@@ -24,7 +24,7 @@ title: "Transform Specs"
# Transform Specs
-Transform specs allow Druid to filter and transform input data during ingestion.
+Transform specs allow Apache Druid (incubating) to filter and transform input data during ingestion.
## Syntax
diff --git a/docs/content/ingestion/update-existing-data.md b/docs/content/ingestion/update-existing-data.md
index 0575726..c825616 100644
--- a/docs/content/ingestion/update-existing-data.md
+++ b/docs/content/ingestion/update-existing-data.md
@@ -24,7 +24,7 @@ title: "Updating Existing Data"
# Updating Existing Data
-Once you ingest some data in a dataSource for an interval and create Druid segments, you might want to make changes to
+Once you ingest some data in a dataSource for an interval and create Apache Druid (incubating) segments, you might want to make changes to
the ingested data. There are several ways this can be done.
##### Updating Dimension Values
diff --git a/docs/content/misc/math-expr.md b/docs/content/misc/math-expr.md
index ff749ea..d43dab1 100644
--- a/docs/content/misc/math-expr.md
+++ b/docs/content/misc/math-expr.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Expressions"
+title: "Apache Druid (incubating) Expressions"
---
<!--
diff --git a/docs/content/misc/papers-and-talks.md b/docs/content/misc/papers-and-talks.md
index 989b909..6ba1c40 100644
--- a/docs/content/misc/papers-and-talks.md
+++ b/docs/content/misc/papers-and-talks.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Papers"
+title: "Apache Druid (incubating) Papers"
---
<!--
diff --git a/docs/content/operations/alerts.md b/docs/content/operations/alerts.md
index f976113..acbd60d 100644
--- a/docs/content/operations/alerts.md
+++ b/docs/content/operations/alerts.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Alerts"
+title: "Apache Druid (incubating) Alerts"
---
<!--
diff --git a/docs/content/operations/api-reference.md b/docs/content/operations/api-reference.md
index 5ad0e6c..de96075 100644
--- a/docs/content/operations/api-reference.md
+++ b/docs/content/operations/api-reference.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "API Reference"
+title: "Apache Druid (incubating) API Reference"
---
<!--
diff --git a/docs/content/operations/druid-console.md b/docs/content/operations/druid-console.md
index a6e4a90..902aaeb 100644
--- a/docs/content/operations/druid-console.md
+++ b/docs/content/operations/druid-console.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid console"
+title: "Apache Druid (incubating) console"
---
<!--
diff --git a/docs/content/operations/dump-segment.md b/docs/content/operations/dump-segment.md
index 3ce0354..b6dd433 100644
--- a/docs/content/operations/dump-segment.md
+++ b/docs/content/operations/dump-segment.md
@@ -24,7 +24,7 @@ title: "DumpSegment tool"
# DumpSegment tool
-The DumpSegment tool can be used to dump the metadata or contents of a segment for debugging purposes. Note that the
+The DumpSegment tool can be used to dump the metadata or contents of an Apache Druid (incubating) segment for debugging purposes. Note that the
dump is not necessarily a full-fidelity translation of the segment. In particular, not all metadata is included, and
complex metric values may not be complete.
diff --git a/docs/content/operations/http-compression.md b/docs/content/operations/http-compression.md
index 77824c6..1f1cc1c 100644
--- a/docs/content/operations/http-compression.md
+++ b/docs/content/operations/http-compression.md
@@ -24,7 +24,7 @@ title: "HTTP Compression"
# HTTP Compression
-Druid supports http request decompression and response compression, to use this, http request header `Content-Encoding:gzip` and `Accept-Encoding:gzip` is needed to be set.
+Apache Druid (incubating) supports http request decompression and response compression, to use this, http request header `Content-Encoding:gzip` and `Accept-Encoding:gzip` is needed to be set.
# General Configuration
diff --git a/docs/content/operations/including-extensions.md b/docs/content/operations/including-extensions.md
index d3321f7..ebcb9ba 100644
--- a/docs/content/operations/including-extensions.md
+++ b/docs/content/operations/including-extensions.md
@@ -26,7 +26,7 @@ title: "Loading extensions"
## Loading core extensions
-Druid bundles all [core extensions](../development/extensions.html#core-extensions) out of the box.
+Apache Druid (incubating) bundles all [core extensions](../development/extensions.html#core-extensions) out of the box.
See the [list of extensions](../development/extensions.html#core-extensions) for your options. You
can load bundled extensions by adding their names to your common.runtime.properties
`druid.extensions.loadList` property. For example, to load the *postgresql-metadata-storage* and
diff --git a/docs/content/operations/management-uis.md b/docs/content/operations/management-uis.md
index 635d4fe..b20b3ad 100644
--- a/docs/content/operations/management-uis.md
+++ b/docs/content/operations/management-uis.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Management UIs"
+title: "Apache Druid (incubating) Management UIs"
---
<!--
diff --git a/docs/content/operations/metrics.md b/docs/content/operations/metrics.md
index 99934cb..da14039 100644
--- a/docs/content/operations/metrics.md
+++ b/docs/content/operations/metrics.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Druid Metrics"
+title: "Apache Druid (incubating) Metrics"
---
<!--
diff --git a/docs/content/operations/other-hadoop.md b/docs/content/operations/other-hadoop.md
index 0c73d57..899deaf 100644
--- a/docs/content/operations/other-hadoop.md
+++ b/docs/content/operations/other-hadoop.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Working with different versions of Hadoop"
+title: "Working with different versions of Apache Hadoop"
---
<!--
@@ -24,7 +24,7 @@ title: "Working with different versions of Hadoop"
# Working with different versions of Hadoop
-Druid can interact with Hadoop in two ways:
+Apache Druid (incubating) can interact with Hadoop in two ways:
1. [Use HDFS for deep storage](../development/extensions-core/hdfs.html) using the druid-hdfs-storage extension.
2. [Batch-load data from Hadoop](../ingestion/hadoop.html) using Map/Reduce jobs.
diff --git a/docs/content/operations/password-provider.md b/docs/content/operations/password-provider.md
index 5e3497b..77a248d 100644
--- a/docs/content/operations/password-provider.md
+++ b/docs/content/operations/password-provider.md
@@ -24,7 +24,7 @@ title: "Password Provider"
# Password Provider
-Druid needs some passwords for accessing various secured systems like metadata store, Key Store containing server certificates etc.
+Apache Druid (incubating) needs some passwords for accessing various secured systems like metadata store, Key Store containing server certificates etc.
All these passwords have corresponding runtime properties associated with them, for example `druid.metadata.storage.connector.password` corresponds to the metadata store password.
By default users can directly set the passwords in plaintext for these runtime properties, for example `druid.metadata.storage.connector.password=pwd` sets the metadata store password
diff --git a/docs/content/operations/performance-faq.md b/docs/content/operations/performance-faq.md
index 8567001..c5a48a0 100644
--- a/docs/content/operations/performance-faq.md
+++ b/docs/content/operations/performance-faq.md
@@ -26,7 +26,7 @@ title: "Performance FAQ"
## I can't match your benchmarked results
-Improper configuration is by far the largest problem we see people trying to deploy Druid. The example configurations listed in the tutorials are designed for a small volume of data where all processes are on a single machine. The configs are extremely poor for actual production use.
+Improper configuration is by far the largest problem we see people trying to deploy Apache Druid (incubating). The example configurations listed in the tutorials are designed for a small volume of data where all processes are on a single machine. The configs are extremely poor for actual production use.
## What should I set my JVM heap?
diff --git a/docs/content/operations/pull-deps.md b/docs/content/operations/pull-deps.md
index 8f8c4e1..2af9a7d 100644
--- a/docs/content/operations/pull-deps.md
+++ b/docs/content/operations/pull-deps.md
@@ -24,7 +24,7 @@ title: "pull-deps Tool"
# pull-deps Tool
-`pull-deps` is a tool that can pull down dependencies to the local repository and lay dependencies out into the extension directory as needed.
+`pull-deps` is an Apache Druid (incubating) tool that can pull down dependencies to the local repository and lay dependencies out into the extension directory as needed.
`pull-deps` has several command line options, they are as follows:
@@ -34,7 +34,7 @@ Extension coordinate to pull down, followed by a maven coordinate, e.g. org.apac
`-h` or `--hadoop-coordinate` (Can be specified multiply times)
-Hadoop dependency to pull down, followed by a maven coordinate, e.g. org.apache.hadoop:hadoop-client:2.4.0
+Apache Hadoop dependency to pull down, followed by a maven coordinate, e.g. org.apache.hadoop:hadoop-client:2.4.0
`--no-default-hadoop`
diff --git a/docs/content/operations/recommendations.md b/docs/content/operations/recommendations.md
index 03a4fb2..311b46d 100644
--- a/docs/content/operations/recommendations.md
+++ b/docs/content/operations/recommendations.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Recommendations"
+title: "Apache Druid (incubating) Recommendations"
---
<!--
diff --git a/docs/content/operations/reset-cluster.md b/docs/content/operations/reset-cluster.md
index 548e464..50d54dd 100644
--- a/docs/content/operations/reset-cluster.md
+++ b/docs/content/operations/reset-cluster.md
@@ -24,7 +24,7 @@ title: "ResetCluster tool"
# ResetCluster tool
-ResetCluster tool can be used to completely wipe out Druid cluster state stored on Metadata and Deep storage. This is
+ResetCluster tool can be used to completely wipe out Apache Druid (incubating) cluster state stored on Metadata and Deep storage. This is
intended to be used in dev/test environments where you typically want to reset the cluster before running
the test suite.
ResetCluster automatically figures out necessary information from Druid cluster configuration. So the java classpath
diff --git a/docs/content/operations/rolling-updates.md b/docs/content/operations/rolling-updates.md
index a7672f5..bfecc64 100644
--- a/docs/content/operations/rolling-updates.md
+++ b/docs/content/operations/rolling-updates.md
@@ -24,7 +24,7 @@ title: "Rolling Updates"
# Rolling Updates
-For rolling Druid cluster updates with no downtime, we recommend updating Druid processes in the
+For rolling Apache Druid (incubating) cluster updates with no downtime, we recommend updating Druid processes in the
following order:
1. Historical
diff --git a/docs/content/operations/rule-configuration.md
b/docs/content/operations/rule-configuration.md
index 4847b16..c2d38b7 100644
--- a/docs/content/operations/rule-configuration.md
+++ b/docs/content/operations/rule-configuration.md
@@ -24,7 +24,7 @@ title: "Retaining or Automatically Dropping Data"
# Retaining or Automatically Dropping Data
-Coordinator processes use rules to determine what data should be loaded to or
dropped from the cluster. Rules are used for data retention and query
execution, and are set on the Coordinator console (http://coordinator_ip:port).
+In Apache Druid (incubating), Coordinator processes use rules to determine
what data should be loaded to or dropped from the cluster. Rules are used for
data retention and query execution, and are set on the Coordinator console
(http://coordinator_ip:port).
There are three types of rules, i.e., load rules, drop rules, and broadcast
rules. Load rules indicate how segments should be assigned to different
historical process tiers and how many replicas of a segment should exist in
each tier.
Drop rules indicate when segments should be dropped entirely from the cluster.
Finally, broadcast rules indicate how segments of different data sources should
be co-located in Historical processes.
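To make the rule shapes in this hunk concrete, here is a hedged sketch of a load rule followed by a drop rule (field names follow the Druid rule-configuration docs; the tier name and period are hypothetical):

```python
import json

# Hypothetical retention chain: keep the most recent month with 2 replicas
# in a "hot" Historical tier, then drop everything older. The Coordinator
# evaluates rules top-down and applies the first match.
rules = [
    {"type": "loadByPeriod", "period": "P1M",
     "tieredReplicants": {"hot": 2}},
    {"type": "dropForever"},
]
payload = json.dumps(rules)
```

A list like this would be set per datasource through the Coordinator console or its rules endpoint.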
diff --git a/docs/content/operations/segment-optimization.md
b/docs/content/operations/segment-optimization.md
index 179b418..be00daf 100644
--- a/docs/content/operations/segment-optimization.md
+++ b/docs/content/operations/segment-optimization.md
@@ -24,7 +24,7 @@ title: "Segment Size Optimization"
# Segment Size Optimization
-In Druid, it's important to optimize the segment size because
+In Apache Druid (incubating), it's important to optimize the segment size
because
1. Druid stores data in segments. If you're using the [best-effort
roll-up](../design/index.html#roll-up-modes) mode,
increasing the segment size might introduce further aggregation which
reduces the dataSource size.
diff --git a/docs/content/operations/tls-support.md
b/docs/content/operations/tls-support.md
index 744f20c..e7aefda 100644
--- a/docs/content/operations/tls-support.md
+++ b/docs/content/operations/tls-support.md
@@ -36,7 +36,7 @@ and `druid.tlsPort` properties on each process. Please see
`Configuration` secti
# Jetty Server TLS Configuration
-Druid uses Jetty as an embedded web server. To get familiar with TLS/SSL in
general and related concepts like Certificates etc.
+Apache Druid (incubating) uses Jetty as an embedded web server. To get
familiar with TLS/SSL in general and related concepts like certificates,
reading this [Jetty
documentation](http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html)
might be helpful.
To get more in depth knowledge of TLS/SSL support in Java in general, please
refer to this
[guide](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html).
The documentation
[here](http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html#configuring-sslcontextfactory)
diff --git a/docs/content/querying/aggregations.md
b/docs/content/querying/aggregations.md
index ca99de3..99e5c3c 100644
--- a/docs/content/querying/aggregations.md
+++ b/docs/content/querying/aggregations.md
@@ -24,7 +24,7 @@ title: "Aggregations"
# Aggregations
-Aggregations can be provided at ingestion time as part of the ingestion spec
as a way of summarizing data before it enters Druid.
+Aggregations can be provided at ingestion time as part of the ingestion spec
as a way of summarizing data before it enters Apache Druid (incubating).
Aggregations can also be specified as part of many queries at query time.
Available aggregations are:
diff --git a/docs/content/querying/caching.md b/docs/content/querying/caching.md
index c83d067..c5e7363 100644
--- a/docs/content/querying/caching.md
+++ b/docs/content/querying/caching.md
@@ -24,7 +24,7 @@ title: "Query Caching"
# Query Caching
-Druid supports query result caching through an LRU cache. Results are stored
as a whole or either on a per segment basis along with the
+Apache Druid (incubating) supports query result caching through an LRU cache.
Results are stored either as a whole or on a per-segment basis along with the
parameters of a given query. Segment level caching allows Druid to return
final results based partially on segment results in the cache
and partially on segment results from scanning historical/real-time segments.
Result level caching enables Druid to cache the entire
result set, so that query results can be completely retrieved from the cache
for identical queries.
diff --git a/docs/content/querying/datasource.md
b/docs/content/querying/datasource.md
index 552e1cc..cba9d54 100644
--- a/docs/content/querying/datasource.md
+++ b/docs/content/querying/datasource.md
@@ -24,7 +24,7 @@ title: "Datasources"
# Datasources
-A data source is the Druid equivalent of a database table. However, a query
can also masquerade as a data source, providing subquery-like functionality.
Query data sources are currently supported only by
[GroupBy](../querying/groupbyquery.html) queries.
+A data source is the Apache Druid (incubating) equivalent of a database table.
However, a query can also masquerade as a data source, providing subquery-like
functionality. Query data sources are currently supported only by
[GroupBy](../querying/groupbyquery.html) queries.
### Table Data Source
The table data source is the most common type. It's represented by a string,
or by the full structure:
diff --git a/docs/content/querying/datasourcemetadataquery.md
b/docs/content/querying/datasourcemetadataquery.md
index 7d9fe23..b278016 100644
--- a/docs/content/querying/datasourcemetadataquery.md
+++ b/docs/content/querying/datasourcemetadataquery.md
@@ -41,7 +41,7 @@ There are 2 main parts to a Data Source Metadata query:
|property|description|required?|
|--------|-----------|---------|
-|queryType|This String should always be "dataSourceMetadata"; this is the
first thing Druid looks at to figure out how to interpret the query|yes|
+|queryType|This String should always be "dataSourceMetadata"; this is the
first thing Apache Druid (incubating) looks at to figure out how to interpret
the query|yes|
|dataSource|A String or Object defining the data source to query, very similar
to a table in a relational database. See
[DataSource](../querying/datasource.html) for more information.|yes|
|context|See [Context](../querying/query-context.html)|no|
diff --git a/docs/content/querying/dimensionspecs.md
b/docs/content/querying/dimensionspecs.md
index ce57f46..00434d5 100644
--- a/docs/content/querying/dimensionspecs.md
+++ b/docs/content/querying/dimensionspecs.md
@@ -67,7 +67,7 @@ Please refer to the [Output Types](#output-types) section for
more details.
### Filtered DimensionSpecs
-These are only useful for multi-value dimensions. If you have a row in druid
that has a multi-value dimension with values ["v1", "v2", "v3"] and you send a
groupBy/topN query grouping by that dimension with [query filter](filters.html)
for value "v1". In the response you will get 3 rows containing "v1", "v2" and
"v3". This behavior might be unintuitive for some use cases.
+These are only useful for multi-value dimensions. If you have a row in Apache
Druid (incubating) that has a multi-value dimension with values ["v1", "v2",
"v3"] and you send a groupBy/topN query grouping by that dimension with a [query
filter](filters.html) for value "v1", then in the response you will get 3 rows
containing "v1", "v2" and "v3". This behavior might be unintuitive for some use
cases.
It happens because "query filter" is internally used on the bitmaps and only
used to match the row to be included in the query result processing. With
multi-value dimensions, "query filter" behaves like a contains check, which
will match the row with dimension value ["v1", "v2", "v3"]. Please see the
section on "Multi-value columns" in [segment](../design/segments.html) for more
details.
Then groupBy/topN processing pipeline "explodes" all multi-value dimensions
resulting 3 rows for "v1", "v2" and "v3" each.
diff --git a/docs/content/querying/filters.md b/docs/content/querying/filters.md
index e126981..2f9b23a 100644
--- a/docs/content/querying/filters.md
+++ b/docs/content/querying/filters.md
@@ -24,7 +24,7 @@ title: "Query Filters"
# Query Filters
-A filter is a JSON object indicating which rows of data should be included in
the computation for a query. It’s essentially the equivalent of the WHERE
clause in SQL. Druid supports the following types of filters.
+A filter is a JSON object indicating which rows of data should be included in
the computation for a query. It’s essentially the equivalent of the WHERE
clause in SQL. Apache Druid (incubating) supports the following types of
filters.
### Selector filter
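As a hedged sketch of the selector filter named above (the simplest filter type, matching one dimension against one value; the dimension and value here are hypothetical):

```python
import json

# Selector filter: roughly the SQL predicate  WHERE page = 'AAA'.
selector = {"type": "selector", "dimension": "page", "value": "AAA"}
encoded = json.dumps(selector)
```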
diff --git a/docs/content/querying/granularities.md
b/docs/content/querying/granularities.md
index b7438c7..7dc02ef 100644
--- a/docs/content/querying/granularities.md
+++ b/docs/content/querying/granularities.md
@@ -39,7 +39,7 @@ Supported granularity strings are: `all`, `none`, `second`,
`minute`, `fifteen_m
#### Example:
-Suppose you have data below stored in Druid with millisecond ingestion
granularity,
+Suppose you have the data below stored in Apache Druid (incubating) with
millisecond ingestion granularity:
``` json
{"timestamp": "2013-08-31T01:02:33Z", "page": "AAA", "language" : "en"}
diff --git a/docs/content/querying/groupbyquery.md
b/docs/content/querying/groupbyquery.md
index c2aa9aa..125d793 100644
--- a/docs/content/querying/groupbyquery.md
+++ b/docs/content/querying/groupbyquery.md
@@ -24,7 +24,7 @@ title: "groupBy Queries"
# groupBy Queries
-These types of queries take a groupBy query object and return an array of JSON
objects where each object represents a
+These types of Apache Druid (incubating) queries take a groupBy query object
and return an array of JSON objects where each object represents a
grouping asked for by the query.
<div class="note info">
diff --git a/docs/content/querying/having.md b/docs/content/querying/having.md
index 3b45547..a1059ab 100644
--- a/docs/content/querying/having.md
+++ b/docs/content/querying/having.md
@@ -28,7 +28,7 @@ A having clause is a JSON object identifying which rows from
a groupBy query sho
It is essentially the equivalent of the HAVING clause in SQL.
-Druid supports the following types of having clauses.
+Apache Druid (incubating) supports the following types of having clauses.
### Query filters
diff --git a/docs/content/querying/hll-old.md b/docs/content/querying/hll-old.md
index 8597630..e416eda 100644
--- a/docs/content/querying/hll-old.md
+++ b/docs/content/querying/hll-old.md
@@ -26,7 +26,7 @@ title: "Cardinality/HyperUnique aggregators"
## Cardinality aggregator
-Computes the cardinality of a set of Druid dimensions, using HyperLogLog to
estimate the cardinality. Please note that this
+Computes the cardinality of a set of Apache Druid (incubating) dimensions,
using HyperLogLog to estimate the cardinality. Please note that this
aggregator will be much slower than indexing a column with the hyperUnique
aggregator. This aggregator also runs over a dimension column, which
means the string dimension cannot be removed from the dataset to improve
rollup. In general, we strongly recommend using the hyperUnique aggregator
instead of the cardinality aggregator if you do not care about the individual
values of a dimension.
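A minimal sketch of the two alternatives discussed above, per the Druid aggregator docs (dimension and column names are hypothetical):

```python
import json

# Cardinality aggregator runs over a dimension at query time (slower),
# while hyperUnique reads a column pre-built at ingestion time (faster).
cardinality_agg = {"type": "cardinality", "name": "distinct_pages",
                   "fields": ["page"]}
hyper_unique_agg = {"type": "hyperUnique", "name": "distinct_users",
                    "fieldName": "user_unique"}

spec = json.dumps([cardinality_agg, hyper_unique_agg])
```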
diff --git a/docs/content/querying/joins.md b/docs/content/querying/joins.md
index 0a55ff0..fe162b7 100644
--- a/docs/content/querying/joins.md
+++ b/docs/content/querying/joins.md
@@ -24,7 +24,7 @@ title: "Joins"
# Joins
-Druid has limited support for joins through [query-time
lookups](../querying/lookups.html). The common use case of
+Apache Druid (incubating) has limited support for joins through [query-time
lookups](../querying/lookups.html). The common use case of
query-time lookups is to replace one dimension value (e.g. a String ID) with
another value (e.g. a human-readable String value). This is similar to a
star-schema join.
Druid does not yet have full support for joins. Although Druid’s storage
format would allow for the implementation
diff --git a/docs/content/querying/lookups.md b/docs/content/querying/lookups.md
index 33cf767..68f3287 100644
--- a/docs/content/querying/lookups.md
+++ b/docs/content/querying/lookups.md
@@ -28,7 +28,7 @@ title: "Lookups"
Lookups are an <a href="../development/experimental.html">experimental</a>
feature.
</div>
-Lookups are a concept in Druid where dimension values are (optionally)
replaced with new values, allowing join-like
+Lookups are a concept in Apache Druid (incubating) where dimension values are
(optionally) replaced with new values, allowing join-like
functionality. Applying lookups in Druid is similar to joining a dimension
table in a data warehouse. See
[dimension specs](../querying/dimensionspecs.html) for more information. For
the purpose of these documents, a "key"
refers to a dimension value to match, and a "value" refers to its replacement.
So if you wanted to map
diff --git a/docs/content/querying/multi-value-dimensions.md
b/docs/content/querying/multi-value-dimensions.md
index 417cb1a..04c7357 100644
--- a/docs/content/querying/multi-value-dimensions.md
+++ b/docs/content/querying/multi-value-dimensions.md
@@ -24,7 +24,7 @@ title: "Multi-value dimensions"
# Multi-value dimensions
-Druid supports "multi-value" string dimensions. These are generated when an
input field contains an array of values
+Apache Druid (incubating) supports "multi-value" string dimensions. These are
generated when an input field contains an array of values
instead of a single value (e.g. JSON arrays, or a TSV field containing one or
more `listDelimiter` characters).
This document describes the behavior of groupBy (topN has similar behavior)
queries on multi-value dimensions when they
diff --git a/docs/content/querying/multitenancy.md
b/docs/content/querying/multitenancy.md
index 69c57a9..cbac624 100644
--- a/docs/content/querying/multitenancy.md
+++ b/docs/content/querying/multitenancy.md
@@ -24,7 +24,7 @@ title: "Multitenancy Considerations"
# Multitenancy Considerations
-Druid is often used to power user-facing data applications, where multitenancy
is an important requirement. This
+Apache Druid (incubating) is often used to power user-facing data
applications, where multitenancy is an important requirement. This
document outlines Druid's multitenant storage and querying features.
## Shared datasources or datasource-per-tenant?
diff --git a/docs/content/querying/post-aggregations.md
b/docs/content/querying/post-aggregations.md
index 7bdb65d..99ec722 100644
--- a/docs/content/querying/post-aggregations.md
+++ b/docs/content/querying/post-aggregations.md
@@ -24,7 +24,7 @@ title: "Post-Aggregations"
# Post-Aggregations
-Post-aggregations are specifications of processing that should happen on
aggregated values as they come out of Druid. If you include a post aggregation
as part of a query, make sure to include all aggregators the post-aggregator
requires.
+Post-aggregations are specifications of processing that should happen on
aggregated values as they come out of Apache Druid (incubating). If you include
a post aggregation as part of a query, make sure to include all aggregators the
post-aggregator requires.
There are several post-aggregators available.
diff --git a/docs/content/querying/query-context.md
b/docs/content/querying/query-context.md
index 185ae2d..abcdf3d 100644
--- a/docs/content/querying/query-context.md
+++ b/docs/content/querying/query-context.md
@@ -31,7 +31,7 @@ The query context is used for various query configuration
parameters. The follow
|timeout | `druid.server.http.defaultQueryTimeout`| Query timeout in
millis, beyond which unfinished queries will be cancelled. 0 timeout means `no
timeout`. To set the default timeout, see [Broker
configuration](../configuration/index.html#broker) |
|priority | `0` | Query Priority.
Queries with higher priority get precedence for computational resources.|
|queryId | auto-generated | Unique identifier
given to this query. If a query ID is set or known, this can be used to cancel
the query |
-|useCache | `true` | Flag indicating
whether to leverage the query cache for this query. When set to false, it
disables reading from the query cache for this query. When set to true, Druid
uses druid.broker.cache.useCache or druid.historical.cache.useCache to
determine whether or not to read from the query cache |
+|useCache | `true` | Flag indicating
whether to leverage the query cache for this query. When set to false, it
disables reading from the query cache for this query. When set to true, Apache
Druid (incubating) uses druid.broker.cache.useCache or
druid.historical.cache.useCache to determine whether or not to read from the
query cache |
|populateCache | `true` | Flag indicating
whether to save the results of the query to the query cache. Primarily used for
debugging. When set to false, it disables saving the results of this query to
the query cache. When set to true, Druid uses druid.broker.cache.populateCache
or druid.historical.cache.populateCache to determine whether or not to save the
results of this query to the query cache |
|useResultLevelCache | `false` | Flag
indicating whether to leverage the result level cache for this query. When set
to false, it disables reading from the query cache for this query. When set to
true, Druid uses druid.broker.cache.useResultLevelCache to determine whether or
not to read from the query cache |
|populateResultLevelCache | `false` | Flag
indicating whether to save the results of the query to the result level cache.
Primarily used for debugging. When set to false, it disables saving the results
of this query to the query cache. When set to true, Druid uses
druid.broker.cache.populateCache to determine whether or not to save the
results of this query to the query cache |
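Tying the table above together, a hedged sketch of a query context that overrides the cache flags and timeout (the datasource name and values are illustrative only):

```python
import json

# Query context overriding defaults described in the table above.
context = {
    "timeout": 60000,        # cancel the query after 60s
    "priority": 10,          # higher priority wins compute contention
    "useCache": False,       # skip reading the segment-level cache
    "populateCache": False,  # and don't write results back to it
}
query = {"queryType": "timeseries",
         "dataSource": "wikipedia",  # hypothetical
         "context": context}
body = json.dumps(query)
```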
diff --git a/docs/content/querying/querying.md
b/docs/content/querying/querying.md
index af2ee6c..73c56bd 100644
--- a/docs/content/querying/querying.md
+++ b/docs/content/querying/querying.md
@@ -24,7 +24,7 @@ title: "Querying"
# Querying
-Queries are made using an HTTP REST style request to queryable processes
([Broker](../design/broker.html),
+Apache Druid (incubating) queries are made using an HTTP REST style request to
queryable processes ([Broker](../design/broker.html),
[Historical](../design/historical.html)). [Peons](../design/peons.html) that
are running stream ingestion tasks can also accept queries. The
query is expressed in JSON and each of these process types expose the same
REST query interface. For normal Druid operations, queries should be issued to
the Broker processes. Queries can be posted
diff --git a/docs/content/querying/scan-query.md
b/docs/content/querying/scan-query.md
index 462d14f..f7c56d7 100644
--- a/docs/content/querying/scan-query.md
+++ b/docs/content/querying/scan-query.md
@@ -24,7 +24,7 @@ title: "Scan query"
# Scan query
-Scan query returns raw Druid rows in streaming mode.
+Scan query returns raw Apache Druid (incubating) rows in streaming mode.
```json
{
diff --git a/docs/content/querying/searchquery.md
b/docs/content/querying/searchquery.md
index aaaf97e..87a3051 100644
--- a/docs/content/querying/searchquery.md
+++ b/docs/content/querying/searchquery.md
@@ -52,7 +52,7 @@ There are several main parts to a search query:
|property|description|required?|
|--------|-----------|---------|
-|queryType|This String should always be "search"; this is the first thing
Druid looks at to figure out how to interpret the query.|yes|
+|queryType|This String should always be "search"; this is the first thing
Apache Druid (incubating) looks at to figure out how to interpret the
query.|yes|
|dataSource|A String or Object defining the data source to query, very similar
to a table in a relational database. See
[DataSource](../querying/datasource.html) for more information.|yes|
|granularity|Defines the granularity of the query. See
[Granularities](../querying/granularities.html).|yes|
|filter|See [Filters](../querying/filters.html).|no|
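A hedged sketch of a search query assembled from the properties tabled above (the datasource, interval, and search value are hypothetical):

```python
import json

# Minimal search query: find dimension values containing "aaa",
# case-insensitively, across one day of data.
search_query = {
    "queryType": "search",
    "dataSource": "wikipedia",
    "granularity": "all",
    "intervals": ["2013-08-31/2013-09-01"],
    "query": {"type": "insensitive_contains", "value": "aaa"},
}
body = json.dumps(search_query)
```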
diff --git a/docs/content/querying/segmentmetadataquery.md
b/docs/content/querying/segmentmetadataquery.md
index aa7b9f1..d590e52 100644
--- a/docs/content/querying/segmentmetadataquery.md
+++ b/docs/content/querying/segmentmetadataquery.md
@@ -48,7 +48,7 @@ There are several main parts to a segment metadata query:
|property|description|required?|
|--------|-----------|---------|
-|queryType|This String should always be "segmentMetadata"; this is the first
thing Druid looks at to figure out how to interpret the query|yes|
+|queryType|This String should always be "segmentMetadata"; this is the first
thing Apache Druid (incubating) looks at to figure out how to interpret the
query|yes|
|dataSource|A String or Object defining the data source to query, very similar
to a table in a relational database. See
[DataSource](../querying/datasource.html) for more information.|yes|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the
time ranges to run the query over.|no|
|toInclude|A JSON Object representing what columns should be included in the
result. Defaults to "all".|no|
diff --git a/docs/content/querying/select-query.md
b/docs/content/querying/select-query.md
index 6b16f1c..8df2155 100644
--- a/docs/content/querying/select-query.md
+++ b/docs/content/querying/select-query.md
@@ -24,7 +24,7 @@ title: "Select Queries"
# Select Queries
-Select queries return raw Druid rows and support pagination.
+Select queries return raw Apache Druid (incubating) rows and support
pagination.
```json
{
diff --git a/docs/content/querying/sql.md b/docs/content/querying/sql.md
index 288bfb0..84be5b1 100644
--- a/docs/content/querying/sql.md
+++ b/docs/content/querying/sql.md
@@ -29,7 +29,7 @@ Built-in SQL is an <a
href="../development/experimental.html">experimental</a> f
subject to change.
</div>
-Druid SQL is a built-in SQL layer and an alternative to Druid's native
JSON-based query language, and is powered by a
+Apache Druid (incubating) SQL is a built-in SQL layer and an alternative to
Druid's native JSON-based query language, and is powered by a
parser and planner based on [Apache Calcite](https://calcite.apache.org/).
Druid SQL translates SQL into native Druid
queries on the query Broker (the first process you query), which are then
passed down to data processes as native Druid
queries. Other than the (slight) overhead of translating SQL on the Broker,
there isn't an additional performance
diff --git a/docs/content/querying/timeboundaryquery.md
b/docs/content/querying/timeboundaryquery.md
index ebb3c6e..686ef3e 100644
--- a/docs/content/querying/timeboundaryquery.md
+++ b/docs/content/querying/timeboundaryquery.md
@@ -39,7 +39,7 @@ There are 3 main parts to a time boundary query:
|property|description|required?|
|--------|-----------|---------|
-|queryType|This String should always be "timeBoundary"; this is the first
thing Druid looks at to figure out how to interpret the query|yes|
+|queryType|This String should always be "timeBoundary"; this is the first
thing Apache Druid (incubating) looks at to figure out how to interpret the
query|yes|
|dataSource|A String or Object defining the data source to query, very similar
to a table in a relational database. See
[DataSource](../querying/datasource.html) for more information.|yes|
|bound | Optional, set to `maxTime` or `minTime` to return only the latest
or earliest timestamp. Default to returning both if not set| no |
|filter|See [Filters](../querying/filters.html)|no|
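A hedged sketch of the smallest useful time boundary query, using the `bound` property described above (the datasource name is hypothetical):

```python
import json

# Ask only for the latest ingested timestamp of a datasource.
time_boundary = {
    "queryType": "timeBoundary",
    "dataSource": "wikipedia",
    "bound": "maxTime",
}
body = json.dumps(time_boundary)
```

Omitting `bound` returns both the earliest and latest timestamps.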
diff --git a/docs/content/querying/timeseriesquery.md
b/docs/content/querying/timeseriesquery.md
index d8ab592..9feef88 100644
--- a/docs/content/querying/timeseriesquery.md
+++ b/docs/content/querying/timeseriesquery.md
@@ -68,7 +68,7 @@ There are 7 main parts to a timeseries query:
|property|description|required?|
|--------|-----------|---------|
-|queryType|This String should always be "timeseries"; this is the first thing
Druid looks at to figure out how to interpret the query|yes|
+|queryType|This String should always be "timeseries"; this is the first thing
Apache Druid (incubating) looks at to figure out how to interpret the query|yes|
|dataSource|A String or Object defining the data source to query, very similar
to a table in a relational database. See
[DataSource](../querying/datasource.html) for more information.|yes|
|descending|Whether to make descending ordered result. Default is
`false`(ascending).|no|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the
time ranges to run the query over.|yes|
diff --git a/docs/content/querying/topnmetricspec.md
b/docs/content/querying/topnmetricspec.md
index 09355f4..43245cd 100644
--- a/docs/content/querying/topnmetricspec.md
+++ b/docs/content/querying/topnmetricspec.md
@@ -24,7 +24,7 @@ title: "TopNMetricSpec"
# TopNMetricSpec
-The topN metric spec specifies how topN values should be sorted.
+In Apache Druid (incubating), the topN metric spec specifies how topN values
should be sorted.
## Numeric TopNMetricSpec
diff --git a/docs/content/querying/topnquery.md
b/docs/content/querying/topnquery.md
index d973f71..f4acea0 100644
--- a/docs/content/querying/topnquery.md
+++ b/docs/content/querying/topnquery.md
@@ -24,7 +24,7 @@ title: "TopN queries"
# TopN queries
-TopN queries return a sorted set of results for the values in a given
dimension according to some criteria. Conceptually, they can be thought of as
an approximate [GroupByQuery](../querying/groupbyquery.html) over a single
dimension with an [Ordering](../querying/limitspec.html) spec. TopNs are much
faster and resource efficient than GroupBys for this use case. These types of
queries take a topN query object and return an array of JSON objects where each
object represents a value asked f [...]
+Apache Druid (incubating) TopN queries return a sorted set of results for the
values in a given dimension according to some criteria. Conceptually, they can
be thought of as an approximate [GroupByQuery](../querying/groupbyquery.html)
over a single dimension with an [Ordering](../querying/limitspec.html) spec.
TopNs are much faster and more resource efficient than GroupBys for this use case.
These types of queries take a topN query object and return an array of JSON
objects where each object [...]
TopNs are approximate in that each data process will rank their top K results
and only return those top K results to the Broker. K, by default in Druid, is
`max(1000, threshold)`. In practice, this means that if you ask for the top
1000 items ordered, the correctness of the first ~900 items will be 100%, and
the ordering of the results after that is not guaranteed. TopNs can be made
more accurate by increasing the threshold.
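The per-process K described above can be sketched directly from the stated default:

```python
# Each data process ranks and ships its top K results to the Broker,
# where K defaults to max(1000, threshold), so small topN thresholds
# still ship 1000 candidates per process.
def per_process_k(threshold, default_k=1000):
    return max(default_k, threshold)

assert per_process_k(50) == 1000    # small topN still ships 1000 per process
assert per_process_k(5000) == 5000  # a larger threshold raises K
```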
diff --git a/docs/content/querying/virtual-columns.md
b/docs/content/querying/virtual-columns.md
index 002741d..0de7288 100644
--- a/docs/content/querying/virtual-columns.md
+++ b/docs/content/querying/virtual-columns.md
@@ -30,7 +30,7 @@ A virtual column can potentially draw from multiple
underlying columns, although
Virtual columns can be used as dimensions or as inputs to aggregators.
-Each Druid query can accept a list of virtual columns as a parameter. The
following scan query is provided as an example:
+Each Apache Druid (incubating) query can accept a list of virtual columns as a
parameter. The following scan query is provided as an example:
```
{
diff --git a/docs/content/toc.md b/docs/content/toc.md
index e3a610f..138d9d1 100644
--- a/docs/content/toc.md
+++ b/docs/content/toc.md
@@ -32,8 +32,8 @@ layout: toc
* [Ingestion overview](/docs/VERSION/ingestion/index.html)
* [Quickstart](/docs/VERSION/tutorials/index.html)
* [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html)
- * [Tutorial: Loading stream data from
Kafka](/docs/VERSION/tutorials/tutorial-kafka.html)
- * [Tutorial: Loading a file using
Hadoop](/docs/VERSION/tutorials/tutorial-batch-hadoop.html)
+ * [Tutorial: Loading stream data from Apache
Kafka](/docs/VERSION/tutorials/tutorial-kafka.html)
+ * [Tutorial: Loading a file using Apache
Hadoop](/docs/VERSION/tutorials/tutorial-batch-hadoop.html)
* [Tutorial: Loading stream data using HTTP
push](/docs/VERSION/tutorials/tutorial-tranquility.html)
* [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html)
* Further tutorials
diff --git a/docs/content/tutorials/cluster.md
b/docs/content/tutorials/cluster.md
index e3b577b..1205c1c 100644
--- a/docs/content/tutorials/cluster.md
+++ b/docs/content/tutorials/cluster.md
@@ -24,7 +24,7 @@ title: "Clustering"
# Clustering
-Druid is designed to be deployed as a scalable, fault-tolerant cluster.
+Apache Druid (incubating) is designed to be deployed as a scalable,
fault-tolerant cluster.
In this document, we'll set up a simple cluster and discuss how it can be
further configured to meet
your needs.
diff --git a/docs/content/tutorials/index.md b/docs/content/tutorials/index.md
index 483d3e6..9f79165 100644
--- a/docs/content/tutorials/index.md
+++ b/docs/content/tutorials/index.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Quickstart"
+title: "Apache Druid (incubating) Quickstart"
---
<!--
@@ -185,11 +185,11 @@ The following tutorials demonstrate various methods of
loading data into Druid,
This tutorial demonstrates how to perform a batch file load, using Druid's
native batch ingestion.
-### [Tutorial: Loading stream data from Kafka](./tutorial-kafka.html)
+### [Tutorial: Loading stream data from Apache Kafka](./tutorial-kafka.html)
This tutorial demonstrates how to load streaming data from a Kafka topic.
-### [Tutorial: Loading a file using Hadoop](./tutorial-batch-hadoop.html)
+### [Tutorial: Loading a file using Apache
Hadoop](./tutorial-batch-hadoop.html)
This tutorial demonstrates how to perform a batch file load, using a remote
Hadoop cluster.
diff --git a/docs/content/tutorials/tutorial-batch-hadoop.md
b/docs/content/tutorials/tutorial-batch-hadoop.md
index e709bfa..59f2dff 100644
--- a/docs/content/tutorials/tutorial-batch-hadoop.md
+++ b/docs/content/tutorials/tutorial-batch-hadoop.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Tutorial: Load batch data using Hadoop"
+title: "Tutorial: Load batch data using Apache Hadoop"
---
<!--
@@ -24,7 +24,7 @@ title: "Tutorial: Load batch data using Hadoop"
# Tutorial: Load batch data using Hadoop
-This tutorial shows you how to load data files into Druid using a remote
Hadoop cluster.
+This tutorial shows you how to load data files into Apache Druid (incubating)
using a remote Hadoop cluster.
For this tutorial, we'll assume that you've already completed the previous
[batch ingestion tutorial](tutorial-batch.html) using Druid's native batch
ingestion system.
diff --git a/docs/content/tutorials/tutorial-batch.md
b/docs/content/tutorials/tutorial-batch.md
index a18d434..f80540e 100644
--- a/docs/content/tutorials/tutorial-batch.md
+++ b/docs/content/tutorials/tutorial-batch.md
@@ -26,7 +26,7 @@ title: "Tutorial: Loading a file"
## Getting started
-This tutorial demonstrates how to perform a batch file load, using Druid's
native batch ingestion.
+This tutorial demonstrates how to perform a batch file load, using Apache
Druid (incubating)'s native batch ingestion.
For this tutorial, we'll assume you've already downloaded Druid as described
in
the [single-machine quickstart](index.html) and have it running on your local
machine. You
diff --git a/docs/content/tutorials/tutorial-compaction.md
b/docs/content/tutorials/tutorial-compaction.md
index e51b3c6..8919a4c 100644
--- a/docs/content/tutorials/tutorial-compaction.md
+++ b/docs/content/tutorials/tutorial-compaction.md
@@ -29,7 +29,7 @@ This tutorial demonstrates how to compact existing segments
into fewer but large
Because there is some per-segment memory and processing overhead, it can
sometimes be beneficial to reduce the total number of segments.
Please check [Segment size optimization](../operations/segment-optimization.html) for details.
-For this tutorial, we'll assume you've already downloaded Druid as described in
+For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in
the [single-machine quickstart](index.html) and have it running on your local machine.
It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
diff --git a/docs/content/tutorials/tutorial-delete-data.md b/docs/content/tutorials/tutorial-delete-data.md
index bd80cd7..f3133d2 100644
--- a/docs/content/tutorials/tutorial-delete-data.md
+++ b/docs/content/tutorials/tutorial-delete-data.md
@@ -26,7 +26,7 @@ title: "Tutorial: Deleting data"
This tutorial demonstrates how to delete existing data.
-For this tutorial, we'll assume you've already downloaded Druid as described in
+For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in
the [single-machine quickstart](index.html) and have it running on your local machine.
Completing [Tutorial: Configuring retention](../tutorials/tutorial-retention.html) first is highly recommended, as we will be using retention rules in this tutorial.
diff --git a/docs/content/tutorials/tutorial-ingestion-spec.md b/docs/content/tutorials/tutorial-ingestion-spec.md
index 3311a6f..29b0ea9 100644
--- a/docs/content/tutorials/tutorial-ingestion-spec.md
+++ b/docs/content/tutorials/tutorial-ingestion-spec.md
@@ -26,7 +26,7 @@ title: "Tutorial: Writing an ingestion spec"
This tutorial will guide the reader through the process of defining an ingestion spec, pointing out key considerations and guidelines.
-For this tutorial, we'll assume you've already downloaded Druid as described in
+For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in
the [single-machine quickstart](index.html) and have it running on your local machine.
It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html), [Tutorial: Querying data](../tutorials/tutorial-query.html), and [Tutorial: Rollup](../tutorials/tutorial-rollup.html).
diff --git a/docs/content/tutorials/tutorial-kafka.md b/docs/content/tutorials/tutorial-kafka.md
index 3285619..98e81a6 100644
--- a/docs/content/tutorials/tutorial-kafka.md
+++ b/docs/content/tutorials/tutorial-kafka.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Tutorial: Load streaming data from Kafka"
+title: "Tutorial: Load streaming data from Apache Kafka"
---
<!--
@@ -26,7 +26,7 @@ title: "Tutorial: Load streaming data from Kafka"
## Getting started
-This tutorial demonstrates how to load data from a Kafka stream, using the Druid Kafka indexing service.
+This tutorial demonstrates how to load data into Apache Druid (incubating) from a Kafka stream, using Druid's Kafka indexing service.
For this tutorial, we'll assume you've already downloaded Druid as described in
the [single-machine quickstart](index.html) and have it running on your local machine. You
diff --git a/docs/content/tutorials/tutorial-query.md b/docs/content/tutorials/tutorial-query.md
index a5cc7e2..9829197 100644
--- a/docs/content/tutorials/tutorial-query.md
+++ b/docs/content/tutorials/tutorial-query.md
@@ -24,7 +24,7 @@ title: "Tutorial: Querying data"
# Tutorial: Querying data
-This tutorial will demonstrate how to query data in Druid, with examples for Druid's native query format and Druid SQL.
+This tutorial will demonstrate how to query data in Apache Druid (incubating), with examples for Druid's native query format and Druid SQL.
The tutorial assumes that you've already completed one of the 4 ingestion tutorials, as we will be querying the sample Wikipedia edits data.
diff --git a/docs/content/tutorials/tutorial-retention.md b/docs/content/tutorials/tutorial-retention.md
index 66c0ee5..dafca32 100644
--- a/docs/content/tutorials/tutorial-retention.md
+++ b/docs/content/tutorials/tutorial-retention.md
@@ -26,7 +26,7 @@ title: "Tutorial: Configuring data retention"
This tutorial demonstrates how to configure retention rules on a datasource to set the time intervals of data that will be retained or dropped.
-For this tutorial, we'll assume you've already downloaded Druid as described in
+For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in
the [single-machine quickstart](index.html) and have it running on your local machine.
It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
diff --git a/docs/content/tutorials/tutorial-rollup.md b/docs/content/tutorials/tutorial-rollup.md
index 013fa0e..f7b418d 100644
--- a/docs/content/tutorials/tutorial-rollup.md
+++ b/docs/content/tutorials/tutorial-rollup.md
@@ -24,7 +24,7 @@ title: "Tutorial: Roll-up"
# Tutorial: Roll-up
-Druid can summarize raw data at ingestion time using a process we refer to as "roll-up". Roll-up is a first-level aggregation operation over a selected set of columns that reduces the size of stored segments.
+Apache Druid (incubating) can summarize raw data at ingestion time using a process we refer to as "roll-up". Roll-up is a first-level aggregation operation over a selected set of columns that reduces the size of stored segments.
This tutorial will demonstrate the effects of roll-up on an example dataset.
diff --git a/docs/content/tutorials/tutorial-tranquility.md b/docs/content/tutorials/tutorial-tranquility.md
index edfae18..46c02b1 100644
--- a/docs/content/tutorials/tutorial-tranquility.md
+++ b/docs/content/tutorials/tutorial-tranquility.md
@@ -26,7 +26,7 @@ title: "Tutorial: Load streaming data with HTTP push"
## Getting started
-This tutorial shows you how to load streaming data into Druid using HTTP push via Tranquility Server.
+This tutorial shows you how to load streaming data into Apache Druid (incubating) using HTTP push via Tranquility Server.
[Tranquility Server](https://github.com/druid-io/tranquility/blob/master/docs/server.md) allows a stream of data to be pushed into Druid using HTTP POSTs.
diff --git a/docs/content/tutorials/tutorial-transform-spec.md b/docs/content/tutorials/tutorial-transform-spec.md
index 60d7bf8..9a96da2 100644
--- a/docs/content/tutorials/tutorial-transform-spec.md
+++ b/docs/content/tutorials/tutorial-transform-spec.md
@@ -26,7 +26,7 @@ title: "Tutorial: Transforming input data"
This tutorial will demonstrate how to use transform specs to filter and transform input data during ingestion.
-For this tutorial, we'll assume you've already downloaded Druid as described in
+For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in
the [single-machine quickstart](index.html) and have it running on your local machine.
It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
diff --git a/docs/content/tutorials/tutorial-update-data.md b/docs/content/tutorials/tutorial-update-data.md
index 3d870de..d55ce97 100644
--- a/docs/content/tutorials/tutorial-update-data.md
+++ b/docs/content/tutorials/tutorial-update-data.md
@@ -26,7 +26,7 @@ title: "Tutorial: Updating existing data"
This tutorial demonstrates how to update existing data, showing both overwrites and appends.
-For this tutorial, we'll assume you've already downloaded Druid as described in
+For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in
the [single-machine quickstart](index.html) and have it running on your local machine.
It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html), [Tutorial: Querying data](../tutorials/tutorial-query.html), and [Tutorial: Rollup](../tutorials/tutorial-rollup.html).
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]