This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch 0.16.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.16.0-incubating by this push:
     new 5aee550  move google ext docs from contrib to core (#8512) (#8532)
5aee550 is described below

commit 5aee550144ead7f8f7ec33ad5c8b15919a9cafb0
Author: Clint Wylie <cwy...@apache.org>
AuthorDate: Thu Sep 12 19:46:37 2019 -0700

    move google ext docs from contrib to core (#8512) (#8532)
    
    * move google ext docs from contrib to core
    
    * fix links
    
    * revert unintended change
    
    * more links, add note to example ext doc that it was removed, unlink from 
sidebar
---
 docs/development/extensions-core/examples.md       | 20 +-----
 .../google.md                                      |  0
 docs/development/extensions.md                     | 72 +++++++++++-----------
 docs/ingestion/native-batch.md                     | 34 +++++-----
 website/i18n/en.json                               |  6 +-
 website/redirects.json                             |  1 +
 website/sidebars.json                              |  3 +-
 7 files changed, 59 insertions(+), 77 deletions(-)

diff --git a/docs/development/extensions-core/examples.md 
b/docs/development/extensions-core/examples.md
index 88d1f86..eff21dd 100644
--- a/docs/development/extensions-core/examples.md
+++ b/docs/development/extensions-core/examples.md
@@ -23,22 +23,4 @@ title: "Extension Examples"
   -->
 
 
-## TwitterSpritzerFirehose
-
-This firehose connects directly to the twitter spritzer data stream.
-
-Sample spec:
-
-```json
-"firehose" : {
-    "type" : "twitzer",
-    "maxEventCount": -1,
-    "maxRunMinutes": 0
-}
-```
-
-|property|description|default|required?|
-|--------|-----------|-------|---------|
-|type|This should be "twitzer"|N/A|yes|
-|maxEventCount|max events to receive, -1 is infinite, 0 means nothing is 
delivered; use this to prevent infinite space consumption or to prevent getting 
throttled at an inconvenient time.|N/A|yes|
-|maxRunMinutes|maximum number of minutes to fetch Twitter events.  Use this to 
prevent getting throttled at an inconvenient time. If zero or less, no time 
limit for run.|N/A|yes|
+This extension was removed in Apache Druid (incubating) 0.16.0. In prior 
versions, the extension provided obsolete facilities to ingest data from the 
Twitter 'Spritzer' data stream as well as the Wikipedia changes IRC channel.
diff --git a/docs/development/extensions-contrib/google.md 
b/docs/development/extensions-core/google.md
similarity index 100%
rename from docs/development/extensions-contrib/google.md
rename to docs/development/extensions-core/google.md
diff --git a/docs/development/extensions.md b/docs/development/extensions.md
index c0fa27d..b194dbc 100644
--- a/docs/development/extensions.md
+++ b/docs/development/extensions.md
@@ -39,20 +39,21 @@ Core extensions are maintained by Druid committers.
 |druid-avro-extensions|Support for data in Apache Avro data 
format.|[link](../development/extensions-core/avro.md)|
 |druid-basic-security|Support for Basic HTTP authentication and role-based 
access control.|[link](../development/extensions-core/druid-basic-security.md)|
 |druid-bloom-filter|Support for providing Bloom filters in druid 
queries.|[link](../development/extensions-core/bloom-filter.md)|
-|druid-caffeine-cache|A local cache implementation backed by 
Caffeine.|[link](../configuration/index.html#cache-configuration)|
-|druid-datasketches|Support for approximate counts and set operations with 
[DataSketches](https://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
-|druid-hdfs-storage|HDFS deep 
storage.|[link](../development/extensions-core/hdfs.html)|
-|druid-histogram|Approximate histograms and quantiles aggregator. Deprecated, 
please use the [DataSketches quantiles 
aggregator](../development/extensions-core/datasketches-quantiles.html) from 
the `druid-datasketches` extension 
instead.|[link](../development/extensions-core/approximate-histograms.html)|
-|druid-kafka-extraction-namespace|Kafka-based namespaced lookup. Requires 
namespace lookup 
extension.|[link](../development/extensions-core/kafka-extraction-namespace.html)|
-|druid-kafka-indexing-service|Supervised exactly-once Kafka ingestion for the 
indexing service.|[link](../development/extensions-core/kafka-ingestion.html)|
-|druid-kinesis-indexing-service|Supervised exactly-once Kinesis ingestion for 
the indexing 
service.|[link](../development/extensions-core/kinesis-ingestion.html)|
-|druid-kerberos|Kerberos authentication for druid 
processes.|[link](../development/extensions-core/druid-kerberos.html)|
-|druid-lookups-cached-global|A module for [lookups](../querying/lookups.html) 
providing a jvm-global eager caching for lookups. It provides JDBC and URI 
implementations for fetching lookup 
data.|[link](../development/extensions-core/lookups-cached-global.html)|
-|druid-lookups-cached-single| Per lookup caching module to support the use 
cases where a lookup need to be isolated from the global pool of lookups 
|[link](../development/extensions-core/druid-lookups.html)|
-|druid-orc-extensions|Support for data in Apache Orc data 
format.|[link](../development/extensions-core/orc.html)|
-|druid-parquet-extensions|Support for data in Apache Parquet data format. 
Requires druid-avro-extensions to be 
loaded.|[link](../development/extensions-core/parquet.html)|
-|druid-protobuf-extensions| Support for data in Protobuf data 
format.|[link](../development/extensions-core/protobuf.html)|
-|druid-s3-extensions|Interfacing with data in AWS S3, and using S3 as deep 
storage.|[link](../development/extensions-core/s3.html)|
+|druid-caffeine-cache|A local cache implementation backed by 
Caffeine.|[link](../configuration/index.md#cache-configuration)|
+|druid-datasketches|Support for approximate counts and set operations with 
[DataSketches](https://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.md)|
+|druid-google-extensions|Google Cloud Storage deep 
storage.|[link](../development/extensions-core/google.md)|
+|druid-hdfs-storage|HDFS deep 
storage.|[link](../development/extensions-core/hdfs.md)|
+|druid-histogram|Approximate histograms and quantiles aggregator. Deprecated, 
please use the [DataSketches quantiles 
aggregator](../development/extensions-core/datasketches-quantiles.md) from the 
`druid-datasketches` extension 
instead.|[link](../development/extensions-core/approximate-histograms.md)|
+|druid-kafka-extraction-namespace|Kafka-based namespaced lookup. Requires 
namespace lookup 
extension.|[link](../development/extensions-core/kafka-extraction-namespace.md)|
+|druid-kafka-indexing-service|Supervised exactly-once Kafka ingestion for the 
indexing service.|[link](../development/extensions-core/kafka-ingestion.md)|
+|druid-kinesis-indexing-service|Supervised exactly-once Kinesis ingestion for 
the indexing 
service.|[link](../development/extensions-core/kinesis-ingestion.md)|
+|druid-kerberos|Kerberos authentication for druid 
processes.|[link](../development/extensions-core/druid-kerberos.md)|
+|druid-lookups-cached-global|A module for [lookups](../querying/lookups.md) 
providing a jvm-global eager caching for lookups. It provides JDBC and URI 
implementations for fetching lookup 
data.|[link](../development/extensions-core/lookups-cached-global.md)|
+|druid-lookups-cached-single|Per-lookup caching module to support use cases where a lookup needs to be isolated from the global pool of lookups.|[link](../development/extensions-core/druid-lookups.md)|
+|druid-orc-extensions|Support for data in Apache Orc data 
format.|[link](../development/extensions-core/orc.md)|
+|druid-parquet-extensions|Support for data in Apache Parquet data format. 
Requires druid-avro-extensions to be 
loaded.|[link](../development/extensions-core/parquet.md)|
+|druid-protobuf-extensions| Support for data in Protobuf data 
format.|[link](../development/extensions-core/protobuf.md)|
+|druid-s3-extensions|Interfacing with data in AWS S3, and using S3 as deep 
storage.|[link](../development/extensions-core/s3.md)|
 |druid-ec2-extensions|Interfacing with AWS EC2 for autoscaling middle 
managers|UNDOCUMENTED|
 |druid-stats|Statistics related module including variance and standard 
deviation.|[link](../development/extensions-core/stats.md)|
 |mysql-metadata-storage|MySQL metadata 
store.|[link](../development/extensions-core/mysql.md)|
@@ -66,29 +67,28 @@ Core extensions are maintained by Druid committers.
 A number of community members have contributed their own extensions to Druid 
that are not packaged with the default Druid tarball.
 If you'd like to take on maintenance for a community extension, please post on 
[d...@druid.apache.org](https://lists.apache.org/list.html?d...@druid.apache.org)
 to let us know!
 
-All of these community extensions can be downloaded using 
[pull-deps](../operations/pull-deps.html) while specifying a `-c` coordinate 
option to pull 
`org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}`.
+All of these community extensions can be downloaded using 
[pull-deps](../operations/pull-deps.md) while specifying a `-c` coordinate 
option to pull 
`org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}`.
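As a rough sketch of such a pull-deps invocation, run from the Druid installation directory (the contrib coordinate and version below are illustrative only):

```bash
# Illustrative only: substitute the extension coordinate and Druid version you actually need.
java -classpath "lib/*" org.apache.druid.cli.Main tools pull-deps \
  -c "org.apache.druid.extensions.contrib:druid-redis-cache:0.16.0-incubating"
```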
 
 |Name|Description|Docs|
 |----|-----------|----|
-|ambari-metrics-emitter|Ambari Metrics Emitter 
|[link](../development/extensions-contrib/ambari-metrics-emitter.html)|
-|druid-azure-extensions|Microsoft Azure deep 
storage.|[link](../development/extensions-contrib/azure.html)|
-|druid-cassandra-storage|Apache Cassandra deep 
storage.|[link](../development/extensions-contrib/cassandra.html)|
-|druid-cloudfiles-extensions|Rackspace Cloudfiles deep storage and 
firehose.|[link](../development/extensions-contrib/cloudfiles.html)|
-|druid-distinctcount|DistinctCount 
aggregator|[link](../development/extensions-contrib/distinctcount.html)|
-|druid-redis-cache|A cache implementation for Druid based on 
Redis.|[link](../development/extensions-contrib/redis-cache.html)|
-|druid-time-min-max|Min/Max aggregator for 
timestamp.|[link](../development/extensions-contrib/time-min-max.html)|
-|druid-google-extensions|Google Cloud Storage deep 
storage.|[link](../development/extensions-contrib/google.html)|
-|sqlserver-metadata-storage|Microsoft SqlServer deep 
storage.|[link](../development/extensions-contrib/sqlserver.html)|
-|graphite-emitter|Graphite metrics 
emitter|[link](../development/extensions-contrib/graphite.html)|
-|statsd-emitter|StatsD metrics 
emitter|[link](../development/extensions-contrib/statsd.html)|
-|kafka-emitter|Kafka metrics 
emitter|[link](../development/extensions-contrib/kafka-emitter.html)|
-|druid-thrift-extensions|Support thrift ingestion 
|[link](../development/extensions-contrib/thrift.html)|
-|druid-opentsdb-emitter|OpenTSDB metrics emitter 
|[link](../development/extensions-contrib/opentsdb-emitter.html)|
-|materialized-view-selection, materialized-view-maintenance|Materialized 
View|[link](../development/extensions-contrib/materialized-view.html)|
-|druid-moving-average-query|Support for [Moving 
Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate 
[Window 
Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions)
 in Druid 
queries.|[link](../development/extensions-contrib/moving-average-query.html)|
-|druid-influxdb-emitter|InfluxDB metrics 
emitter|[link](../development/extensions-contrib/influxdb-emitter.html)|
-|druid-momentsketch|Support for approximate quantile queries using the 
[momentsketch](https://github.com/stanford-futuredata/momentsketch) 
library|[link](../development/extensions-contrib/momentsketch-quantiles.html)|
-|druid-tdigestsketch|Support for approximate sketch aggregators based on 
[T-Digest](https://github.com/tdunning/t-digest)|[link](../development/extensions-contrib/tdigestsketch-quantiles.html)|
+|ambari-metrics-emitter|Ambari Metrics Emitter 
|[link](../development/extensions-contrib/ambari-metrics-emitter.md)|
+|druid-azure-extensions|Microsoft Azure deep 
storage.|[link](../development/extensions-contrib/azure.md)|
+|druid-cassandra-storage|Apache Cassandra deep 
storage.|[link](../development/extensions-contrib/cassandra.md)|
+|druid-cloudfiles-extensions|Rackspace Cloudfiles deep storage and 
firehose.|[link](../development/extensions-contrib/cloudfiles.md)|
+|druid-distinctcount|DistinctCount 
aggregator|[link](../development/extensions-contrib/distinctcount.md)|
+|druid-redis-cache|A cache implementation for Druid based on 
Redis.|[link](../development/extensions-contrib/redis-cache.md)|
+|druid-time-min-max|Min/Max aggregator for 
timestamp.|[link](../development/extensions-contrib/time-min-max.md)|
+|sqlserver-metadata-storage|Microsoft SqlServer deep 
storage.|[link](../development/extensions-contrib/sqlserver.md)|
+|graphite-emitter|Graphite metrics 
emitter|[link](../development/extensions-contrib/graphite.md)|
+|statsd-emitter|StatsD metrics 
emitter|[link](../development/extensions-contrib/statsd.md)|
+|kafka-emitter|Kafka metrics 
emitter|[link](../development/extensions-contrib/kafka-emitter.md)|
+|druid-thrift-extensions|Support for Thrift ingestion|[link](../development/extensions-contrib/thrift.md)|
+|druid-opentsdb-emitter|OpenTSDB metrics emitter 
|[link](../development/extensions-contrib/opentsdb-emitter.md)|
+|materialized-view-selection, materialized-view-maintenance|Materialized 
View|[link](../development/extensions-contrib/materialized-view.md)|
+|druid-moving-average-query|Support for [Moving 
Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate 
[Window 
Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions)
 in Druid 
queries.|[link](../development/extensions-contrib/moving-average-query.md)|
+|druid-influxdb-emitter|InfluxDB metrics 
emitter|[link](../development/extensions-contrib/influxdb-emitter.md)|
+|druid-momentsketch|Support for approximate quantile queries using the 
[momentsketch](https://github.com/stanford-futuredata/momentsketch) 
library|[link](../development/extensions-contrib/momentsketch-quantiles.md)|
+|druid-tdigestsketch|Support for approximate sketch aggregators based on 
[T-Digest](https://github.com/tdunning/t-digest)|[link](../development/extensions-contrib/tdigestsketch-quantiles.md)|
 
 ## Promoting community extensions to core extensions
 
@@ -102,8 +102,8 @@ For information how to create your own extension, please 
see [here](../developme
 
 ### Loading core extensions
 
-Apache Druid (incubating) bundles all [core 
extensions](../development/extensions.html#core-extensions) out of the box.
-See the [list of extensions](../development/extensions.html#core-extensions) 
for your options. You
+Apache Druid (incubating) bundles all [core 
extensions](../development/extensions.md#core-extensions) out of the box.
+See the [list of extensions](../development/extensions.md#core-extensions) for 
your options. You
 can load bundled extensions by adding their names to your 
common.runtime.properties
 `druid.extensions.loadList` property. For example, to load the 
*postgresql-metadata-storage* and
 *druid-hdfs-storage* extensions, use the configuration:
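A minimal sketch of that `common.runtime.properties` entry, assuming these are the only extensions being added:

```
druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage"]
```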
diff --git a/docs/ingestion/native-batch.md b/docs/ingestion/native-batch.md
index 6f84908..55ffb59 100644
--- a/docs/ingestion/native-batch.md
+++ b/docs/ingestion/native-batch.md
@@ -30,7 +30,7 @@ multiple tasks in parallel, and `index` which will run a 
single indexing task. P
 (simple), and native batch (parallel) ingestion.
 
 To run either kind of native batch indexing task, write an ingestion spec as 
specified below. Then POST it to the
-[`/druid/indexer/v1/task`](../operations/api-reference.html#tasks) endpoint on 
the Overlord, or use the
+[`/druid/indexer/v1/task`](../operations/api-reference.md#tasks) endpoint on 
the Overlord, or use the
 `bin/post-index-task` script included with Druid.
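A hedged sketch of that POST, assuming the Overlord listens on its default port 8090 and the spec is saved locally (the file name `my-index-task.json` is a placeholder):

```bash
# my-index-task.json is a placeholder name for your ingestion spec file.
curl -X POST -H 'Content-Type: application/json' \
  -d @my-index-task.json \
  http://OVERLORD_IP:8090/druid/indexer/v1/task
```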
 
 ## Tutorial
@@ -66,10 +66,10 @@ Otherwise, this task runs sequentially. Here is the list of 
currently splittable
 - [`LocalFirehose`](#local-firehose)
 - [`IngestSegmentFirehose`](#segment-firehose)
 - [`HttpFirehose`](#http-firehose)
-- [`StaticS3Firehose`](../development/extensions-core/s3.html#firehose)
-- 
[`StaticAzureBlobStoreFirehose`](../development/extensions-contrib/azure.html#firehose)
-- 
[`StaticGoogleBlobStoreFirehose`](../development/extensions-contrib/google.html#firehose)
-- 
[`StaticCloudFilesFirehose`](../development/extensions-contrib/cloudfiles.html#firehose)
+- [`StaticS3Firehose`](../development/extensions-core/s3.md#firehose)
+- 
[`StaticAzureBlobStoreFirehose`](../development/extensions-contrib/azure.md#firehose)
+- 
[`StaticGoogleBlobStoreFirehose`](../development/extensions-core/google.md#firehose)
+- 
[`StaticCloudFilesFirehose`](../development/extensions-contrib/cloudfiles.md#firehose)
 
 The splittable firehose is responsible for generating _splits_. The supervisor 
task generates _worker task specs_ containing a split
 and submits worker tasks using those specs. As a result, the number of worker 
tasks depends on
@@ -205,10 +205,10 @@ The tuningConfig is optional and default parameters will 
be used if no tuningCon
 |maxTotalRows|Deprecated. Use `partitionsSpec` instead. Total number of rows 
in segments waiting for being pushed. Used in determining when intermediate 
pushing should occur.|20000000|no|
 |numShards|Deprecated. Use `partitionsSpec` instead. Directly specify the 
number of shards to create. If this is specified and `intervals` is specified 
in the `granularitySpec`, the index task can skip the determine 
intervals/partitions pass through the data. `numShards` cannot be specified if 
`maxRowsPerSegment` is set.|null|no|
 |partitionsSpec|Defines how to partition data in each timeChunk, see 
[PartitionsSpec](#partitionsspec)|`dynamic` if `forceGuaranteedRollup` = false, 
`hashed` if `forceGuaranteedRollup` = true|no|
-|indexSpec|Defines segment storage format options to be used at indexing time, 
see [IndexSpec](index.html#indexspec)|null|no|
-|indexSpecForIntermediatePersists|Defines segment storage format options to be 
used at indexing time for intermediate persisted temporary segments. this can 
be used to disable dimension/metric compression on intermediate segments to 
reduce memory required for final merging. however, disabling compression on 
intermediate segments might increase page cache use while they are used before 
getting merged into final segment published, see 
[IndexSpec](index.html#indexspec) for possible values.| [...]
+|indexSpec|Defines segment storage format options to be used at indexing time, 
see [IndexSpec](index.md#indexspec)|null|no|
+|indexSpecForIntermediatePersists|Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce the memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are in use, before they are merged into the final published segment; see [IndexSpec](index.md#indexspec) for possible values.|sa [...]
 |maxPendingPersists|Maximum number of persists that can be pending but not 
started. If this limit would be exceeded by a new intermediate persist, 
ingestion will block until the currently-running persist finishes. Maximum heap 
memory usage for indexing scales with maxRowsInMemory * (2 + 
maxPendingPersists).|0 (meaning one persist can be running concurrently with 
ingestion, and none can be queued up)|no|
-|forceGuaranteedRollup|Forces guaranteeing the [perfect 
rollup](../ingestion/index.html#rollup). The perfect rollup optimizes the total 
size of generated segments and querying time while indexing time will be 
increased. If this is set to true, `numShards` in `tuningConfig` and 
`intervals` in `granularitySpec` must be set. Note that the result segments 
would be hash-partitioned. This flag cannot be used with `appendToExisting` of 
IOConfig. For more details, see the below __Segment pushing [...]
+|forceGuaranteedRollup|Forces guaranteeing the [perfect 
rollup](../ingestion/index.md#rollup). The perfect rollup optimizes the total 
size of generated segments and querying time while indexing time will be 
increased. If this is set to true, `numShards` in `tuningConfig` and 
`intervals` in `granularitySpec` must be set. Note that the result segments 
would be hash-partitioned. This flag cannot be used with `appendToExisting` of 
IOConfig. For more details, see the below __Segment pushing m [...]
 |reportParseExceptions|If true, exceptions encountered during parsing will be 
thrown and will halt ingestion; if false, unparseable rows and fields will be 
skipped.|false|no|
 |pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 
0 means to wait forever.|0|no|
 |segmentWriteOutMediumFactory|Segment write-out medium to use when creating 
segments. See 
[SegmentWriteOutMediumFactory](#segmentwriteoutmediumfactory).|Not specified, 
the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used|no|
@@ -223,7 +223,7 @@ The tuningConfig is optional and default parameters will be 
used if no tuningCon
 ### `partitionsSpec`
 
 PartitionsSpec describes the secondary partitioning method.
-You should use different partitionsSpec depending on the [rollup 
mode](../ingestion/index.html#rollup) you want.
+You should use different partitionsSpec depending on the [rollup 
mode](../ingestion/index.md#rollup) you want.
 For perfect rollup, you should use `hashed`.
 
 |property|description|default|required?|
@@ -608,10 +608,10 @@ The tuningConfig is optional and default parameters will 
be used if no tuningCon
 |numShards|Deprecated. Use `partitionsSpec` instead. Directly specify the 
number of shards to create. If this is specified and `intervals` is specified 
in the `granularitySpec`, the index task can skip the determine 
intervals/partitions pass through the data. `numShards` cannot be specified if 
`maxRowsPerSegment` is set.|null|no|
 |partitionDimensions|Deprecated. Use `partitionsSpec` instead. The dimensions 
to partition on. Leave blank to select all dimensions. Only used with 
`forceGuaranteedRollup` = true, will be ignored otherwise.|null|no|
 |partitionsSpec|Defines how to partition data in each timeChunk, see 
[PartitionsSpec](#partitionsspec)|`dynamic` if `forceGuaranteedRollup` = false, 
`hashed` if `forceGuaranteedRollup` = true|no|
-|indexSpec|Defines segment storage format options to be used at indexing time, 
see [IndexSpec](index.html#indexspec)|null|no|
-|indexSpecForIntermediatePersists|Defines segment storage format options to be 
used at indexing time for intermediate persisted temporary segments. this can 
be used to disable dimension/metric compression on intermediate segments to 
reduce memory required for final merging. however, disabling compression on 
intermediate segments might increase page cache use while they are used before 
getting merged into final segment published, see 
[IndexSpec](index.html#indexspec) for possible values.| [...]
+|indexSpec|Defines segment storage format options to be used at indexing time, 
see [IndexSpec](index.md#indexspec)|null|no|
+|indexSpecForIntermediatePersists|Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce the memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are in use, before they are merged into the final published segment; see [IndexSpec](index.md#indexspec) for possible values.|sa [...]
 |maxPendingPersists|Maximum number of persists that can be pending but not 
started. If this limit would be exceeded by a new intermediate persist, 
ingestion will block until the currently-running persist finishes. Maximum heap 
memory usage for indexing scales with maxRowsInMemory * (2 + 
maxPendingPersists).|0 (meaning one persist can be running concurrently with 
ingestion, and none can be queued up)|no|
-|forceGuaranteedRollup|Forces guaranteeing the [perfect 
rollup](../ingestion/index.html#rollup). The perfect rollup optimizes the total 
size of generated segments and querying time while indexing time will be 
increased. If this is set to true, the index task will read the entire input 
data twice: one for finding the optimal number of partitions per time chunk and 
one for generating segments. Note that the result segments would be 
hash-partitioned. This flag cannot be used with `appendToE [...]
+|forceGuaranteedRollup|Forces guaranteeing the [perfect 
rollup](../ingestion/index.md#rollup). The perfect rollup optimizes the total 
size of generated segments and querying time while indexing time will be 
increased. If this is set to true, the index task will read the entire input 
data twice: one for finding the optimal number of partitions per time chunk and 
one for generating segments. Note that the result segments would be 
hash-partitioned. This flag cannot be used with `appendToExi [...]
 |reportParseExceptions|DEPRECATED. If true, exceptions encountered during 
parsing will be thrown and will halt ingestion; if false, unparseable rows and 
fields will be skipped. Setting `reportParseExceptions` to true will override 
existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, 
setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no 
more than 1.|false|no|
 |pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 
0 means to wait forever.|0|no|
 |segmentWriteOutMediumFactory|Segment write-out medium to use when creating 
segments. See 
[SegmentWriteOutMediumFactory](#segmentwriteoutmediumfactory).|Not specified, 
the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used|no|
@@ -622,7 +622,7 @@ The tuningConfig is optional and default parameters will be 
used if no tuningCon
 ### `partitionsSpec`
 
 PartitionsSpec describes the secondary partitioning method.
-You should use different partitionsSpec depending on the [rollup 
mode](../ingestion/index.html#rollup) you want.
+You should use different partitionsSpec depending on the [rollup 
mode](../ingestion/index.md#rollup) you want.
 For perfect rollup, you should use `hashed`.
 
 |property|description|default|required?|
@@ -644,7 +644,7 @@ For best-effort rollup, you should use `dynamic`.
 
 |Field|Type|Description|Required|
 |-----|----|-----------|--------|
-|type|String|See [Additional Peon Configuration: 
SegmentWriteOutMediumFactory](../configuration/index.html#segmentwriteoutmediumfactory)
 for explanation and available options.|yes|
+|type|String|See [Additional Peon Configuration: 
SegmentWriteOutMediumFactory](../configuration/index.md#segmentwriteoutmediumfactory)
 for explanation and available options.|yes|
 
 ### Segment pushing modes
 
@@ -783,7 +783,7 @@ This firehose will accept any type of parser, but will only 
utilize the list of
 |interval|A String representing the ISO-8601 interval. This defines the time 
range to fetch the data over.|yes|
 |dimensions|The list of dimensions to select. If left empty, no dimensions are 
returned. If left null or not defined, all dimensions are returned. |no|
 |metrics|The list of metrics to select. If left empty, no metrics are 
returned. If left null or not defined, all metrics are selected.|no|
-|filter| See [Filters](../querying/filters.html)|no|
+|filter| See [Filters](../querying/filters.md)|no|
 |maxInputSegmentBytesPerTask|When used with the native parallel index task, 
the maximum number of bytes of input segments to process in a single task. If a 
single segment is larger than this number, it will be processed by itself in a 
single task (input segments are never split across tasks). Defaults to 
150MB.|no|
 
 <a name="sql-firehose"></a>
@@ -796,8 +796,8 @@ If there are multiple queries from which data needs to be 
indexed, queries are p
 This firehose will accept any type of parser, but will only utilize the list 
of dimensions and the timestamp specification. See the extension documentation 
for more detailed ingestion examples.
 
 Requires one of the following extensions:
- * [MySQL Metadata Store](../development/extensions-core/mysql.html).
- * [PostgreSQL Metadata Store](../development/extensions-core/postgresql.html).
+ * [MySQL Metadata Store](../development/extensions-core/mysql.md).
+ * [PostgreSQL Metadata Store](../development/extensions-core/postgresql.md).
 
 
 ```json
diff --git a/website/i18n/en.json b/website/i18n/en.json
index cab6bf2..8415477 100644
--- a/website/i18n/en.json
+++ b/website/i18n/en.json
@@ -101,9 +101,6 @@
       "development/extensions-contrib/distinctcount": {
         "title": "DistinctCount Aggregator"
       },
-      "development/extensions-contrib/google": {
-        "title": "Google Cloud Storage"
-      },
       "development/extensions-contrib/graphite": {
         "title": "Graphite Emitter"
       },
@@ -182,6 +179,9 @@
       "development/extensions-core/examples": {
         "title": "Extension Examples"
       },
+      "development/extensions-core/google": {
+        "title": "Google Cloud Storage"
+      },
       "development/extensions-core/hdfs": {
         "title": "HDFS"
       },
diff --git a/website/redirects.json b/website/redirects.json
index f0ab4e5..603af71 100644
--- a/website/redirects.json
+++ b/website/redirects.json
@@ -197,3 +197,4 @@
 {"source": "tutorials/tutorial-loading-streaming-data.html", "target": 
"tutorial-kafka.html"}
 {"source": "tutorials/tutorial-the-druid-cluster.html", "target": 
"cluster.html"}
 {"source": "tutorials/tutorial-tranquility.md", "target": 
"../ingestion/tranquility.html"}
+{"source": "development/extensions-contrib/google.html", "target": 
"../extensions-core/google.html"}
diff --git a/website/sidebars.json b/website/sidebars.json
index 25e2b14..97fa96f 100644
--- a/website/sidebars.json
+++ b/website/sidebars.json
@@ -157,7 +157,7 @@
       "development/extensions-core/druid-basic-security",
       "development/extensions-core/druid-kerberos",
       "development/extensions-core/druid-lookups",
-      "development/extensions-core/examples",
+      "development/extensions-core/google",
       "development/extensions-core/hdfs",
       "development/extensions-core/kafka-extraction-namespace",
       "development/extensions-core/lookups-cached-global",
@@ -175,7 +175,6 @@
       "development/extensions-contrib/cassandra",
       "development/extensions-contrib/cloudfiles",
       "development/extensions-contrib/distinctcount",
-      "development/extensions-contrib/google",
       "development/extensions-contrib/graphite",
       "querying/aggregations",
       "querying/datasource",


