This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch 0.12.3
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.12.3 by this push:
     new 3690cca  Fixes for new docs (#6167)
3690cca is described below

commit 3690ccab4b9df3dc2b28054c4f2f4de538da26a9
Author: Jonathan Wei <jon-...@users.noreply.github.com>
AuthorDate: Mon Aug 13 12:57:04 2018 -0700

    Fixes for new docs (#6167)
---
 docs/_redirects.json                              | 10 +++---
 docs/content/configuration/index.md               |  3 +-
 docs/content/ingestion/batch-ingestion.md         |  2 +-
 docs/content/ingestion/overview.md                |  2 +-
 docs/content/toc.md                               |  7 ++--
 docs/content/tutorials/index.md                   | 15 +++++----
 docs/content/tutorials/tutorial-batch-hadoop.md   | 26 +++++++--------
 docs/content/tutorials/tutorial-batch.md          |  6 ++--
 docs/content/tutorials/tutorial-compaction.md     | 12 +++----
 docs/content/tutorials/tutorial-delete-data.md    | 16 ++++-----
 docs/content/tutorials/tutorial-ingestion-spec.md | 40 +++++++++++------------
 docs/content/tutorials/tutorial-kafka.md          |  8 ++---
 docs/content/tutorials/tutorial-query.md          | 21 ++++++------
 docs/content/tutorials/tutorial-retention.md      | 14 ++++----
 docs/content/tutorials/tutorial-rollup.md         | 24 +++++++-------
 docs/content/tutorials/tutorial-tranquility.md    | 17 ++++------
 docs/content/tutorials/tutorial-transform-spec.md | 14 ++++----
 docs/content/tutorials/tutorial-update-data.md    | 16 ++++-----
 18 files changed, 126 insertions(+), 127 deletions(-)
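To review this change locally, a minimal sketch (assuming git is installed; the repository URL, branch, and commit hash are the ones given in the header above):

```bash
# Clone the repository referenced in this notification
git clone https://gitbox.apache.org/repos/asf/incubator-druid.git
cd incubator-druid

# Show the files touched and the full patch for this commit on the 0.12.3 branch
git show --stat 3690ccab4b9df3dc2b28054c4f2f4de538da26a9
git show 3690ccab4b9df3dc2b28054c4f2f4de538da26a9
```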

diff --git a/docs/_redirects.json b/docs/_redirects.json
index 03bec51..8e50fbe 100644
--- a/docs/_redirects.json
+++ b/docs/_redirects.json
@@ -91,11 +91,11 @@
   {"source": "comparisons/druid-vs-hadoop.html", "target": 
"druid-vs-sql-on-hadoop.html"},
   {"source": "comparisons/druid-vs-impala-or-shark.html", "target": 
"druid-vs-sql-on-hadoop.html"},
   {"source": "comparisons/druid-vs-vertica.html", "target": 
"druid-vs-redshift.html"},
-  {"source": "configuration/broker.html", "target": 
"configuration/index.html#broker"},
-  {"source": "configuration/caching.html", "target": 
"configuration/index.html#cache-configuration"},
-  {"source": "configuration/coordinator.html", "target": 
"configuration/index.html#coordinator"},
-  {"source": "configuration/historical.html", "target": 
"configuration/index.html#historical"},
-  {"source": "configuration/indexing-service.html", "target": 
"configuration/index.html#overlord"},
+  {"source": "configuration/broker.html", "target": 
"../configuration/index.html#broker"},
+  {"source": "configuration/caching.html", "target": 
"../configuration/index.html#cache-configuration"},
+  {"source": "configuration/coordinator.html", "target": 
"../configuration/index.html#coordinator"},
+  {"source": "configuration/historical.html", "target": 
"../configuration/index.html#historical"},
+  {"source": "configuration/indexing-service.html", "target": 
"../configuration/index.html#overlord"},
   {"source": "configuration/simple-cluster.html", "target": 
"../tutorials/cluster.html"},
   {"source": "design/concepts-and-terminology.html", "target": "index.html"},
   {"source": "development/approximate-histograms.html", "target": 
"extensions-core/approximate-histograms.html"},
diff --git a/docs/content/configuration/index.md b/docs/content/configuration/index.md
index ecc6902..4889ca4 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -7,8 +7,9 @@ layout: doc_page
 This page documents all of the configuration properties for each Druid service type.
 
 ## Table of Contents
+  * [Recommended Configuration File Organization](#recommended-configuration-file-organization)
   * [Common configurations](#common-configurations)
-    * [JVM Configuration Best Practices](#jvm-configuration-best-practices]
+    * [JVM Configuration Best Practices](#jvm-configuration-best-practices)
     * [Extensions](#extensions)
     * [Modules](#modules)
     * [Zookeeper](#zookeper)
diff --git a/docs/content/ingestion/batch-ingestion.md b/docs/content/ingestion/batch-ingestion.md
index fb5b4fc..1fa7783 100644
--- a/docs/content/ingestion/batch-ingestion.md
+++ b/docs/content/ingestion/batch-ingestion.md
@@ -8,7 +8,7 @@ Druid can load data from static files through a variety of 
methods described her
 
 ## Native Batch Ingestion
 
-Druid has built-in batch ingestion functionality. See [here](../ingestion/native_tasks.html) for more info.
+Druid has built-in batch ingestion functionality. See [here](../ingestion/native-batch.html) for more info.
 
 ## Hadoop Batch Ingestion
 
diff --git a/docs/content/ingestion/overview.md b/docs/content/ingestion/overview.md
index c7f0d67..b82c8f5 100644
--- a/docs/content/ingestion/overview.md
+++ b/docs/content/ingestion/overview.md
@@ -153,7 +153,7 @@ the best one for your situation.
 
 |Method|How it works|Can append and overwrite?|Can handle late data?|Exactly-once ingestion?|Real-time queries?|
 |------|------------|-------------------------|---------------------|-----------------------|------------------|
-|[Native batch](native_tasks.html)|Druid loads data directly from S3, HTTP, NFS, or other networked storage.|Append or overwrite|Yes|Yes|No|
+|[Native batch](native-batch.html)|Druid loads data directly from S3, HTTP, NFS, or other networked storage.|Append or overwrite|Yes|Yes|No|
 |[Hadoop](hadoop.html)|Druid launches Hadoop Map/Reduce jobs to load data files.|Append or overwrite|Yes|Yes|No|
 |[Kafka indexing service](../development/extensions-core/kafka-ingestion.html)|Druid reads directly from Kafka.|Append only|Yes|Yes|Yes|
 |[Tranquility](stream-push.html)|You use Tranquility, a client side library, to push individual records into Druid.|Append only|No - late data is dropped|No - may drop or duplicate data|Yes|
diff --git a/docs/content/toc.md b/docs/content/toc.md
index 176a188..c1eb5e8 100644
--- a/docs/content/toc.md
+++ b/docs/content/toc.md
@@ -18,12 +18,13 @@ layout: toc
    * [Tutorial: Loading stream data using HTTP push](/docs/VERSION/tutorials/tutorial-tranquility.html)
    * [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html)
  * [Further tutorials](/docs/VERSION/tutorials/advanced.html)
-    * [Tutorial: Rollup](/docs/VERSION/tutorials/rollup.html)
+    * [Tutorial: Rollup](/docs/VERSION/tutorials/tutorial-rollup.html)
    * [Tutorial: Configuring retention](/docs/VERSION/tutorials/tutorial-retention.html)
    * [Tutorial: Updating existing data](/docs/VERSION/tutorials/tutorial-update-data.html)
    * [Tutorial: Compacting segments](/docs/VERSION/tutorials/tutorial-compaction.html)
    * [Tutorial: Deleting data](/docs/VERSION/tutorials/tutorial-delete-data.html)
    * [Tutorial: Writing your own ingestion specs](/docs/VERSION/tutorials/tutorial-ingestion-spec.html)
+    * [Tutorial: Transforming input data](/docs/VERSION/tutorials/tutorial-transform-spec.html)
   * [Clustering](/docs/VERSION/tutorials/cluster.html)
 
 ## Data Ingestion
@@ -33,8 +34,8 @@ layout: toc
   * [Schema Design](/docs/VERSION/ingestion/schema-design.html)
   * [Schema Changes](/docs/VERSION/ingestion/schema-changes.html)
   * [Batch File Ingestion](/docs/VERSION/ingestion/batch-ingestion.html)
-    * [Native Batch Ingestion](docs/VERSION/ingestion/native-batch.html)
-    * [Hadoop Batch Ingestion](docs/VERSION/ingestion/hadoop.html)
+    * [Native Batch Ingestion](/docs/VERSION/ingestion/native-batch.html)
+    * [Hadoop Batch Ingestion](/docs/VERSION/ingestion/hadoop.html)
   * [Stream Ingestion](/docs/VERSION/ingestion/stream-ingestion.html)
     * [Stream Push](/docs/VERSION/ingestion/stream-push.html)
     * [Stream Pull](/docs/VERSION/ingestion/stream-pull.html)
diff --git a/docs/content/tutorials/index.md b/docs/content/tutorials/index.md
index 3b9b43d..50c49ae 100644
--- a/docs/content/tutorials/index.md
+++ b/docs/content/tutorials/index.md
@@ -50,7 +50,7 @@ Before proceeding, please download the [tutorial examples 
package](../tutorials/
 
 This tarball contains sample data and ingestion specs that will be used in the 
tutorials. 
 
-```
+```bash
 curl -O http://druid.io/docs/#{DRUIDVERSION}/tutorials/tutorial-examples.tar.gz
 tar zxvf tutorial-examples.tar.gz
 ```
@@ -98,7 +98,8 @@ Later on, if you'd like to stop the services, CTRL-C to exit 
from the running ja
 want a clean start after stopping the services, delete the `log` and `var` 
directory and run the `init` script again.
 
 From the druid-#{DRUIDVERSION} directory:
-```
+
+```bash
 rm -rf log
 rm -rf var
 bin/init
@@ -134,7 +135,7 @@ The sample data has the following columns, and an example 
event is shown below:
   * regionName
   * user
  
-```
+```json
 {
   "timestamp":"2015-09-12T20:03:45.018Z",
   "channel":"#en.wikipedia",
@@ -164,18 +165,18 @@ The following tutorials demonstrate various methods of 
loading data into Druid,
 
 This tutorial demonstrates how to perform a batch file load, using Druid's 
native batch ingestion.
 
-### [Tutorial: Loading stream data from Kafka](../tutorial-kafka.html)
+### [Tutorial: Loading stream data from Kafka](./tutorial-kafka.html)
 
 This tutorial demonstrates how to load streaming data from a Kafka topic.
 
-### [Tutorial: Loading a file using Hadoop](../tutorial-batch-hadoop.html)
+### [Tutorial: Loading a file using Hadoop](./tutorial-batch-hadoop.html)
 
 This tutorial demonstrates how to perform a batch file load, using a remote 
Hadoop cluster.
 
-### [Tutorial: Loading data using Tranquility](../tutorial-tranquility.html)
+### [Tutorial: Loading data using Tranquility](./tutorial-tranquility.html)
 
 This tutorial demonstrates how to load streaming data by pushing events to 
Druid using the Tranquility service.
 
-### [Tutorial: Writing your own ingestion spec](../tutorial-ingestion-spec.html)
+### [Tutorial: Writing your own ingestion spec](./tutorial-ingestion-spec.html)
 
 This tutorial demonstrates how to write a new ingestion spec and use it to 
load data.
\ No newline at end of file
diff --git a/docs/content/tutorials/tutorial-batch-hadoop.md b/docs/content/tutorials/tutorial-batch-hadoop.md
index 821f6ea..311ccff 100644
--- a/docs/content/tutorials/tutorial-batch-hadoop.md
+++ b/docs/content/tutorials/tutorial-batch-hadoop.md
@@ -20,9 +20,9 @@ For this tutorial, we've provided a Dockerfile for a Hadoop 
2.7.3 cluster, which
 
 This Dockerfile and related files are located at `examples/hadoop/docker`.
 
-From the druid-${DRUIDVERSION} package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.7.3":
+From the druid-#{DRUIDVERSION} package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.7.3":
 
-```
+```bash
 cd examples/hadoop/docker
 docker build -t druid-hadoop-demo:2.7.3 .
 ```
@@ -37,7 +37,7 @@ We'll need a shared folder between the host and the Hadoop 
container for transfe
 
 Let's create some folders under `/tmp`, we will use these later when starting 
the Hadoop container:
 
-```
+```bash
 mkdir -p /tmp/shared
 mkdir -p /tmp/shared/hadoop-xml
 ```
@@ -54,13 +54,13 @@ On the host machine, add the following entry to 
`/etc/hosts`:
 
 Once the `/tmp/shared` folder has been created and the `etc/hosts` entry has 
been added, run the following command to start the Hadoop container.
 
-```
+```bash
 docker run -it  -h druid-hadoop-demo -p 50010:50010 -p 50020:50020 -p 50075:50075 -p 50090:50090 -p 8020:8020 -p 10020:10020 -p 19888:19888 -p 8030:8030 -p 8031:8031 -p 8032:8032 -p 8033:8033 -p 8040:8040 -p 8042:8042 -p 8088:8088 -p 8443:8443 -p 2049:2049 -p 9000:9000 -p 49707:49707 -p 2122:2122 -p 34455:34455 -v /tmp/shared:/shared druid-hadoop-demo:2.7.3 /etc/bootstrap.sh -bash
 ```
 
 Once the container is started, your terminal will attach to a bash shell 
running inside the container:
 
-```
+```bash
 Starting sshd:                                             [  OK  ]
 18/07/26 17:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Starting namenodes on [druid-hadoop-demo]
@@ -80,9 +80,9 @@ The `Unable to load native-hadoop library for your 
platform... using builtin-jav
 
 ### Copy input data to the Hadoop container
 
-From the druid-${DRUIDVERSION} package root on the host, copy the `quickstart/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
+From the druid-#{DRUIDVERSION} package root on the host, copy the `quickstart/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
 
-```
+```bash
 cp quickstart/wikiticker-2015-09-12-sampled.json.gz 
/tmp/shared/wikiticker-2015-09-12-sampled.json.gz
 ```
 
@@ -90,7 +90,7 @@ cp quickstart/wikiticker-2015-09-12-sampled.json.gz 
/tmp/shared/wikiticker-2015-
 
 In the Hadoop container's shell, run the following commands to setup the HDFS 
directories needed by this tutorial and copy the input data to HDFS.
 
-```
+```bash
 cd /usr/local/hadoop/bin
 ./hadoop fs -mkdir /druid
 ./hadoop fs -mkdir /druid/segments
@@ -113,13 +113,13 @@ Some additional steps are needed to configure the Druid 
cluster for Hadoop batch
 
 From the Hadoop container's shell, run the following command to copy the 
Hadoop .xml configuration files to the shared folder:
 
-```
+```bash
 cp /usr/local/hadoop/etc/hadoop/*.xml /shared/hadoop-xml
 ```
 
 From the host machine, run the following, where {PATH_TO_DRUID} is replaced by 
the path to the Druid package.
 
-```
+```bash
 cp /tmp/shared/hadoop-xml/*.xml 
{PATH_TO_DRUID}/examples/conf/druid/_common/hadoop-xml/
 ```
 
@@ -201,14 +201,14 @@ indicating "fully available": 
[http://localhost:8081/#/](http://localhost:8081/#
 Your data should become fully available within a minute or two after the task 
completes. You can monitor this process on 
 your Coordinator console at 
[http://localhost:8081/#/](http://localhost:8081/#/).
 
-Please follow the [query tutorial](../tutorial/tutorial-query.html) to run some example queries on the newly loaded data.
+Please follow the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
 
 ## Cleanup
 
-This tutorial is only meant to be used together with the [query tutorial](../tutorial/tutorial-query.html).
+This tutorial is only meant to be used together with the [query tutorial](../tutorials/tutorial-query.html).
 
 If you wish to go through any of the other tutorials, you will need to:
-* Shut down the cluster and reset the cluster state by following the [reset instructions](index.html#resetting-the-cluster).
+* Shut down the cluster and reset the cluster state by following the [reset instructions](index.html#resetting-cluster-state).
 * Revert the deep storage and task storage config back to local types in `examples/conf/druid/_common/common.runtime.properties`
 * Restart the cluster
 
diff --git a/docs/content/tutorials/tutorial-batch.md b/docs/content/tutorials/tutorial-batch.md
index 2aaf058..0b5d23f 100644
--- a/docs/content/tutorials/tutorial-batch.md
+++ b/docs/content/tutorials/tutorial-batch.md
@@ -19,7 +19,7 @@ A data load is initiated by submitting an *ingestion task* 
spec to the Druid ove
 We have provided an ingestion spec at `examples/wikipedia-index.json`, shown 
here for convenience,
 which has been configured to read the 
`quickstart/wikiticker-2015-09-12-sampled.json.gz` input file:
 
-```
+```json
 {
   "type" : "index",
   "spec" : {
@@ -121,11 +121,11 @@ indicating "fully available": 
[http://localhost:8081/#/](http://localhost:8081/#
 Your data should become fully available within a minute or two. You can 
monitor this process on 
 your Coordinator console at 
[http://localhost:8081/#/](http://localhost:8081/#/).
 
-Once the data is loaded, please follow the [query tutorial](../tutorial/tutorial-query.html) to run some example queries on the newly loaded data.
+Once the data is loaded, please follow the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
 
 ## Cleanup
 
-If you wish to go through any of the other ingestion tutorials, you will need to reset the cluster and follow these [reset instructions](index.html#resetting-the-cluster), as the other tutorials will write to the same "wikipedia" datasource.
+If you wish to go through any of the other ingestion tutorials, you will need to reset the cluster and follow these [reset instructions](index.html#resetting-cluster-state), as the other tutorials will write to the same "wikipedia" datasource.
 
 ## Further reading
 
diff --git a/docs/content/tutorials/tutorial-compaction.md b/docs/content/tutorials/tutorial-compaction.md
index 73c182a..2b02048 100644
--- a/docs/content/tutorials/tutorial-compaction.md
+++ b/docs/content/tutorials/tutorial-compaction.md
@@ -11,7 +11,7 @@ Because there is some per-segment memory and processing 
overhead, it can sometim
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine. 
 
-It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html) and [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html).
+It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
 
 ## Load the initial data
 
@@ -19,7 +19,7 @@ For this tutorial, we'll be using the Wikipedia edits sample 
data, with an inges
 
 The ingestion spec can be found at `examples/compaction-init-index.json`. 
Let's submit that spec, which will create a datasource called 
`compaction-tutorial`:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/compaction-init-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -35,7 +35,7 @@ Running a COUNT(*) query on this datasource shows that there 
are 39,244 rows:
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/compaction-count-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [{"EXPR$0":39244}]
 ```
 
@@ -45,7 +45,7 @@ Let's now combine these 24 segments into one segment.
 
 We have included a compaction task spec for this tutorial datasource at 
`examples/compaction-final-index.json`:
 
-```
+```json
 {
   "type": "compact",
   "dataSource": "compaction-tutorial",
@@ -67,7 +67,7 @@ In this tutorial example, only one compacted segment will be 
created, as the 392
 
 Let's submit this task now:
 
-```
+```json
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/compaction-final-index.json 
http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -88,7 +88,7 @@ Let's try running a COUNT(*) on `compaction-tutorial` again, 
where the row count
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/compaction-count-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [{"EXPR$0":39244}]
 ```
 
diff --git a/docs/content/tutorials/tutorial-delete-data.md b/docs/content/tutorials/tutorial-delete-data.md
index 19f0b57..d950c08 100644
--- a/docs/content/tutorials/tutorial-delete-data.md
+++ b/docs/content/tutorials/tutorial-delete-data.md
@@ -9,7 +9,7 @@ This tutorial demonstrates how to delete existing data.
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine. 
 
-Completing [Tutorial: Configuring retention](/docs/VERSION/tutorials/tutorial-retention.html) first is highly recommended, as we will be using retention rules in this tutorial.
+Completing [Tutorial: Configuring retention](../tutorials/tutorial-retention.html) first is highly recommended, as we will be using retention rules in this tutorial.
 
 ## Load initial data
 
@@ -17,7 +17,7 @@ In this tutorial, we will use the Wikipedia edits data, with 
an indexing spec th
 
 Let's load this initial data:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/deletion-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -48,9 +48,9 @@ In the `rule #2` box at the bottom, click `Drop` and 
`Forever`.
 
 This will cause the first 12 segments of `deletion-tutorial` to be dropped. 
However, these dropped segments are not removed from deep storage.
 
-You can see that all 24 segments are still present in deep storage by listing the contents of `druid-{DRUIDVERSION}/var/druid/segments/deletion-tutorial`:
+You can see that all 24 segments are still present in deep storage by listing the contents of `var/druid/segments/deletion-tutorial`:
 
-```
+```bash
 $ ls -l1 var/druid/segments/deletion-tutorial/
 2015-09-12T00:00:00.000Z_2015-09-12T01:00:00.000Z
 2015-09-12T01:00:00.000Z_2015-09-12T02:00:00.000Z
@@ -90,7 +90,7 @@ The top of the info box shows the full segment ID, e.g. 
`deletion-tutorial_2016-
 
 Let's disable the hour 14 segment by sending the following DELETE request to 
the coordinator, where {SEGMENT-ID} is the full segment ID shown in the info 
box:
 
-```
+```bash
 curl -XDELETE 
http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/segments/{SEGMENT-ID}
 ```
 
@@ -100,7 +100,7 @@ After that command completes, you should see that the 
segment for hour 14 has be
 
 Note that the hour 14 segment is still in deep storage:
 
-```
+```bash
 $ ls -l1 var/druid/segments/deletion-tutorial/
 2015-09-12T00:00:00.000Z_2015-09-12T01:00:00.000Z
 2015-09-12T01:00:00.000Z_2015-09-12T02:00:00.000Z
@@ -134,13 +134,13 @@ Now that we have disabled some segments, we can submit a 
Kill Task, which will d
 
 A Kill Task spec has been provided at `examples/deletion-kill.json`. Submit 
this task to the Overlord with the following command:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/deletion-kill.json http://localhost:8090/druid/indexer/v1/task
 ```
 
 After this task completes, you can see that the disabled segments have now 
been removed from deep storage:
 
-```
+```bash
 $ ls -l1 var/druid/segments/deletion-tutorial/
 2015-09-12T12:00:00.000Z_2015-09-12T13:00:00.000Z
 2015-09-12T13:00:00.000Z_2015-09-12T14:00:00.000Z
diff --git a/docs/content/tutorials/tutorial-ingestion-spec.md b/docs/content/tutorials/tutorial-ingestion-spec.md
index 65a0282..a691b29 100644
--- a/docs/content/tutorials/tutorial-ingestion-spec.md
+++ b/docs/content/tutorials/tutorial-ingestion-spec.md
@@ -9,7 +9,7 @@ This tutorial will guide the reader through the process of 
defining an ingestion
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine. 
 
-It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html), [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html), and [Tutorial: Rollup](/docs/VERSION/tutorials/tutorial-rollup.html).
+It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html), [Tutorial: Querying data](../tutorials/tutorial-query.html), and [Tutorial: Rollup](../tutorials/tutorial-rollup.html).
 
 ## Example data
 
@@ -24,7 +24,7 @@ Suppose we have the following network flow data:
 * `bytes`: number of bytes transmitted
 * `cost`: the cost of sending the traffic
 
-```
+```json
 {"ts":"2018-01-01T01:01:35Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", 
"srcPort":2000, "dstPort":3000, "protocol": 6, "packets":10, "bytes":1000, 
"cost": 1.4}
 {"ts":"2018-01-01T01:01:51Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", 
"srcPort":2000, "dstPort":3000, "protocol": 6, "packets":20, "bytes":2000, 
"cost": 3.1}
 {"ts":"2018-01-01T01:01:59Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", 
"srcPort":2000, "dstPort":3000, "protocol": 6, "packets":30, "bytes":3000, 
"cost": 0.4}
@@ -74,7 +74,7 @@ A `dataSchema` has a `parser` field, which defines the parser 
that Druid will us
 
 Since our input data is represented as JSON strings, we'll use a `string` 
parser with `json` format:
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -92,7 +92,7 @@ The `parser` needs to know how to extract the main timestamp 
field from the inpu
 
 The timestamp column in our input data is named "ts", containing ISO 8601 
timestamps, so let's add a `timestampSpec` with that information to the 
`parseSpec`:
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -128,7 +128,7 @@ For this tutorial, let's enable rollup. This is specified 
with a `granularitySpe
 
 Note that the `granularitySpec` lies outside of the `parser`. We will revist 
the `parser` soon when we define our dimensions and metrics.
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -163,7 +163,7 @@ Let's look at how to define these dimensions and metrics 
within the ingestion sp
 
 Dimensions are specified with a `dimensionsSpec` inside the `parseSpec`.
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -255,7 +255,7 @@ Note that we have also defined a `count` aggregator. The 
count aggregator will t
 
 If we were not using rollup, all columns would be specified in the 
`dimensionsSpec`, e.g.:
 
-```
+```json
       "dimensionsSpec" : {
         "dimensions": [
           "srcIP",
@@ -284,7 +284,7 @@ There are some additional properties we need to set in the 
`granularitySpec`:
 
 Segment granularity is configured by the `segmentGranularity` property in the 
`granularitySpec`. For this tutorial, we'll create hourly segments:
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -326,7 +326,7 @@ Our input data has events from two separate hours, so this 
task will generate tw
 
 The query granularity is configured by the `queryGranularity` property in the 
`granularitySpec`. For this tutorial, let's use minute granularity:
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -365,13 +365,13 @@ The query granularity is configured by the 
`queryGranularity` property in the `g
 
 To see the effect of the query granularity, let's look at this row from the 
raw input data:
 
-```
+```json
 {"ts":"2018-01-01T01:03:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", 
"srcPort":5000, "dstPort":7000, "protocol": 6, "packets":60, "bytes":6000, 
"cost": 4.3}
 ```
 
 When this row is ingested with minute queryGranularity, Druid will floor the 
row's timestamp to minute buckets:
 
-```
+```json
 {"ts":"2018-01-01T01:03:00Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", 
"srcPort":5000, "dstPort":7000, "protocol": 6, "packets":60, "bytes":6000, 
"cost": 4.3}
 ```
 
@@ -381,7 +381,7 @@ For batch tasks, it is necessary to define a time interval. 
Input rows with time
 
 The interval is also specified in the `granularitySpec`:
 
-```
+```json
 "dataSchema" : {
   "dataSource" : "ingestion-tutorial",
   "parser" : {
@@ -425,7 +425,7 @@ We've now finished defining our `dataSchema`. The remaining 
steps are to place t
 
 The `dataSchema` is shared across all task types, but each task type has its 
own specification format. For this tutorial, we will use the native batch 
ingestion task:
 
-```
+```json
 {
   "type" : "index",
   "spec" : {
@@ -473,7 +473,7 @@ The `dataSchema` is shared across all task types, but each 
task type has its own
 Now let's define our input source, which is specified in an `ioConfig` object. 
Each task type has its own type of `ioConfig`. The native batch task uses 
"firehoses" to read input data, so let's configure a "local" firehose to read 
the example netflow data we saved earlier:
 
 
-```
+```json
     "ioConfig" : {
       "type" : "index",
       "firehose" : {
@@ -484,7 +484,7 @@ Now let's define our input source, which is specified in an 
`ioConfig` object. E
     }
 ```
 
-```
+```json
 {
   "type" : "index",
   "spec" : {
@@ -541,7 +541,7 @@ Each ingestion task has a `tuningConfig` section that 
allows users to tune vario
 
 As an example, let's add a `tuningConfig` that sets a target segment size for 
the native batch ingestion task:
 
-```
+```json
     "tuningConfig" : {
       "type" : "index",
       "targetPartitionSize" : 5000000
@@ -554,7 +554,7 @@ Note that each ingestion task has its own type of 
`tuningConfig`.
 
 We've finished defining the ingestion spec, it should now look like the 
following:
 
-```
+```json
 {
   "type" : "index",
   "spec" : {
@@ -611,9 +611,9 @@ We've finished defining the ingestion spec, it should now 
look like the followin
 
 ## Submit the task and query the data
 
-From the druid-${DRUIDVERSION} package root, run the following command:
+From the druid-#{DRUIDVERSION} package root, run the following command:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/ingestion-tutorial-index.json 
http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -625,7 +625,7 @@ Let's issue a `select * from "ingestion-tutorial";` query 
to see what data was i
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/ingestion-tutorial-select-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [
   {
     "__time": "2018-01-01T01:01:00.000Z",
diff --git a/docs/content/tutorials/tutorial-kafka.md b/docs/content/tutorials/tutorial-kafka.md
index 77b83e7..cae026f 100644
--- a/docs/content/tutorials/tutorial-kafka.md
+++ b/docs/content/tutorials/tutorial-kafka.md
@@ -48,7 +48,7 @@ curl -XPOST -H'Content-Type: application/json' -d 
@examples/wikipedia-kafka-supe
 If the supervisor was successfully created, you will get a response containing 
the ID of the supervisor; in our case we should see `{"id":"wikipedia-kafka"}`.
 
 For more details about what's going on here, check out the
-[Druid Kafka indexing service documentation](http://druid.io/docs/{{druidVersion}}/development/extensions-core/kafka-ingestion.html).
+[Druid Kafka indexing service documentation](../development/extensions-core/kafka-ingestion.html).
 
 ## Load data
 
@@ -67,12 +67,12 @@ The previous command posted sample events to the 
*wikipedia* Kafka topic which w
 
 After data is sent to the Kafka stream, it is immediately available for 
querying.
 
-Please follow the [query tutorial](../tutorial/tutorial-query.html) to run some example queries on the newly loaded data.
+Please follow the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
 
 ## Cleanup
 
-If you wish to go through any of the other ingestion tutorials, you will need to reset the cluster and follow these [reset instructions](index.html#resetting-the-cluster), as the other tutorials will write to the same "wikipedia" datasource.
+If you wish to go through any of the other ingestion tutorials, you will need to reset the cluster and follow these [reset instructions](index.html#resetting-cluster-state), as the other tutorials will write to the same "wikipedia" datasource.
 
 ## Further reading
 
-For more information on loading data from Kafka streams, please see the [Druid Kafka indexing service documentation](http://druid.io/docs/{{druidVersion}}/development/extensions-core/kafka-ingestion.html).
+For more information on loading data from Kafka streams, please see the [Druid Kafka indexing service documentation](../development/extensions-core/kafka-ingestion.html).
diff --git a/docs/content/tutorials/tutorial-query.md b/docs/content/tutorials/tutorial-query.md
index 3744df0..9f4a441 100644
--- a/docs/content/tutorials/tutorial-query.md
+++ b/docs/content/tutorials/tutorial-query.md
@@ -8,10 +8,10 @@ This tutorial will demonstrate how to query data in Druid, 
with examples for Dru
 
 The tutorial assumes that you've already completed one of the 4 ingestion 
tutorials, as we will be querying the sample Wikipedia edits data.
 
-* [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html)
-* [Tutorial: Loading stream data from Kafka](/docs/VERSION/tutorials/tutorial-kafka.html)
-* [Tutorial: Loading a file using Hadoop](/docs/VERSION/tutorials/tutorial-batch-hadoop.html)
-* [Tutorial: Loading stream data using Tranquility](/docs/VERSION/tutorials/tutorial-tranquility.html)
+* [Tutorial: Loading a file](../tutorials/tutorial-batch.html)
+* [Tutorial: Loading stream data from Kafka](../tutorials/tutorial-kafka.html)
+* [Tutorial: Loading a file using Hadoop](../tutorials/tutorial-batch-hadoop.html)
+* [Tutorial: Loading stream data using Tranquility](../tutorials/tutorial-tranquility.html)
 
 ## Native JSON queries
 
@@ -102,7 +102,7 @@ curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/wikipedia-top-pag
 
 The following results should be returned:
 
-```
+```json
 [
   {
     "page": "Wikipedia:Vandalismusmeldung",
@@ -165,7 +165,7 @@ curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/wikipedia-timeser
 
 The following results should be returned:
 
-```
+```json
 [
   {
     "HourTime": "2015-09-12T00:00:00.000Z",
@@ -275,7 +275,7 @@ curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/wikipedia-groupby
 
 The following results should be returned:
 
-```
+```json
 [
   {
     "channel": "#en.wikipedia",
@@ -347,7 +347,8 @@ curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/wikipedia-explain
 ```
 
 This will return the following plan:
-```
+
+```json
 [
   {
     "PLAN": 
"DruidQueryRel(query=[{\"queryType\":\"topN\",\"dataSource\":{\"type\":\"table\",\"name\":\"wikipedia\"},\"virtualColumns\":[],\"dimension\":{\"type\":\"default\",\"dimension\":\"page\",\"outputName\":\"d0\",\"outputType\":\"STRING\"},\"metric\":{\"type\":\"numeric\",\"metric\":\"a0\"},\"threshold\":10,\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.001Z\"]},\"filter\":null,\"granularity\":{\"type\":\"all\"},\"aggregations\"
 [...]
@@ -357,6 +358,6 @@ This will return the following plan:
 
 ## Further reading
 
-The [Queries documentation](/docs/VERSION/querying/querying.html) has more information on Druid's native JSON queries.
+The [Queries documentation](../querying/querying.html) has more information on Druid's native JSON queries.
 
-The [Druid SQL documentation](/docs/VERSION/querying/sql.html) has more information on using Druid SQL queries.
\ No newline at end of file
+The [Druid SQL documentation](../querying/sql.html) has more information on using Druid SQL queries.
\ No newline at end of file
diff --git a/docs/content/tutorials/tutorial-retention.md b/docs/content/tutorials/tutorial-retention.md
index afea218..d106afa 100644
--- a/docs/content/tutorials/tutorial-retention.md
+++ b/docs/content/tutorials/tutorial-retention.md
@@ -9,7 +9,7 @@ This tutorial demonstrates how to configure retention rules on 
a datasource to s
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine. 
 
-It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html) and [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html).
+It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
 
 ## Load the example data
 
@@ -17,7 +17,7 @@ For this tutorial, we'll be using the Wikipedia edits sample 
data, with an inges
 
 The ingestion spec can be found at `examples/retention-index.json`. Let's 
submit that spec, which will create a datasource called `retention-tutorial`:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/retention-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -67,13 +67,11 @@ The segments for the first 12 hours of 2015-09-12 are now 
gone:
 
 The resulting retention rule chain is the following:
 
-```
-loadByInterval 2015-09-12T12/2015-09-13 (12 hours)
+* loadByInterval 2015-09-12T12/2015-09-13 (12 hours)
 
-dropForever
+* dropForever
 
-loadForever (default rule)
-```
+*  loadForever (default rule)
 
 The rule chain is evaluated from top to bottom, with the default rule chain 
always added at the bottom.
 
@@ -89,4 +87,4 @@ If instead you want to retain data based on how old it is 
(e.g., retain data tha
 
 ## Further reading
 
-* [Load rules](/docs/VERSION/operations/rule-configuration.html)
+* [Load rules](../operations/rule-configuration.html)
diff --git a/docs/content/tutorials/tutorial-rollup.md b/docs/content/tutorials/tutorial-rollup.md
index 8fe4584..2b631eb 100644
--- a/docs/content/tutorials/tutorial-rollup.md
+++ b/docs/content/tutorials/tutorial-rollup.md
@@ -11,13 +11,13 @@ This tutorial will demonstrate the effects of roll-up on an 
example dataset.
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine.
 
-It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html) and [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html).
+It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
 
 ## Example data
 
 For this tutorial, we'll use a small sample of network flow event data, 
representing packet and byte counts for traffic from a source to a destination 
IP address that occurred within a particular second.
 
-```
+```json
 {"timestamp":"2018-01-01T01:01:35Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":20,"bytes":9024}
 {"timestamp":"2018-01-01T01:01:51Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":255,"bytes":21133}
 {"timestamp":"2018-01-01T01:01:59Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":11,"bytes":5780}
@@ -33,7 +33,7 @@ A file containing this sample input data is located at 
`examples/rollup-data.jso
 
 We'll ingest this data using the following ingestion task spec, located at 
`examples/rollup-index.json`.
 
-```
+```json
 {
   "type" : "index",
   "spec" : {
@@ -95,9 +95,9 @@ We will see how these definitions are used after we load this 
data.
 
 ## Load the example data
 
-From the druid-${DRUIDVERSION} package root, run the following command:
+From the druid-#{DRUIDVERSION} package root, run the following command:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/rollup-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -113,7 +113,7 @@ curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/rollup-select-sql
 
 The following results will be returned:
 
-```
+```json
 [
   {
     "__time": "2018-01-01T01:01:00.000Z",
@@ -160,7 +160,7 @@ The following results will be returned:
 
 Let's look at the three events in the original input data that occurred during 
`2018-01-01T01:01`:
 
-```
+```json
 {"timestamp":"2018-01-01T01:01:35Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":20,"bytes":9024}
 {"timestamp":"2018-01-01T01:01:51Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":255,"bytes":21133}
 {"timestamp":"2018-01-01T01:01:59Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":11,"bytes":5780}
@@ -168,7 +168,7 @@ Let's look at the three events in the original input data 
that occurred during `
 
 These three rows have been "rolled up" into the following row:
 
-```
+```json
   {
     "__time": "2018-01-01T01:01:00.000Z",
     "bytes": 35937,
@@ -185,12 +185,12 @@ Before the grouping occurs, the timestamps of the 
original input data are bucket
 
 Likewise, these two events that occurred during `2018-01-01T01:02` have been 
rolled up:
 
-```
+```json
 {"timestamp":"2018-01-01T01:02:14Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":38,"bytes":6289}
 {"timestamp":"2018-01-01T01:02:29Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":377,"bytes":359971}
 ```
 
-```
+```json
   {
     "__time": "2018-01-01T01:02:00.000Z",
     "bytes": 366260,
@@ -203,11 +203,11 @@ Likewise, these two events that occurred during 
`2018-01-01T01:02` have been rol
 
 For the last event recording traffic between 1.1.1.1 and 2.2.2.2, no roll-up 
took place, because this was the only event that occurred during 
`2018-01-01T01:03`:
 
-```
+```json
 {"timestamp":"2018-01-01T01:03:29Z","srcIP":"1.1.1.1", 
"dstIP":"2.2.2.2","packets":49,"bytes":10204}
 ```
 
-```
+```json
   {
     "__time": "2018-01-01T01:03:00.000Z",
     "bytes": 10204,
diff --git a/docs/content/tutorials/tutorial-tranquility.md b/docs/content/tutorials/tutorial-tranquility.md
index 773355c..3e9536d 100644
--- a/docs/content/tutorials/tutorial-tranquility.md
+++ b/docs/content/tutorials/tutorial-tranquility.md
@@ -18,7 +18,7 @@ don't need to have loaded any data yet.
 
 In the Druid package root, run the following commands:
 
-```
+```bash
 curl 
http://static.druid.io/tranquility/releases/tranquility-distribution-0.8.2.tgz 
-o tranquility-distribution-0.8.2.tgz
 tar -xzf tranquility-distribution-0.8.2.tgz
 cd tranquility-distribution-0.8.2
@@ -28,7 +28,7 @@ cd tranquility-distribution-0.8.2
 
 Run the following command:
 
-```
+```bash
 bin/tranquility server -configFile 
../examples/conf/tranquility/wikipedia-server.json 
-Ddruid.extensions.loadList=[]
 ```
 
@@ -36,13 +36,13 @@ bin/tranquility server -configFile 
../examples/conf/tranquility/wikipedia-server
 
 Let's send the sample Wikipedia edits data to Tranquility:
 
-```
+```bash
 curl -XPOST -H'Content-Type: application/json' --data-binary 
@quickstart/wikiticker-2015-09-12-sampled.json 
http://localhost:8200/v1/post/wikipedia
 ```
 
 Which will print something like:
 
-```
+```json
 {"result":{"received":39244,"sent":39244}}
 ```
 
@@ -56,20 +56,17 @@ Once the data is sent to Druid, you can immediately query 
it.
 
 If you see a `sent` count of 0, retry the send command until the `sent` count 
also shows 39244:
 
-```
+```json
 {"result":{"received":39244,"sent":0}}
 ```
 
 ## Querying your data
 
-Please follow the [query tutorial](../tutorial/tutorial-query.html) to run some example queries on the newly loaded data.
+Please follow the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
 
 ## Cleanup
 
-If you wish to go through any of the other ingestion tutorials, you will need to reset the cluster and follow these [reset instructions](index.html#resetting-the-cluster), as the other tutorials will write to the same "wikipedia" datasource.
-
-When cleaning up after running this Tranquility tutorial, it is also necessary to recomment the `tranquility-server` line in `quickstart/tutorial/conf/tutorial-cluster.conf` before restarting the cluster.
-
+If you wish to go through any of the other ingestion tutorials, you will need to reset the cluster and follow these [reset instructions](index.html#resetting-cluster-state), as the other tutorials will write to the same "wikipedia" datasource.
 
 ## Further reading
 
diff --git a/docs/content/tutorials/tutorial-transform-spec.md b/docs/content/tutorials/tutorial-transform-spec.md
index 4d3ab06..23c750e 100644
--- a/docs/content/tutorials/tutorial-transform-spec.md
+++ b/docs/content/tutorials/tutorial-transform-spec.md
@@ -9,13 +9,13 @@ This tutorial will demonstrate how to use transform specs to 
filter and transfor
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine.
 
-It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html) and [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html).
+It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html) and [Tutorial: Querying data](../tutorials/tutorial-query.html).
 
 ## Sample data
 
 We've included sample data for this tutorial at 
`examples/transform-data.json`, reproduced here for convenience:
 
-```
+```json
 {"timestamp":"2018-01-01T07:01:35Z","animal":"octopus",  "location":1, 
"number":100}
 {"timestamp":"2018-01-01T05:01:35Z","animal":"mongoose", 
"location":2,"number":200}
 {"timestamp":"2018-01-01T06:01:35Z","animal":"snake", "location":3, 
"number":300}
@@ -26,7 +26,7 @@ We've included sample data for this tutorial at 
`examples/transform-data.json`,
 
 We will ingest the sample data using the following spec, which demonstrates 
the use of transform specs:
 
-```
+```json
 {
   "type" : "index",
   "spec" : {
@@ -113,9 +113,9 @@ Additionally, we have an OR filter with three clauses:
 
 This filter selects the first 3 rows, and it will exclude the final "lion" row 
in the input data. Note that the filter is applied after the transformation.
 
-Let's submit this task now, which has been included at `quickstart/tutorial/transform-index.json`:
+Let's submit this task now, which has been included at `examples/transform-index.json`:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/transform-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -123,11 +123,11 @@ curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/transform-index.j
 
 Let's a `select * from "transform-tutorial";` query to see what was ingested:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/transform-select-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [
   {
     "__time": "2018-01-01T05:01:00.000Z",
diff --git a/docs/content/tutorials/tutorial-update-data.md b/docs/content/tutorials/tutorial-update-data.md
index e37b18a..d7fb38e 100644
--- a/docs/content/tutorials/tutorial-update-data.md
+++ b/docs/content/tutorials/tutorial-update-data.md
@@ -9,7 +9,7 @@ This tutorial demonstrates how to update existing data, showing 
both overwrites
 For this tutorial, we'll assume you've already downloaded Druid as described 
in 
 the [single-machine quickstart](index.html) and have it running on your local 
machine. 
 
-It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html), [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html), and [Tutorial: Rollup](/docs/VERSION/tutorials/tutorial-rollup.html).
+It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.html), [Tutorial: Querying data](../tutorials/tutorial-query.html), and [Tutorial: Rollup](../tutorials/tutorial-rollup.html).
 
 ## Overwrite
 
@@ -23,7 +23,7 @@ The spec we'll use for this tutorial is located at 
`examples/updates-init-index.
 
 Let's submit that task:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/updates-init-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -33,7 +33,7 @@ We have three initial rows containing an "animal" dimension 
and "number" metric:
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/updates-select-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [
   {
     "__time": "2018-01-01T01:01:00.000Z",
@@ -66,7 +66,7 @@ Note that this task reads input from 
`examples/updates-data2.json`, and `appendT
 
 Let's submit that task:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/updates-overwrite-index.json 
http://localhost:8090/druid/indexer/v1/task
 ```
 
@@ -76,7 +76,7 @@ When Druid finishes loading the new segment from this 
overwrite task, the "tiger
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/updates-select-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [
   {
     "__time": "2018-01-01T01:01:00.000Z",
@@ -108,13 +108,13 @@ The `examples/updates-append-index.json` task spec reads 
input from `examples/up
 
 Let's submit that task:
 
-```
+```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/updates-append-index.json http://localhost:8090/druid/indexer/v1/task
 ```
 
 When the new data is loaded, we can see two additional rows after "octopus". 
Note that the new "bear" row with number 222 has not been rolled up with the 
existing bear-111 row, because the new data is held in a separate segment. The 
same applies to the two "lion" rows.
 
-```
+```json
 [
   {
     "__time": "2018-01-01T01:01:00.000Z",
@@ -181,7 +181,7 @@ If we run a GroupBy query instead of a `select *`, we can 
see that the separate
 curl -X 'POST' -H 'Content-Type:application/json' -d 
@examples/updates-groupby-sql.json http://localhost:8082/druid/v2/sql
 ```
 
-```
+```json
 [
   {
     "__time": "2018-01-01T01:01:00.000Z",

