[GitHub] surekhasaharan commented on a change in pull request #6126: New quickstart and tutorials

2018-08-09 Thread GitBox
surekhasaharan commented on a change in pull request #6126: New quickstart and 
tutorials
URL: https://github.com/apache/incubator-druid/pull/6126#discussion_r209108427
 
 

 ##
 File path: docs/content/tutorials/tutorial-kafka.md
 ##
 @@ -37,149 +30,56 @@ Start a Kafka broker by running the following command in a new terminal:
 ./bin/kafka-server-start.sh config/server.properties
 ```
 
-Run this command to create a Kafka topic called *metrics*, to which we'll send data:
+Run this command to create a Kafka topic called *wikipedia*, to which we'll send data:
 
 ```bash
-./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic metrics
+./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wikipedia
 ```
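(An optional check, not part of the original tutorial: you can confirm the topic exists by listing topics from the Kafka directory.)

```bash
# Optional: list topics to verify that "wikipedia" was created.
./bin/kafka-topics.sh --list --zookeeper localhost:2181
```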
 
-## Send example data
-
-Let's launch a console producer for our topic and send some data!
-
-In your Druid directory, generate some metrics by running:
-
-```bash
-bin/generate-example-metrics
-```
+## Enable Druid Kafka ingestion
 
-In your Kafka directory, run:
+We will use Druid's Kafka indexing service to ingest messages from our newly created *wikipedia* topic. To start the
+service, we will need to submit a supervisor spec to the Druid overlord by running the following from the Imply directory:
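(The excerpt cuts off before the command itself appears. As a hedged sketch rather than the tutorial's literal text: a supervisor spec is POSTed to the Overlord's supervisor endpoint; the spec file path below is an assumption for illustration.)

```bash
# Sketch only: submit a Kafka supervisor spec to the Overlord (default port 8090).
# The spec path is assumed; substitute the file shipped with your distribution.
curl -XPOST -H 'Content-Type: application/json' \
  -d @quickstart/tutorial/wikipedia-kafka-supervisor.json \
  http://localhost:8090/druid/indexer/v1/supervisor
```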
 
 Review comment:
   Imply Directory?  :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6126: New quickstart and tutorials

2018-08-09 Thread GitBox
surekhasaharan commented on a change in pull request #6126: New quickstart and 
tutorials
URL: https://github.com/apache/incubator-druid/pull/6126#discussion_r209065442
 
 

 ##
 File path: docs/content/tutorials/tutorial-batch-hadoop.md
 ##
 @@ -0,0 +1,247 @@
+---
+layout: doc_page
+---
+
+# Tutorial: Load batch data using Hadoop
+
+This tutorial shows you how to load data files into Druid using a remote Hadoop cluster.
+
+For this tutorial, we'll assume that you've already completed the previous [batch ingestion tutorial](tutorial-batch.html) using Druid's native batch ingestion system.
+
+## Install Docker
+
+This tutorial requires [Docker](https://docs.docker.com/install/) to be installed on the tutorial machine.
+
+Once the Docker install is complete, please proceed to the next steps in the tutorial.
+
+## Build the Hadoop docker image
+
+For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.3 cluster, which we'll use to run the batch indexing task.
+
+This Dockerfile and related files are located at `quickstart/tutorial/hadoop/docker`.
+
+From the druid-${DRUIDVERSION} package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":
+
+```
+cd quickstart/tutorial/hadoop/docker
+docker build -t druid-hadoop-demo:2.8.3 .
+```
+
+This will start building the Hadoop image. Once the image build is done, you should see the message `Successfully tagged druid-hadoop-demo:2.8.3` printed to the console.
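(Optional, and not in the original excerpt: you can confirm the image name and tag before proceeding.)

```bash
# Optional: list the newly built image to confirm its name and tag.
docker images druid-hadoop-demo
```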
+
+## Set up the Hadoop docker cluster
+
+### Create temporary shared directory
+
+We'll need a shared folder between the host and the Hadoop container for transferring some files.
+
+Let's create some folders under `/tmp`; we will use these later when starting the Hadoop container:
+
+```
+mkdir -p /tmp/shared
+mkdir -p /tmp/shared/hadoop_xml
+```
+
+### Configure /etc/hosts
+
+On the host machine, add the following entry to `/etc/hosts`:
+
+```
+127.0.0.1 druid-hadoop-demo
+```
+
+### Start the Hadoop container
+
+Once the `/tmp/shared` folder has been created and the `/etc/hosts` entry has been added, run the following command to start the Hadoop container:
+
+```
+docker run -it -h druid-hadoop-demo -p 50010:50010 -p 50020:50020 -p 50075:50075 -p 50090:50090 -p 8020:8020 -p 10020:10020 -p 19888:19888 -p 8030:8030 -p 8031:8031 -p 8032:8032 -p 8033:8033 -p 8040:8040 -p 8042:8042 -p 8088:8088 -p 8443:8443 -p 2049:2049 -p 9000:9000 -p 49707:49707 -p 2122:2122 -p 34455:34455 -v /tmp/shared:/shared druid-hadoop-demo:2.8.3 /etc/bootstrap.sh -bash
+```
+
+Once the container is started, your terminal will attach to a bash shell running inside the container:
+
+```
+Starting sshd: [  OK  ]
+18/07/26 17:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+Starting namenodes on [druid-hadoop-demo]
+druid-hadoop-demo: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-druid-hadoop-demo.out
+localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-druid-hadoop-demo.out
+Starting secondary namenodes [0.0.0.0]
+0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-druid-hadoop-demo.out
+18/07/26 17:27:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+starting yarn daemons
+starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-druid-hadoop-demo.out
+localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-druid-hadoop-demo.out
+starting historyserver, logging to /usr/local/hadoop/logs/mapred--historyserver-druid-hadoop-demo.out
+bash-4.1#
+```
+
+The `Unable to load native-hadoop library for your platform... using builtin-java classes where applicable` warning messages can be safely ignored.
+
+### Copy input data to the Hadoop container
+
+From the druid-${DRUIDVERSION} package root on the host, copy the `quickstart/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
+
+```
+cp quickstart/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
+```
+
+### Set up HDFS directories
+
+In the Hadoop container's shell, run the following commands to set up the HDFS directories needed by this tutorial and copy the input data to HDFS:
+
+```
+cd /usr/local/hadoop/bin
+./hadoop fs -mkdir /druid
+./hadoop fs -mkdir /druid/segments
+./hadoop fs -mkdir /quickstart
+./hadoop fs -chmod 777 /druid
+./hadoop fs -chmod 777 /druid/segments
+./hadoop fs -chmod 777 /quickstart
+./hadoop fs -chmod -R 777 /tmp
+./hadoop fs -chmod -R 777 /user
+./hadoop fs -put /shared/wikiticker-2015-09-12-sampled.json.gz /quickstart/wikiticker-2015-09-12-sampled.json.gz
+```
+
+If you encounter namenode errors when running this command, the Hadoop container may not have finished initializing yet; wait a couple of minutes and retry the commands.
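(Assuming the commands succeed, an optional check that is not part of the original excerpt: list the HDFS directory to confirm the upload.)

```bash
# Optional: verify the sample file landed in HDFS (run inside the container).
./hadoop fs -ls /quickstart
```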

[GitHub] surekhasaharan commented on a change in pull request #6126: New quickstart and tutorials

2018-08-09 Thread GitBox
surekhasaharan commented on a change in pull request #6126: New quickstart and 
tutorials
URL: https://github.com/apache/incubator-druid/pull/6126#discussion_r209071989
 
 

 ##
 File path: docs/content/tutorials/tutorial-compaction.md
 ##
 @@ -0,0 +1,103 @@
+---
+layout: doc_page
+---
+
+# Tutorial: Compacting segments
+
+This tutorial demonstrates how to compact existing segments into fewer but larger segments.
+
+For this tutorial, we'll assume you've already downloaded Druid as described in the [single-machine quickstart](index.html) and have it running on your local machine.
+
+It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html) and [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html).
+
+## Load the initial data
+
+For this tutorial, we'll be using the Wikipedia edits sample data, with an ingestion task spec that will create a separate segment for each hour in the input data.
+
+The ingestion spec can be found at `quickstart/tutorial/compaction-init-index.json`. Let's submit that spec, which will create a datasource called `compaction-tutorial`:
+
+```
+bin/post-index-task --file quickstart/tutorial/compaction-init-index.json
+```
+
+After the ingestion completes, go to http://localhost:8081/#/datasources/compaction-tutorial in a browser to view information about the new datasource in the Coordinator console.
+
+There will be 24 segments for this datasource, one segment per hour in the input data:
+
+![Original segments](../tutorials/img/tutorial-retention-01.png "Original segments")
+
+Running a COUNT(*) query on this datasource shows that there are 39,244 rows:
+
+```
+dsql> select count(*) from "compaction-tutorial";
+┌────────┐
+│ EXPR$0 │
+├────────┤
+│  39244 │
+└────────┘
+Retrieved 1 row in 1.38s.
+```
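(An aside not in the excerpt: the `dsql>` prompt above is Druid's bundled SQL shell; assuming the standard quickstart package layout, it is launched from the package root.)

```bash
# Launch the bundled SQL shell (assumed standard package layout), then run the
# COUNT(*) query shown above at the dsql> prompt.
bin/dsql
```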
+
+## Compact the data
+
+Let's now combine these 22 segments into one segment.
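(The excerpt ends before the compaction spec itself appears. As a hedged sketch rather than the tutorial's actual file: a compaction task names the task type, the datasource, and the interval to compact; the interval below is an assumption based on the sample data's date, and the task is submitted to the Overlord like any other indexing task.)

```bash
# Sketch only: a minimal "compact" task spec with an assumed interval, written
# to a temp file and POSTed to the Overlord's task endpoint.
cat > /tmp/compaction-sketch.json <<'EOF'
{
  "type": "compact",
  "dataSource": "compaction-tutorial",
  "interval": "2015-09-12/2015-09-13"
}
EOF
curl -XPOST -H 'Content-Type: application/json' \
  -d @/tmp/compaction-sketch.json \
  http://localhost:8090/druid/indexer/v1/task
```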
 
 Review comment:
   Not a comment on docs, but I didn't understand how it's 22 segments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6126: New quickstart and tutorials

2018-08-09 Thread GitBox
surekhasaharan commented on a change in pull request #6126: New quickstart and 
tutorials
URL: https://github.com/apache/incubator-druid/pull/6126#discussion_r209114320
 
 

 ##
 File path: docs/content/tutorials/tutorial-update-data.md
 ##
 @@ -0,0 +1,150 @@
+---
+layout: doc_page
+---
+
+# Tutorial: Updating existing data
+
+This tutorial demonstrates how to update existing data, showing both overwrites and appends.
+
+For this tutorial, we'll assume you've already downloaded Druid as described in the [single-machine quickstart](index.html) and have it running on your local machine.
+
+It will also be helpful to have finished [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html), [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html), and [Tutorial: Rollup](/docs/VERSION/tutorials/tutorial-rollup.html).
+
+## Overwrite
+
+This section of the tutorial will cover how to overwrite an existing interval of data.
+
+### Load initial data
+
+Let's load an initial data set which we will overwrite and append to.
+
+The spec we'll use for this tutorial is located at `quickstart/tutorial/updates-init-index.json`. This spec creates a datasource called `updates-tutorial` from the `quickstart/tutorial/updates-data.json` input file.
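(A hedged aside, since the excerpt doesn't show the spec file's contents: in Druid's native batch ingestion, whether a task overwrites or appends within an interval is controlled by the `appendToExisting` flag in the task's `ioConfig`. The fragment below is an assumed illustration, not the actual file.)

```bash
# Print an assumed sketch of the relevant ioConfig fields in
# updates-init-index.json ("appendToExisting": false means overwrite).
cat <<'EOF'
"ioConfig": {
  "type": "index",
  "firehose": {
    "type": "local",
    "baseDir": "quickstart/tutorial",
    "filter": "updates-data.json"
  },
  "appendToExisting": false
}
EOF
```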
+
+Let's submit that task:
+
+```
+bin/post-index-task --file quickstart/tutorial/updates-init-index.json 
+```
+
+We have three initial rows containing an "animal" dimension and "number" metric:
+
+```
+dsql> select * from "updates-tutorial"; 
+┌──────────────────────────┬──────────┬───────┬────────┐
+│ __time                   │ animal   │ count │ number │
+├──────────────────────────┼──────────┼───────┼────────┤
+│ 2018-01-01T01:01:00.000Z │ tiger    │     1 │    100 │
+│ 2018-01-01T03:01:00.000Z │ aardvark │     1 │     42 │
 
 Review comment:
   hmm, unusual animal names, I had to look up `aardvark` :)
   ```The aardvark is a medium-sized, burrowing, nocturnal mammal native to Africa. It is the only living species of the order Tubulidentata, although other prehistoric species and genera of Tubulidentata are known```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org


