edgar2020 commented on code in PR #16762:
URL: https://github.com/apache/druid/pull/16762#discussion_r1688765475
##########
docs/tutorials/tutorial-rollup.md:
##########
@@ -49,150 +49,101 @@ For this tutorial, we'll use a small sample of network flow event data, represen
{"timestamp":"2018-01-02T21:35:45Z","srcIP":"7.7.7.7","dstIP":"8.8.8.8","packets":12,"bytes":2818}
```
-A file containing this sample input data is located at `quickstart/tutorial/rollup-data.json`.
+This tutorial guides you through ingesting this data with rollup.
-We'll ingest this data using the following ingestion task spec, located at `quickstart/tutorial/rollup-index.json`.
+## Load the example data
-```json
-{
- "type" : "index_parallel",
- "spec" : {
- "dataSchema" : {
- "dataSource" : "rollup-tutorial",
- "dimensionsSpec" : {
- "dimensions" : [
- "srcIP",
- "dstIP"
- ]
- },
- "timestampSpec": {
- "column": "timestamp",
- "format": "iso"
- },
- "metricsSpec" : [
- { "type" : "count", "name" : "count" },
- { "type" : "longSum", "name" : "packets", "fieldName" : "packets" },
- { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }
- ],
- "granularitySpec" : {
- "type" : "uniform",
- "segmentGranularity" : "week",
- "queryGranularity" : "minute",
- "intervals" : ["2018-01-01/2018-01-03"],
- "rollup" : true
- }
- },
- "ioConfig" : {
- "type" : "index_parallel",
- "inputSource" : {
- "type" : "local",
- "baseDir" : "quickstart/tutorial",
- "filter" : "rollup-data.json"
- },
- "inputFormat" : {
- "type" : "json"
- },
- "appendToExisting" : false
- },
- "tuningConfig" : {
- "type" : "index_parallel",
- "partitionsSpec": {
- "type": "dynamic"
- },
- "maxRowsInMemory" : 25000
- }
- }
-}
+Load the sample dataset using the INSERT and EXTERN functions. The EXTERN function lets you read external data or write to an external location.
+
+In the Druid web console, go to the Query view and run the following query:
+
+```sql
+INSERT INTO "rollup_tutorial"
+WITH "inline_data" AS (
+ SELECT *
+ FROM TABLE(EXTERN('{
+ "type":"inline",
+    "data":"{\"timestamp\":\"2018-01-01T01:01:35Z\",\"srcIP\":\"1.1.1.1\",\"dstIP\":\"2.2.2.2\",\"packets\":20,\"bytes\":9024}\n{\"timestamp\":\"2018-01-01T01:02:14Z\",\"srcIP\":\"1.1.1.1\",\"dstIP\":\"2.2.2.2\",\"packets\":38,\"bytes\":6289}\n{\"timestamp\":\"2018-01-01T01:01:59Z\",\"srcIP\":\"1.1.1.1\",\"dstIP\":\"2.2.2.2\",\"packets\":11,\"bytes\":5780}\n{\"timestamp\":\"2018-01-01T01:01:51Z\",\"srcIP\":\"1.1.1.1\",\"dstIP\":\"2.2.2.2\",\"packets\":255,\"bytes\":21133}\n{\"timestamp\":\"2018-01-01T01:02:29Z\",\"srcIP\":\"1.1.1.1\",\"dstIP\":\"2.2.2.2\",\"packets\":377,\"bytes\":359971}\n{\"timestamp\":\"2018-01-01T01:03:29Z\",\"srcIP\":\"1.1.1.1\",\"dstIP\":\"2.2.2.2\",\"packets\":49,\"bytes\":10204}\n{\"timestamp\":\"2018-01-02T21:33:14Z\",\"srcIP\":\"7.7.7.7\",\"dstIP\":\"8.8.8.8\",\"packets\":38,\"bytes\":6289}\n{\"timestamp\":\"2018-01-02T21:33:45Z\",\"srcIP\":\"7.7.7.7\",\"dstIP\":\"8.8.8.8\",\"packets\":123,\"bytes\":93999}\n{\"timestamp\":\"2018-01-02T21:35:45Z\",\"srcIP\":\"7.7.7.7\",\"dstIP\":\"8.8.8.8\",\"packets\":12,\"bytes\":2818}"}',
+ '{"type":"json"}'))
+  EXTEND ("timestamp" VARCHAR, "srcIP" VARCHAR, "dstIP" VARCHAR, "packets" BIGINT, "bytes" BIGINT)
+)
+SELECT
+ FLOOR(TIME_PARSE("timestamp") TO MINUTE) AS __time,
+ "srcIP",
+ "dstIP",
+ SUM("bytes") AS "bytes",
+ COUNT(*) AS "count",
+ SUM("packets") AS "packets"
+FROM "inline_data"
+GROUP BY 1, 2, 3
+PARTITIONED BY DAY
```
-Rollup has been enabled by setting `"rollup" : true` in the `granularitySpec`.
+Note that the query uses the `FLOOR` function to give the `__time` column a granularity of `MINUTE`. The query defines the dimensions of the rollup by grouping columns 1, 2, and 3, which correspond to the `timestamp`, `srcIP`, and `dstIP` columns. It defines the metrics of the rollup by aggregating the `bytes` and `packets` columns.
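+For example, the three `1.1.1.1` events that fall within the minute `2018-01-01T01:01` combine into a single row during ingestion. A sketch of that rolled-up row, with the sums computed from the sample data above (the console's column order may differ):
+
+| `__time` | `srcIP` | `dstIP` | `bytes` | `count` | `packets` |
+| --- | --- | --- | --- | --- | --- |
+| `2018-01-01T01:01:00.000Z` | `1.1.1.1` | `2.2.2.2` | 35937 | 3 | 286 |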
-Note that we have `srcIP` and `dstIP` defined as dimensions, a longSum metric is defined for the `packets` and `bytes` columns, and the `queryGranularity` has been defined as `minute`.
+After the ingestion completes, you can query the data.
-We will see how these definitions are used after we load this data.
-
-## Load the example data
+## Query the example data
-From the apache-druid-{{DRUIDVERSION}} package root, run the following command:
+Open a new tab in the Query view and run the following query to see what data was ingested:
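+For instance, a minimal query over the new datasource (a sketch; adjust the datasource name if you changed it above) is:
+
+```sql
+SELECT * FROM "rollup_tutorial"
+```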
Review Comment:
[x] Query is made bold
[x] Added "In the web console,..."
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]