vtlim commented on code in PR #12723:
URL: https://github.com/apache/druid/pull/12723#discussion_r914212872
########## docs/tutorials/tutorial-sketches-theta.md: ##########

@@ -0,0 +1,302 @@

---
id: tutorial-sketches-theta
title: Approximations with Theta sketches
sidebar_label: Theta sketches
---

A common problem in clickstream analytics is counting unique things, like visitors or sessions. Generally this involves scanning through all detail data, because unique counts **do not add up** as you aggregate the numbers.

For instance, we might be interested in the number of visitors that watched episodes of a TV show. Let's say we found that on a given day, 1,000 unique visitors watched the first episode and 800 visitors watched the second episode. We may want to explore further trends, for example:

- How many visitors watched _both_ episodes?
- How many visitors watched _at least one_ of the episodes?
- How many visitors watched episode 1 _but not_ episode 2?

There is no way to answer these questions by looking only at the aggregated numbers: we would have to go back to the detail data and scan every single row. If the data volume is high enough, this can take a long time, making interactive data exploration impossible.

An additional nuisance is that unique counts don't work well with rollups. For the example above, it would be great if we could have just one row of data per 15-minute interval[^1], show, and episode. After all, we are not interested in the individual user IDs, just the unique counts.

[^1]: Why 15 minutes and not just 1 hour? Intervals of 15 minutes work better with international timezones because those are not always aligned by hour. India, for instance, is 30 minutes off, and Nepal is even 45 minutes off. With 15-minute aggregates, you can get hourly sums for any of those timezones, too!

Is there a way to avoid crunching the detail data every single time, and maybe even enable rollup?

## Fast approximation with set operations: Theta sketches

Theta sketches are a probabilistic data structure that enables fast approximate analysis of big data. Druid's implementation relies on the [Apache DataSketches](https://datasketches.apache.org/) library.

Theta sketches have a few nice properties:

- They give you a **fast approximate estimate** for the distinct count of items that you put into them.
- They are **mergeable**. This means we can work with rolled-up data and merge the sketches over various time intervals, taking advantage of Druid's rollup feature.
- Theta sketches support **set operations**. Given two Theta sketches over subsets of the data, we can compute the union, intersection, or set difference of the two. This gives us the ability to answer the questions above about the number of visitors that watched a specific combination of episodes; a query sketch follows below.

There is a lot of advanced math behind Theta sketches[^2], but with Druid you do not need to worry about the complex algorithms: Theta sketches just work!

[^2]: Specifically, the accuracy of the result is governed by the size _k_ of the Theta sketch and by the operations you perform on it. See the [Apache DataSketches documentation](https://datasketches.apache.org/docs/Theta/ThetaAccuracy.html) for details. There is also a version of the sketch estimator, `THETA_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS`, which takes an additional integer parameter and returns the error boundaries for the result in a JSON object.
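
To make the set operations concrete before diving into ingestion, here is a minimal sketch of the kind of Druid SQL they enable. It assumes the `ts_tutorial` datasource and the `theta_uid` sketch metric built later in this tutorial, and it uses functions from Druid's `druid-datasketches` extension (`DS_THETA`, `THETA_SKETCH_INTERSECT`, `THETA_SKETCH_ESTIMATE`, and the error-bounds estimator from footnote 2); the extension must be loaded for these to resolve.

```sql
-- Query sketch only: assumes the ts_tutorial datasource and the
-- theta_uid sketch metric created later in this tutorial.

-- Query 1 (run on its own): estimated number of visitors that watched
-- BOTH episodes, via an intersection of two filtered Theta sketches.
SELECT THETA_SKETCH_ESTIMATE(
  THETA_SKETCH_INTERSECT(
    DS_THETA(theta_uid) FILTER(WHERE "episode" = 'S1E1'),
    DS_THETA(theta_uid) FILTER(WHERE "episode" = 'S1E2')
  )
) AS visitors_both_episodes
FROM ts_tutorial
WHERE "show" = 'Bridgerton'

-- Query 2 (run on its own): distinct-count estimate with error bounds
-- at 2 standard deviations, returned as a JSON object (see footnote 2).
SELECT THETA_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(DS_THETA(theta_uid), 2)
  AS visitors_with_bounds
FROM ts_tutorial
```
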
This tutorial shows you how to create Theta sketches from your input data at ingestion time and how to run distinct count and set operation queries on the Theta sketches.

For this tutorial, we'll assume you've already downloaded Druid as described in the [single-machine quickstart](index.md) and have it running on your local machine. It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.md) and [Tutorial: Querying data](../tutorials/tutorial-query.md).

## Ingest data using Theta sketches

This tutorial works with the data in the snippet below, which contains just the bare essentials:

- **date**: a timestamp. In this case it's just dates, but as mentioned above, a finer granularity makes sense in real life.
- **uid**: a user ID
- **show**: the name of a TV show
- **episode**: an episode identifier

```csv
date,uid,show,episode
2022-05-19,alice,Game of Thrones,S1E1
2022-05-19,alice,Game of Thrones,S1E2
2022-05-19,alice,Game of Thrones,S1E1
2022-05-19,bob,Bridgerton,S1E1
2022-05-20,alice,Game of Thrones,S1E1
2022-05-20,carol,Bridgerton,S1E2
2022-05-20,dan,Bridgerton,S1E1
2022-05-21,alice,Game of Thrones,S1E1
2022-05-21,carol,Bridgerton,S1E1
2022-05-21,erin,Game of Thrones,S1E1
2022-05-21,alice,Bridgerton,S1E1
2022-05-22,bob,Game of Thrones,S1E1
2022-05-22,bob,Bridgerton,S1E1
2022-05-22,carol,Bridgerton,S1E2
2022-05-22,bob,Bridgerton,S1E1
2022-05-22,erin,Game of Thrones,S1E1
2022-05-22,erin,Bridgerton,S1E2
2022-05-23,erin,Game of Thrones,S1E1
2022-05-23,alice,Game of Thrones,S1E1
```

Navigate to the **Load data** wizard in the Druid console.
Select `Paste data` as the data source and paste the sample above.

Leave the source type as `inline`, click **Apply**, and then **Next: Parse data**.
Parse the data as CSV with included headers.

Accept the default values in the **Parse time**, **Transform**, and **Filter** stages.

In the **Configure schema** stage, enable rollup and confirm your choice in the dialog. Then set the query granularity to `day`.

You also add the Theta sketch during this stage. Select **Add metric** and define the new metric as a Theta sketch with the following details:

* **Name**: `theta_uid`
* **Type**: `thetaSketch`
* **Field name**: `uid`
* **Size**: Leave at the default value, `16384`.
* **Is input theta sketch**: Leave at the default value, `False`.

Click **Apply** to add the new metric to the data model.

We have to perform one more step to complete the data model. We are not interested in individual user IDs, only the unique counts, but right now `uid` is still in the data model. Let's get rid of it!

Click the `uid` column in the data model and delete it using the trashcan icon on the right.

For the rest of the **Load data** wizard, set the following options:

* **Partition** stage: Set **Segment granularity** to `day`.
* **Tune** stage: Leave the default options.
* **Publish** stage: Set the datasource name to `ts_tutorial`.
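
Behind the scenes, the metric you defined in the **Configure schema** stage becomes a `thetaSketch` aggregator in the ingestion spec's `metricsSpec`. The following is a hedged reconstruction based on the settings above (the wizard typically also adds a `count` metric when rollup is enabled, and field order may differ):

```json
"metricsSpec": [
  {
    "name": "theta_uid",
    "type": "thetaSketch",
    "fieldName": "uid",
    "size": 16384,
    "isInputThetaSketch": false
  }
]
```

You can check the generated aggregator on the **Edit spec** page in the next step.
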
On the **Edit spec** page, your final input spec should look like the following JSON:

```json
{
  "type": "index_parallel",
  "spec": {
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "inline",
        "data": "date,uid,show,episode\n2022-05-19,alice,Game of Thrones,S1E1\n2022-05-19,alice,Game of Thrones,S1E2\n2022-05-19,alice,Game of Thrones,S1E1\n2022-05-19,bob,Bridgerton,S1E1\n2022-05-20,alice,Game of Thrones,S1E1\n2022-05-20,carol,Bridgerton,S1E2\n2022-05-20,dan,Bridgerton,S1E1\n2022-05-21,alice,Game of Thrones,S1E1\n2022-05-21,carol,Bridgerton,S1E1\n2022-05-21,erin,Game of Thrones,S1E1\n2022-05-21,alice,Bridgerton,S1E1\n2022-05-22,bob,Game of Thrones,S1E1\n2022-05-22,bob,Bridgerton,S1E1\n2022-05-22,carol,Bridgerton,S1E2\n2022-05-22,bob,Bridgerton,S1E1\n2022-05-22,erin,Game of Thrones,S1E1\n2022-05-22,erin,Bridgerton,S1E2\n2022-05-23,erin,Game of Thrones,S1E1\n2022-05-23,alice,Game of Thrones,S1E1"
      },
      "inputFormat": {
        "type": "csv",
        "findColumnsFromHeader": true
      }
    },
    "tuningConfig": {
      "type": "index_parallel",
      "partitionsSpec": {
        "type": "hashed"
      },
      "forceGuaranteedRollup": true
    },
    "dataSchema": {
      "dataSource": "inline_data",
```

Review Comment:
   described in line 131
