petermarshallio commented on code in PR #14501:
URL: https://github.com/apache/druid/pull/14501#discussion_r1253981007


##########
examples/quickstart/jupyter-notebooks/notebooks/02-ingestion/XX-example-flightdata-events.ipynb:
##########
@@ -0,0 +1,807 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "e79d7d48-b403-4b9e-8cc6-0f0accecac1f",
+   "metadata": {},
+   "source": [
+    "# Data modeling and ingestion principles - creating Events from Druid's sample flight data\n",
+    "\n",
+    "Druid's data loader allows you to quickly ingest sample carrier data into a `TABLE`, giving you an easy way to learn about the available SQL functions. It's also a great place to start understanding how data modeling for event analytics in a real-time database differs from the modeling you'd apply in other databases, and the data set is small enough that you can safely see - and try out - different data layout designs.\n",
+    "\n",
+    "In this notebook, you'll walk through creating a table of events out of the sample data set, applying data modeling principles as you go. At the end you'll have a `TABLE` called \"flight-events\" that you can then use as you continue your learning in Apache Druid.\n",
+    "\n",
+    "## Prerequisites\n",
+    "\n",
+    "To use this notebook, you'll need access to a small Druid deployment.\n",
+    "\n",
+    "It's a good idea to test ingesting the data \"as is\" on that cluster to make sure it's operational before you get going.\n",
+    "\n",
+    "## Getting started\n",

Review Comment:
   Reworded.
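
   For readers following the discussion, the kind of SQL-based ingestion the notebook text builds toward might look like the sketch below. Only the table name `flight-events` comes from the notebook; the input URI, column names, and types are illustrative placeholders, not the notebook's actual ingestion spec.

   ```sql
   -- Hypothetical sketch only: the input URI and all column names/types
   -- are placeholders, not the notebook's actual ingestion spec.
   REPLACE INTO "flight-events" OVERWRITE ALL
   SELECT
     TIME_PARSE("departure_time") AS "__time",  -- placeholder timestamp column
     "carrier",                                 -- placeholder dimensions
     "origin",
     "destination"
   FROM TABLE(
     EXTERN(
       '{"type":"http","uris":["https://example.com/flight-data.json"]}',
       '{"type":"json"}'
     )
   ) EXTEND ("departure_time" VARCHAR, "carrier" VARCHAR, "origin" VARCHAR, "destination" VARCHAR)
   PARTITIONED BY DAY
   ```

   The `PARTITIONED BY DAY` clause is one of the data layout choices the notebook invites readers to experiment with.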



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
