This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new d4bd6e5  ingestion and tutorial doc update (#10202)
d4bd6e5 is described below

commit d4bd6e52070394a651724697ce588c9fe4f81436
Author: mans2singh <[email protected]>
AuthorDate: Tue Jul 21 20:52:23 2020 -0400

    ingestion and tutorial doc update (#10202)
---
 docs/ingestion/index.md                 | 2 +-
 docs/ingestion/native-batch.md          | 2 +-
 docs/tutorials/tutorial-batch-hadoop.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/ingestion/index.md b/docs/ingestion/index.md
index 384b051..d84c0e4 100644
--- a/docs/ingestion/index.md
+++ b/docs/ingestion/index.md
@@ -284,7 +284,7 @@ The following table shows how each ingestion method handles partitioning:
 ## Ingestion specs
 
 No matter what ingestion method you use, data is loaded into Druid using either one-time [tasks](tasks.html) or
-ongoing "supervisors" (which run and supervised a set of tasks over time). In any case, part of the task or supervisor
+ongoing "supervisors" (which run and supervise a set of tasks over time). In any case, part of the task or supervisor
 definition is an _ingestion spec_.
 
 Ingestion specs consist of three main components:
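The three components referred to here are `dataSchema`, `ioConfig`, and `tuningConfig`. A minimal native-batch spec sketch for orientation only (the datasource name, file paths, and column names below are illustrative and are not part of this commit):

```json
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "timestampSpec": { "column": "time", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["channel", "page"] },
      "granularitySpec": { "segmentGranularity": "day" }
    },
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": { "type": "local", "baseDir": "quickstart/tutorial", "filter": "wikiticker-2015-09-12-sampled.json.gz" },
      "inputFormat": { "type": "json" }
    },
    "tuningConfig": { "type": "index_parallel" }
  }
}
```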
diff --git a/docs/ingestion/native-batch.md b/docs/ingestion/native-batch.md
index 4dbbca7..f1c22e2 100644
--- a/docs/ingestion/native-batch.md
+++ b/docs/ingestion/native-batch.md
@@ -261,7 +261,7 @@ The three `partitionsSpec` types have different characteristics.
 The recommended use case for each partitionsSpec is:
 - If your data has a uniformly distributed column which is frequently used in your queries,
 consider using `single_dim` partitionsSpec to maximize the performance of most of your queries.
-- If your data doesn't a uniformly distributed column, but is expected to have a [high rollup ratio](./index.md#maximizing-rollup-ratio)
+- If your data doesn't have a uniformly distributed column, but is expected to have a [high rollup ratio](./index.md#maximizing-rollup-ratio)
 when you roll up with some dimensions, consider using `hashed` partitionsSpec.
 It could reduce the size of datasource and query latency by improving data locality.
 - If the above two scenarios are not the case or you don't need to roll up your datasource,
diff --git a/docs/tutorials/tutorial-batch-hadoop.md b/docs/tutorials/tutorial-batch-hadoop.md
index 38abbfa..bd02464 100644
--- a/docs/tutorials/tutorial-batch-hadoop.md
+++ b/docs/tutorials/tutorial-batch-hadoop.md
@@ -205,7 +205,7 @@ We've included a sample of Wikipedia edits from September 12, 2015 to get you st
 To load this data into Druid, you can submit an *ingestion task* pointing to the file. We've included
 a task that loads the `wikiticker-2015-09-12-sampled.json.gz` file included in the archive.
 
-Let's submit the `wikipedia-index-hadoop-.json` task:
+Let's submit the `wikipedia-index-hadoop.json` task:
 
 ```bash
 bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json --url http://localhost:8081


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
