jihoonson commented on a change in pull request #9360: Create splits of multiple files for parallel indexing
URL: https://github.com/apache/druid/pull/9360#discussion_r382233119
 
 

 ##########
 File path: docs/ingestion/native-batch.md
 ##########
 @@ -42,11 +42,12 @@ demonstrates the "simple" (single-task) mode.
 ## Parallel task
 
 The Parallel task (type `index_parallel`) is a task for parallel batch indexing. This task only uses Druid's resource and
-doesn't depend on other external systems like Hadoop. `index_parallel` task is a supervisor task which basically creates
-multiple worker tasks and submits them to the Overlord. Each worker task reads input data and creates segments. Once they
-successfully generate segments for all input data, they report the generated segment list to the supervisor task.
+doesn't depend on other external systems like Hadoop. The `index_parallel` task is a supervisor task which orchestrates
+the whole indexing process. It splits the input data and and issues worker tasks
+to the Overlord which actually process the assigned input split and create segments.
+Once a worker task successfully processes all assigned input split, it reports the generated segment list to the supervisor task.
 
 Review comment:
   Thanks for taking a look! 
   
   > If not, for a lighter edit, maybe just clarify that it's the worker tasks more specifically, rather than the overlord, that is processing input splits (if that's the case).
   
   This is correct. I tried to make it clearer.
   
   ```
   The Parallel task (type `index_parallel`) is a task for parallel batch indexing. This task only uses Druid’s resources and
   doesn’t depend on other external systems like Hadoop. The `index_parallel` task is a supervisor task that orchestrates
   the whole indexing process. The supervisor task splits the input data and creates worker tasks to process those splits.
   The created worker tasks are issued to the Overlord so that they can be scheduled and run on MiddleManagers or Indexers.
   Once a worker task successfully processes the assigned input split, it reports the generated segment list to the supervisor task.
   The supervisor task periodically checks the status of worker tasks. If one of them fails, it retries the failed task
   until the number of retries reaches the configured limit. If all worker tasks succeed, it publishes the reported segments at once and finalizes ingestion.
   ```
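   
   As a side note, the orchestration flow in that paragraph can be sketched in code. This is a minimal, self-contained Java sketch of the control flow only; every name in it (`InputSplit`, `WorkerResult`, `processSplit`, the `overlord` executor) is hypothetical and merely mirrors the shape of the supervisor/worker interaction, not Druid's actual classes or APIs:
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   
   // Hypothetical sketch of the supervisor/worker flow described above.
   // None of these names are Druid's actual classes or APIs.
   public class ParallelSupervisorSketch
   {
     static final int MAX_RETRIES = 3; // stands in for the configured retry limit
   
     record InputSplit(String file) {}
     record WorkerResult(List<String> segments) {}
   
     public static void main(String[] args) throws Exception
     {
       // 1) The supervisor task splits the input data.
       List<InputSplit> splits = List.of(new InputSplit("a.json"), new InputSplit("b.json"));
   
       // Stands in for the Overlord, which schedules workers on MiddleManagers or Indexers.
       ExecutorService overlord = Executors.newFixedThreadPool(2);
       List<String> reportedSegments = new ArrayList<>();
   
       // 2) One worker task per split; a failed worker is retried up to the limit.
       //    (Splits are handled serially here for simplicity; the real supervisor
       //    runs many worker tasks concurrently.)
       for (InputSplit split : splits) {
         WorkerResult result = null;
         for (int attempt = 0; attempt <= MAX_RETRIES && result == null; attempt++) {
           try {
             result = overlord.submit(() -> processSplit(split)).get();
           } catch (ExecutionException e) {
             System.out.println("worker failed on " + split.file() + ", attempt " + attempt);
           }
         }
         if (result == null) {
           overlord.shutdown();
           throw new IllegalStateException("split " + split.file() + " exhausted its retries");
         }
         // 3) Each worker reports its generated segments back to the supervisor.
         reportedSegments.addAll(result.segments());
       }
   
       // 4) Only after all workers succeed does the supervisor publish the reported segments at once.
       System.out.println("publishing " + reportedSegments);
       overlord.shutdown();
     }
   
     static WorkerResult processSplit(InputSplit split)
     {
       // A real worker would read its input split and build segments; this is faked here.
       return new WorkerResult(List.of("segment-for-" + split.file()));
     }
   }
   ```
   
   The sketch tries to capture the same point as the rewritten paragraph: the Overlord only schedules the workers, while splitting the input, retrying failures, and the final publish all belong to the supervisor task.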
