sthetland commented on a change in pull request #9360: Create splits of multiple files for parallel indexing URL: https://github.com/apache/druid/pull/9360#discussion_r381499664
##########
File path: docs/ingestion/native-batch.md
##########

@@ -42,11 +42,12 @@ demonstrates the "simple" (single-task) mode.

 ## Parallel task

 The Parallel task (type `index_parallel`) is a task for parallel batch indexing. This task only uses Druid's resource and
-doesn't depend on other external systems like Hadoop. `index_parallel` task is a supervisor task which basically creates
-multiple worker tasks and submits them to the Overlord. Each worker task reads input data and creates segments. Once they
-successfully generate segments for all input data, they report the generated segment list to the supervisor task.
+doesn't depend on other external systems like Hadoop. The `index_parallel` task is a supervisor task which orchestrates
+the whole indexing process. It splits the input data and and issues worker tasks

Review comment:
   Extra "and"

```suggestion
the whole indexing process. It splits the input data and issues worker tasks
```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
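For context on the doc text being reviewed: a minimal sketch of what an `index_parallel` ingestion spec can look like, which the supervisor task reads before splitting the input and issuing worker tasks. Field names follow Druid's native batch ingestion conventions of this era; the datasource name, paths, and dimension names are hypothetical placeholders, not taken from the PR.

```json
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["page", "language"] },
      "granularitySpec": { "segmentGranularity": "day", "queryGranularity": "none" }
    },
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": { "type": "local", "baseDir": "/data/wikipedia", "filter": "*.json" },
      "inputFormat": { "type": "json" }
    },
    "tuningConfig": {
      "type": "index_parallel",
      "maxNumConcurrentSubTasks": 4
    }
  }
}
```

With multiple files under `baseDir`, the supervisor splits them across up to `maxNumConcurrentSubTasks` worker tasks, each of which reads its share of the input and creates segments, as the rewritten doc paragraph describes.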
