This is an automated email from the ASF dual-hosted git repository.

maytasm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 5600e1c  fix docs error in hadoop-based part (#9907)
5600e1c is described below

commit 5600e1c204c0b4c5f1060c89ad9f623d81aaae73
Author: Jianhuan Liu <[email protected]>
AuthorDate: Sat Jun 20 17:14:54 2020 +0800

    fix docs error in hadoop-based part (#9907)
    
    * fix docs error: google to azure and hdfs to http
    
    * fix docs error: indexSpecForIntermediatePersists of tuningConfig in 
hadoop-based batch part
    
    * fix docs error: logParseExceptions of tuningConfig in hadoop-based batch 
part
    
    * fix docs error: maxParseExceptions of tuningConfig in hadoop-based batch 
part
---
 docs/ingestion/hadoop.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/ingestion/hadoop.md b/docs/ingestion/hadoop.md
index 2101c1b..0f5db6a 100644
--- a/docs/ingestion/hadoop.md
+++ b/docs/ingestion/hadoop.md
@@ -329,12 +329,12 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |useCombiner|Boolean|Use Hadoop combiner to merge rows at mapper if possible.|no (default == false)|
 |jobProperties|Object|A map of properties to add to the Hadoop job configuration, see below for details.|no (default == null)|
 |indexSpec|Object|Tune how data is indexed. See [`indexSpec`](index.md#indexspec) on the main ingestion page for more information.|no|
-|indexSpecForIntermediatePersists|defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. this can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. however, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see [`indexSpec`](index.md#indexspec) for possible values.| [...]
+|indexSpecForIntermediatePersists|Object|Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into the final published segment, see [`indexSpec`](index.md#indexspec) for possible v [...]
 |numBackgroundPersistThreads|Integer|The number of new background threads to use for incremental persists. Using this feature causes a notable increase in memory pressure and CPU usage but will make the job finish more quickly. If changing from the default of 0 (use current thread for persists), we recommend setting it to 1.|no (default == 0)|
 |forceExtendableShardSpecs|Boolean|Forces use of extendable shardSpecs. Hash-based partitioning always uses an extendable shardSpec. For single-dimension partitioning, this option should be set to true to use an extendable shardSpec. For partitioning, please check [Partitioning specification](#partitionsspec). This option can be useful when you need to append more data to existing dataSource.|no (default = false)|
 |useExplicitVersion|Boolean|Forces HadoopIndexTask to use version.|no (default = false)|
-|logParseExceptions|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|false|no|
-|maxParseExceptions|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides `ignoreInvalidRows` if `maxParseExceptions` is defined.|unlimited|no|
+|logParseExceptions|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|no (default = false)|
+|maxParseExceptions|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides `ignoreInvalidRows` if `maxParseExceptions` is defined.|no (default = unlimited)|
 |useYarnRMJobStatusFallback|Boolean|If the Hadoop jobs created by the indexing task are unable to retrieve their completion status from the JobHistory server, and this parameter is true, the indexing task will try to fetch the application status from `http://<yarn-rm-address>/ws/v1/cluster/apps/<application-id>`, where `<yarn-rm-address>` is the value of `yarn.resourcemanager.webapp.address` in your Hadoop configuration. This flag is intended as a fallback for cases where an indexing tas [...]
 
 ### `jobProperties`
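
For context, the corrected table rows describe fields of the Hadoop indexing task's `tuningConfig`. A minimal illustrative fragment exercising the three fields touched by this commit might look like the following (values are examples only, not recommendations):

```json
{
  "type": "hadoop",
  "indexSpecForIntermediatePersists": {
    "dimensionCompression": "uncompressed",
    "metricCompression": "none"
  },
  "logParseExceptions": true,
  "maxParseExceptions": 100
}
```

Here compression is disabled on intermediate persisted segments to reduce memory needed for final merging, parse exceptions are logged with row details, and ingestion fails after 100 parse exceptions instead of the default unlimited behavior.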


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
