This is an automated email from the ASF dual-hosted git repository.
jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git
The following commit(s) were added to refs/heads/master by this push:
new 61f4abe Add more warning to the doc for resetOffsetAutomatically (#8153)
61f4abe is described below
commit 61f4abece445e5bb1f15df093cbd800ebbd8c501
Author: Jihoon Son <[email protected]>
AuthorDate: Wed Jul 24 17:37:32 2019 -0700
Add more warning to the doc for resetOffsetAutomatically (#8153)
* Add more warnings to the doc for resetOffsetAutomatically
* fix kinesis doc
* fix typos
* revise the description
* capital
* capitalize
---
.../development/extensions-core/kafka-ingestion.md | 50 ++++++++---------
.../extensions-core/kinesis-ingestion.md | 62 +++++++++++-----------
2 files changed, 56 insertions(+), 56 deletions(-)
diff --git a/docs/content/development/extensions-core/kafka-ingestion.md b/docs/content/development/extensions-core/kafka-ingestion.md
index ec1d046..091f02f 100644
--- a/docs/content/development/extensions-core/kafka-ingestion.md
+++ b/docs/content/development/extensions-core/kafka-ingestion.md
@@ -130,31 +130,31 @@ A sample supervisor spec is shown below:
The tuningConfig is optional and default parameters will be used if no tuningConfig is specified.
-|Field|Type|Description|Required|
-|-----|----|-----------|--------|
-|`type`|String|The indexing task type, this should always be `kafka`.|yes|
-|`maxRowsInMemory`|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). Normally user does not need to set this, but depending on the nature of data, if rows are short in terms of [...]
-|`maxBytesInMemory`|Long|The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). |no (default == One-sixth of max JVM memory)|
-|`maxRowsPerSegment`|Integer|The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == 5000000)|
-|`maxTotalRows`|Long|The number of rows to aggregate across all segments; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == unlimited)|
-|`intermediatePersistPeriod`|ISO8601 Period|The period that determines the rate at which intermediate persists occur.|no (default == PT10M)|
-|`maxPendingPersists`|Integer|Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 0, meaning one persist can be running concurrently with ingestion, and none can be queued up)|
-|indexSpec|Object|Tune how data is indexed. See [IndexSpec](#indexspec) for more information.|no|
-|indexSpecForIntermediatePersists|defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. this can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. however, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see [IndexSpec](#indexspec) for possible values.|no (defaul [...]
-|`reportParseExceptions`|Boolean|*DEPRECATED*. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1.|no (default == false)|
-|`handoffConditionTimeout`|Long|Milliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever.|no (default == 0)|
-|`resetOffsetAutomatically`|Boolean|Whether to reset the consumer offset if the next offset that it is trying to fetch is less than the earliest available offset for that particular partition. The consumer offset will be reset to either the earliest or latest offset depending on `useEarliestOffset` property of `KafkaSupervisorIOConfig` (see below). This situation typically occurs when messages in Kafka are no longer available for consumption and therefore won't be ingested into Druid. If [...]
-|`workerThreads`|Integer|The number of threads that will be used by the supervisor for asynchronous operations.|no (default == min(10, taskCount))|
-|`chatThreads`|Integer|The number of threads that will be used for communicating with indexing tasks.|no (default == min(10, taskCount * replicas))|
-|`chatRetries`|Integer|The number of times HTTP requests to indexing tasks will be retried before considering tasks unresponsive.|no (default == 8)|
-|`httpTimeout`|ISO8601 Period|How long to wait for a HTTP response from an indexing task.|no (default == PT10S)|
-|`shutdownTimeout`|ISO8601 Period|How long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting.|no (default == PT80S)|
-|`offsetFetchPeriod`|ISO8601 Period|How often the supervisor queries Kafka and the indexing tasks to fetch current offsets and calculate lag.|no (default == PT30S, min == PT5S)|
-|`segmentWriteOutMediumFactory`|Object|Segment write-out medium to use when creating segments. See below for more information.|no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used)|
-|`intermediateHandoffPeriod`|ISO8601 Period|How often the tasks should hand off segments. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == P2147483647D)|
-|`logParseExceptions`|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|no, default == false|
-|`maxParseExceptions`|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set.|no, unlimited default|
-|`maxSavedParseExceptions`|Integer|When a parse exception occurs, Druid can keep track of the most recent parse exceptions. "maxSavedParseExceptions" limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the [task completion report](../../ingestion/reports.html). Overridden if `reportParseExceptions` is set.|no, default == 0|
+| Field | Type | Description [...]
+|-----------------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ [...]
+| `type` | String | The indexing task type, this should always be `kafka`. [...]
+| `maxRowsInMemory` | Integer | The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). Normally user does not need to set this, but depending on the nature of data, if [...]
+| `maxBytesInMemory` | Long | The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). [...]
+| `maxRowsPerSegment` | Integer | The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. [...]
+| `maxTotalRows` | Long | The number of rows to aggregate across all segments; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. [...]
+| `intermediatePersistPeriod` | ISO8601 Period | The period that determines the rate at which intermediate persists occur. [...]
+| `maxPendingPersists` | Integer | Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). [...]
+| `indexSpec` | Object | Tune how data is indexed. See [IndexSpec](#indexspec) for more information. [...]
+| `indexSpecForIntermediatePersists`| | Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see [IndexSpec](#indexspec) for possib [...]
+| `reportParseExceptions` | Boolean | *DEPRECATED*. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1. [...]
+| `handoffConditionTimeout` | Long | Milliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever. [...]
+| `resetOffsetAutomatically` | Boolean | Controls behavior when Druid needs to read Kafka messages that are no longer available (i.e. when OffsetOutOfRangeException is encountered).<br/><br/>If false, the exception will bubble up, which will cause your tasks to fail and ingestion to halt. If this occurs, manual intervention is required to correct the situation; potentially using the [Reset Supervisor API](../../operations/api-reference.html#supervisors). This mode is useful [...]
+| `workerThreads` | Integer | The number of threads that will be used by the supervisor for asynchronous operations. [...]
+| `chatThreads` | Integer | The number of threads that will be used for communicating with indexing tasks. [...]
+| `chatRetries` | Integer | The number of times HTTP requests to indexing tasks will be retried before considering tasks unresponsive. [...]
+| `httpTimeout` | ISO8601 Period | How long to wait for a HTTP response from an indexing task. [...]
+| `shutdownTimeout` | ISO8601 Period | How long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting. [...]
+| `offsetFetchPeriod` | ISO8601 Period | How often the supervisor queries Kafka and the indexing tasks to fetch current offsets and calculate lag. [...]
+| `segmentWriteOutMediumFactory` | Object | Segment write-out medium to use when creating segments. See below for more information. [...]
+| `intermediateHandoffPeriod` | ISO8601 Period | How often the tasks should hand off segments. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. [...]
+| `logParseExceptions` | Boolean | If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred. [...]
+| `maxParseExceptions` | Integer | The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set. [...]
+| `maxSavedParseExceptions` | Integer | When a parse exception occurs, Druid can keep track of the most recent parse exceptions. "maxSavedParseExceptions" limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the [task completion report](../../ingestion/reports.html). Overridden if `reportParseExceptions` is set. [...]
#### IndexSpec
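For orientation, the fields above slot into the supervisor spec's `tuningConfig` object. Below is a minimal sketch of a Kafka tuningConfig exercising the settings this patch documents; the values are illustrative assumptions rather than recommendations, with `type` as the only required field and `maxRowsPerSegment` simply set to its documented default of 5000000.

```json
{
  "type": "kafka",
  "maxRowsPerSegment": 5000000,
  "resetOffsetAutomatically": false,
  "logParseExceptions": true,
  "maxParseExceptions": 10,
  "maxSavedParseExceptions": 5
}
```

Leaving `resetOffsetAutomatically` as false follows the production guidance in the revised description: the task fails loudly when offsets are no longer available, and an operator can then intervene, potentially via the Reset Supervisor API, instead of records being skipped silently.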
diff --git a/docs/content/development/extensions-core/kinesis-ingestion.md b/docs/content/development/extensions-core/kinesis-ingestion.md
index 8c2ac33..c0a25f9 100644
--- a/docs/content/development/extensions-core/kinesis-ingestion.md
+++ b/docs/content/development/extensions-core/kinesis-ingestion.md
@@ -126,37 +126,37 @@ A sample supervisor spec is shown below:
The tuningConfig is optional and default parameters will be used if no tuningConfig is specified.
-|Field|Type|Description|Required|
-|-----|----|-----------|--------|
-|`type`|String|The indexing task type, this should always be `kinesis`.|yes|
-|`maxRowsInMemory`|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 100000)|
-|`maxBytesInMemory`|Long|The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). |no (default == One-sixth of max JVM memory)|
-|`maxRowsPerSegment`|Integer|The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == 5000000)|
-|`maxTotalRows`|Long|The number of rows to aggregate across all segments; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == unlimited)|
-|`intermediatePersistPeriod`|ISO8601 Period|The period that determines the rate at which intermediate persists occur.|no (default == PT10M)|
-|`maxPendingPersists`|Integer|Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 0, meaning one persist can be running concurrently with ingestion, and none can be queued up)|
-|indexSpec|Object|Tune how data is indexed. See [IndexSpec](#indexspec) for more information.|no|
-|indexSpecForIntermediatePersists|defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. this can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. however, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see [IndexSpec](#indexspec) for possible values.|no (defaul [...]
-|`reportParseExceptions`|Boolean|If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped.|no (default == false)|
-|`handoffConditionTimeout`|Long|Milliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever.|no (default == 0)|
-|`resetOffsetAutomatically`|Boolean|Whether to reset the consumer sequence numbers if the next sequence number that it is trying to fetch is less than the earliest available sequence number for that particular shard. The sequence number will be reset to either the earliest or latest sequence number depending on `useEarliestOffset` property of `KinesisSupervisorIOConfig` (see below). This situation typically occurs when messages in Kinesis are no longer available for consumption and there [...]
-|`skipSequenceNumberAvailabilityCheck`|Boolean|Whether to enable checking if the current sequence number is still available in a particular Kinesis shard. If set to false, the indexing task will attempt to reset the current sequence number (or not), depending on the value of `resetOffsetAutomatically`. |no (default == false)|
-|`workerThreads`|Integer|The number of threads that will be used by the supervisor for asynchronous operations.|no (default == min(10, taskCount))|
-|`chatThreads`|Integer|The number of threads that will be used for communicating with indexing tasks.|no (default == min(10, taskCount * replicas))|
-|`chatRetries`|Integer|The number of times HTTP requests to indexing tasks will be retried before considering tasks unresponsive.|no (default == 8)|
-|`httpTimeout`|ISO8601 Period|How long to wait for a HTTP response from an indexing task.|no (default == PT10S)|
-|`shutdownTimeout`|ISO8601 Period|How long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting.|no (default == PT80S)|
-|`recordBufferSize`|Integer|Size of the buffer (number of events) used between the Kinesis fetch threads and the main ingestion thread.|no (default == 10000)|
-|`recordBufferOfferTimeout`|Integer|Length of time in milliseconds to wait for space to become available in the buffer before timing out.|no (default == 5000)|
-|`recordBufferFullWait`|Integer|Length of time in milliseconds to wait for the buffer to drain before attempting to fetch records from Kinesis again.|no (default == 5000)|
-|`fetchSequenceNumberTimeout`|Integer|Length of time in milliseconds to wait for Kinesis to return the earliest or latest sequence number for a shard. Kinesis will not return the latest sequence number if no data is actively being written to that shard. In this case, this fetch call will repeatedly timeout and retry until fresh data is written to the stream.|no (default == 60000)|
-|`fetchThreads`|Integer|Size of the pool of threads fetching data from Kinesis. There is no benefit in having more threads than Kinesis shards.|no (default == max(1, {numProcessors} - 1))|
-|`segmentWriteOutMediumFactory`|Object|Segment write-out medium to use when creating segments. See below for more information.|no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used)|
-|`intermediateHandoffPeriod`|ISO8601 Period|How often the tasks should hand off segments. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == P2147483647D)|
-|`logParseExceptions`|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|no, default == false|
-|`maxParseExceptions`|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set.|no, unlimited default|
-|`maxSavedParseExceptions`|Integer|When a parse exception occurs, Druid can keep track of the most recent parse exceptions. "maxSavedParseExceptions" limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the [task completion report](../../ingestion/reports.html). Overridden if `reportParseExceptions` is set.|no, default == 0|
-|`maxRecordsPerPoll`|Integer| The maximum number of records/events to be fetched from buffer per poll. The actual maximum will be `Max(maxRecordsPerPoll, Max(bufferSize, 1)) |no, default == 100|
+| Field | Type | Description [...]
+|---------------------------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `type` | String | The indexing task type, this should always be `kinesis`. [...]
+| `maxRowsInMemory` | Integer | The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). [...]
+| `maxBytesInMemory` | Long | The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). [...]
+| `maxRowsPerSegment` | Integer | The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. [...]
+| `maxTotalRows` | Long | The number of rows to aggregate across all segments; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. [...]
+| `intermediatePersistPeriod` | ISO8601 Period | The period that determines the rate at which intermediate persists occur. [...]
+| `maxPendingPersists` | Integer | Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). [...]
+| `indexSpec` | Object | Tune how data is indexed. See [IndexSpec](#indexspec) for more information. [...]
+| `indexSpecForIntermediatePersists` | | Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see [IndexSpec](#indexspec) for po [...]
+| `reportParseExceptions` | Boolean | If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. [...]
+| `handoffConditionTimeout` | Long | Milliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever. [...]
+| `resetOffsetAutomatically` | Boolean | Controls behavior when Druid needs to read Kinesis messages that are no longer available.<br/><br/>If false, the exception will bubble up, which will cause your tasks to fail and ingestion to halt. If this occurs, manual intervention is required to correct the situation; potentially using the [Reset Supervisor API](../../operations/api-reference.html#supervisors). This mode is useful for production, since it will make you aware o [...]
+| `skipSequenceNumberAvailabilityCheck` | Boolean | Whether to enable checking if the current sequence number is still available in a particular Kinesis shard. If set to false, the indexing task will attempt to reset the current sequence number (or not), depending on the value of `resetOffsetAutomatically`. [...]
+| `workerThreads` | Integer | The number of threads that will be used by the supervisor for asynchronous operations. [...]
+| `chatThreads` | Integer | The number of threads that will be used for communicating with indexing tasks. [...]
+| `chatRetries` | Integer | The number of times HTTP requests to indexing tasks will be retried before considering tasks unresponsive. [...]
+| `httpTimeout` | ISO8601 Period | How long to wait for a HTTP response from an indexing task. [...]
+| `shutdownTimeout` | ISO8601 Period | How long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting. [...]
+| `recordBufferSize` | Integer | Size of the buffer (number of events) used between the Kinesis fetch threads and the main ingestion thread. [...]
+| `recordBufferOfferTimeout` | Integer | Length of time in milliseconds to wait for space to become available in the buffer before timing out. [...]
+| `recordBufferFullWait` | Integer | Length of time in milliseconds to wait for the buffer to drain before attempting to fetch records from Kinesis again. [...]
+| `fetchSequenceNumberTimeout` | Integer | Length of time in milliseconds to wait for Kinesis to return the earliest or latest sequence number for a shard. Kinesis will not return the latest sequence number if no data is actively being written to that shard. In this case, this fetch call will repeatedly timeout and retry until fresh data is written to the stream. [...]
+| `fetchThreads` | Integer | Size of the pool of threads fetching data from Kinesis. There is no benefit in having more threads than Kinesis shards. [...]
+| `segmentWriteOutMediumFactory` | Object | Segment write-out medium to use when creating segments. See below for more information. [...]
+| `intermediateHandoffPeriod` | ISO8601 Period | How often the tasks should hand off segments. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. [...]
+| `logParseExceptions` | Boolean | If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred. [...]
+| `maxParseExceptions` | Integer | The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set. [...]
+| `maxSavedParseExceptions` | Integer | When a parse exception occurs, Druid can keep track of the most recent parse exceptions. "maxSavedParseExceptions" limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the [task completion report](../../ingestion/reports.html). Overridden if `reportParseExceptions` is set. [...]
+| `maxRecordsPerPoll` | Integer | The maximum number of records/events to be fetched from buffer per poll. The actual maximum will be `Max(maxRecordsPerPoll, Max(bufferSize, 1)) [...]
#### IndexSpec
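Along the same lines, here is a hedged sketch of a Kinesis tuningConfig touching the shard and buffer fields in this table; the numeric values simply mirror the defaults listed in the removed rows and are for illustration only.

```json
{
  "type": "kinesis",
  "maxRowsInMemory": 100000,
  "resetOffsetAutomatically": false,
  "skipSequenceNumberAvailabilityCheck": false,
  "recordBufferSize": 10000,
  "recordBufferOfferTimeout": 5000,
  "recordBufferFullWait": 5000,
  "maxRecordsPerPoll": 100
}
```

Per the table, `skipSequenceNumberAvailabilityCheck` interacts with `resetOffsetAutomatically`: when the check setting is false, the indexing task will attempt to reset the current sequence number (or not) depending on the value of `resetOffsetAutomatically`.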