ektravel commented on code in PR #14651: URL: https://github.com/apache/druid/pull/14651#discussion_r1273906770
##########
docs/development/extensions-core/kafka-supervisor-reference.md:
##########
@@ -186,64 +198,45 @@ Supported `inputFormat`s include:
For more information, see [Data formats](../../ingestion/data-formats.md). You
can also read [`thrift`](../extensions-contrib/thrift.md) formats using
`parser`.
-<a name="tuningconfig"></a>
-
-## KafkaSupervisorTuningConfig
-
-The `tuningConfig` is optional and default parameters will be used if no `tuningConfig` is specified.
-
-| Field | Type | Description | Required |
-|-------|------|-------------|----------|
-| `type` | String | The indexing task type, this should always be `kafka`. | yes |
-| `maxRowsInMemory` | Integer | The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with `maxRowsInMemory` * (2 + `maxPendingPersists`). Normally user does not need to set this, but depending on the nature of data, if rows are short in terms of bytes, user may not want to store a million rows in memory and this value should be set. | no (default == 150000) |
-| `maxBytesInMemory` | Long | The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is `maxBytesInMemory` * (2 + `maxPendingPersists`). | no (default == One-sixth of max JVM memory) |
-| `maxRowsPerSegment` | Integer | The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. | no (default == 5000000) |
-| `maxTotalRows` | Long | The number of rows to aggregate across all segments; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. | no (default == 20000000) |
-| `intermediatePersistPeriod` | ISO8601 Period | The period that determines the rate at which intermediate persists occur. | no (default == PT10M) |
-| `maxPendingPersists` | Integer | Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with `maxRowsInMemory` * (2 + `maxPendingPersists`). | no (default == 0, meaning one persist can be running concurrently with ingestion, and none can be queued up) |
-| `indexSpec` | Object | Tune how data is indexed. See [IndexSpec](#indexspec) for more information. | no |
-| `indexSpecForIntermediatePersists` | | Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see [IndexSpec](#indexspec) for possible values. | no (default = same as `indexSpec`) |
-| `reportParseExceptions` | Boolean | *DEPRECATED*. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1. | no (default == false) |
-| `handoffConditionTimeout` | Long | Number of milliseconds to wait for segment handoff. Set to a value >= 0, where 0 means to wait indefinitely. | no (default == 900000 [15 minutes]) |
-| `resetOffsetAutomatically` | Boolean | Controls behavior when Druid needs to read Kafka messages that are no longer available (i.e. when `OffsetOutOfRangeException` is encountered).<br/><br/>If false, the exception will bubble up, which will cause your tasks to fail and ingestion to halt. If this occurs, manual intervention is required to correct the situation; potentially using the [Reset Supervisor API](../../api-reference/supervisor-api.md). This mode is useful for production, since it will make you aware of issues with ingestion.<br/><br/>If true, Druid will automatically reset to the earlier or latest offset available in Kafka, based on the value of the `useEarliestOffset` property (earliest if true, latest if false). Note that this can lead to data being _DROPPED_ (if `useEarliestOffset` is false) or _DUPLICATED_ (if `useEarliestOffset` is true) without your knowledge. Messages will be logged indicating that a reset has occurred, but ingestion will continue. This mode is useful for non-production situations, since it will make Druid attempt to recover from problems automatically, even if they lead to quiet dropping or duplicating of data.<br/><br/>This feature behaves similarly to the Kafka `auto.offset.reset` consumer property. | no (default == false) |
-| `workerThreads` | Integer | The number of threads that the supervisor uses to handle requests/responses for worker tasks, along with any other internal asynchronous operation. | no (default == min(10, taskCount)) |
-| `chatAsync` | Boolean | If true, use asynchronous communication with indexing tasks, and ignore the `chatThreads` parameter. If false, use synchronous communication in a thread pool of size `chatThreads`. | no (default == true) |
-| `chatThreads` | Integer | The number of threads that will be used for communicating with indexing tasks. Ignored if `chatAsync` is `true` (the default). | no (default == min(10, taskCount * replicas)) |
-| `chatRetries` | Integer | The number of times HTTP requests to indexing tasks will be retried before considering tasks unresponsive. | no (default == 8) |
-| `httpTimeout` | ISO8601 Period | How long to wait for a HTTP response from an indexing task. | no (default == PT10S) |
-| `shutdownTimeout` | ISO8601 Period | How long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting. | no (default == PT80S) |
-| `offsetFetchPeriod` | ISO8601 Period | How often the supervisor queries Kafka and the indexing tasks to fetch current offsets and calculate lag. If the user-specified value is below the minimum value (`PT5S`), the supervisor ignores the value and uses the minimum value instead. | no (default == PT30S, min == PT5S) |
-| `segmentWriteOutMediumFactory` | Object | Segment write-out medium to use when creating segments. See below for more information. | no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used) |
-| `intermediateHandoffPeriod` | ISO8601 Period | How often the tasks should hand off segments. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier. | no (default == P2147483647D) |
-| `logParseExceptions` | Boolean | If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred. | no, default == false |
-| `maxParseExceptions` | Integer | The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set. | no, unlimited default |
-| `maxSavedParseExceptions` | Integer | When a parse exception occurs, Druid can keep track of the most recent parse exceptions. `maxSavedParseExceptions` limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the [task completion report](../../ingestion/tasks.md#task-reports). Overridden if `reportParseExceptions` is set. | no, default == 0 |
+## Supervisor tuning configuration
+
+The `tuningConfig` object is optional. If you don't specify the `tuningConfig` object, Druid uses the default configuration settings.
+
+|Property|Type|Description|Required|Default|
+|--------|----|-----------|--------|-------|
+|`type`|String|The indexing task type. This should always be `kafka`.|Yes||
+|`maxRowsInMemory`|Integer|The number of rows to aggregate before persisting. This number represents the post-aggregation rows. It is not equivalent to the number of input events, but the number of aggregated rows that result from those events. Druid uses `maxRowsInMemory` to manage the required JVM heap size. The maximum heap memory usage for indexing is `maxRowsInMemory * (2 + maxPendingPersists)`. Normally, you do not need to set this. However, if your rows are small in terms of bytes, you may not want to store a million rows in memory, and you should set this value.|No|150000|
+|`maxBytesInMemory`|Long|The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally, this is computed internally. The maximum heap memory usage for indexing is `maxBytesInMemory * (2 + maxPendingPersists)`.|No|One-sixth of max JVM memory|
+|`skipBytesInMemoryOverheadCheck`|Boolean|The calculation of `maxBytesInMemory` takes into account overhead objects created during ingestion and each intermediate persist. To exclude the bytes of these overhead objects from the `maxBytesInMemory` check, set `skipBytesInMemoryOverheadCheck` to `true`.|No|`false`|
+|`maxRowsPerSegment`|Integer|The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff occurs when `maxRowsPerSegment` or `maxTotalRows` is reached or every `intermediateHandoffPeriod`, whichever happens first.|No|5000000|
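+
+For example, a `tuningConfig` that sets these properties explicitly might look like the following sketch. The values are illustrative, not tuning recommendations: they mirror the defaults from the table above, except `maxBytesInMemory`, which is shown as an arbitrary fixed byte count instead of the computed one-sixth of max JVM memory:
+
+```json
+"tuningConfig": {
+  "type": "kafka",
+  "maxRowsInMemory": 150000,
+  "maxBytesInMemory": 100000000,
+  "skipBytesInMemoryOverheadCheck": false,
+  "maxRowsPerSegment": 5000000
+}
+```
+
+With these settings and the default `maxPendingPersists` of 0, indexing heap usage scales with `maxRowsInMemory * (2 + 0)`, that is, roughly two copies of the in-memory row buffer.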
Review Comment:
Updated