This is an automated email from the ASF dual-hosted git repository.
gian pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git
The following commit(s) were added to refs/heads/master by this push:
new de27c7d3c1 Update docs.
de27c7d3c1 is described below
commit de27c7d3c12386491b3b5903fa64015cee0b01b1
Author: Gian Merlino <[email protected]>
AuthorDate: Fri Mar 24 17:15:27 2023 -0700
Update docs.
---
docs/multi-stage-query/reference.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md
index 71fc1b43af..b1a25b80e4 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -751,7 +751,7 @@ The following table describes error codes you may encounter in the `multiStageQu
| <a name="error_InvalidNullByte">`InvalidNullByte`</a> | A string column
included a null byte. Null bytes in strings are not permitted. | `column`: The
column that included the null byte |
| <a name="error_QueryNotSupported">`QueryNotSupported`</a> | QueryKit could
not translate the provided native query to a multi-stage query.<br /> <br
/>This can happen if the query uses features that aren't supported, like
GROUPING SETS. | |
| <a name="error_RowTooLarge">`RowTooLarge`</a> | The query tried to process a
row that was too large to write to a single frame. See the [Limits](#limits)
table for specific limits on frame size. Note that the effective maximum row
size is smaller than the maximum frame size due to alignment considerations
during frame writing. | `maxFrameSize`: The limit on the frame size. |
-| <a name="error_TaskStartTimeout">`TaskStartTimeout`</a> | Unable to launch
all the worker tasks in time. <br /> <br />There might be insufficient
available slots to start all the worker tasks simultaneously.<br /> <br /> Try
splitting up the query into smaller chunks with lesser `maxNumTasks` number.
Another option is to increase capacity. | `numTasks`: The number of tasks
attempted to launch. |
+| <a name="error_TaskStartTimeout">`TaskStartTimeout`</a> | Unable to launch
`numTasks` tasks within `timeout` milliseconds.<br /><br />There may be
insufficient available slots to start all the worker tasks simultaneously. Try
splitting up your query into smaller chunks using a smaller value of
[`maxNumTasks`](#context-parameters). Another option is to increase capacity. |
`numTasks`: The number of tasks attempted to launch.<br /><br />`timeout`:
Timeout, in milliseconds, that was exceeded. |
| <a name="error_TooManyAttemptsForJob">`TooManyAttemptsForJob`</a> | Total
relaunch attempt count across all workers exceeded max relaunch attempt limit.
See the [Limits](#limits) table for the specific limit. | `maxRelaunchCount`:
Max number of relaunches across all the workers defined in the
[Limits](#limits) section. <br /><br /> `currentRelaunchCount`: current
relaunch counter for the job across all workers. <br /><br /> `taskId`: Latest
task id which failed <br /> <br /> `rootError [...]
| <a name="error_TooManyAttemptsForWorker">`TooManyAttemptsForWorker`</a> |
Worker exceeded maximum relaunch attempt count as defined in the
[Limits](#limits) section. |`maxPerWorkerRelaunchCount`: Max number of
relaunches allowed per worker as defined in the [Limits](#limits) section. <br
/><br /> `workerNumber`: the worker number for which the task failed <br /><br
/> `taskId`: Latest task id which failed <br /> <br /> `rootErrorMessage`:
Error message of the latest failed task.|
| <a name="error_TooManyBuckets">`TooManyBuckets`</a> | Exceeded the maximum
number of partition buckets for a stage (5,000 partition buckets).<br />< br
/>Partition buckets are created for each [`PARTITIONED BY`](#partitioned-by)
time chunk for INSERT and REPLACE queries. The most common reason for this
error is that your `PARTITIONED BY` is too narrow relative to your data. |
`maxBuckets`: The limit on partition buckets. |
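
For context on the `TaskStartTimeout` remedy in the new row above: `maxNumTasks` is supplied in the query context rather than in the SQL text. A minimal sketch of a payload for the SQL-based ingestion task endpoint, `/druid/v2/sql/task`; the datasource and column names are hypothetical placeholders:

    {
      "query": "REPLACE INTO \"myDatasource\" OVERWRITE ALL SELECT __time, page FROM \"myDatasource\" PARTITIONED BY DAY",
      "context": {
        "maxNumTasks": 3
      }
    }

Since `maxNumTasks` counts the controller task as well as the workers, this query needs at most three simultaneous task slots: one controller and two workers.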
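
Similarly, a worked example for the `TooManyBuckets` limit: one partition bucket is created per `PARTITIONED BY` time chunk, so a REPLACE spanning twenty years of data with `PARTITIONED BY DAY` creates roughly 20 × 365 ≈ 7,300 buckets and fails, while `PARTITIONED BY MONTH` over the same span creates only 20 × 12 = 240 buckets, well under the 5,000-bucket limit.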
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]