amaechler commented on code in PR #13308:
URL: https://github.com/apache/druid/pull/13308#discussion_r1014229196


##########
docs/multi-stage-query/reference.md:
##########
@@ -226,32 +226,32 @@ The following table lists query limits:
 
 The following table describes error codes you may encounter in the `multiStageQuery.payload.status.errorReport.error.errorCode` field:
 
-|Code|Meaning|Additional fields|
-|----|-----------|----|
-|  BroadcastTablesTooLarge  | The size of the broadcast tables, used in right hand side of the joins, exceeded the memory reserved for them in a worker task.<br /><br />Try increasing the peon memory or reducing the size of the broadcast tables. | `maxBroadcastTablesSize`: Memory reserved for the broadcast tables, measured in bytes. |
-|  Canceled  |  The query was canceled. Common reasons for cancellation:<br /><br /><ul><li>User-initiated shutdown of the controller task via the `/druid/indexer/v1/task/{taskId}/shutdown` API.</li><li>Restart or failure of the server process that was running the controller task.</li></ul>|    |
-|  CannotParseExternalData |  A worker task could not parse data from an external datasource.  |    |
-|  ColumnNameRestricted|  The query uses a restricted column name.  |    |
-|  ColumnTypeNotSupported|  Support for writing or reading from a particular column type is not supported. |    |
-|  ColumnTypeNotSupported | The query attempted to use a column type that is not supported by the frame format. This occurs with ARRAY types, which are not yet implemented for frames.  | `columnName`<br /> <br />`columnType`   |
-|  InsertCannotAllocateSegment |  The controller task could not allocate a new segment ID due to conflict with existing segments or pending segments. Common reasons for such conflicts:<br /> <br /><ul><li>Attempting to mix different granularities in the same intervals of the same datasource.</li><li>Prior ingestions that used non-extendable shard specs.</li></ul>| `dataSource`<br /> <br />`interval`: The interval for the attempted new segment allocation.  |
-|  InsertCannotBeEmpty |  An INSERT or REPLACE query did not generate any output rows in a situation where output rows are required for success. This can happen for INSERT or REPLACE queries with `PARTITIONED BY` set to something other than `ALL` or `ALL TIME`.  |  `dataSource`  |
-|  InsertCannotOrderByDescending  |  An INSERT query contained a `CLUSTERED BY` expression in descending order. Druid's segment generation code only supports ascending order.  |   `columnName` |
-|  InsertCannotReplaceExistingSegment |  A REPLACE query cannot proceed because an existing segment partially overlaps those bounds, and the portion within the bounds is not fully overshadowed by query results. <br /> <br />There are two ways to address this without modifying your query:<ul><li>Shrink the OVERLAP filter to match the query results.</li><li>Expand the OVERLAP filter to fully contain the existing segment.</li></ul>| `segmentId`: The existing segment. |
-|  InsertLockPreempted  | An INSERT or REPLACE query was canceled by a higher-priority ingestion job, such as a real-time ingestion task.  | |
-|  InsertTimeNull  | An INSERT or REPLACE query encountered a null timestamp in the `__time` field.<br /><br />This can happen due to using an expression like `TIME_PARSE(timestamp) AS __time` with a timestamp that cannot be parsed. (TIME_PARSE returns null when it cannot parse a timestamp.) In this case, try parsing your timestamps using a different function or pattern.<br /><br />If your timestamps may genuinely be null, consider using COALESCE to provide a default value. One option is CURRENT_TIMESTAMP, which represents the start time of the job. |   |
-| InsertTimeOutOfBounds  |  A REPLACE query generated a timestamp outside the bounds of the TIMESTAMP parameter for your OVERWRITE WHERE clause.<br /> <br />To avoid this error, verify that the `OVERWRITE WHERE` clause you specified is valid.  |  `interval`: The time chunk interval corresponding to the out-of-bounds timestamp.  |
-|  InvalidNullByte  | A string column included a null byte. Null bytes in strings are not permitted. |  `column`: The column that included the null byte. |
-| QueryNotSupported   | QueryKit could not translate the provided native query to a multi-stage query.<br /> <br />This can happen if the query uses features that aren't supported, like GROUPING SETS. |    |
-|  RowTooLarge  |  The query tried to process a row that was too large to write to a single frame. See the [Limits](#limits) table for the specific limit on frame size. Note that the effective maximum row size is smaller than the maximum frame size due to alignment considerations during frame writing.  |   `maxFrameSize`: The limit on the frame size. |
-|  TaskStartTimeout  | Unable to launch all the worker tasks in time. <br /> <br />There might be insufficient available slots to start all the worker tasks simultaneously.<br /> <br /> Try splitting up the query into smaller chunks with a smaller `maxNumTasks` value. Another option is to increase capacity.  | |
-|  TooManyBuckets  |  Exceeded the number of partition buckets for a stage. Partition buckets are only used for `segmentGranularity` during INSERT queries. The most common reason for this error is that your `segmentGranularity` is too narrow relative to the data. See the [Limits](#limits) table for the specific limit.  |  `maxBuckets`: The limit on buckets.  |
-| TooManyInputFiles | Exceeded the number of input files/segments per worker. See the [Limits](#limits) table for the specific limit. | `numInputFiles`: The total number of input files/segments for the stage.<br /><br />`maxInputFiles`: The maximum number of input files/segments per worker per stage.<br /><br />`minNumWorkers`: The minimum number of workers required for a successful run. |
-|  TooManyPartitions   |  Exceeded the number of partitions for a stage. The most common reason for this is that the final stage of an INSERT or REPLACE query generated too many segments. See the [Limits](#limits) table for the specific limit.  | `maxPartitions`: The limit on partitions which was exceeded.  |
-|  TooManyColumns |  Exceeded the number of columns for a stage. See the [Limits](#limits) table for the specific limit.  | `maxColumns`: The limit on columns which was exceeded.  |
-|  TooManyWarnings |  Exceeded the allowed number of warnings of a particular type. | `rootErrorCode`: The error code corresponding to the exception that exceeded the required limit. <br /><br />`maxWarnings`: Maximum number of warnings that are allowed for the corresponding `rootErrorCode`.   |
-|  TooManyWorkers |  Exceeded the supported number of workers running simultaneously. See the [Limits](#limits) table for the specific limit.  | `workers`: The number of simultaneously running workers that exceeded a hard or soft limit. This may be larger than the number of workers in any one stage if multiple stages are running simultaneously. <br /><br />`maxWorkers`: The hard or soft limit on workers that was exceeded.  |
-|  NotEnoughMemory  |  Insufficient memory to launch a stage.  |  `serverMemory`: The amount of memory available to a single process.<br /><br />`serverWorkers`: The number of workers running in a single process.<br /><br />`serverThreads`: The number of threads in a single process.  |
-|  WorkerFailed  |  A worker task failed unexpectedly.  |  `workerTaskId`: The ID of the worker task.  |
-|  WorkerRpcFailed  |  A remote procedure call to a worker task failed and could not recover.  |  `workerTaskId`: The ID of the worker task.  |
-|  UnknownError   |  All other errors.  |    |
+| Code | Meaning | Additional fields |
+|---|---|---|
+| `BroadcastTablesTooLarge` | The size of the broadcast tables used in the right-hand side of the join exceeded the memory reserved for them in a worker task.<br /><br />Try increasing the peon memory or reducing the size of the broadcast tables. | `maxBroadcastTablesSize`: Memory reserved for the broadcast tables, measured in bytes. |
+| `Canceled` | The query was canceled. Common reasons for cancellation:<br /><br /><ul><li>User-initiated shutdown of the controller task via the `/druid/indexer/v1/task/{taskId}/shutdown` API.</li><li>Restart or failure of the server process that was running the controller task.</li></ul>| |
+| `CannotParseExternalData` | A worker task could not parse data from an external datasource. | `errorMessage`: More details on why parsing failed. |
+| `ColumnNameRestricted` | The query uses a restricted column name. | `columnName`: The restricted column name. |
+| `ColumnTypeNotSupported` | Support for writing or reading from a particular column type is not supported. | `columnName`: The column name with an unknown type. |

Review Comment:
   Thanks @cryptoe. I tried to combine them into one row using a list. 
@techdocsmith Does that wording make sense?
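
   As an aside, the COALESCE suggestion in the `InsertTimeNull` row might be easier to follow with a short sketch. This is only an illustration, not text from the docs; the `events` datasource and its `ts` column are made-up names:

   ```sql
   -- Fall back to the job's start time when TIME_PARSE cannot parse the value,
   -- so the INSERT does not fail with InsertTimeNull.
   INSERT INTO "events_ingested"
   SELECT
     COALESCE(TIME_PARSE("ts"), CURRENT_TIMESTAMP) AS __time,
     "page"
   FROM "events"
   PARTITIONED BY DAY
   ```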



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

