This is an automated email from the ASF dual-hosted git repository.
victoria pushed a commit to branch vtlim-patch-2
in repository https://gitbox.apache.org/repos/asf/druid.git
The following commit(s) were added to refs/heads/vtlim-patch-2 by this push:
new b929db6e644 use .md link in reference.md
b929db6e644 is described below
commit b929db6e644af10be5fd6adc361613c0d7bba384
Author: Victoria Lim <[email protected]>
AuthorDate: Tue Jul 22 15:18:07 2025 -0700
use .md link in reference.md
---
docs/multi-stage-query/reference.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/multi-stage-query/reference.md
b/docs/multi-stage-query/reference.md
index 5e4e3a5a309..1bd82f00efe 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -403,7 +403,7 @@ The following table lists the context parameters for the
MSQ task engine:
| `sqlJoinAlgorithm` | SELECT, INSERT, REPLACE<br /><br />Algorithm to use for
JOIN. Use `broadcast` (the default) for broadcast hash join or `sortMerge` for
sort-merge join. Affects all JOIN operations in the query. This is a hint to
the MSQ engine; the actual joins may proceed differently than specified. See
[Joins](#joins) for more details. | `broadcast` |
| `rowsInMemory` | INSERT or REPLACE<br /><br />Maximum number of rows to
store in memory at once before flushing to disk during the segment generation
process. Ignored for non-INSERT queries. In most cases, use the default value.
You may need to override the default if you run into one of the [known
issues](./known-issues.md) around memory usage. | 100,000 |
| `segmentSortOrder` | INSERT or REPLACE<br /><br />Normally, Druid sorts rows
in individual segments using `__time` first, followed by the [CLUSTERED
BY](#clustered-by) clause. When you set `segmentSortOrder`, Druid uses the
order from this context parameter instead. Provide the column list as
comma-separated values or as a JSON array in string form.<br /><br />For
example, consider an INSERT query that uses `CLUSTERED BY country` and has
`segmentSortOrder` set to `__time,city,country`. [...]
-| `forceSegmentSortByTime` | INSERT or REPLACE<br /><br />When set to `true`
(the default), Druid prepends `__time` to [CLUSTERED BY](#clustered-by) when
determining the sort order for individual segments. Druid also requires that
`segmentSortOrder`, if provided, starts with `__time`.<br /><br />When set to
`false`, Druid uses the [CLUSTERED BY](#clustered-by) clause alone to determine
the sort order for individual segments, and does not require that
sort order for individual segments, and does not require that
`segmentSortOrder` begin with `__time`. Sett [...]
+| `forceSegmentSortByTime` | INSERT or REPLACE<br /><br />When set to `true`
(the default), Druid prepends `__time` to [CLUSTERED BY](#clustered-by) when
determining the sort order for individual segments. Druid also requires that
`segmentSortOrder`, if provided, starts with `__time`.<br /><br />When set to
`false`, Druid uses the [CLUSTERED BY](#clustered-by) clause alone to determine
the sort order for individual segments, and does not require that
sort order for individual segments, and does not require that
`segmentSortOrder` begin with `__time`. Sett [...]
| `maxParseExceptions` | SELECT, INSERT, REPLACE<br /><br />Maximum number of
parse exceptions to ignore while executing the query before it fails with
`TooManyWarningsFault`. To ignore all parse exceptions, set the value to -1.
| 0 |
| `rowsPerSegment` | INSERT or REPLACE<br /><br />The number of rows per
segment to target. The actual number of rows per segment may be somewhat higher
or lower than this number. In most cases, use the default. For general
information about sizing rows per segment, see [Segment Size
Optimization](../operations/segment-optimization.md). | 3,000,000 |
| `indexSpec` | INSERT or REPLACE<br /><br />An
[`indexSpec`](../ingestion/ingestion-spec.md#indexspec) to use when generating
segments. May be a JSON string or object. See [Front
coding](../ingestion/ingestion-spec.md#front-coding) for details on configuring
an `indexSpec` with front coding. | See
[`indexSpec`](../ingestion/ingestion-spec.md#indexspec). |
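For context (not part of this patch), the parameters in the table above are
supplied in the `context` object of an MSQ query payload. A minimal sketch,
assuming a hypothetical INSERT query and illustrative datasource names:

```python
import json

# Illustrative MSQ task payload; the query text and table names are made up,
# and the context values shown are the documented defaults.
payload = {
    "query": """
        INSERT INTO example_rollup
        SELECT __time, country, city
        FROM example_source
        PARTITIONED BY DAY
        CLUSTERED BY country
    """,
    "context": {
        "sqlJoinAlgorithm": "broadcast",   # default; or "sortMerge"
        "rowsInMemory": 100000,            # rows buffered before flushing to disk
        "forceSegmentSortByTime": True,    # default; __time leads the sort order
        "maxParseExceptions": 0,           # fail on the first parse exception
        "rowsPerSegment": 3000000,         # target rows per generated segment
    },
}

# The context serializes alongside the query in the task submission body.
print(json.dumps(payload["context"], indent=2))
```

The payload is typically POSTed as JSON to the Druid SQL task endpoint; see the
Druid API documentation for the exact submission path and response shape.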
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]