gianm commented on code in PR #15245:
URL: https://github.com/apache/druid/pull/15245#discussion_r1378443905
##########
docs/multi-stage-query/reference.md:
##########
@@ -232,23 +232,25 @@ If you're using the web console, you can specify the context parameters through
The following table lists the context parameters for the MSQ task engine:
-| Parameter | Description | Default value |
-|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---|
-| `maxNumTasks` | SELECT, INSERT, REPLACE<br /><br />The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a `TaskStartTimeout` error code after approximately 10 minutes.<br /><br />May also be provided as `numTasks`. If both are present, `maxNumTasks` takes priority. | 2 |
-| `taskAssignment` | SELECT, INSERT, REPLACE<br /><br />Determines how many tasks to use. Possible values include: <ul><li>`max`: Uses as many tasks as possible, up to `maxNumTasks`.</li><li>`auto`: When file sizes can be determined through directory listing (for example: local files, S3, GCS, HDFS) uses as few tasks as possible without exceeding 512 MiB or 10,000 files per task, unless exceeding these limits is necessary to stay within `maxNumTasks`. When calculating the size of files, the weighted size is used, which considers the file format and compression format used if any. When file sizes cannot be determined through directory listing (for example: http), behaves the same as `max`.</li></ul> | `max` |
-| `finalizeAggregations` | SELECT, INSERT, REPLACE<br /><br />Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see [SQL aggregation functions](../querying/sql-aggregations.md). | true |
-| `sqlJoinAlgorithm` | SELECT, INSERT, REPLACE<br /><br />Algorithm to use for JOIN. Use `broadcast` (the default) for broadcast hash join or `sortMerge` for sort-merge join. Affects all JOIN operations in the query. This is a hint to the MSQ engine and the actual joins in the query may proceed in a different way than specified. See [Joins](#joins) for more details. | `broadcast` |
-| `rowsInMemory` | INSERT or REPLACE<br /><br />Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the [known issues](./known-issues.md) around memory usage. | 100,000 |
+| Parameter | Description | Default value |
+|---|---|---|
+| `maxNumTasks` | SELECT, INSERT, REPLACE<br /><br />The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a `TaskStartTimeout` error code after approximately 10 minutes.<br /><br />May also be provided as `numTasks`. If both are present, `maxNumTasks` takes priority. | 2 |
+| `taskAssignment` | SELECT, INSERT, REPLACE<br /><br />Determines how many tasks to use. Possible values include: <ul><li>`max`: Uses as many tasks as possible, up to `maxNumTasks`.</li><li>`auto`: When file sizes can be determined through directory listing (for example: local files, S3, GCS, HDFS) uses as few tasks as possible without exceeding 512 MiB or 10,000 files per task, unless exceeding these limits is necessary to stay within `maxNumTasks`. When calculating the size of files, the weighted size is used, which considers the file format and compression format used if any. When file sizes cannot be determined through directory listing (for example: http), behaves the same as `max`.</li></ul> | `max` |
+| `finalizeAggregations` | SELECT, INSERT, REPLACE<br /><br />Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see [SQL aggregation functions](../querying/sql-aggregations.md). | true |
+| `arrayIngestMode` | INSERT, REPLACE<br /><br /> Controls how ARRAY type values are stored in Druid segments. When set to `'array'` (recommended for SQL compliance), Druid will store all ARRAY typed values in [ARRAY typed columns](../querying/arrays.md), and supports storing both VARCHAR and numeric typed arrays. When set to `'mvd'` (the default, for backwards compatibility), Druid only supports VARCHAR typed arrays, and will store them as [multi-value string columns](../querying/multi-value-dimensions.md). When set to `none`, Druid will throw an exception when trying to store any type of arrays, used to help migrate operators from `'mvd'` mode to `'array'` mode and force query writers to make an explicit choice between ARRAY and multi-value VARCHAR typed columns. | `'mvd'` (for backwards compatibility, recommended to use `array` for SQL compliance)|
Review Comment:
`array` is preferred over `'array'`. In the JSON it's `"array"` anyway. (But
forget that, use `array`.)
For `none`, is there a way for operators to set a default value? Otherwise
it doesn't seem like it'd be useful for operators. (The useful flow would be
for operators to set a default of `none`, and users to override it to either
`mvd` or `array` as their preference dictates.)
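To illustrate the reviewer's point about `array` vs `'array'`: in a request body the value is a plain JSON string. Below is a sketch of how the context parameter might appear in an MSQ ingestion payload; the table and datasource names are hypothetical, and the exact payload shape is an assumption for illustration.

```python
import json

# Sketch of an MSQ ingestion request body with `arrayIngestMode` in the
# query context. In JSON the value is simply "array" (a JSON string),
# which is why `array` reads better in the docs than `'array'`.
payload = {
    "query": """
        REPLACE INTO example_table OVERWRITE ALL
        SELECT __time, tags
        FROM example_source
        PARTITIONED BY DAY
    """,  # hypothetical query for illustration
    "context": {
        "arrayIngestMode": "array",  # store ARRAY values as ARRAY typed columns
        "maxNumTasks": 2,            # minimum: one controller + one worker
    },
}

body = json.dumps(payload)
print(json.loads(body)["context"]["arrayIngestMode"])  # array
```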
##########
docs/querying/multi-value-dimensions.md:
##########
@@ -61,20 +77,81 @@ By default, Druid sorts values in multi-value dimensions. This behavior is contr
See [Dimension Objects](../ingestion/ingestion-spec.md#dimension-objects) for information on configuring multi-value handling.
+### SQL-based ingestion
+Multi-value dimensions can also be inserted with [SQL-based ingestion](../multi-stage-query/index.md). The multi-stage query engine does not have direct handling of class Druid multi-value dimensions. A special pair of functions, `MV_TO_ARRAY` which converts multi-value dimensions into `VARCHAR ARRAY` and `ARRAY_TO_MV` to coerce them back into `VARCHAR` exist to enable handling these types. Multi-value handling is not available when using the multi-stage query engine to insert data.
Review Comment:
"classic" (spelling)
Although… what does it mean to say that MSQ doesn't have "direct handling of
classic Druid multi-value dimensions"? I would think it does directly handle
them, if you use `ARRAY_TO_MV`? I guess I'm not sure what you're trying to say
here.
Grammar for the sentence starting with "A special pair of functions" is kind
of wonky. Please rewrite it to be clearer.
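As a reference point for the rewrite, here is a toy Python model of the coercion the hunk describes — not Druid internals, just the semantics as the docs text reads: `MV_TO_ARRAY` turns a multi-value VARCHAR into a VARCHAR ARRAY, and `ARRAY_TO_MV` coerces a VARCHAR ARRAY back. The function names mirror the SQL functions; the list-of-strings representation is an assumption for illustration.

```python
# Toy model (not Druid code) of MV_TO_ARRAY / ARRAY_TO_MV semantics.

def mv_to_array(mv_value):
    """Model MV_TO_ARRAY: a multi-value VARCHAR becomes a VARCHAR ARRAY."""
    if mv_value is None:
        return None
    if isinstance(mv_value, list):
        return [str(v) for v in mv_value]
    return [str(mv_value)]  # a single value becomes a one-element array

def array_to_mv(arr):
    """Model ARRAY_TO_MV: coerce a VARCHAR ARRAY back to multi-value VARCHAR."""
    if arr is None:
        return None
    return [str(v) for v in arr]

# Round-trip: ARRAY_TO_MV(MV_TO_ARRAY(x)) preserves the values.
tags = ["a", "b", "c"]
print(array_to_mv(mv_to_array(tags)))  # ['a', 'b', 'c']
```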
##########
docs/querying/sql-data-types.md:
##########
@@ -75,6 +75,17 @@ Casts between two SQL types that have different Druid runtime types generate a r
If a value cannot be cast to the target type, as in `CAST('foo' AS BIGINT)`, Druid a substitutes [NULL](#null-values).
When `druid.generic.useDefaultValueForNull = true` (legacy mode), Druid instead substitutes a default value, including when NULL values cast to non-nullable types. For example, if `druid.generic.useDefaultValueForNull = true`, a null VARCHAR cast to BIGINT is converted to a zero.
+## Arrays
+
+Druid supports [`ARRAY` types](arrays.md), which behave as standard SQL arrays, where results are grouped by matching entire arrays. The [`UNNEST` operator](./sql-array-functions.md#unn) can be used to perform operations on individual array elements, translating each element into a separate row.
+
+`ARRAY` typed columns can be stored in segments with class JSON based ingestion using the 'auto' typed dimension schema shared with [schema auto-discovery](../ingestion/schema-design.md#schema-auto-discovery-for-dimensions) to detect and ingest arrays as ARRAY typed columns. For [SQL based ingestion](../multi-stage-query/index.md), the query context parameter `arrayIngestMode` must be specified as `"array"` to ingest ARRAY types. In Druid 28, the default mode for this parameter is `"mvd"` for backwards compatibility, which instead can only handle `ARRAY<STRING>` which it stores in [multi-value string columns](#multi-value-strings).
Review Comment:
I don't think the word "class" is useful here. I suppose you meant
"classic", but JSON based ingestion isn't entirely "classic" / "legacy"; for
example it's the only way to do realtime still.
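The cast-failure behavior the sql-data-types.md hunk documents can be sketched as a small model — `cast_varchar_to_bigint` is a hypothetical helper, not Druid code, and simply restates the rule: failed casts yield NULL, or a zero default in legacy mode (`druid.generic.useDefaultValueForNull = true`).

```python
def cast_varchar_to_bigint(value, use_default_value_for_null=False):
    """Toy model of CAST(x AS BIGINT): on failure, substitute NULL,
    or 0 when legacy mode (useDefaultValueForNull = true) is active."""
    default = 0 if use_default_value_for_null else None
    if value is None:
        return default
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

print(cast_varchar_to_bigint("foo"))        # None: CAST('foo' AS BIGINT) -> NULL
print(cast_varchar_to_bigint("foo", True))  # 0: legacy default substitution
print(cast_varchar_to_bigint(None, True))   # 0: null VARCHAR cast to BIGINT
```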
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]