somu-imply opened a new pull request, #15075:
URL: https://github.com/apache/druid/pull/15075
Previously, a query such as
```
with t1 as (
  select * from foo, unnest(MV_TO_ARRAY("dim3")) as u(d3)
)
select * from t1 JOIN "numfoo" as t2
  ON t1.d3 = t2."dim1"
```
would be planned as a join between a query data source on the left and a
query data source on the right. Although the results were correct, this limits
performance: query data sources are materialized at the Broker, where the
number of rows is capped by `maxSubqueryRows`.
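The Broker-side cap described above can be illustrated with a minimal sketch. This is not Druid's actual implementation; it only models the idea that a subquery's rows are buffered at the Broker and rejected once they exceed `maxSubqueryRows`, whereas a join pushed down to data servers never buffers the full result at the Broker (the class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class SubqueryLimitDemo {
  // Buffer subquery rows "at the Broker", failing once the cap is exceeded.
  static List<Integer> materializeSubquery(int rowCount, int maxSubqueryRows) {
    List<Integer> buffered = new ArrayList<>();
    for (int i = 0; i < rowCount; i++) {
      if (buffered.size() >= maxSubqueryRows) {
        throw new IllegalStateException("subquery exceeded maxSubqueryRows");
      }
      buffered.add(i);
    }
    return buffered;
  }

  public static void main(String[] args) {
    // A small subquery fits under the cap.
    System.out.println(materializeSubquery(100, 1000).size());   // 100
    // A large one fails, which is why planning to a query data source
    // unnecessarily limits how much data the join can process.
    try {
      materializeSubquery(5000, 1000);
    } catch (IllegalStateException e) {
      System.out.println("hit row cap");
    }
  }
}
```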
Additionally, native queries like
```
{
  "queryType" : "scan",
  "dataSource" : {
    "type" : "join",
    "left" : {
      "type" : "unnest",
      "base" : {
        "type" : "table",
        "name" : "foo"
      },
      "virtualColumn" : {
        "type" : "expression",
        "name" : "j0.unnest",
        "expression" : "\"dim3\"",
        "outputType" : "STRING"
      },
      "unnestFilter" : null
    },
    "right" : {
      "type" : "query",
      "query" : {
        "queryType" : "scan",
        "dataSource" : {
          "type" : "table",
          "name" : "numfoo"
        },
        "intervals" : {
          "type" : "intervals",
          "intervals" : [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ]
        },
        "resultFormat" : "compactedList",
        "columns" : [ "__time", "cnt", "d1", "d2", "dim1", "dim2", "dim3", "dim4", "dim5", "dim6", "f1", "f2", "l1", "l2", "m1", "m2", "unique_dim1" ],
        "legacy" : false,
        "context" : {
          "defaultTimeout" : 300000,
          "maxScatterGatherBytes" : 9223372036854775807,
          "sqlCurrentTimestamp" : "2000-01-01T00:00:00Z",
          "sqlQueryId" : "dummy",
          "vectorSize" : 2,
          "vectorize" : "force",
          "vectorizeVirtualColumns" : "force"
        },
        "granularity" : {
          "type" : "all"
        }
      }
    },
    "rightPrefix" : "_j0.",
    "condition" : "(\"j0.unnest\" == \"_j0.dim1\")",
    "joinType" : "INNER"
  },
  "intervals" : {
    "type" : "intervals",
    "intervals" : [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ]
  },
  "resultFormat" : "compactedList",
  "columns" : [ "__time", "_j0.__time", "_j0.cnt", "_j0.d1", "_j0.d2", "_j0.dim1", "_j0.dim2", "_j0.dim3", "_j0.dim4", "_j0.dim5", "_j0.dim6", "_j0.f1", "_j0.f2", "_j0.l1", "_j0.l2", "_j0.m1", "_j0.m2", "_j0.unique_dim1", "cnt", "dim1", "dim2", "dim3", "j0.unnest", "m1", "m2", "unique_dim1" ],
  "legacy" : false,
  "context" : {
    "defaultTimeout" : 300000,
    "maxScatterGatherBytes" : 9223372036854775807,
    "sqlCurrentTimestamp" : "2000-01-01T00:00:00Z",
    "sqlQueryId" : "dummy",
    "vectorSize" : 2,
    "vectorize" : "force",
    "vectorizeVirtualColumns" : "force"
  },
  "granularity" : {
    "type" : "all"
  }
}
```
would fail with the error
```
java.lang.ClassCastException: org.apache.druid.query.UnnestDataSource cannot be cast to org.apache.druid.query.TableDataSource
```
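The shape of this failure can be reproduced with a minimal model. The classes below are deliberately simplified stand-ins, not Druid's real `DataSource` hierarchy; they only show why an unconditional cast to a table type breaks as soon as an unnest wraps the join's left side:

```java
// Hypothetical, simplified stand-ins for Druid's datasource types.
interface DataSource {}

class TableDataSource implements DataSource {
  final String name;
  TableDataSource(String name) { this.name = name; }
}

class UnnestDataSource implements DataSource {
  final DataSource base;
  UnnestDataSource(DataSource base) { this.base = base; }
}

public class CastDemo {
  // Pre-fix pattern: assumes the join's left side is always a plain table.
  static String tableNameBroken(DataSource left) {
    return ((TableDataSource) left).name;  // throws for UnnestDataSource
  }

  public static void main(String[] args) {
    DataSource left = new UnnestDataSource(new TableDataSource("foo"));
    try {
      tableNameBroken(left);
      System.out.println("no exception");
    } catch (ClassCastException e) {
      System.out.println("ClassCastException");  // this branch is taken
    }
  }
}
```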
This PR addresses the problem as follows:
1. Refactor `getAnalysis()` in `JoinDataSource` to correctly use the base
datasource when the left-hand side of a join is an `UnnestDataSource`
2. Update `createSegmentMapFunction()` in `JoinDataSource` to apply the
segment map function correctly
3. Add machinery for producing the correct query plan
4. Add unit tests to cover this case
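The essence of point 1 can be sketched as follows. This is an illustrative simplification under the same hypothetical classes as above, not the actual `getAnalysis()` code: instead of casting the left side to a table, unwrap any unnest layers until the underlying base datasource is reached:

```java
// Hypothetical, simplified stand-ins for Druid's datasource types.
interface DataSource {}

class TableDataSource implements DataSource {
  final String name;
  TableDataSource(String name) { this.name = name; }
}

class UnnestDataSource implements DataSource {
  final DataSource base;
  UnnestDataSource(DataSource base) { this.base = base; }
}

public class AnalysisFixDemo {
  // Post-fix pattern: walk through unnest wrappers to find the base table.
  static TableDataSource baseTable(DataSource ds) {
    while (ds instanceof UnnestDataSource) {
      ds = ((UnnestDataSource) ds).base;
    }
    if (ds instanceof TableDataSource) {
      return (TableDataSource) ds;
    }
    throw new IllegalArgumentException("left side has no base table");
  }

  public static void main(String[] args) {
    DataSource left = new UnnestDataSource(new TableDataSource("foo"));
    System.out.println(baseTable(left).name);  // foo
  }
}
```

With the base table recoverable, the join can be planned against the table (with the unnest applied via the segment map function) instead of falling back to a Broker-materialized query data source.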
This PR has:
- [ ] been self-reviewed.
- [ ] using the [concurrency
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
(Remove this item if the PR doesn't have any relation to concurrency.)
- [ ] added documentation for new or modified features or behaviors.
- [ ] a release note entry in the PR description.
- [ ] added Javadocs for most classes and all non-trivial methods. Linked
related entities via Javadoc links.
- [ ] added or updated version, license, or notice information in
[licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
- [ ] added comments explaining the "why" and the intent of the code
wherever would not be obvious for an unfamiliar reader.
- [ ] added unit tests or modified existing tests to cover new code paths,
ensuring the threshold for [code
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
is met.
- [ ] added integration tests.
- [ ] been tested in a test Druid cluster.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]