clintropolis commented on a change in pull request #11188:
URL: https://github.com/apache/druid/pull/11188#discussion_r624833355
##########
File path: extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/sql/QuantileSqlAggregatorTest.java
##########
@@ -127,262 +113,195 @@ public SpecificSegmentsQuerySegmentWalker createQuerySegmentWalker() throws IOEx
}
@Override
- public List<Object[]> getResults(
- final PlannerConfig plannerConfig,
- final Map<String, Object> queryContext,
- final List<SqlParameter> parameters,
- final String sql,
- final AuthenticationResult authenticationResult
- ) throws Exception
+ public DruidOperatorTable createOperatorTable()
{
- return getResults(
- plannerConfig,
- queryContext,
- parameters,
- sql,
- authenticationResult,
- OPERATOR_TABLE,
- CalciteTests.createExprMacroTable(),
- CalciteTests.TEST_AUTHORIZER_MAPPER,
- CalciteTests.getJsonMapper()
- );
- }
-
- private SqlLifecycle getSqlLifecycle()
- {
- return getSqlLifecycleFactory(
- BaseCalciteQueryTest.PLANNER_CONFIG_DEFAULT,
- OPERATOR_TABLE,
- CalciteTests.createExprMacroTable(),
- CalciteTests.TEST_AUTHORIZER_MAPPER,
- CalciteTests.getJsonMapper()
- ).factorize();
+ return OPERATOR_TABLE;
}
@Test
public void testQuantileOnFloatAndLongs() throws Exception
{
- SqlLifecycle sqlLifecycle = getSqlLifecycle();
-
- final String sql = "SELECT\n"
- + "APPROX_QUANTILE(m1, 0.01),\n"
- + "APPROX_QUANTILE(m1, 0.5, 50),\n"
- + "APPROX_QUANTILE(m1, 0.98, 200),\n"
- + "APPROX_QUANTILE(m1, 0.99),\n"
- + "APPROX_QUANTILE(m1 * 2, 0.97),\n"
- + "APPROX_QUANTILE(m1, 0.99) FILTER(WHERE dim1 =
'abc'),\n"
- + "APPROX_QUANTILE(m1, 0.999) FILTER(WHERE dim1 <>
'abc'),\n"
- + "APPROX_QUANTILE(m1, 0.999) FILTER(WHERE dim1 =
'abc'),\n"
- + "APPROX_QUANTILE(cnt, 0.5)\n"
- + "FROM foo";
-
- // Verify results
- final List<Object[]> results = sqlLifecycle.runSimple(
- sql,
- TIMESERIES_CONTEXT_DEFAULT,
- DEFAULT_PARAMETERS,
- AUTH_RESULT
- ).toList();
- final List<Object[]> expectedResults = ImmutableList.of(
- new Object[]{
- 1.0,
- 3.0,
- 5.880000114440918,
- 5.940000057220459,
- 11.640000343322754,
- 6.0,
- 4.994999885559082,
- 6.0,
- 1.0
- }
- );
- Assert.assertEquals(expectedResults.size(), results.size());
- for (int i = 0; i < expectedResults.size(); i++) {
- Assert.assertArrayEquals(expectedResults.get(i), results.get(i));
- }
-
- // Verify query
- Assert.assertEquals(
- Druids.newTimeseriesQueryBuilder()
- .dataSource(CalciteTests.DATASOURCE1)
- .intervals(new MultipleIntervalSegmentSpec(ImmutableList.of(Filtration.eternity())))
- .granularity(Granularities.ALL)
- .virtualColumns(
- new ExpressionVirtualColumn(
- "v0",
- "(\"m1\" * 2)",
- ValueType.FLOAT,
- TestExprMacroTable.INSTANCE
+ cannotVectorize();
Review comment:
I think the fact that this cannot vectorize is actually a bug, btw. The aggregator's
`canVectorize` method takes a `ColumnInspector`, but ([in
`TimeseriesQueryEngine` at
least](https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryEngine.java#L98))
that inspector is the segment adapter itself, not wrapped with the query's
virtual columns. It therefore finds no column capabilities for `v0`, which
doesn't exist on the actual segment, and the aggregator reports that it cannot
vectorize. The engine itself already confirms that `v0` can vectorize, so the
aggregator shouldn't need to worry about that, but we might want a way to wrap
the adapter in a new `VirtualizedColumnInspector` or something similar (think
`VirtualColumns.wrap`, but for a `ColumnInspector` instead of a
`ColumnSelectorFactory`) so that the column capabilities of the virtual columns
are available to the aggregator when it decides whether or not it can
vectorize.
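
For illustration, here is a minimal sketch of what such a wrapper might look like. This is not code from this PR: the class name `VirtualizedColumnInspector` just follows the suggestion above, and the sketch assumes `ColumnInspector` exposes `getColumnCapabilities(String)` and that `VirtualColumns.getVirtualColumn(String)` and `VirtualColumn.capabilities(String)` are available as described.

```java
import javax.annotation.Nullable;
import org.apache.druid.segment.ColumnInspector;
import org.apache.druid.segment.VirtualColumn;
import org.apache.druid.segment.VirtualColumns;
import org.apache.druid.segment.column.ColumnCapabilities;

// Hypothetical sketch, not the actual PR implementation: a ColumnInspector
// that consults the query's virtual columns before falling back to the
// underlying segment adapter, analogous to VirtualColumns.wrap for
// ColumnSelectorFactory.
public class VirtualizedColumnInspector implements ColumnInspector
{
  private final ColumnInspector baseInspector;
  private final VirtualColumns virtualColumns;

  public VirtualizedColumnInspector(ColumnInspector baseInspector, VirtualColumns virtualColumns)
  {
    this.baseInspector = baseInspector;
    this.virtualColumns = virtualColumns;
  }

  @Nullable
  @Override
  public ColumnCapabilities getColumnCapabilities(String column)
  {
    // If the column is a virtual column (such as "v0" above), report its
    // capabilities; otherwise defer to the segment adapter as before.
    final VirtualColumn virtualColumn = virtualColumns.getVirtualColumn(column);
    if (virtualColumn != null) {
      return virtualColumn.capabilities(column);
    }
    return baseInspector.getColumnCapabilities(column);
  }
}
```

With something like this, an engine such as `TimeseriesQueryEngine` could pass `new VirtualizedColumnInspector(adapter, query.getVirtualColumns())` to the aggregator's `canVectorize` check instead of the bare adapter.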