gargvishesh commented on code in PR #16291:
URL: https://github.com/apache/druid/pull/16291#discussion_r1615444796
##########
extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/exec/ControllerImpl.java:
##########
@@ -1853,25 +1864,42 @@ private static Function<Set<DataSegment>, Set<DataSegment>> addCompactionStateTo
       GranularitySpec granularitySpec = new UniformGranularitySpec(
           segmentGranularity,
-          dataSchema.getGranularitySpec().getQueryGranularity(),
+          QueryContext.of(task.getContext()).getGranularity(DruidSqlInsert.SQL_INSERT_QUERY_GRANULARITY, jsonMapper),
           dataSchema.getGranularitySpec().isRollup(),
-          dataSchema.getGranularitySpec().inputIntervals()
+          ((DataSourceMSQDestination) task.getQuerySpec().getDestination()).getReplaceTimeChunks()
       );
-      DimensionsSpec dimensionsSpec = dataSchema.getDimensionsSpec();
       Map<String, Object> transformSpec = TransformSpec.NONE.equals(dataSchema.getTransformSpec())
           ? null
           : new ClientCompactionTaskTransformSpec(
               dataSchema.getTransformSpec().getFilter()
           ).asMap(jsonMapper);
-      List<Object> metricsSpec = dataSchema.getAggregators() == null
-          ? null
-          : jsonMapper.convertValue(dataSchema.getAggregators(), new TypeReference<List<Object>>() {});
+      DimensionsSpec dimensionsSpec;
+      List<Object> metricsSpec;
+
+      if (task.getQuerySpec().getQuery() instanceof GroupByQuery) {
Review Comment:
Reverted to using MSQ's `DataSchema`. Added a
`dimensionToAggregatorFactoryMap` to help find aggregator factories that
have been converted to dimensions, and updated the compaction-state
comparison logic.
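
For illustration only, a minimal sketch of the lookup idea described above. This is not the PR's actual code: the `AggregatorFactory` record below is a hypothetical stand-in for Druid's `AggregatorFactory` class, and the map's name and structure are assumptions based on the comment.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DimensionAggregatorMapSketch
{
  // Hypothetical stand-in for Druid's AggregatorFactory; only the
  // output name matters for the lookup sketched here.
  record AggregatorFactory(String name, String type) {}

  public static void main(String[] args)
  {
    List<AggregatorFactory> aggregators = List.of(
        new AggregatorFactory("sum_added", "longSum"),
        new AggregatorFactory("count", "count")
    );

    // Assumed shape of dimensionToAggregatorFactoryMap: keyed by output
    // column name, so an aggregator whose result was persisted as a
    // dimension can still be matched when comparing compaction states.
    Map<String, AggregatorFactory> dimensionToAggregatorFactoryMap = new HashMap<>();
    for (AggregatorFactory factory : aggregators) {
      dimensionToAggregatorFactoryMap.put(factory.name(), factory);
    }

    // A column named "sum_added" resolves back to its aggregator factory.
    System.out.println(dimensionToAggregatorFactoryMap.containsKey("sum_added"));
  }
}
```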
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]