kaijianding commented on a change in pull request #11379:
URL: https://github.com/apache/druid/pull/11379#discussion_r661145408



##########
File path: sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidQuery.java
##########
@@ -977,6 +977,33 @@ public GroupByQuery toGroupByQuery()
       // Cannot handle zero or negative limits.
       return null;
     }
+    Map<String, Object> theContext = plannerContext.getQueryContext();
+
+    Granularity queryGranularity = null;
+
+    if (!grouping.getDimensions().isEmpty()) {
+      for (DimensionExpression dimensionExpression : grouping.getDimensions()) 
{
+        Granularity granularity = Expressions.toQueryGranularity(
+            dimensionExpression.getDruidExpression(),
+            plannerContext.getExprMacroTable()
+        );
+        if (granularity == null) {
+          continue;
+        }
+        if (queryGranularity != null) {
+          // group by more than one timestamp_floor
+          // eg: group by timestamp_floor(__time to 
DAY),timestamp_floor(__time, to HOUR)
+          theContext = plannerContext.getQueryContext();
+          break;
+        }
+        queryGranularity = granularity;
+        int timestampDimensionIndexInDimensions = 
grouping.getDimensions().indexOf(dimensionExpression);
+        theContext = new HashMap<>(plannerContext.getQueryContext());
+        theContext.put(GroupByQuery.CTX_TIMESTAMP_RESULT_FIELD, 
dimensionExpression.getOutputName());
+        theContext.put(GroupByQuery.CTX_TIMESTAMP_RESULT_FIELD_GRANULARITY, 
queryGranularity);

Review comment:
       Yes, this was also my first thought.
   I tried this approach of rewriting the GroupBy query's dimensions and granularity. But I later found it very difficult to make `QueryMaker.remapFields()` correctly handle the new dimensions without the time_floor dimension, and to add the time_floor dimension back when dealing with the postAggs and the nested groupBy query.
   
   So I look at it from another perspective: the timestamp_result_field mechanism is similar to the universalTimestamp solution; it is an optimization of the groupBy process. The compute nodes (historical and realtime nodes) use the new granularity and new dimensions, and add the time_floor dimension back when dealing with the postAggs. The SQL layer is not aware of this optimization.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


