[ https://issues.apache.org/jira/browse/TRAFODION-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042157#comment-15042157 ]

Qifan Chen commented on TRAFODION-1677:
---------------------------------------

I checked the work method for hash group by.  It is written specifically for a 
single aggregate expression, and altering it to handle multiple aggregates 
would be considerable extra work. 

So, to handle multiple aggregate expressions in the hash group by operator 
without much code change, we could use threads, each working on one aggregate 
expression and relying on the existing method.  

This would add a requirement to support threads in yet another operator. 
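
To make the idea concrete, here is a minimal sketch of the one-thread-per-aggregate 
approach. The Row and AggregateExpr types and the function names are illustrative 
stand-ins only, not Trafodion classes; in the real operator each thread would drive 
the existing single-aggregate work method rather than this toy accumulate().

// Sketch only: each thread evaluates exactly one aggregate expression over the
// same buffered input, so the single-aggregate code path is reused unchanged.
#include <functional>
#include <thread>
#include <vector>

struct Row {
  int groupKey;     // stand-in for the grouping columns
  double value;     // stand-in for the measure column
};

struct AggregateExpr {
  double total = 0.0;                               // toy per-aggregate state
  void accumulate(const Row &r) { total += r.value; }
};

// Evaluate one aggregate over the whole input (the existing, single-aggregate path).
static void evalOneAggregate(AggregateExpr &agg, const std::vector<Row> &input) {
  for (const Row &r : input)
    agg.accumulate(r);
}

// Fan out: one worker thread per aggregate expression, then join them all.
void evalAllAggregates(std::vector<AggregateExpr> &aggs,
                       const std::vector<Row> &input) {
  std::vector<std::thread> workers;
  workers.reserve(aggs.size());
  for (AggregateExpr &agg : aggs)
    workers.emplace_back(evalOneAggregate, std::ref(agg), std::cref(input));
  for (std::thread &t : workers)
    t.join();
}

int main() {
  std::vector<Row> input = {{1, 2.0}, {1, 3.0}, {2, 5.0}};
  std::vector<AggregateExpr> aggs(3);   // e.g. one aggregate per grouping set
  evalAllAggregates(aggs, input);
}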


> Implement the ROLLUP and CUBE SQL functions in Trafodion
> --------------------------------------------------------
>
>                 Key: TRAFODION-1677
>                 URL: https://issues.apache.org/jira/browse/TRAFODION-1677
>             Project: Apache Trafodion
>          Issue Type: Bug
>          Components: sql-general
>            Reporter: Qifan Chen
>
> I am currently working on the high-level design for the 2nd part of the OLAP 
> functions RollUp and Cube. The 1st part (the identification of missing OLAP 
> window functions in Trafodion and their implementation) is done. 
> For RollUp and Cube, I believe the key is to handle the much larger number of 
> aggregates properly. Note that this number is n+1 for RollUp and 2^n for Cube, 
> where n is the # of grouping columns. Parallel computation of these 
> aggregates is the key.
> One way to achieve this is to extend the Hash Group By operator so that it 
> can do multiple aggregates in a single operator. This requires the following:
> 1. randomly partition the data fed into the aggregate operator
> 2. force the multiple-level group by 
> 3. pack multiple pairs of grouping and aggregate expressions (both in the 
> compiler and in the executor, and in MapValueIds, GroupByPartialRoot, and 
> GroupByPartialLeaf). 
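
As a small illustration of the grouping-set counts mentioned in the quoted 
description (n+1 for RollUp, 2^n for Cube), the sets can be enumerated as 
prefixes of the column list for RollUp and as bitmask subsets for Cube. The 
names below are illustrative only and are not tied to Trafodion's GroupBy 
classes.

#include <cstdio>
#include <string>
#include <vector>

using Columns = std::vector<std::string>;

// ROLLUP(c1,...,cn): the prefixes of the column list, n+1 sets including ().
std::vector<Columns> rollupSets(const Columns &cols) {
  std::vector<Columns> sets;
  for (size_t len = cols.size() + 1; len-- > 0; )
    sets.emplace_back(cols.begin(), cols.begin() + len);
  return sets;
}

// CUBE(c1,...,cn): every subset of the column list, 2^n sets in total.
std::vector<Columns> cubeSets(const Columns &cols) {
  std::vector<Columns> sets;
  const size_t n = cols.size();
  for (size_t mask = 0; mask < (size_t{1} << n); ++mask) {
    Columns s;
    for (size_t i = 0; i < n; ++i)
      if (mask & (size_t{1} << i))
        s.push_back(cols[i]);
    sets.push_back(std::move(s));
  }
  return sets;
}

int main() {
  Columns cols = {"a", "b", "c"};
  std::printf("ROLLUP grouping sets: %zu\n", rollupSets(cols).size()); // 4 = n+1
  std::printf("CUBE   grouping sets: %zu\n", cubeSets(cols).size());   // 8 = 2^n
}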


