[ https://issues.apache.org/jira/browse/PHOENIX-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236458#comment-15236458 ]

James Taylor commented on PHOENIX-2832:
---------------------------------------

It'd look something like this (in BaseGroupedAggregatingResultIterator.next()):
{code}
    @Override
    public Tuple next() throws SQLException {
        Tuple result = resultIterator.next();
        if (result == null) {
            return null;
        }
        if (currentKey.get() == UNITIALIZED_KEY_BUFFER) {
            getGroupingKey(result, currentKey);
        }
        Aggregator[] rowAggregators = aggregators.getAggregators();
        aggregators.reset(rowAggregators);
        // Aggregate rows until the grouping key changes; peek() detects the
        // group boundary without consuming the first row of the next group.
        while (true) {
            try {
                aggregators.aggregate(rowAggregators, result);
                Tuple nextResult = resultIterator.peek();
                if (nextResult == null
                        || !currentKey.equals(getGroupingKey(nextResult, nextKey))) {
                    break;
                }
                result = resultIterator.next();
            } catch (StaleRegionBoundaryCacheException e) {
                // A split invalidated the cached region boundaries mid-group.
                // Nothing for this group has been returned to the client yet,
                // so the partial aggregate can simply be discarded.
                aggregators.reset(rowAggregators);
                // TODO: rerun scan starting from currentKey
            }
        }

        // TODO: if there weren't multiple rows being aggregated, we don't need
        // to create a new tuple but can just return the one from the loop.
        byte[] value = aggregators.toBytes(rowAggregators);
        Tuple tuple = wrapKeyValueAsResult(KeyValueUtil.newKeyValue(currentKey,
                SINGLE_COLUMN_FAMILY, SINGLE_COLUMN, AGG_TIMESTAMP,
                value, 0, value.length));
        currentKey.set(nextKey.get(), nextKey.getOffset(), nextKey.getLength());
        return tuple;
    }
{code}
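
Filling in the rerun-scan TODO, the catch block might eventually look something like this — a minimal sketch, assuming a hypothetical restartScanFrom(ImmutableBytesWritable key) helper that invalidates the region boundary cache and reopens the scan at the given key (no such helper exists today; the name and plumbing are illustrative only):
{code}
            } catch (StaleRegionBoundaryCacheException e) {
                // The cached region boundaries went stale mid-group (a split
                // happened). Since no row for this group has been returned to
                // the client yet, it's safe to drop the partial state and
                // recompute the whole group from its first row.
                aggregators.reset(rowAggregators);
                // Hypothetical helper, for illustration only: re-resolve the
                // region boundaries and reopen the scan at currentKey.
                resultIterator = restartScanFrom(currentKey);
                result = resultIterator.next();
            }
{code}
The invariant that makes this safe: a grouped row is emitted only after its entire group has been consumed, so a restart before emission can never produce a duplicate at the client.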

> Ensure split handled correctly during aggregation over row key
> --------------------------------------------------------------
>
>                 Key: PHOENIX-2832
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2832
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>
> Related to PHOENIX-2628, but specifically for an "ordered" aggregation (i.e. 
> an aggregation over leading row key columns). We should be able to do this by 
> catching the StaleRegionBoundaryCacheException in 
> GroupedAggregatingResultIterator (or potentially in a new class once 
> PHOENIX-2818 is implemented) and then restarting the scan from the current 
> key returned. This will work because no row would yet have been returned to 
> the client; we'd just re-calculate the state for the current row being 
> aggregated and move on from there.
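
It's worth spelling out why the "ordered" case allows this: because the input is sorted on the grouping key, each group arrives as a contiguous run and can be aggregated in a single streaming pass, with nothing emitted until the last row of the group has been seen. A minimal, self-contained illustration of that pattern in plain Java (not Phoenix code):
{code}
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OrderedGroupBy {
    // Sum values per key over rows already sorted by key. A group is emitted
    // only at its boundary, i.e. when the key changes or input is exhausted.
    static Map<String, Long> sumByKey(Iterator<String[]> sortedRows) {
        Map<String, Long> out = new LinkedHashMap<>();
        String currentKey = null;
        long sum = 0;
        while (sortedRows.hasNext()) {
            String[] row = sortedRows.next();
            if (currentKey != null && !currentKey.equals(row[0])) {
                out.put(currentKey, sum); // group boundary: emit finished group
                sum = 0;
            }
            currentKey = row[0];
            sum += Long.parseLong(row[1]);
        }
        if (currentKey != null) {
            out.put(currentKey, sum); // emit the final group
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[] { "a", "1" }, new String[] { "a", "2" },
                new String[] { "b", "10" });
        System.out.println(sumByKey(rows.iterator())); // prints {a=3, b=10}
    }
}
{code}
If the stream had to be restarted mid-group here, re-reading from the first row of the current key and zeroing the running sum would reproduce exactly the same output — the same property the sketch above relies on.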


