[ https://issues.apache.org/jira/browse/KYLIN-1684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284141#comment-15284141 ]

wangxianbin commented on KYLIN-1684:
------------------------------------

hi hongbin! In your commit for "KYLIN-1465", I noticed that 
NotEnoughGTInfoException is the only exception you catch in CubeStorageQuery 
search, regardless of the segment record count, and that CubeGridTable 
newGTInfo is the only place that throws NotEnoughGTInfoException, namely when 
there is a dictionary info mismatch between CubeManager and Cuboid (in which 
case dict == null). However, it looks like you guys have since refactored this: 
when the dict is not found (dict == null) in CubeDimEncMap, FixedLenDimEnc is 
used instead, so I just removed the check. If there is some other runtime 
exception I should worry about, I may have missed it; anyway, testing is always 
the better choice.

> query on table "kylin_sales" return empty resultset after cube 
> "kylin_sales_cube" which generated by sample.sh is ready
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: KYLIN-1684
>                 URL: https://issues.apache.org/jira/browse/KYLIN-1684
>             Project: Kylin
>          Issue Type: Bug
>          Components: Query Engine
>    Affects Versions: v1.5.1
>         Environment: cluster:
> hadoop-2.6.0
> hbase-0.98.8
> hive-0.14.0
>            Reporter: wangxianbin
>            Assignee: wangxianbin
>         Attachments: 
> 1.5.1-release-hotfix-KYLIN-1684-query-on-table-kylin_sales-return-empty-r.patch,
>  log for Build Base Cuboid Data.png, log when run query.png
>
>
> there is a check for "InputRecords" in the CubeStorageQuery search method 
> which seems unnecessary, as follows:
>         List<CubeSegmentScanner> scanners = Lists.newArrayList();
>         for (CubeSegment cubeSeg : cubeInstance.getSegments(SegmentStatusEnum.READY)) {
>             CubeSegmentScanner scanner;
>             if (cubeSeg.getInputRecords() == 0) {
>                 logger.info("Skip cube segment {} because its input record is 0", cubeSeg);
>                 continue;
>             }
>             scanner = new CubeSegmentScanner(cubeSeg, cuboid, dimensionsD, groupsD, metrics, filterD, !isExactAggregation);
>             scanners.add(scanner);
>         }
>         if (scanners.isEmpty())
>             return ITupleIterator.EMPTY_TUPLE_ITERATOR;
>         return new SequentialCubeTupleIterator(scanners, cuboid, dimensionsD, metrics, returnTupleInfo, context);
> this check causes the query to return an empty result set even when there is 
> data in the storage engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
