morningman opened a new issue #7971:
URL: https://github.com/apache/incubator-doris/issues/7971


   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/incubator-doris/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### Version
   
   trunk
   
   ### What's Wrong?
   
   ```
   CREATE TABLE `test1` (
     `k1` tinyint(4) NULL COMMENT "",
     `k2` smallint(6) NULL COMMENT ""
   ) ENGINE=OLAP
   DUPLICATE KEY(`k1`, `k2`)
   COMMENT "OLAP"
   DISTRIBUTED BY HASH(`k1`) BUCKETS 1
   PROPERTIES (
   "replication_allocation" = "tag.location.default: 1",
   "in_memory" = "false",
   "storage_format" = "V2"
   );
   ```
   
   ```
   SELECT k1 ,GROUPING(k2) FROM db1.test1 GROUP BY CUBE (k1) ORDER BY k1;
   ```
   
   Error:
   ```
   java.lang.IndexOutOfBoundsException: bitIndex < 0: -1
           at java.util.BitSet.get(BitSet.java:623) ~[?:1.8.0_131]
           at org.apache.doris.analysis.GroupingInfo.genGroupingList(GroupingInfo.java:157) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.RepeatNode.init(RepeatNode.java:178) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.SingleNodePlanner.createRepeatNodePlan(SingleNodePlanner.java:1069) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.SingleNodePlanner.createSelectPlan(SingleNodePlanner.java:1050) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.SingleNodePlanner.createQueryPlan(SingleNodePlanner.java:236) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.SingleNodePlanner.createSingleNodePlan(SingleNodePlanner.java:165) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.Planner.createPlanFragments(Planner.java:170) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.planner.Planner.plan(Planner.java:88) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.analyzeAndGenerateQueryPlan(StmtExecutor.java:695) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.analyze(StmtExecutor.java:571) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:327) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:300) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:212) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:349) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:538) ~[palo-fe.jar:0.15-SNAPSHOT]
           at org.apache.doris.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:50) ~[palo-fe.jar:0.15-SNAPSHOT]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
           at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
   ```
   
   ### What You Expected?
   
   Return a proper error message, such as:
   ```
   Column `k2` in GROUPING() function does not exist in GROUP BY clause.
   ```
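
   The query references `k2` in `GROUPING()` while `GROUP BY CUBE (k1)` only contains `k1`, so the lookup of `k2`'s position in the grouping columns returns -1, which is then passed to `BitSet.get()`. A minimal sketch (hypothetical class and method names, not the actual Doris code) of the kind of analysis-time check that could report the error cleanly instead:
   ```java
   import java.util.List;

   public class GroupingCheckSketch {
       // Validate a GROUPING() argument against the GROUP BY column list
       // before its bit position is used, so a missing column fails with
       // a clear message rather than BitSet.get(-1).
       public static void checkGroupingColumn(String col, List<String> groupByCols) {
           // indexOf returns -1 when the column is absent; feeding that -1
           // into BitSet.get is what throws IndexOutOfBoundsException above.
           if (groupByCols.indexOf(col) < 0) {
               throw new IllegalArgumentException(
                   "Column `" + col + "` in GROUPING() function does not exist in GROUP BY clause.");
           }
       }

       public static void main(String[] args) {
           // GROUP BY CUBE (k1) expands to grouping columns [k1],
           // so GROUPING(k2) should be rejected during analysis.
           try {
               checkGroupingColumn("k2", List.of("k1"));
           } catch (IllegalArgumentException e) {
               System.out.println(e.getMessage());
           }
       }
   }
   ```
   
   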
   
   ### How to Reproduce?
   
   _No response_
   
   ### Anything Else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


