cambyzju opened a new issue, #16740:
URL: https://github.com/apache/doris/issues/16740

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Version
   
   master && 1.1-lts
   
   ### What's Wrong?
   
   `ERROR 1105 (HY000): errCode = 2, detailMessage = failed to get tablet: xxxx 
with schema hash: xxxxx, reason: tablet does not exist. a.b.c.d`
   
   ### What You Expected?
   
   no error
   
   ### How to Reproduce?
   
   1. create table:
   ```
   CREATE TABLE `user_none` (
     `app_id` bigint(20) NOT NULL,
     `stat_time` int(11) NOT NULL,
     `last_date` bigint(20) NOT NULL,
     `first_date` bigint(20) NOT NULL,
     `user_count` largeint(40) SUM NOT NULL,
     `session_count` largeint(40) SUM NOT NULL,
     `session_time` largeint(40) SUM NOT NULL,
     `t_session_count` largeint(40) SUM NOT NULL,
     `user_count_hour` largeint(40) SUM NOT NULL DEFAULT "0"
   ) ENGINE=OLAP
   AGGREGATE KEY(`app_id`, `stat_time`, `last_date`, `first_date`)
   COMMENT 'OLAP'
   DISTRIBUTED BY HASH(`app_id`) BUCKETS 50
   PROPERTIES (
   "replication_allocation" = "tag.location.default: 1",
   "in_memory" = "false",
   "storage_format" = "V2",
   "light_schema_change" = "true",
   "disable_auto_compaction" = "false"
   );
   ```
   
   2. Add rollup
   `alter table user_none add rollup 
user_none_rollup3(app_id,last_date,stat_time,user_count,session_count,session_time,t_session_count,user_count_hour);`
   
   3. add test data
   ```
   insert into user_none values(1,2,3,4,1,1,1,1,1);
   insert into user_none values(2,3,4,5,1,1,1,1,1);
   ```
   
   4. explain SQL:
   ```
   explain WITH one_day_table AS (
       SELECT app_id,
              SUM(IF(last_date < 1676217600, user_count, 0)) AS user_count,
              SUM(IF(last_date = 0, user_count, 0)) AS new_user_count,
              SUM(session_count) AS ssession_count
       FROM user_none
       WHERE stat_time >= 1676217600
         AND stat_time < 1676304000
         AND last_date < 1676217600
       GROUP BY app_id
       HAVING ssession_count > 0
   ),
   all_table AS (
       SELECT app_id,
              SUM(user_count) AS accumulative_user_count
       FROM user_none
       WHERE stat_time < 1676304000
         AND last_date = 0
       GROUP BY app_id
       HAVING accumulative_user_count > 0
   )
   SELECT one_day_table.app_id AS app_id,
          user_count,
          new_user_count,
          ssession_count,
          accumulative_user_count
   FROM one_day_table, all_table
   WHERE one_day_table.app_id = all_table.app_id;
   ```
   
   The key lines:
   ```
   |   4:VHASH JOIN
   |   |  join op: INNER JOIN(COLOCATE[])[]
   |   |
   |   |    2:VOlapScanNode
   |   |       TABLE: default_cluster:example_db.user_none(user_none_rollup3), PREAGGREGATION: ON
   |
   |   0:VOlapScanNode
   |      TABLE: default_cluster:example_db.user_none(user_none), PREAGGREGATION: OFF. Reason: aggExpr.getChild(0)[FunctionCallExpr{name=if, isStar=false, isDistinct=false, ((SlotRef{slotDesc=SlotDescriptor{id=1, parent=0, col=last_date, type=BIGINT, materialized=true, byteSize=0, byteOffset=-1, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0}, col=last_date, label=`last_date`, tblName=null} ) SlotRef{slotDesc=SlotDescriptor{id=2, parent=0, col=user_count, type=LARGEINT, materialized=true, byteSize=0, byteOffset=-1, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0}, col=user_count, label=`user_count`, tblName=null} )}] is not SlotRef or CastExpr|CaseExpr
   ```
   
   5. The problem:
   The plan scans both the base table and the rollup, but the tablet distributions of the base table and the rollup are not in the same colocate group.
   So if the planner still chooses a COLOCATE join between them, the query may fail with:
   `ERROR 1105 (HY000): errCode = 2, detailMessage = failed to get tablet: xxxx with schema hash: xxxxx, reason: tablet does not exist. a.b.c.d`
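   
   A possible workaround until the planner is fixed (this is a sketch, not verified on the versions above; the session variable name `disable_colocate_plan` is an assumption and may differ between Doris releases) is to turn off colocate planning for the session so the join falls back to a shuffle/broadcast join:
   ```
   -- Assumption: `disable_colocate_plan` is the session variable controlling
   -- colocate join planning in this Doris version; name may vary by release.
   SET disable_colocate_plan = true;
   -- Re-run the query; the plan should no longer show INNER JOIN(COLOCATE[]).
   ```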
   
   ### Anything Else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to