Mryange opened a new pull request, #24881:
URL: https://github.com/apache/doris/pull/24881

   ## Proposed changes
   
Counters on the backend can now be registered with a level via `ADD_COUNTER_WITH_LEVEL`/`ADD_TIMER_WITH_LEVEL`. The profile can then merge only the counters at level 1 into a simplified ("simple") profile.
   
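The idea can be sketched roughly as follows. Note that `RuntimeProfile`, `add_counter_with_level`, and `simple_profile` here are illustrative assumptions for this sketch, not the actual Doris API or macro signatures:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Illustrative sketch only (not the actual Doris implementation):
// a counter carries a level, and the merged "simple" profile keeps
// only the counters registered at level 1.
struct Counter {
    int64_t value = 0;
    int level = 2;  // default: detailed counter, hidden from the simple profile
};

struct RuntimeProfile {
    std::map<std::string, Counter> counters;

    // Mirrors the idea behind ADD_COUNTER_WITH_LEVEL: register a counter
    // together with the level at which it should surface.
    Counter* add_counter_with_level(const std::string& name, int level) {
        Counter& c = counters[name];
        c.level = level;
        return &c;
    }

    // The simple profile exposes only counters whose level is 1.
    std::map<std::string, int64_t> simple_profile() const {
        std::map<std::string, int64_t> out;
        for (const auto& [name, c] : counters) {
            if (c.level == 1) out[name] = c.value;
        }
        return out;
    }
};
```

Under this sketch, a level-2 counter such as a detailed build timer would still be collected but would not appear in the merged output below.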
For example:

```sql
select count(*) from customer join item on c_customer_sk = i_item_sk
```

The resulting profile:
```
Simple profile

  PLAN FRAGMENT 0
    OUTPUT EXPRS:
        count(*)
    PARTITION: UNPARTITIONED

    VRESULT SINK
          MYSQL_PROTOCAL


    7:VAGGREGATE (merge finalize)
    |  output: count(partial_count(*))[#44]
    |  group by:
    |  cardinality=1
    |  TotalTime: avg 725.608us, max 725.608us, min 725.608us
    |  RowsReturned: 1
    |
    6:VEXCHANGE
          offset: 0
          TotalTime: avg 52.411us, max 52.411us, min 52.411us
          RowsReturned: 8

PLAN FRAGMENT 1

    PARTITION: HASH_PARTITIONED: c_customer_sk

    STREAM DATA SINK
        EXCHANGE ID: 06
        UNPARTITIONED

        TotalTime: avg 106.263us, max 118.38us, min 81.403us
        BlocksSent: 8

    5:VAGGREGATE (update serialize)
    |  output: partial_count(*)[#43]
    |  group by:
    |  cardinality=1
    |  TotalTime: avg 679.296us, max 739.395us, min 554.904us
    |  BuildTime: avg 33.198us, max 48.387us, min 28.880us
    |  ExecTime: avg 27.633us, max 40.278us, min 24.537us
    |  RowsReturned: 8
    |
    4:VHASH JOIN
    |  join op: INNER JOIN(PARTITIONED)[]
    |  equal join conjunct: c_customer_sk = i_item_sk
    |  runtime filters: RF000[bloom] <- i_item_sk(18000/16384/1048576)
    |  cardinality=17,740
    |  vec output tuple id: 3
    |  vIntermediate tuple ids: 2
    |  hash output slot ids: 22
    |  RowsReturned: 18.0K (18000)
    |  ProbeRows: 18.0K (18000)
    |  ProbeTime: avg 862.308us, max 1.576ms, min 666.28us
    |  BuildRows: 18.0K (18000)
    |  BuildTime: avg 3.8ms, max 3.860ms, min 2.317ms
    |
    |----1:VEXCHANGE
    |       offset: 0
    |       TotalTime: avg 48.822us, max 67.459us, min 30.380us
    |       RowsReturned: 18.0K (18000)
    |
    3:VEXCHANGE
          offset: 0
          TotalTime: avg 33.162us, max 39.480us, min 28.854us
          RowsReturned: 18.0K (18000)

PLAN FRAGMENT 2

    PARTITION: HASH_PARTITIONED: c_customer_id

    STREAM DATA SINK
        EXCHANGE ID: 03
        HASH_PARTITIONED: c_customer_sk

        TotalTime: avg 753.954us, max 1.210ms, min 499.470us
        BlocksSent: 64

    2:VOlapScanNode
          TABLE: default_cluster:tpcds.customer(customer), PREAGGREGATION: ON
          runtime filters: RF000[bloom] -> c_customer_sk
          partitions=1/1, tablets=12/12, tabletList=1550745,1550747,1550749 ...
          cardinality=100000, avgRowSize=0.0, numNodes=1
          pushAggOp=NONE
          TotalTime: avg 18.417us, max 41.319us, min 10.189us
          RowsReturned: 18.0K (18000)
```
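Each merged line above reports an avg/max/min triple across the fragment's parallel instances. That aggregation can be sketched as follows; `MergedCounter` and `merge_counter` are illustrative names for this sketch, not the actual merge code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative only: collapse one counter's per-instance values into
// the "avg ..., max ..., min ..." triple shown in the merged profile.
struct MergedCounter {
    int64_t avg = 0;
    int64_t max = 0;
    int64_t min = 0;
};

MergedCounter merge_counter(const std::vector<int64_t>& per_instance) {
    MergedCounter m;
    if (per_instance.empty()) return m;
    int64_t sum = 0;
    m.max = per_instance.front();
    m.min = per_instance.front();
    for (int64_t v : per_instance) {
        sum += v;
        m.max = std::max(m.max, v);
        m.min = std::min(m.min, v);
    }
    m.avg = sum / static_cast<int64_t>(per_instance.size());
    return m;
}
```

For instance, three fragment instances reporting 100us, 120us, and 80us for the same timer would merge to `avg 100us, max 120us, min 80us`.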
   
   
   ## Further comments
   
   If this is a relatively large or complex change, kick off the discussion at 
[[email protected]](mailto:[email protected]) by explaining why you 
chose the solution you did and what alternatives you considered, etc...
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

