alamb opened a new pull request #960:
URL: https://github.com/apache/arrow-datafusion/pull/960


   # Which issue does this PR close?
   
   Next part of https://github.com/apache/arrow-datafusion/issues/866
   
   
    # Rationale for this change
   We want a basic understanding of where a plan's time is spent and in which operators. See https://github.com/apache/arrow-datafusion/issues/866 for more details.
   
   # What changes are included in this PR?
   1. Instrument `FilterExec` using the API from https://github.com/apache/arrow-datafusion/pull/909
   2. Tests for the same
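
   The instrumentation pattern is roughly: time each batch's compute work and accumulate row counts and elapsed time into shared counters, which `EXPLAIN ANALYZE` later reads back out per operator. A minimal self-contained sketch of that pattern (the `BaselineMetrics` name and fields here are illustrative stand-ins, not the actual `datafusion::physical_plan::metrics` API):

   ```rust
   use std::sync::atomic::{AtomicUsize, Ordering};
   use std::time::Instant;

   // Illustrative stand-in for DataFusion's per-operator metrics set.
   #[derive(Default)]
   struct BaselineMetrics {
       output_rows: AtomicUsize,
       elapsed_compute_nanos: AtomicUsize,
   }

   impl BaselineMetrics {
       // Accumulate the results of processing one batch.
       fn record_batch(&self, rows: usize, elapsed_nanos: usize) {
           self.output_rows.fetch_add(rows, Ordering::Relaxed);
           self.elapsed_compute_nanos
               .fetch_add(elapsed_nanos, Ordering::Relaxed);
       }
   }

   // A filter-like operator body: time the predicate evaluation for one
   // batch and record how many rows passed, as EXPLAIN ANALYZE reports.
   fn filter_batch(metrics: &BaselineMetrics, batch: &[i64]) -> Vec<i64> {
       let start = Instant::now();
       let out: Vec<i64> = batch.iter().copied().filter(|v| *v % 2 == 0).collect();
       metrics.record_batch(out.len(), start.elapsed().as_nanos() as usize);
       out
   }

   fn main() {
       let metrics = BaselineMetrics::default();
       let out = filter_batch(&metrics, &[1, 2, 3, 4, 5, 6]);
       assert_eq!(out, vec![2, 4, 6]);
       println!("output_rows={}", metrics.output_rows.load(Ordering::Relaxed));
   }
   ```

   The atomic counters let the operator's stream record metrics while the plan (held elsewhere) can read them at any time, which is why the plan printed by `EXPLAIN ANALYZE` can show per-operator `output_rows` and `elapsed_compute` after execution.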
   
   
   # Are there any user-facing changes?
   More fields in `EXPLAIN ANALYZE` are now filled out
   
   Example of how the `EXPLAIN ANALYZE` output looks:
   
   ```sql
   EXPLAIN ANALYZE select count(*) from (SELECT count(*), c1 FROM aggregate_test_100 WHERE c13 != 'C2GT5KVyOPZpgKVl110TyZO0NcJ434' GROUP BY c1 ORDER BY c1)

   +-------------------+------------------------------------------------------------------------------+
   | plan_type         | plan                                                                         |
   +-------------------+------------------------------------------------------------------------------+
   | Plan with Metrics | ProjectionExec: expr=[COUNT(UInt8(1))@0 as COUNT(UInt8(1))], metrics=[] |
   |                   |   HashAggregateExec: mode=Final, gby=[], aggr=[COUNT(UInt8(1))], metrics=[output_rows=1, elapsed_compute=69.362µs] |
   |                   |     CoalescePartitionsExec, metrics=[output_rows=3, elapsed_compute=NOT RECORDED] |
   |                   |       HashAggregateExec: mode=Partial, gby=[], aggr=[COUNT(UInt8(1))], metrics=[output_rows=3, elapsed_compute=119.041µs] |
   |                   |         RepartitionExec: partitioning=RoundRobinBatch(3), metrics=[send_time{inputPartition=0}=4.775µs, fetch_time{inputPartition=0}=12.77346ms, repart_time{inputPartition=0}=NOT RECORDED] |
   |                   |           SortExec: [c1@0 ASC], metrics=[output_rows=5, elapsed_compute=232.203µs] |
   |                   |             CoalescePartitionsExec, metrics=[output_rows=5, elapsed_compute=NOT RECORDED] |
   |                   |               HashAggregateExec: mode=FinalPartitioned, gby=[c1@0 as c1], aggr=[COUNT(UInt8(1))], metrics=[output_rows=5, elapsed_compute=373.876µs] |
   |                   |                 CoalesceBatchesExec: target_batch_size=4096, metrics=[] |
   |                   |                   RepartitionExec: partitioning=Hash([Column { name: "c1", index: 0 }], 3), metrics=[fetch_time{inputPartition=0}=34.072102ms, repart_time{inputPartition=0}=252.872µs, send_time{inputPartition=0}=NOT RECORDED] |
   |                   |                     HashAggregateExec: mode=Partial, gby=[c1@0 as c1], aggr=[COUNT(UInt8(1))], metrics=[output_rows=5, elapsed_compute=581.405µs] |
   |                   |                       CoalesceBatchesExec: target_batch_size=4096, metrics=[] |
   |                   |                         FilterExec: c13@1 != C2GT5KVyOPZpgKVl110TyZO0NcJ434, metrics=[output_rows=99, elapsed_compute=309.095µs] |
   |                   |                           RepartitionExec: partitioning=RoundRobinBatch(3), metrics=[fetch_time{inputPartition=0}=8.239528ms, send_time{inputPartition=0}=9.864µs, repart_time{inputPartition=0}=NOT RECORDED] |
   |                   |                             CsvExec: source=Path(ARROW_TEST_DATA/csv/aggregate_test_100.csv: [ARROW_TEST_DATA/csv/aggregate_test_100.csv]), has_header=true, metrics=[] |
   +-------------------+------------------------------------------------------------------------------+
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
