Jefffrey commented on PR #3991:
URL: 
https://github.com/apache/arrow-datafusion/pull/3991#issuecomment-1294084687

   Unsure if this is what is meant by #41, but I gave it a shot.
   
   There's probably more optimization that could be done, such as handling a ProjectionExec sandwiched between two RepartitionExec nodes, or cases like this:
   
   ```
   HashJoinExec: mode=Partitioned, join_type=Inner, on=[(Column { name: "int_col", index: 0 }, Column { name: "int_col3", index: 0 })]
     *RepartitionExec: partitioning=Hash([Column { name: "int_col", index: 0 }], 12)
       HashJoinExec: mode=Partitioned, join_type=Inner, on=[(Column { name: "int_col", index: 0 }, Column { name: "int_col2", index: 0 })]
         *RepartitionExec: partitioning=Hash([Column { name: "int_col", index: 0 }], 12)
           ProjectionExec: expr=[int_col@2 as int_col, double_col@3 as double_col, CAST(date_string_col@4 AS Utf8) as alltypes_plain.date_string_col]
             FilterExec: id@0 > 1 AND CAST(tinyint_col@1 AS Float64) < double_col@3
               ParquetExec: limit=None, partitions=[home/jeffrey/Code/arrow-datafusion/parquet-testing/data/alltypes_plain.parquet], predicate=id_max@0 > 1 AND true, projection=[id, tinyint_col, int_col, double_col, date_string_col]
         RepartitionExec: partitioning=Hash([Column { name: "int_col2", index: 0 }], 12)
           ProjectionExec: expr=[int_col@0 as int_col2]
             ProjectionExec: expr=[int_col@2 as int_col, double_col@3 as double_col, CAST(date_string_col@4 AS Utf8) as alltypes_plain.date_string_col]
               FilterExec: id@0 > 1 AND CAST(tinyint_col@1 AS Float64) < double_col@3
                 ParquetExec: limit=None, partitions=[home/jeffrey/Code/arrow-datafusion/parquet-testing/data/alltypes_plain.parquet], predicate=id_max@0 > 1 AND true, projection=[id, tinyint_col, int_col, double_col, date_string_col]
     RepartitionExec: partitioning=Hash([Column { name: "int_col3", index: 0 }], 12)
       ProjectionExec: expr=[int_col@0 as int_col3]
         ProjectionExec: expr=[int_col@2 as int_col, double_col@3 as double_col, CAST(date_string_col@4 AS Utf8) as alltypes_plain.date_string_col]
           FilterExec: id@0 > 1 AND CAST(tinyint_col@1 AS Float64) < double_col@3
             ParquetExec: limit=None, partitions=[home/jeffrey/Code/arrow-datafusion/parquet-testing/data/alltypes_plain.parquet], predicate=id_max@0 > 1 AND true, projection=[id, tinyint_col, int_col, double_col, date_string_col]
   ```
   
   Here we could potentially collapse the two RepartitionExec nodes marked with `*`, since both hash on `int_col` with the same partition count, but I'm unsure if that is correct.
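   The idea above can be sketched on a toy plan model: a hash RepartitionExec is redundant when its input's output is already known to be hash-partitioned on the same columns with the same partition count. This is a minimal standalone sketch, not DataFusion's actual `ExecutionPlan`/`Partitioning` API; the `Plan` enum and `collapse` function are hypothetical names for illustration only.
   
   ```rust
   // Toy model of a physical plan, NOT DataFusion's real types.
   #[derive(Clone, Debug, PartialEq)]
   enum Partitioning {
       Unknown,
       Hash(Vec<String>, usize), // (hash columns, partition count)
   }
   
   #[derive(Debug)]
   enum Plan {
       Repartition { scheme: Partitioning, input: Box<Plan> },
       Other { name: String, output: Partitioning, inputs: Vec<Plan> },
   }
   
   impl Plan {
       // The partitioning this node's output is known to satisfy.
       fn output_partitioning(&self) -> Partitioning {
           match self {
               Plan::Repartition { scheme, .. } => scheme.clone(),
               Plan::Other { output, .. } => output.clone(),
           }
       }
   }
   
   // Recursively drop Repartition nodes whose input already produces
   // exactly the partitioning they would impose.
   fn collapse(plan: Plan) -> Plan {
       match plan {
           Plan::Repartition { scheme, input } => {
               let input = collapse(*input);
               if input.output_partitioning() == scheme {
                   input // redundant: already partitioned this way
               } else {
                   Plan::Repartition { scheme, input: Box::new(input) }
               }
           }
           Plan::Other { name, output, inputs } => Plan::Other {
               name,
               output,
               inputs: inputs.into_iter().map(collapse).collect(),
           },
       }
   }
   
   fn main() {
       // Mirrors the marked pair: an outer repartition on int_col above a
       // join whose output we assume (for this sketch) stays hash-partitioned
       // on int_col with the same partition count.
       let plan = Plan::Repartition {
           scheme: Partitioning::Hash(vec!["int_col".into()], 12),
           input: Box::new(Plan::Other {
               name: "HashJoinExec".into(),
               output: Partitioning::Hash(vec!["int_col".into()], 12),
               inputs: vec![],
           }),
       };
       // After collapsing, the outer Repartition node is removed.
       println!("{:?}", collapse(plan));
   }
   ```
   
   The real optimizer would also have to prove that the join's output ordering of columns and partition count actually match what the parent repartition would produce, which is the part I'm unsure about.
   
   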
   
   Appreciate any feedback on this.
