erenavsarogullari opened a new pull request, #115:
URL: https://github.com/apache/arrow-datafusion-comet/pull/115

   ## Which issue does this PR close?
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   Closes #.
   
   ## Rationale for this change
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.  
   -->
   This PR aims to add native `Coalesce` support and avoid the current fallback to Spark. Spark's `CoalesceExec` is `numOfPartitions` based; however, DataFusion (v35.0.0) provides the following two physical operators, neither of which is based on the input `numOfPartitions`:
   ```
   CoalescePartitionsExec: Merge execution plan executes partitions in parallel and combines them into a single partition. No guarantees are made about the order of the resulting partition.
   CoalesceBatchesExec: CoalesceBatchesExec combines small batches into larger batches for more efficient use of vectorized processing by upstream operators. This is target_batch_size based (i.e. a minimum number of rows for coalesced batches).
   ```
   Currently, `CoalescePartitionsExec` is used as a workaround to run end-to-end execution. One potential option is to add a `numOfPartitions`-based Coalesce physical operator to DataFusion (if feasible). I would like to get more feedback. Thanks in advance.
   
   Please find example Spark and Comet query plans for the same query before and after this draft PR.
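   For reference, the plans correspond to a query along these lines (a hypothetical reproduction sketch, not code from this PR; the table `t1` with columns `l: INT` and `b: BIGINT`, the filter predicate, and `coalesce(3)` are taken from the plans, everything else is illustrative):
   ```scala
   // Hypothetical reproduction sketch for the plans below; not code from this PR.
   // Assumes a Parquet table spark_catalog.default.t1 with columns l: INT and b: BIGINT,
   // matching ReadSchema: struct<l:int,b:bigint> in the plans.
   val df = spark.table("t1")
     .filter("l IS NOT NULL AND (l + 1) >= 2") // CometFilter predicate in the plans
     .coalesce(3)                              // numOfPartitions-based coalesce ("Coalesce 3, 3")
   
   df.explain()
   // Rendering rows (e.g. df.show()) casts columns to string, which is likely where the
   // Cast-to-Utf8 ProjectionExec in the native plans comes from.
   df.show()
   ```
   Note the semantic gap: `coalesce(3)` requests three output partitions, whereas `CoalescePartitionsExec` (the current workaround) always merges its input into a single partition.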
   **1- Current Spark and Comet Query Plans:**
   Spark Executed Plan:
   ```
   *(1) ColumnarToRow
   +- CometCoalesce Coalesce 3, 3
      +- CometFilter [l#13, b#14L], (isnotnull(l#13) AND ((l#13 + 1) >= 2))
         +- CometScan parquet spark_catalog.default.t1[l#13,b#14L] Batched: 
true, DataFilters: [isnotnull(l#13), ((l#13 + 1) >= 2)], Format: CometParquet, 
Location: InMemoryFileIndex(1 
paths)[file:/Users/eren.avsarogullari/Development/OSS/arrow-datafusion-comet/...,
 PartitionFilters: [], PushedFilters: [IsNotNull(l)], ReadSchema: 
struct<l:int,b:bigint>
   ```
   Comet native query plan:
   ```
   24/02/25 20:29:53 INFO src/execution/jni_api.rs: Comet native query plan:
    FilterExec: col_0@0 IS NOT NULL AND col_0@0 + 1 >= 2
     ScanExec: schema=[col_0: Int32, col_1: Int64]
   
   24/02/25 20:29:53 INFO src/execution/jni_api.rs: Comet native query plan:
    ProjectionExec: expr=[Cast [data_type: Utf8, timezone: America/Los_Angeles, 
child: col_0@0] as col_0, Cast [data_type: Utf8, timezone: America/Los_Angeles, 
child: col_1@1] as col_1]
     ScanExec: schema=[col_0: Int32, col_1: Int64]
   ```
   **2- New Spark and Comet Query Plans:**
   Spark Executed Plan:
   ```
   *(1) ColumnarToRow
   +- CometCoalesce children {
     children {
       scan {
         fields {
           type_id: INT32
         }
         fields {
           type_id: INT64
         }
       }
     }
     filter {
       predicate {
     ...
     }
    }
   }
   , Coalesce 3, 3, [B@4e8e0052
      +- CometFilter [l#13, b#14L], (isnotnull(l#13) AND ((l#13 + 1) >= 2))
         +- CometScan parquet spark_catalog.default.t1[l#13,b#14L] Batched: 
true, DataFilters: [isnotnull(l#13), ((l#13 + 1) >= 2)], Format: CometParquet, 
Location: InMemoryFileIndex(1 
paths)[file:/Users/eren.avsarogullari/Development/OSS/arrow-datafusion-comet/...,
 PartitionFilters: [], PushedFilters: [IsNotNull(l)], ReadSchema: 
struct<l:int,b:bigint>
   ```
   Comet native query plan:
   ```
   24/02/25 21:24:46 INFO src/execution/jni_api.rs: Comet native query plan:
    ProjectionExec: expr=[Cast [data_type: Utf8, timezone: America/Los_Angeles, 
child: col_0@0] as col_0, Cast [data_type: Utf8, timezone: America/Los_Angeles, 
child: col_1@1] as col_1]
     CoalescePartitionsExec
       FilterExec: col_0@0 IS NOT NULL AND col_0@0 + 1 >= 2
         ScanExec: schema=[col_0: Int32, col_1: Int64]
   ```
   
   ## What changes are included in this PR?
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   ## How are these changes tested?
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are 
they covered by existing tests)?
   -->
   New unit test (UT) coverage will be added.
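   A test along the following lines is planned (a sketch only, under assumed scaffolding; the plain ScalaTest `test` style and the `withTempPath` helper are illustrative and not necessarily the Comet test utilities that will be used):
   ```scala
   // Illustrative test sketch, not the final UT for this PR.
   test("coalesce should run natively and return correct results") {
     withTempPath { path =>
       // Write a small Parquet table with the schema used in the example above: l: INT, b: BIGINT.
       spark.range(0, 100)
         .selectExpr("CAST(id AS INT) AS l", "id AS b")
         .repartition(10)
         .write.parquet(path.toString)
   
       val df = spark.read.parquet(path.toString)
         .filter("l IS NOT NULL AND (l + 1) >= 2")
         .coalesce(3)
   
       // coalesce(3) allows at most three output partitions.
       assert(df.rdd.getNumPartitions <= 3)
       // Rows with l in 1..99 pass the (l + 1) >= 2 filter; results should match vanilla Spark.
       assert(df.select("l").collect().map(_.getInt(0)).sorted.toSeq == (1 until 100))
       // The actual suite would additionally verify that the executed plan contains the
       // Comet coalesce operator rather than falling back to Spark's CoalesceExec.
     }
   }
   ```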
   

