akupchinskiy opened a new pull request, #2167:
URL: https://github.com/apache/datafusion-comet/pull/2167

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   Closes #.
   
   ## Rationale for this change
   The current CometBroadcast implementation can cause some complex plans to fail 
during execution. If an execution stage involves several broadcasts and one 
of them corresponds to an exchange that was already evaluated (the ReusedExchange 
scenario), execution may fail due to an attempt to zip unaligned partitions 
[here](https://github.com/apache/datafusion-comet/blob/2b0e6db77b2c27bfa91eb34d0a2469d1014c20e6/spark/src/main/scala/org/apache/spark/sql/comet/operators.scala#L306)
 or 
[here](https://github.com/apache/datafusion-comet/blob/2b0e6db77b2c27bfa91eb34d0a2469d1014c20e6/spark/src/main/scala/org/apache/spark/sql/comet/operators.scala#L292).
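   To illustrate the failure mode, here is a minimal, self-contained Scala sketch 
(`FakeRdd` and `zippedPartitionCount` are illustrative stand-ins, not Comet's actual 
classes) of the invariant that zipping enforces: every input must report the same 
number of partitions, which a reused broadcast no longer guarantees.

```scala
// Illustrative stand-in for an RDD: only the partition count matters here.
case class FakeRdd(name: String, numPartitions: Int)

// Mirrors the invariant behind zipping per-partition inputs: all inputs must
// have the same number of partitions, otherwise the zip fails at runtime.
def zippedPartitionCount(inputs: Seq[FakeRdd]): Int = {
  require(inputs.map(_.numPartitions).distinct.size == 1,
    s"Can't zip RDDs with unequal numbers of partitions: ${inputs.map(_.numPartitions)}")
  inputs.head.numPartitions
}
```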
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.  
   -->
   I initially ran into this while running the TPC-DS benchmark on a scale factor 
10 dataset with Spark 4: multiple queries failed with a similar error. It is 
also reproducible with Spark 3.5 at least (the unit test included in this PR 
fails when run against the main branch). 
   
   ## What changes are included in this PR?
   The fix prevents a ReusedExchange plan from being picked as the driving RDD 
(the one that determines the target number of partitions). It also includes a 
workaround for another problem with ReusedExchangeExec: [this partition 
alignment](https://github.com/apache/datafusion-comet/blob/2b0e6db77b2c27bfa91eb34d0a2469d1014c20e6/spark/src/main/scala/org/apache/spark/sql/comet/operators.scala#L279)
 simply does not work for broadcast RDDs that have already been evaluated. That 
is why it is necessary to align the partitions of all CometBatchRDD inputs that 
serve as broadcast wrappers. After that, all RDDs have the same number of 
partitions and zipping them works without errors.
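   As a rough sketch of the idea (hypothetical names, not the actual Comet code): 
the driving partition count is taken from a non-broadcast input, and every 
broadcast wrapper is then aligned to that count, which is safe because each 
partition of a broadcast wrapper can serve the full broadcast data.

```scala
// Illustrative model of a plan input; `isBroadcast` marks a broadcast wrapper
// (e.g. a CometBatchRDD-style input) whose partition count can be adjusted.
case class PlanInput(numPartitions: Int, isBroadcast: Boolean)

// Pick the target partition count from a non-broadcast input (never from a
// broadcast, and in particular never from a reused exchange), then align all
// broadcast wrappers to it so the subsequent zip sees equal counts everywhere.
def alignPartitions(inputs: Seq[PlanInput]): Seq[PlanInput] = {
  val target = inputs.find(!_.isBroadcast).getOrElse(inputs.head).numPartitions
  inputs.map { in =>
    if (in.isBroadcast) in.copy(numPartitions = target) else in
  }
}
```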
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   ## How are these changes tested?
   Added a unit test that fails against the main branch; rebuilt the jar with the 
proposed changes and re-ran the TPC-DS SF10 benchmark using Spark 4.0 with no errors.
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are 
they covered by existing tests)?
   -->
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
