tedyu opened a new pull request, #37952:
URL: https://github.com/apache/spark/pull/37952

   ### What changes were proposed in this pull request?
   When running a Spark application against Spark 3.3, I see the following:
   ```
   java.lang.IllegalArgumentException: Unsupported data source V2 partitioning type: CustomPartitioning
       at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioning$$anonfun$apply$1.applyOrElse(V2ScanPartitioning.scala:46)
       at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioning$$anonfun$apply$1.applyOrElse(V2ScanPartitioning.scala:34)
       at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
   ```
   The same `CustomPartitioning` works fine with Spark 3.2.1.
   This PR proposes to relax the check in `V2ScanPartitioning` and treat all unrecognized partitioning implementations the same way as `UnknownPartitioning`.
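   The intended behavior change can be sketched with stub types (these are simplified stand-ins, not the actual Spark classes, and this is not the exact patch):

   ```scala
   // Stub types standing in for Spark's DSv2 Partitioning hierarchy.
   sealed trait Partitioning
   case class KeyGroupedPartitioning(keys: Seq[String]) extends Partitioning
   case class UnknownPartitioning(numPartitions: Int) extends Partitioning
   // A user-defined partitioning, as in the report:
   case class CustomPartitioning() extends Partitioning

   // Spark 3.3.0 behavior: any partitioning other than the two known
   // types throws, which breaks user-defined implementations.
   def strict(p: Partitioning): Option[Seq[String]] = p match {
     case kgp: KeyGroupedPartitioning => Some(kgp.keys)
     case _: UnknownPartitioning      => None
     case other => throw new IllegalArgumentException(
       s"Unsupported data source V2 partitioning type: ${other.getClass.getSimpleName}")
   }

   // Proposed relaxation: every unrecognized partitioning falls through
   // to None, exactly as UnknownPartitioning does.
   def relaxed(p: Partitioning): Option[Seq[String]] = p match {
     case kgp: KeyGroupedPartitioning => Some(kgp.keys)
     case _                           => None
   }
   ```

   With this change, `relaxed(CustomPartitioning())` simply yields `None` (Spark then plans the scan without partitioning information) instead of failing the query.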
   
   ### Why are the changes needed?
   A minor release such as 3.3.0 doesn't seem to warrant such a behavioral change from the 3.2.1 release.
   
   ### Does this PR introduce _any_ user-facing change?
   This would allow users' custom partitioning implementations to continue to work with 3.3.x releases.
   
   ### How was this patch tested?
   Existing test suite.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

