HyukjinKwon opened a new pull request, #41307:
URL: https://github.com/apache/spark/pull/41307

   ### What changes were proposed in this pull request?
   
   This PR proposes to pick a proper number of partitions when creating a DataFrame from an R DataFrame with Arrow.
   Previously, the number of partitions was always `1` if not specified.
   Now, it splits the input R DataFrame by `spark.sql.execution.arrow.maxRecordsPerBatch`, and picks a proper number of partitions (the number of batches).
   
   This matches the PySpark code path:
   
https://github.com/apache/spark/blob/46949e692e863992f4c50bdd482d5216d4fd9221/python/pyspark/sql/pandas/conversion.py#L618C11-L626
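   
   For illustration, here is a minimal sketch of the partitioning logic described above (the variable names are hypothetical, not the actual Spark source):
   
   ```r
   # Hedged sketch: the number of partitions becomes the number of Arrow
   # batches, assuming the default spark.sql.execution.arrow.maxRecordsPerBatch.
   max_records_per_batch <- 10000L  # default value of the config
   num_rows <- nrow(rdf)            # rdf: the local R data.frame (hypothetical name)
   num_partitions <- max(1L, as.integer(ceiling(num_rows / max_records_per_batch)))
   ```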
   
   ### Why are the changes needed?
   
   To avoid OOM when the R DataFrame is too large, and to enable proper distributed computing.
   
   ### Does this PR introduce _any_ user-facing change?
   
   Yes, it changes the default number of partitions when users call `createDataFrame` with an R DataFrame when Arrow optimization is enabled.
   Partitioning is meant to be an internal concept, though, and the user-visible behaviour does not change by default.
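   
   For example, a rough SparkR usage sketch (assuming the `spark.sql.execution.arrow.sparkr.enabled` configuration and otherwise default settings):
   
   ```r
   library(SparkR)
   sparkR.session(sparkConfig = list(spark.sql.execution.arrow.sparkr.enabled = "true"))
   
   # A local R data.frame with 25,000 rows: with the default
   # spark.sql.execution.arrow.maxRecordsPerBatch of 10,000, it would now be
   # split into 3 Arrow batches, hence 3 partitions.
   rdf <- data.frame(id = seq_len(25000))
   df <- createDataFrame(rdf)
   getNumPartitions(df)  # expected: 3 after this change (previously: 1)
   ```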
   
   ### How was this patch tested?
   
   Manually tested with a large CSV file (3 GB).

