HyukjinKwon commented on code in PR #36683:
URL: https://github.com/apache/spark/pull/36683#discussion_r885367253
##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -596,7 +596,7 @@ def _create_from_pandas_with_arrow(
]
# Slice the DataFrame to be batched
- step = -(-len(pdf) // self.sparkContext.defaultParallelism)  # round int up
+ step = self._jconf.arrowMaxRecordsPerBatch()
Review Comment:
The reason for doing this is to avoid having to reconfigure
`spark.rpc.message.maxSize`. When a batch is too large, an exception is
thrown complaining that `spark.rpc.message.maxSize` is too small.
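A minimal sketch of the slicing behavior being discussed, not the actual PySpark internals: `pdf` and `step` follow the diff above, while `default_parallelism` and `max_records_per_batch` are illustrative stand-ins for `sparkContext.defaultParallelism` and the `spark.sql.execution.arrow.maxRecordsPerBatch` value.

```python
import pandas as pd

# Illustrative data and config values (assumptions, not real defaults).
pdf = pd.DataFrame({"x": range(10_000)})
default_parallelism = 3
max_records_per_batch = 1_000

# Old approach: ceiling-divide the row count by the parallelism, so each
# of the few slices can become a very large serialized batch.
step_old = -(-len(pdf) // default_parallelism)  # == 3334, "round int up"

# New approach: cap each slice at maxRecordsPerBatch rows, keeping every
# serialized Arrow batch small enough to stay under spark.rpc.message.maxSize.
step_new = max_records_per_batch

batches = [pdf[i : i + step_new] for i in range(0, len(pdf), step_new)]
print(len(batches), [len(b) for b in batches[:3]])  # 10 batches of 1000 rows
```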