Yicong-Huang opened a new pull request, #54327: URL: https://github.com/apache/spark/pull/54327
### What changes were proposed in this pull request?

Optimize the non-iterator `applyInPandas` path by merging Arrow batches at the Arrow level and converting to pandas via PyArrow's native `table.to_pandas()`, instead of converting each batch individually through PySpark's per-column converter.

Changes:
- **`GroupPandasUDFSerializer.load_stream`**: yield a raw `Iterator[pa.RecordBatch]` instead of converting to pandas per batch via `ArrowBatchTransformer.to_pandas`.
- **`wrap_grouped_map_pandas_udf`**: accept a pre-built `pd.DataFrame` directly, removing the per-column `pd.concat` reassembly.
- **Non-iterator mapper**: collect all Arrow batches → `pa.Table.from_batches` → `table.to_pandas()` to get a DataFrame in one call.
- **Iterator mapper**: split into its own `elif` branch; still converts batches lazily via `ArrowBatchTransformer.to_pandas` per batch.

### Why are the changes needed?

Follow-up to SPARK-55459. After SPARK-54316 consolidated the grouped-map serializer, the non-iterator `applyInPandas` lost its efficient Arrow-level batch merge: it now converts each batch to pandas individually and reassembles the result via per-column `pd.concat`. This PR restores the Arrow-level merge and uses PyArrow's native `table.to_pandas()`, which is more efficient than per-column conversion.

A pure-Python microbenchmark (335 groups × 100K rows × 5 float columns) shows:

| Approach | Time | vs Master |
|---|---|---|
| Master (per-batch convert + per-column concat) | 0.935s | 1× |
| This PR (`from_batches` + `table.to_pandas`) | 0.224s | **4.2× faster** |

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing `applyInPandas` tests (`test_pandas_grouped_map.py`, `test_pandas_grouped_map_iter.py`).

### Was this patch authored or co-authored using generative AI tooling?

No.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
