HyukjinKwon opened a new pull request #34505:
URL: https://github.com/apache/spark/pull/34505
### What changes were proposed in this pull request?
This PR proposes to implement `DataFrame.mapInArrow`, which lets users apply a
function over an iterator of `pyarrow.RecordBatch`es, for example:
```python
def do_something(iterator):
    for arrow_batch in iterator:
        # Do something with the `pyarrow.RecordBatch` and create a new
        # `pyarrow.RecordBatch`.
        # ...
        yield arrow_batch

df.mapInArrow(do_something, df.schema).show()
```
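To make the flow concrete, here is a minimal, self-contained sketch; the local `SparkSession` setup, the column names `a`/`b`, and the `upper_b` function are assumptions for illustration. It rewrites one column with `pyarrow.compute` while streaming Arrow batches through `mapInArrow`:
```python
import pyarrow as pa
import pyarrow.compute as pc
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "foo"), (2, "bar")], "a int, b string")

def upper_b(iterator):
    for batch in iterator:
        # Keep column `a` as-is and upper-case column `b`,
        # building a new pyarrow.RecordBatch per input batch.
        yield pa.RecordBatch.from_arrays(
            [batch.column(0), pc.utf8_upper(batch.column(1))],
            names=["a", "b"],
        )

df.mapInArrow(upper_b, df.schema).show()
```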
### Why are the changes needed?
For both usability and technical reasons; both are elaborated in more detail
at SPARK-37227.
### Does this PR introduce _any_ user-facing change?
Yes, this PR adds a new API:
```python
import pyarrow as pa
df = self.spark.createDataFrame(
    [(1, "foo"), (2, None), (3, "bar"), (4, "bar")], "a int, b string")

def func(iterator):
    for batch in iterator:
        assert isinstance(batch, pa.RecordBatch)
        yield batch

df.mapInArrow(func, df.schema).collect()
```
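Note that the output schema passed to `mapInArrow` does not have to match the input schema. The following is a rough sketch of a per-batch aggregation; the `count_per_batch` function and the `num_rows` column are hypothetical, and it reuses `df` from the snippet above:
```python
import pyarrow as pa

def count_per_batch(iterator):
    for batch in iterator:
        # Emit a single row per incoming Arrow batch: its row count.
        yield pa.RecordBatch.from_arrays(
            [pa.array([batch.num_rows], type=pa.int64())],
            names=["num_rows"],
        )

# The output schema ("num_rows long") differs from df.schema.
df.mapInArrow(count_per_batch, "num_rows long").show()
```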
### How was this patch tested?
Manually tested, and unit tests were added.