BryanCutler commented on a change in pull request #29818:
URL: https://github.com/apache/spark/pull/29818#discussion_r538827775
##########
File path: python/pyspark/sql/tests/test_arrow.py
##########
@@ -191,6 +191,32 @@ def test_pandas_round_trip(self):
pdf_arrow = df.toPandas()
assert_frame_equal(pdf_arrow, pdf)
+ def test_pandas_self_destruct(self):
+ import pyarrow as pa
+ rows = 2 ** 16
+ cols = 8
+ df = self.spark.range(0, rows).select(*[rand() for _ in range(cols)])
+ expected_bytes = rows * cols * 8
+ with self.sql_conf({"spark.sql.execution.arrow.pyspark.selfDestruct.enabled": True}):
+ # We hold on to the table reference here, so if self destruct didn't work, then
+ # there would be 2 copies of the data (one in Arrow, one in Pandas), both
+ # tracked by the Arrow allocator
Review comment:
Not really. It's not ideal, but it would still test that splitting and
reallocating the columns in `_collect_as_arrow()` allows for self-destruction,
which covers most of the additions here. The rest is just calling
`to_pandas()` with the right options. I would recommend testing something like
this:
```python
with self.sql_conf({"spark.sql.execution.arrow.pyspark.selfDestruct.enabled": True}):
    # get initial pa.total_allocated_bytes()
    batches = df._collect_as_arrow(_force_split_batches=True)
    # convert batches to table as done in toPandas()
    table = pa.Table.from_batches(batches)
    pdf = table.to_pandas(self_destruct=True, split_blocks=True, use_threads=False)
    # get after pa.total_allocated_bytes() and check difference is expected
    # Call with the full code path and compare resulting DataFrame for good measure
    assert_frame_equal(df.toPandas(), pdf)
```
That should be sufficient testing, wdyt?
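For concreteness, a minimal end-to-end sketch of how that could look, assuming `_force_split_batches` is the (hypothetical) flag name proposed above and using `pa.total_allocated_bytes()` to bound allocator growth; the two-copies bound is just one plausible way to make "difference is expected" concrete:
```python
import pyarrow as pa
from pandas.testing import assert_frame_equal
from pyspark.sql.functions import rand

# Would live inside ArrowTests in python/pyspark/sql/tests/test_arrow.py
def test_pandas_self_destruct(self):
    rows = 2 ** 16
    cols = 8
    expected_bytes = rows * cols * 8  # one copy of the data, 8-byte doubles
    df = self.spark.range(0, rows).select(*[rand() for _ in range(cols)])
    with self.sql_conf({"spark.sql.execution.arrow.pyspark.selfDestruct.enabled": True}):
        allocation_before = pa.total_allocated_bytes()
        # _force_split_batches is the flag suggested above (hypothetical name)
        batches = df._collect_as_arrow(_force_split_batches=True)
        table = pa.Table.from_batches(batches)
        del batches  # drop batch references so the table owns the only copy
        pdf = table.to_pandas(self_destruct=True, split_blocks=True, use_threads=False)
        allocation_after = pa.total_allocated_bytes()
        # With self-destruct, Arrow buffers are released during conversion, so the
        # allocator should not end up holding two full copies of the data
        self.assertLess(allocation_after - allocation_before, 2 * expected_bytes)
        # Exercise the full toPandas() path and compare for good measure
        assert_frame_equal(df.toPandas(), pdf)
```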