lidavidm commented on a change in pull request #29818:
URL: https://github.com/apache/spark/pull/29818#discussion_r535648140
##########
File path: python/pyspark/sql/tests/test_arrow.py
##########
@@ -191,6 +191,32 @@ def test_pandas_round_trip(self):
         pdf_arrow = df.toPandas()
         assert_frame_equal(pdf_arrow, pdf)
 
+    def test_pandas_self_destruct(self):
+        import pyarrow as pa
+        rows = 2 ** 16
+        cols = 8
+        df = self.spark.range(0, rows).select(*[rand() for _ in range(cols)])
+        expected_bytes = rows * cols * 8
+        with self.sql_conf({"spark.sql.execution.arrow.pyspark.selfDestruct.enabled": True}):
+            # We hold on to the table reference here, so if self destruct didn't
+            # work, there would be 2 copies of the data (one in Arrow, one in
+            # Pandas), both tracked by the Arrow allocator
+            pdf, table = df._collect_as_arrow_table()
+            self.assertEqual((rows, cols), pdf.shape)
+            # If self destruct did work, then memory usage should be only a
+            # little above the minimum memory necessary for the dataframe
+            self.assertLessEqual(pa.total_allocated_bytes(), 1.2 * expected_bytes)
Review comment:
Hmm, I think that would work if we compare the allocation after we delete the
dataframe (but not the table). Then self_destruct should ensure that Arrow
doesn't hold on to any copies of the data.
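
Roughly what I mean, as a sketch (reusing `_collect_as_arrow_table` from the
diff above; the remaining-allocation bound is just a guess, not a tested
threshold):

```python
pdf, table = df._collect_as_arrow_table()
self.assertEqual((rows, cols), pdf.shape)
# Drop the pandas DataFrame but keep the Arrow table alive. With
# self_destruct, the table's own buffer references were already released
# during conversion, so the pandas blocks held the last references.
del pdf
# Arrow should now track (almost) nothing even though `table` still
# exists; 0.1 * expected_bytes is an arbitrary allowance for metadata.
self.assertLessEqual(pa.total_allocated_bytes(), 0.1 * expected_bytes)
```

Asserting near zero after the delete seems less fragile than tuning an upper
bound like the 1.2 multiplier.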