lidavidm commented on pull request #29818:
URL: https://github.com/apache/spark/pull/29818#issuecomment-710303071


   Ah and for
   
   > The last I saw was that the self_destruct option is experimental. Do you 
know if or when this might change? I'm a little unsure about adding 
experimental features, especially if it could lead to issues with the resulting 
Pandas DataFrame.
   
   I can follow up on the experimental status, but as I understand it, it's just going to be 
a long tail of "this Pandas operation didn't expect an immutable backing array" issues 
that we'd need to flush out over time anyway. We can leave it turned off 
by default. Also, the PyArrow optimization in Spark is itself marked 
experimental.
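   For reference, here is a minimal sketch of what the option does on the PyArrow side (the table contents are made up for illustration). `self_destruct=True` releases the Arrow Table's buffers as columns are converted, so peak memory isn't doubled, but the Table must not be used afterwards, and the resulting DataFrame may be backed by immutable Arrow memory:

   ```python
   import pyarrow as pa

   # Build a small example Table (illustrative data only).
   table = pa.table({"a": [1, 2, 3], "b": ["x", "y", "z"]})

   # self_destruct frees each column's Arrow buffers once converted;
   # split_blocks keeps columns in separate blocks so zero-copy is possible.
   df = table.to_pandas(self_destruct=True, split_blocks=True, use_threads=False)

   # `table` is now consumed and must not be touched again. Some in-place
   # pandas operations may not expect the immutable Arrow-backed memory --
   # that's the "long tail" referred to above.
   ```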


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


