Bryan Cutler commented on SPARK-23030:

Hi [~icexelloss], I have something working, just need to write it up, then we 
can discuss on the PR.  You're right, though: if we want to keep collection as 
fast as possible, it must be fully asynchronous, and then unfortunately there 
is no way to avoid the worst case of having all the data in JVM driver memory.  
I did improve the average case and got a small speedup, so hopefully it will 
be worth it.  I'll put up a PR soon.

> Decrease memory consumption with toPandas() collection using Arrow
> ------------------------------------------------------------------
>                 Key: SPARK-23030
>                 URL: https://issues.apache.org/jira/browse/SPARK-23030
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark, SQL
>    Affects Versions: 2.3.0
>            Reporter: Bryan Cutler
>            Priority: Major
> Currently with Arrow enabled, calling {{toPandas()}} results in a collection 
> of all partitions in the JVM as batches in the Arrow file format.  Once 
> collected in the JVM, they are served to the Python driver process. 
> I believe using the Arrow stream format can help to optimize this and reduce 
> memory consumption in the JVM by only loading one record batch at a time 
> before sending it to Python.  This might also reduce the latency between 
> making the initial call in Python and receiving the first batch of records.
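
A minimal sketch of the idea in the quoted description, using pyarrow directly (this is not Spark's internal implementation): with the stream format, record batches can be written and then read back one at a time, so only a single batch needs to be materialized at once instead of a whole file of batches.

{code:python}
import pyarrow as pa
import pandas as pd

# A couple of record batches standing in for collected Spark partitions.
batch = pa.RecordBatch.from_arrays(
    [pa.array([1, 2, 3]), pa.array(["a", "b", "c"])],
    names=["id", "val"])

# Writer side: serialize batches into the Arrow stream format.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, batch.schema) as writer:
    for _ in range(2):
        writer.write_batch(batch)

# Reader side: iterate one batch at a time rather than loading everything,
# converting each batch to pandas as it arrives.
reader = pa.ipc.open_stream(sink.getvalue())
frames = [b.to_pandas() for b in reader]
df = pd.concat(frames, ignore_index=True)
print(len(df))  # 6 rows, two 3-row batches
{code}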
