Github user wesm commented on the issue:

    https://github.com/apache/spark/pull/15821
  
    Very nice to see the improved wall-clock times. I have been busy 
engineering the pipeline between the byte stream from Spark and the resulting 
DataFrame -- the only major optimization still on the table that might help is 
converting to pandas.Categorical in C++ rather than returning a dense array of 
strings. 
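
    As a rough illustration of the payoff (using plain pandas here, not the 
C++/Arrow conversion path itself -- the column contents are an assumption for 
the sake of the example), dictionary-encoding a column of repeated strings as 
a Categorical shrinks it dramatically compared with a dense object array:

```python
import pandas as pd

# A dense array of repeated strings (object dtype) vs. the same data
# dictionary-encoded as a pandas Categorical. Illustrative data only.
n = 1_000_000
dense = pd.Series(["red", "green", "blue"] * (n // 3))  # one Python str per row
cat = dense.astype("category")                          # int8 codes + 3 categories

dense_bytes = dense.memory_usage(deep=True)
cat_bytes = cat.memory_usage(deep=True)
print(dense_bytes, cat_bytes)  # the categorical column is far smaller
```

    The same idea applies on the conversion side: emitting integer codes plus 
a small dictionary avoids materializing millions of Python string objects.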
    
    I'll review this patch in more detail when I can.
    
    I'll do a bit of performance analysis (esp. on the Python side) and flesh 
out some of the architectural next steps (e.g. what @leifwalsh has described) 
in advance of Spark Summit in a couple of weeks. Parallelizing the record 
batch conversion and streaming it to Python would be another significant perf 
win. Having these tools should also be helpful for speeding up UDF evaluation.

