GitHub user FRosner commented on the pull request:

    https://github.com/apache/spark/pull/9222#issuecomment-150494201
  
    @felixcheung the ticket mentions two numbers in the description
(https://issues.apache.org/jira/browse/SPARK-11258#). For a small data frame
it is already a 4x speedup. But the real problem is the quadratic
complexity: you quickly run into trouble as the number of columns grows. We
were not even able to load one of our data frames (> 1 million rows, several
hundred columns) with the old method, but it ran through with the new one.
    
    I can provide some more benchmarks later today.
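    In the meantime, if you want to get a feeling for the scaling behaviour
without Spark, here is a minimal, self-contained Scala sketch. It is purely
illustrative and not the code touched by this PR: it contrasts a per-column
extraction that has to re-decode whole rows on every pass (quadratic in the
column count) with a single transposition pass that decodes each row once
(linear in the column count).

    ```scala
    object WideCollectSketch {
      type Row = Array[String]

      // Per-column extraction: every column pass re-decodes the whole row
      // (simulated by mkString/split), so total work is O(rows * cols^2).
      def perColumn(rows: Array[Row], numCols: Int): Array[Array[String]] =
        Array.tabulate(numCols) { c =>
          rows.map { row =>
            val decoded = row.mkString(",").split(",") // touches the full row again
            decoded(c)
          }
        }

      // Single transposition pass: each row is decoded once and its values are
      // appended to per-column builders, so total work is O(rows * cols).
      def singlePass(rows: Array[Row], numCols: Int): Array[Array[String]] = {
        val cols = Array.fill(numCols)(Array.newBuilder[String])
        rows.foreach { row =>
          val decoded = row.mkString(",").split(",")
          var c = 0
          while (c < numCols) { cols(c) += decoded(c); c += 1 }
        }
        cols.map(_.result())
      }

      def time[A](label: String)(body: => A): A = {
        val start = System.nanoTime()
        val result = body
        println(f"$label%-16s ${(System.nanoTime() - start) / 1e6}%10.1f ms")
        result
      }

      def main(args: Array[String]): Unit = {
        for (numCols <- Seq(50, 100, 200, 400)) {
          val rows = Array.fill(10000)(Array.tabulate(numCols)(i => "v" + i))
          time(s"perColumn/$numCols")(perColumn(rows, numCols))
          time(s"singlePass/$numCols")(singlePass(rows, numCols))
        }
      }
    }
    ```

    Doubling the column count should roughly quadruple the perColumn timings
while only doubling the singlePass ones, which is the kind of blow-up
described above.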

