Github user jimfcarroll commented on the pull request:
https://github.com/apache/spark/pull/3254#issuecomment-63226470
Cool. Thanks. I had pretty poor performance on a 7.5 million row, 500
column data set, so I profiled it. Some of the slowness was my code and some
was that 'size' call.
The problem was exacerbated by wide datasets, since 'size' was called once
per column for each row, and the 'size' call itself gets proportionally
slower the wider the dataset.
I was actually surprised the Scala implementation of that List (which had
the word "optimized" in its name) didn't cache the size the first time it
calculated it.
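The caching I had in mind could be sketched roughly like this (in Java for illustration; `CachedSizeList` is a hypothetical name, not anything in Spark or the Scala library): a cons-style list whose size requires an O(n) traversal, but where the result is memoized so that repeated calls, e.g. once per column for every row, only pay the traversal once.

```java
// Hypothetical sketch: a singly linked list whose size() walks the whole
// chain on the first call, then caches the result so later calls are O(1).
final class CachedSizeList<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final Node<T> head;
    private int cachedSize = -1; // -1 means "not yet computed"

    private CachedSizeList(Node<T> head) { this.head = head; }

    @SafeVarargs
    static <T> CachedSizeList<T> of(T... values) {
        Node<T> head = null;
        // Build the chain back to front so of("a","b","c") preserves order.
        for (int i = values.length - 1; i >= 0; i--) {
            head = new Node<>(values[i], head);
        }
        return new CachedSizeList<>(head);
    }

    int size() {
        if (cachedSize < 0) {
            // First call: O(n) traversal of the whole list.
            int n = 0;
            for (Node<T> cur = head; cur != null; cur = cur.next) n++;
            cachedSize = n;
        }
        // Every subsequent call returns the cached value in O(1).
        return cachedSize;
    }
}
```

Since the list is immutable, the cached value can never go stale, which is why the missing memoization surprised me.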