GitHub user sethah commented on the issue:
https://github.com/apache/spark/pull/11974
Mini-batching in Spark generally isn't that efficient: to extract a
mini-batch you still need to iterate over the entire dataset, and that means
reading it from disk if it doesn't fit in memory.
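To illustrate (a rough sketch, not code from this PR; `miniBatchStep` and
the types are placeholders of my own): `RDD.sample` makes a per-element
keep/drop decision, so each iteration still scans the whole RDD no matter
how small the fraction is.

```scala
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Hypothetical sketch: sample() visits every element to decide whether it
// lands in the batch, so the scan cost per iteration is proportional to the
// full dataset size (and re-reads from disk if the data isn't cached).
def miniBatchStep(data: RDD[LabeledPoint], fraction: Double, seed: Long): Long = {
  val batch = data.sample(withReplacement = false, fraction, seed)
  // The gradient computation over `batch` would go here; only this part
  // gets cheaper as `fraction` shrinks, not the sampling pass above.
  batch.count()
}
```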
The performance tests posted on the JIRA are hard to interpret. It looks to
me like the computation time goes down as you sample less data, but the cost
function doesn't decrease as much. What's the conclusion? I'd be more
interested in how long it takes to reach the same cost; all we've shown
so far, AFAICT, is that sampling is faster but produces a worse model. Why
didn't those tests run until convergence?
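Something like the following harness would make that comparison (again just
a sketch with assumed names, not taken from the JIRA tests): fix a shared
target cost, run each sampling fraction until it gets there, and compare
wall-clock time instead of per-iteration time.

```scala
// Hypothetical benchmark: `step` runs one optimizer iteration and returns
// the current cost. Run until the target cost is reached (or maxIter),
// then report elapsed milliseconds and iterations used.
def timeToTargetCost(step: () => Double, targetCost: Double, maxIter: Int): (Long, Int) = {
  val start = System.nanoTime()
  var cost = Double.MaxValue
  var iter = 0
  while (cost > targetCost && iter < maxIter) {
    cost = step()
    iter += 1
  }
  ((System.nanoTime() - start) / 1000000L, iter)
}
```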