Github user davies commented on the pull request:
https://github.com/apache/spark/pull/3193#issuecomment-63108072
For 1), I could put the refactor in another JIRA/PR.
For the performance regression, I think it's an acceptable trade-off between
performance and code maintainability. There are lots of ways to improve the
performance of PySpark, such as numpy/Cython/numba/pypy/pandas; we should
balance the dependencies against the complexity.
Actually, the current approach introduces problems: if numpy is available on
the driver but not installed on the slaves, the job will fail. Someone tried to
fix this via https://github.com/apache/spark/pull/2313, but that PR may
introduce another problem: sample() would no longer be reproducible if some of
the slaves have numpy but others do not. This complicates things a lot without
contributing a large performance gain.
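To make the reproducibility concern concrete, here is a minimal sketch (not the actual Spark code; `sample_indices` and its parameters are hypothetical) of why conditionally switching between numpy's RNG and the stdlib RNG breaks reproducibility: the two generators produce different sequences from the same seed, so workers with and without numpy would select different rows.

```python
import random

def sample_indices(seed, fraction, n, use_numpy):
    """Hypothetical helper: keep each index in [0, n) with probability
    `fraction`, using either numpy's RNG or the stdlib RNG, both seeded
    identically. The two branches draw different random sequences."""
    if use_numpy:
        import numpy as np
        rng = np.random.RandomState(seed)
        return [i for i in range(n) if rng.random_sample() < fraction]
    rng = random.Random(seed)
    return [i for i in range(n) if rng.random() < fraction]

# Each implementation is reproducible with itself...
assert sample_indices(42, 0.5, 100, False) == sample_indices(42, 0.5, 100, False)
# ...but on a worker where numpy is installed, the `use_numpy=True` branch
# would generally return a *different* subset for the same seed and data,
# so a mixed cluster yields non-deterministic sample() results.
```

This is why falling back per-worker based on numpy availability, as attempted in the PR mentioned above, trades a failure mode for a silent correctness issue.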