Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7461#issuecomment-137272017
btw @kmadhugit -- your point about `numExecutors` is a good one. Thinking
about this a bit more: on a very large cluster, not every executor will
necessarily hold a partition of this RDD. In that case we should only use
the executors which actually have this RDD.
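A minimal, Spark-free sketch of that filtering step (the `cached_blocks` mapping and executor ids are hypothetical stand-ins for the block-location state the BlockManager tracks, not an actual Spark API):

```python
# Sketch: keep only executors that cache at least one block of the target RDD.
# cached_blocks is a hypothetical stand-in for BlockManager state:
# executor id -> set of (rdd_id, partition_index) tuples.

def executors_with_rdd(cached_blocks, rdd_id):
    """Return the sorted executor ids holding at least one block of rdd_id."""
    return sorted(
        executor
        for executor, blocks in cached_blocks.items()
        if any(block_rdd == rdd_id for block_rdd, _ in blocks)
    )

cached = {
    "exec-1": {(7, 0), (7, 1)},  # holds two partitions of RDD 7
    "exec-2": {(3, 0)},          # caches a different RDD
    "exec-3": set(),             # idle executor on a large cluster
}
print(executors_with_rdd(cached, 7))  # -> ['exec-1']
```

On a large cluster this avoids scheduling work on the (possibly many) executors that never cached the RDD in the first place.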
Unfortunately, right now this requires either two passes over the RDD or
some way to inspect the output of the map stage and then size the number
of reducers based on it. cc @mateiz, who has been working on the latter and
might have something to add.
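The "size the reducers from the map output" idea could look roughly like the following sketch (the per-partition byte counts and the target size are hypothetical inputs, not statistics Spark exposed at the time of this thread):

```python
import math

def size_reducers(map_output_bytes, target_bytes_per_reducer, max_reducers):
    """Pick a reducer count from observed map-output sizes.

    map_output_bytes: hypothetical per-map-partition output sizes, gathered
    after the map stage finishes.
    """
    total = sum(map_output_bytes)
    if total == 0:
        return 1
    # Aim for roughly target_bytes_per_reducer of input per reducer,
    # clamped to the [1, max_reducers] range.
    return max(1, min(max_reducers, math.ceil(total / target_bytes_per_reducer)))

# 4 map partitions producing 1 GiB total, targeting 128 MiB per reducer:
sizes = [300 * 2**20, 250 * 2**20, 200 * 2**20, 274 * 2**20]
print(size_reducers(sizes, 128 * 2**20, max_reducers=200))  # -> 8
```

The trade-off the comment describes is that these sizes are only known after the map stage runs, so the reducer count either comes from a separate first pass or from deferring the decision until the map output is materialized.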