GitHub user mengxr commented on the pull request:
https://github.com/apache/spark/pull/11578#issuecomment-199129930
@viirya Generating a random value could be more expensive than an iterator
call. With gap sampling and p=0.8, we probably need to generate more random
values than there are elements because there are not many "gaps". Please try
a very small `p`, e.g., 0.01, and test the performance. A small `p` is what
we usually use for big datasets anyway.
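For reference, here is a rough standalone sketch of the trade-off being discussed, comparing plain per-element Bernoulli sampling with gap sampling. This is not Spark's implementation; the object, method names, and the small timing driver are made up for illustration:

```scala
import scala.util.Random

object GapSamplingSketch {
  // Baseline: per-element Bernoulli sampling, one random draw per element.
  def bernoulliSample[T](items: Iterator[T], p: Double, rng: Random): Iterator[T] =
    items.filter(_ => rng.nextDouble() < p)

  // Gap sampling: draw the size of the "gap" to the next accepted element from
  // a geometric distribution, so the number of random draws is roughly p * n
  // instead of n. With a small p (e.g. 0.01) the gaps are long and few draws
  // are needed; with p = 0.8 most gaps are 0 or 1, so the log + draw per
  // accepted element can cost more than a plain per-element iterator check.
  def gapSample[T](items: Iterator[T], p: Double, rng: Random): Iterator[T] =
    new Iterator[T] {
      private val lnQ = math.log(1.0 - p)
      private var cur: Option[T] = None

      private def advance(): Unit = {
        // Geometric skip: floor(log(u) / log(1 - p)) elements are dropped
        // before the next accepted one; guard against u == 0.0.
        val u = math.max(rng.nextDouble(), 1e-12)
        var skip = (math.log(u) / lnQ).toInt
        while (skip > 0 && items.hasNext) { items.next(); skip -= 1 }
        cur = if (items.hasNext) Some(items.next()) else None
      }

      advance()
      override def hasNext: Boolean = cur.isDefined
      override def next(): T = { val out = cur.get; advance(); out }
    }

  def main(args: Array[String]): Unit = {
    // Hypothetical micro-benchmark: compare p = 0.8 against p = 0.01.
    for (p <- Seq(0.8, 0.01)) {
      val n = 10000000
      val rng = new Random(42)
      val t0 = System.nanoTime()
      val kept = gapSample(Iterator.range(0, n), p, rng).length
      val ms = (System.nanoTime() - t0) / 1e6
      println(f"p=$p%4.2f kept=$kept gapSample took $ms%.1f ms")
    }
  }
}
```

Under these assumptions, the expected number of random draws for gap sampling is about p * n, which is why the benefit shows up at small `p` and can disappear (or reverse) near p = 0.8.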
@davies If the performance difference is not significant, do we still want
to make this change? What are the other benefits? I just want to learn more
about the context.