Github user akopich commented on the issue:
https://github.com/apache/spark/pull/19565
@WeichenXu123, in the case of a large dataset this "adjustment" would have
a negligible effect. (IMO, no adjustment is needed -- the expected number of
non-empty docs is the same regardless of the order of filter and
sample, and equals `docs.size * miniBatchFraction * fractionOfNonEmptyDocs`.)
So I believe we all agree that sampling should go before filtering. I'll
send a commit shortly.
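
The expectation argument can be checked with a toy simulation (a minimal sketch, not the actual PR code: document lengths, the 80% non-empty rate, and the function name `simulate` are all illustrative assumptions). Bernoulli sampling and filtering out empty docs commute in expectation, so both orders converge to `docs.size * miniBatchFraction * fractionOfNonEmptyDocs`:

```python
import random

def simulate(seed=42, num_docs=1000, mini_batch_fraction=0.1, trials=2000):
    """Compare the average non-empty doc count under both operation orders.

    Docs are modeled as word counts; 0 means the doc is empty.
    """
    rng = random.Random(seed)
    # Toy corpus with roughly 80% non-empty docs.
    docs = [1 if rng.random() < 0.8 else 0 for _ in range(num_docs)]
    frac_non_empty = sum(1 for d in docs if d > 0) / num_docs
    expected = num_docs * mini_batch_fraction * frac_non_empty

    sample_then_filter = 0
    filter_then_sample = 0
    for _ in range(trials):
        # Order 1: Bernoulli-sample the corpus, then drop empty docs.
        sampled = [d for d in docs if rng.random() < mini_batch_fraction]
        sample_then_filter += sum(1 for d in sampled if d > 0)
        # Order 2: drop empty docs first, then Bernoulli-sample.
        non_empty = [d for d in docs if d > 0]
        filter_then_sample += sum(1 for d in non_empty
                                  if rng.random() < mini_batch_fraction)

    return sample_then_filter / trials, filter_then_sample / trials, expected
```

Averaged over many trials, both orders land on the same analytic expectation, which is why only the mini-batch semantics (not the expected batch size) decide where the filter should go.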