Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-64097958
I believe I agree with this change. The actual size of the mini batch
depends on the sampling.
This raises another potential problem -- what if 0 elements are sampled?
You could keep trying until at least 1 is sampled, but if the fraction and
the data set are both very small, this might take many retries to succeed.
Alternatively, if 0 are sampled, you could fall back to randomly choosing 1
element (perhaps logging a warning), to enforce that the mini batch always
has at least 1 element.
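A minimal sketch of that fallback idea, in plain Python rather than Spark's RDD API (the `sample_mini_batch` helper and its signature are hypothetical, only meant to illustrate the logic):

```python
import random
import warnings

def sample_mini_batch(data, fraction, seed=None):
    """Bernoulli-sample a mini batch from `data` with the given
    per-element probability. If the sample comes up empty on
    non-empty data, fall back to 1 random element and warn,
    so the mini batch is never empty (hypothetical helper)."""
    rng = random.Random(seed)
    batch = [x for x in data if rng.random() < fraction]
    if not batch and data:
        warnings.warn("empty mini batch; falling back to 1 random element")
        batch = [rng.choice(data)]
    return batch
```

With this, a call like `sample_mini_batch(range(100), 0.0001)` still returns at least one element instead of an empty batch.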
You could also refuse to proceed at all if the expected sample size
(`numExamples * miniBatchFraction`) is less than 1, just as in the case
where `numExamples == 0`.
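That guard could look something like the following sketch (plain Python; the function name is hypothetical, and the parameter names mirror the identifiers in the comment):

```python
def validate_mini_batch(num_examples, mini_batch_fraction):
    """Refuse to run when the expected mini-batch size is below 1,
    analogous to an up-front numExamples == 0 check (hypothetical)."""
    expected = num_examples * mini_batch_fraction
    if expected < 1:
        raise ValueError(
            f"Expected mini-batch size {expected} is less than 1; "
            "increase miniBatchFraction or provide more data")
```

The advantage of this check is that it fails fast with a clear message, instead of silently producing empty batches and gaps in the loss history.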
I see this change deals with it simply by leaving no entry in the stochastic
loss history. That seems less than optimal; since we're here, it may be worth
addressing the empty-sample case directly.