Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/8314#issuecomment-153719068
So for `Word2VecSuite`, a stop-gap to get this passing, so we can get this
fix in, is to change the expected output to match the actual final output
(I wouldn't mind fixing the nextFloat / nextGaussian thing too) and later sort
out how to change `codes`.
For `MultilayerPerceptronClassifierSuite`, I was referring to the two seeds
in the test. One is `11L`, passed to `MultilayerPerceptronClassifier`, and one
is `42`, passed to `generateMultinomialLogisticInput`. I don't know how much
either of them matters. Since the test now fails, something must depend on the
new, different seeding; it could be the second one, in which case it may just
be a matter of hunting for a seed that works there, or loosening the test
tolerance.
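Because the two seeds feed independent random streams, you can bisect which one the failing assertion depends on by varying each in isolation. A hedged sketch of the idea in plain Scala (the `generate` helper below just stands in for a seeded data generator like `generateMultinomialLogisticInput`; it is not the real signature):

```scala
import scala.util.Random

// Stand-in for a seeded data generator: same seed, same data.
def generate(seed: Long): Seq[Double] = {
  val rng = new Random(seed)
  Seq.fill(4)(rng.nextGaussian())
}

// The data-generation stream is fully determined by its own seed,
// independent of whatever seed the model under test uses. So if the
// test still fails with the data seed fixed and the model seed varied,
// the dependence is on the model's seeding path, and vice versa.
assert(generate(42L) == generate(42L))
```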
How do you want to proceed to close this out? Hard-coding some new test
results seems reasonable in these cases, since we're fixing a bug, improving
some tests, and not making the others any more brittle.