Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/12767#discussion_r63535245
--- Diff: python/pyspark/ml/tests.py ---
@@ -594,17 +594,21 @@ def test_fit_minimize_metric(self):
iee = InducedErrorEstimator()
evaluator = RegressionEvaluator(metricName="rmse")
- grid = (ParamGridBuilder()
- .addGrid(iee.inducedError, [100.0, 0.0, 10000.0])
- .build())
+ grid = ParamGridBuilder() \
+ .addGrid(iee.inducedError, [100.0, 0.0, 10000.0]) \
+ .build()
tvs = TrainValidationSplit(estimator=iee, estimatorParamMaps=grid,
evaluator=evaluator)
tvsModel = tvs.fit(dataset)
bestModel = tvsModel.bestModel
bestModelMetric = evaluator.evaluate(bestModel.transform(dataset))
+ validationMetrics = tvsModel.validationMetrics
self.assertEqual(0.0, bestModel.getOrDefault('inducedError'),
"Best model should have zero induced error")
self.assertEqual(0.0, bestModelMetric, "Best model has RMSE of 0")
+ self.assertEqual(len(grid), len(validationMetrics),
+ "validationMetrics has the same size of grid parameter")
+ self.assertEqual(bestModelMetric, min(validationMetrics))
--- End diff --
With @MLnick's previous comment in mind, my preference would be to instead
check:
`self.assertEqual(0.0, min(validationMetrics))`
That way we don't add a check that happens to be correct here but is wrong in
general. I don't feel too strongly about it, but since it's an easy fix and it
might save a future developer some confusion, I think it's worth doing. Same in
the maximize version.
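
For concreteness, here is a rough sketch of how the tail of this test could
read with that change applied. It assumes the same `InducedErrorEstimator`,
`dataset`, and imports already used earlier in `tests.py`, and is only meant
to illustrate the suggestion, not the final patch:

```python
def test_fit_minimize_metric(self):
    # Sketch only: InducedErrorEstimator and dataset come from the
    # surrounding test file, as in the diff above.
    iee = InducedErrorEstimator()
    evaluator = RegressionEvaluator(metricName="rmse")
    grid = ParamGridBuilder() \
        .addGrid(iee.inducedError, [100.0, 0.0, 10000.0]) \
        .build()
    tvs = TrainValidationSplit(estimator=iee, estimatorParamMaps=grid,
                               evaluator=evaluator)
    tvsModel = tvs.fit(dataset)
    bestModel = tvsModel.bestModel
    bestModelMetric = evaluator.evaluate(bestModel.transform(dataset))
    validationMetrics = tvsModel.validationMetrics

    self.assertEqual(0.0, bestModel.getOrDefault('inducedError'),
                     "Best model should have zero induced error")
    self.assertEqual(0.0, bestModelMetric, "Best model has RMSE of 0")
    self.assertEqual(len(grid), len(validationMetrics),
                     "validationMetrics has the same size of grid parameter")
    # rmse is minimized and the best model has zero induced error, so the
    # smallest validation metric should be exactly 0.0. This avoids asserting
    # bestModelMetric == min(validationMetrics), which holds here but not in
    # general, since validation metrics are computed on the validation split
    # rather than on the full dataset.
    self.assertEqual(0.0, min(validationMetrics))
```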