Github user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11119#discussion_r79955656

    --- Diff: mllib/src/test/scala/org/apache/spark/ml/util/DefaultReadWriteTest.scala ---
    @@ -107,24 +133,34 @@ trait DefaultReadWriteTest extends TempDirectory { self: Suite =>
         }
         val model = estimator.fit(dataset)
    -    // Test Estimator save/load
    -    val estimator2 = testDefaultReadWrite(estimator)
    -    testParams.foreach { case (p, v) =>
    -      val param = estimator.getParam(p)
    -      assert(estimator.get(param).get === estimator2.get(param).get)
    +    val testFunctions = if (testParams.contains("initialModel")) {
    +      Map(("initialModel", checkModelData.asInstanceOf[(Any, Any) => Unit]))
    --- End diff --

    There is an assumption here that `checkModelData` will work on the initial models, which is not always true. In the `HasInitialModel` trait we do not require the initial model to be of the same type as the model that inherits it. So, if we ever use a model whose initial model is a different type than itself, we will get runtime errors here.

    We either need to update this to handle arbitrary initial models, or specify somehow that the initial model must be of the same type as the model that inherits it. I prefer updating the tests, but I spent some time tinkering with it and didn't come up with anything that wasn't really clunky.
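To make the concern concrete, here is a minimal Scala sketch of the failure mode. It is a simplified illustration, not the real Spark ML code: `SeedModel`, `FittedModel`, and this `checkModelData` are hypothetical stand-ins, and the `HasInitialModel` shape is assumed. A check function written against the fitted model's type, then cast to `(Any, Any) => Unit` and applied to initial models of a different class, fails with a `ClassCastException` at runtime rather than at compile time.

```scala
// Hedged sketch: simplified stand-ins for the types under discussion.
// The trait does not tie the initial model's type to the fitted model's.
trait HasInitialModel[M] {
  def initialModel: M
}

// An initial model type that differs from the fitted model type.
case class SeedModel(centers: Array[Double])
case class FittedModel(weights: Array[Double])

// A check written only for FittedModel, as the test helper assumes.
def checkModelData(a: Any, b: Any): Unit = {
  val (m1, m2) = (a.asInstanceOf[FittedModel], b.asInstanceOf[FittedModel])
  require(m1.weights.sameElements(m2.weights), "model data mismatch")
}

// Fine when both arguments really are FittedModel...
checkModelData(FittedModel(Array(1.0, 2.0)), FittedModel(Array(1.0, 2.0)))

// ...but the cast blows up when handed initial models of another type.
try {
  checkModelData(SeedModel(Array(0.0)), SeedModel(Array(0.0)))
} catch {
  case _: ClassCastException =>
    println("checkModelData cannot handle a differently-typed initial model")
}
```

This is why reusing `checkModelData` for the `"initialModel"` param only works as long as every initial model happens to have the same type as the model that inherits `HasInitialModel`.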