CyborgDroid commented on a change in pull request #26838: [SPARK-30144][ML][PySpark] Make MultilayerPerceptronClassificationModel extend MultilayerPerceptronParams
URL: https://github.com/apache/spark/pull/26838#discussion_r358998827
##########
File path: python/pyspark/ml/classification.py
##########
@@ -2274,21 +2276,21 @@ def setSolver(self, value):
         return self._set(solver=value)


-class MultilayerPerceptronClassificationModel(JavaProbabilisticClassificationModel, JavaMLWritable,
+class MultilayerPerceptronClassificationModel(JavaProbabilisticClassificationModel,
+                                              _MultilayerPerceptronParams, JavaMLWritable,
                                               JavaMLReadable):
     """
     Model fitted by MultilayerPerceptronClassifier.

     .. versionadded:: 1.6.0
     """

-    @property
-    @since("1.6.0")
-    def layers(self):
+    @since("3.0.0")
+    def setLayers(self, value):
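As a minimal sketch of what the mixin change in the hunk above exposes on a fitted model (the local session, toy DataFrame, and layer sizes below are illustrative assumptions, not taken from the PR):

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.linalg import Vectors

# Illustrative toy data and layer sizes; not taken from the PR.
spark = SparkSession.builder.master("local[1]").appName("mlp-params-sketch").getOrCreate()
df = spark.createDataFrame(
    [(0.0, Vectors.dense([0.0, 0.0])),
     (1.0, Vectors.dense([1.0, 1.0]))],
    ["label", "features"])

mlp = MultilayerPerceptronClassifier(layers=[2, 4, 2], maxIter=5, seed=42)
model = mlp.fit(df)

# Before this change the fitted model only exposed the read-only `layers`
# property; with _MultilayerPerceptronParams mixed in, the shared Param
# getters become available on the model as well (assuming a build that
# includes this patch).
print(model.getLayers())   # [2, 4, 2]
print(model.getMaxIter())  # 5
```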
Review comment:
I opened the JIRA ticket and want to thank everyone for their work and
comments. @viirya - I'm not sure I understood your question about setters;
however, from a user's perspective (mine), the important thing is exposing the
parameters that were used to train the model. After running cross-validation,
there is otherwise no way to access those parameters to analyze the results of
the grid search, and there is no built-in way to log the parameters during each
grid-search attempt.
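For instance, here is a minimal sketch of the kind of analysis I mean, assuming a build that includes this change and a hypothetical training DataFrame `train` with "features" and "label" columns:

```python
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# `train` is a hypothetical DataFrame with "features" and "label" columns;
# the grid values below are made up for illustration.
mlp = MultilayerPerceptronClassifier(seed=42)
grid = (ParamGridBuilder()
        .addGrid(mlp.layers, [[4, 8, 2], [4, 16, 2]])
        .addGrid(mlp.maxIter, [50, 100])
        .build())
cv = CrossValidator(estimator=mlp,
                    estimatorParamMaps=grid,
                    evaluator=MulticlassClassificationEvaluator(),
                    numFolds=3)
cv_model = cv.fit(train)

# Pair each grid point with its mean cross-validation metric so the search
# can be analyzed (or logged to a tracker such as MLflow) after the fact.
for param_map, metric in zip(grid, cv_model.avgMetrics):
    print({p.name: v for p, v in param_map.items()}, metric)

# With the getters this PR mixes into the model, the winning model itself
# reports how it was trained, without going back to the estimator or grid.
best = cv_model.bestModel
print(best.getLayers(), best.getMaxIter())
```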
Here is an example of how that is useful for narrowing down the hyperparameter
space based on which hyperparameters generally give good results:
https://github.com/CyborgDroid/spark_mlflow/blob/develop/README.md#mlflow-to-analyze-best-hyperparameters-for-further-optimization