WeichenXu123 commented on a change in pull request #30471:
URL: https://github.com/apache/spark/pull/30471#discussion_r531111061



##########
File path: python/pyspark/ml/tuning.py
##########
@@ -207,6 +210,205 @@ def _to_java_impl(self):
         return java_estimator, java_epms, java_evaluator
 
 
+class _ValidatorSharedReadWrite:
+
+    @staticmethod
+    def saveImpl(path, instance, sc, extraMetadata=None):
+        from pyspark.ml.classification import OneVsRest
+        numParamsNotJson = 0
+        jsonEstimatorParamMaps = []
+        for paramMap in instance.getEstimatorParamMaps():
+            jsonParamMap = []
+            for p, v in paramMap.items():
+                jsonParam = {'parent': p.parent, 'name': p.name}
+                if (isinstance(v, Estimator) and not (
+                        isinstance(v, _ValidatorParams) or
+                        isinstance(v, OneVsRest))
+                    ) or isinstance(v, Transformer) or \

Review comment:
       > It's easy to add the extension when that case is explicitly supported
   
   It doesn't seem so.
   Suppose there are 3rd-party libraries that depend on Spark, and their
   estimators include an evaluator param (e.g. for supervising early stopping
   during training). We need to support them.
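   For example, here is a minimal sketch (all names are hypothetical, not part
   of pyspark or this PR) of a 3rd-party estimator that exposes an Evaluator
   param for early stopping. Once such a param shows up in an estimator param
   map, the persistence code above has to handle an Evaluator-typed value, not
   only Estimator/Transformer values:

```python
# Hypothetical 3rd-party estimator (illustration only, not a real library).
from pyspark.ml import Estimator
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.param import Param, Params
from pyspark.ml.param.shared import HasMaxIter
from pyspark.ml.tuning import ParamGridBuilder
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").getOrCreate()


class ThirdPartyRegressor(Estimator, HasMaxIter):
    """Hypothetical estimator with an evaluator param used for early stopping."""

    evaluator = Param(Params._dummy(), "evaluator",
                      "evaluator used to decide when to stop training early")

    def setEvaluator(self, value):
        return self._set(evaluator=value)

    def _fit(self, dataset):
        raise NotImplementedError("illustration only")


est = ThirdPartyRegressor()
# This param map contains an Evaluator value, which is neither an Estimator
# nor a Transformer, so persisting estimatorParamMaps must either support it
# or fail with a clear error message.
grid = (ParamGridBuilder()
        .addGrid(est.evaluator, [RegressionEvaluator(metricName="rmse")])
        .addGrid(est.maxIter, [10, 50])
        .build())
```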



