Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18538#discussion_r138027427
--- Diff: mllib/src/main/scala/org/apache/spark/ml/evaluation/ClusteringEvaluator.scala ---
@@ -0,0 +1,437 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.evaluation
+
+import org.apache.spark.SparkContext
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.ml.linalg.{BLAS, DenseVector, Vector, Vectors, VectorUDT}
+import org.apache.spark.ml.param.{Param, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared.{HasFeaturesCol, HasPredictionCol}
+import org.apache.spark.ml.util.{DefaultParamsReadable, DefaultParamsWritable, Identifiable, SchemaUtils}
+import org.apache.spark.sql.{DataFrame, Dataset}
+import org.apache.spark.sql.functions.{avg, col, udf}
+import org.apache.spark.sql.types.IntegerType
+
+/**
+ * :: Experimental ::
+ * Evaluator for clustering results.
+ * The metric computes the Silhouette measure
+ * using the squared Euclidean distance.
+ *
+ * The Silhouette is a measure of the
+ * consistency within clusters. It ranges
+ * from -1 to 1, where a value close to 1
+ * means that the points in a cluster are close
+ * to the other points in the same cluster and
+ * far from the points of the other clusters.
+ */
+@Experimental
+@Since("2.3.0")
+class ClusteringEvaluator @Since("2.3.0") (@Since("2.3.0") override val uid: String)
+  extends Evaluator with HasPredictionCol with HasFeaturesCol with DefaultParamsWritable {
+
+ @Since("2.3.0")
+ def this() = this(Identifiable.randomUID("cluEval"))
+
+ @Since("2.3.0")
+  override def copy(pMap: ParamMap): ClusteringEvaluator = this.defaultCopy(pMap)
+
+ @Since("2.3.0")
+ override def isLargerBetter: Boolean = true
+
+ /** @group setParam */
+ @Since("2.3.0")
+  def setPredictionCol(value: String): this.type = set(predictionCol, value)
+
+ /** @group setParam */
+ @Since("2.3.0")
+ def setFeaturesCol(value: String): this.type = set(featuresCol, value)
+
+ /**
+ * param for metric name in evaluation
+ * (supports `"squaredSilhouette"` (default))
+ * @group param
+ */
+ @Since("2.3.0")
+ val metricName: Param[String] = {
+ val allowedParams = ParamValidators.inArray(Array("squaredSilhouette"))
--- End diff --
I'd suggest naming the metric ```silhouette```, since we may add silhouette
for other distance measures later; we could then add another param like
```distance``` to control that. The param ```metricName``` should not be bound
to any particular way of computing distance. There are lots of other metrics
for clustering algorithms, like
[these](http://scikit-learn.org/stable/modules/classes.html#clustering-metrics)
in sklearn. We would not add all of them to MLlib, but we may add some of them
in the future.
cc @jkbradley @MLnick @WeichenXu123
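For reference, here is a worked illustration of the metric itself, as a plain-Python sketch with squared Euclidean distance (this is not Spark's implementation; the `silhouette` and `sq_dist` helpers are hypothetical names used only for this example):

```python
def sq_dist(x, y):
    # squared Euclidean distance between two points
    return sum((a - b) ** 2 for a, b in zip(x, y))

def silhouette(points, labels):
    # group point indices by cluster label
    by_label = {}
    for i, l in enumerate(labels):
        by_label.setdefault(l, []).append(i)
    scores = []
    for i, l in enumerate(labels):
        own = by_label[l]
        if len(own) == 1:
            scores.append(0.0)  # common convention for singleton clusters
            continue
        # a(i): mean distance to the other points of the same cluster
        a = sum(sq_dist(points[i], points[j]) for j in own if j != i) / (len(own) - 1)
        # b(i): smallest mean distance to the points of any other cluster
        b = min(sum(sq_dist(points[i], points[j]) for j in members) / len(members)
                for lbl, members in by_label.items() if lbl != l)
        scores.append((b - a) / max(a, b))
    # overall silhouette: mean over all points, in [-1, 1]
    return sum(scores) / len(scores)

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
good = silhouette(points, [0, 0, 1, 1])   # well-separated clusters, close to 1
bad = silhouette(points, [0, 1, 0, 1])    # mixed-up assignment, negative
```

A good clustering of this toy data scores near 1, while the deliberately scrambled labels score below 0, which is exactly the behavior an `isLargerBetter` evaluator relies on.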
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]