This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new d22c9f6  [SPARK-30933][ML][DOCS] ML, GraphX 3.0 QA: Update user guide for new features & APIs
d22c9f6 is described below

commit d22c9f6c0de76b7f3df7c9c98fdb0303c29e8e73
Author: Huaxin Gao <huax...@us.ibm.com>
AuthorDate: Wed Mar 18 13:21:24 2020 -0500

    [SPARK-30933][ML][DOCS] ML, GraphX 3.0 QA: Update user guide for new features & APIs
    
    ### What changes were proposed in this pull request?
    Change ml-tuning.html.
    
    ### Why are the changes needed?
    Add description for ```MultilabelClassificationEvaluator``` and ```RankingEvaluator```.
    
    ### Does this PR introduce any user-facing change?
    Yes
    
    before:
    
![image](https://user-images.githubusercontent.com/13592258/76437013-2c5ffb80-6376-11ea-8946-f5c2e7379b7c.png)
    
    after:
    
![image](https://user-images.githubusercontent.com/13592258/76437054-397cea80-6376-11ea-867f-fe8d8fa4e5b3.png)
    
    ### How was this patch tested?
    
    Closes #27880 from huaxingao/spark-30933.
    
    Authored-by: Huaxin Gao <huax...@us.ibm.com>
    Signed-off-by: Sean Owen <sro...@gmail.com>
---
 docs/ml-tuning.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/ml-tuning.md b/docs/ml-tuning.md
index 49e2368..274f195 100644
--- a/docs/ml-tuning.md
+++ b/docs/ml-tuning.md
@@ -65,9 +65,11 @@ At a high level, these model selection tools work as follows:
 
 The `Evaluator` can be a [`RegressionEvaluator`](api/scala/org/apache/spark/ml/evaluation/RegressionEvaluator.html)
 for regression problems, a [`BinaryClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/BinaryClassificationEvaluator.html)
-for binary data, or a [`MulticlassClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/MulticlassClassificationEvaluator.html)
-for multiclass problems. The default metric used to choose the best `ParamMap` can be overridden by the `setMetricName`
-method in each of these evaluators.
+for binary data, a [`MulticlassClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/MulticlassClassificationEvaluator.html)
+for multiclass problems, a [`MultilabelClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/MultilabelClassificationEvaluator.html)
+ for multi-label classifications, or a
+[`RankingEvaluator`](api/scala/org/apache/spark/ml/evaluation/RankingEvaluator.html) for ranking problems. The default metric used to
+choose the best `ParamMap` can be overridden by the `setMetricName` method in each of these evaluators.
 
 To help construct the parameter grid, users can use the [`ParamGridBuilder`](api/scala/org/apache/spark/ml/tuning/ParamGridBuilder.html) utility.
 By default, sets of parameters from the parameter grid are evaluated in serial. Parameter evaluation can be done in parallel by setting `parallelism` with a value of 2 or more (a value of 1 will be serial) before running model selection with `CrossValidator` or `TrainValidationSplit`.
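
For context, a minimal Scala sketch of the two evaluators this doc change describes, including the `setMetricName` override the text mentions. It assumes Spark 3.0+, an active `SparkSession` named `spark`, and made-up toy data; both `prediction` and `label` are expected as arrays of doubles:

```scala
import org.apache.spark.ml.evaluation.{MultilabelClassificationEvaluator, RankingEvaluator}
import spark.implicits._

// Multi-label classification: each row pairs the predicted label set
// with the true label set.
val multilabelDF = Seq(
  (Array(0.0, 1.0), Array(0.0, 2.0)),
  (Array(0.0, 2.0), Array(0.0, 1.0))
).toDF("prediction", "label")

val mlEvaluator = new MultilabelClassificationEvaluator()
  .setMetricName("hammingLoss") // overrides the default metric
println(mlEvaluator.evaluate(multilabelDF))

// Ranking: each row pairs the predicted item ordering with the
// ground-truth relevant items.
val rankingDF = Seq(
  (Array(1.0, 6.0, 2.0), Array(1.0, 2.0, 3.0))
).toDF("prediction", "label")

val rkEvaluator = new RankingEvaluator()
  .setMetricName("precisionAtK") // default is meanAveragePrecision
  .setK(2)
println(rkEvaluator.evaluate(rankingDF))
```

Either evaluator can then be passed as the `Evaluator` to `CrossValidator` or `TrainValidationSplit` as described above.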


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
