This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 4e31555  [SPARK-30933][ML][DOCS] ML, GraphX 3.0 QA: Update user guide for new features & APIs
4e31555 is described below

commit 4e3155546be6f0e4c244295b91164b112fc6c522
Author: Huaxin Gao <[email protected]>
AuthorDate: Wed Mar 18 13:21:24 2020 -0500

    [SPARK-30933][ML][DOCS] ML, GraphX 3.0 QA: Update user guide for new features & APIs
    
    ### What changes were proposed in this pull request?
    Change ml-tuning.html.
    
    ### Why are the changes needed?
    Add description for ```MultilabelClassificationEvaluator``` and ```RankingEvaluator```.
    
    ### Does this PR introduce any user-facing change?
    Yes
    
    before:
    
![image](https://user-images.githubusercontent.com/13592258/76437013-2c5ffb80-6376-11ea-8946-f5c2e7379b7c.png)
    
    after:
    
![image](https://user-images.githubusercontent.com/13592258/76437054-397cea80-6376-11ea-867f-fe8d8fa4e5b3.png)
    
    ### How was this patch tested?
    
    Closes #27880 from huaxingao/spark-30933.
    
    Authored-by: Huaxin Gao <[email protected]>
    Signed-off-by: Sean Owen <[email protected]>
    (cherry picked from commit d22c9f6c0de76b7f3df7c9c98fdb0303c29e8e73)
    Signed-off-by: Sean Owen <[email protected]>
---
 docs/ml-tuning.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/ml-tuning.md b/docs/ml-tuning.md
index 49e2368..274f195 100644
--- a/docs/ml-tuning.md
+++ b/docs/ml-tuning.md
@@ -65,9 +65,11 @@ At a high level, these model selection tools work as follows:
 
 The `Evaluator` can be a [`RegressionEvaluator`](api/scala/org/apache/spark/ml/evaluation/RegressionEvaluator.html)
 for regression problems, a [`BinaryClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/BinaryClassificationEvaluator.html)
-for binary data, or a [`MulticlassClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/MulticlassClassificationEvaluator.html)
-for multiclass problems. The default metric used to choose the best `ParamMap` can be overridden by the `setMetricName`
-method in each of these evaluators.
+for binary data, a [`MulticlassClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/MulticlassClassificationEvaluator.html)
+for multiclass problems, a [`MultilabelClassificationEvaluator`](api/scala/org/apache/spark/ml/evaluation/MultilabelClassificationEvaluator.html)
+ for multi-label classifications, or a
+[`RankingEvaluator`](api/scala/org/apache/spark/ml/evaluation/RankingEvaluator.html) for ranking problems. The default metric used to
+choose the best `ParamMap` can be overridden by the `setMetricName` method in each of these evaluators.
 
 To help construct the parameter grid, users can use the [`ParamGridBuilder`](api/scala/org/apache/spark/ml/tuning/ParamGridBuilder.html) utility.
 By default, sets of parameters from the parameter grid are evaluated in serial. Parameter evaluation can be done in parallel by setting `parallelism` with a value of 2 or more (a value of 1 will be serial) before running model selection with `CrossValidator` or `TrainValidationSplit`.
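The tuning flow the patched paragraph describes (expand a parameter grid, score each `ParamMap` with an evaluator metric, pick the best, optionally in parallel) can be sketched in plain Python. This is an illustrative sketch only, not Spark code: the function names and the toy metric are hypothetical stand-ins for `ParamGridBuilder`, an `Evaluator`, and the `parallelism` setting.

```python
from itertools import product
from concurrent.futures import ThreadPoolExecutor

def build_param_grid(**param_values):
    """Cartesian product of candidate values, like ParamGridBuilder (sketch)."""
    names = list(param_values)
    return [dict(zip(names, combo)) for combo in product(*param_values.values())]

def select_best(param_grid, evaluate, parallelism=1):
    """Score every ParamMap; parallelism >= 2 evaluates candidates concurrently,
    mirroring CrossValidator's `parallelism` param (1 = serial)."""
    if parallelism > 1:
        with ThreadPoolExecutor(max_workers=parallelism) as pool:
            scores = list(pool.map(evaluate, param_grid))
    else:
        scores = [evaluate(p) for p in param_grid]
    best_idx = max(range(len(scores)), key=scores.__getitem__)
    return param_grid[best_idx], scores[best_idx]

# Hypothetical evaluator standing in for e.g. a RankingEvaluator metric:
# pretend regParam=0.1 is optimal and more iterations help slightly.
def toy_metric(params):
    return -abs(params["regParam"] - 0.1) + params["maxIter"] * 0.001

grid = build_param_grid(regParam=[0.01, 0.1, 1.0], maxIter=[10, 100])
best_params, best_score = select_best(grid, toy_metric, parallelism=2)
print(best_params)  # → {'regParam': 0.1, 'maxIter': 100}
```

In real Spark code the evaluator's default metric can be swapped via `setMetricName`, which in this sketch corresponds to passing a different `evaluate` function.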


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
