mgaido91 commented on a change in pull request #23773: [SPARK-26721][ML] Avoid per-tree normalization in featureImportance for GBT
URL: https://github.com/apache/spark/pull/23773#discussion_r257274678
 
 

 ##########
 File path: mllib/src/test/scala/org/apache/spark/ml/classification/GBTClassifierSuite.scala
 ##########
 @@ -363,7 +363,8 @@ class GBTClassifierSuite extends MLTest with DefaultReadWriteTest {
     val gbtWithFeatureSubset = gbt.setFeatureSubsetStrategy("1")
     val importanceFeatures = gbtWithFeatureSubset.fit(df).featureImportances
     val mostIF = importanceFeatures.argmax
-    assert(mostImportantFeature !== mostIF)
+    assert(mostIF === 1)
 
 Review comment:
   I'm not sure of the exact reason why they were different earlier (of course the behavior changed because of the fix, and that change is expected). You can compare the importances vector with the one returned by `sklearn`: as I mentioned in the PR description, the two are very similar, and `sklearn` also reports feature 1 as the most important in both scenarios.
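
   For reference, below is a minimal, self-contained sketch (not the suite's actual code) of the kind of check the new assertion performs: fit a `GBTClassifier` with `featureSubsetStrategy = "1"` and inspect `featureImportances.argmax`. The object name and the toy dataset are invented for illustration only; the real test uses the suite's own generated data.

```scala
// Hypothetical standalone reproduction; names and data are invented for this sketch.
import org.apache.spark.ml.classification.GBTClassifier
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object GbtImportanceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("gbt-feature-importance-sketch")
      .getOrCreate()
    import spark.implicits._

    // Tiny toy dataset in which feature 1 determines the label.
    val df = Seq(
      (0.0, Vectors.dense(0.1, 0.0, 0.3)),
      (1.0, Vectors.dense(0.2, 1.0, 0.1)),
      (0.0, Vectors.dense(0.3, 0.0, 0.2)),
      (1.0, Vectors.dense(0.1, 1.0, 0.4))
    ).toDF("label", "features")

    // Same per-tree feature subsetting that the test exercises.
    val gbt = new GBTClassifier()
      .setMaxDepth(2)
      .setMaxIter(5)
      .setSeed(42L)
      .setFeatureSubsetStrategy("1")

    val importances = gbt.fit(df).featureImportances
    // With this toy data the aggregated importances are expected to favor
    // feature 1, mirroring the `assert(mostIF === 1)` check in the suite.
    println(s"importances = $importances, argmax = ${importances.argmax}")

    spark.stop()
  }
}
```

   Printing the whole vector (rather than only its argmax) is also what makes the cross-check mentioned above practical: the same data can be fed to `sklearn`'s `GradientBoostingClassifier` and the resulting `feature_importances_` compared entry by entry.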

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
