mgaido91 commented on a change in pull request #23773: [SPARK-26721][ML] Avoid
per-tree normalization in featureImportance for GBT
URL: https://github.com/apache/spark/pull/23773#discussion_r257454186
##########
File path:
mllib/src/test/scala/org/apache/spark/ml/classification/GBTClassifierSuite.scala
##########
@@ -363,7 +363,8 @@ class GBTClassifierSuite extends MLTest with
DefaultReadWriteTest {
val gbtWithFeatureSubset = gbt.setFeatureSubsetStrategy("1")
val importanceFeatures = gbtWithFeatureSubset.fit(df).featureImportances
val mostIF = importanceFeatures.argmax
- assert(mostImportantFeature !== mostIF)
+ assert(mostIF === 1)
Review comment:
In the first case, every tree can choose among all the features. Since feature 1 is
essentially a copy of the label, every tree splits on it at the root and reaches 100%
accuracy, so the importance vector is [1.0, 0.0, ...]. In the second case, only one
randomly chosen feature can be considered at each split, so the trees are more
"diverse" and also split on the other features, which gives the importance vector I
mentioned above. If you want to understand this better, you can debug this UT
(probably more effective than my explanation) or reproduce the same setup in sklearn.
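For context, here is a minimal spark-shell sketch of the contrast being described.
This is not the suite's actual test data: the column names (f0/f1/f2), dataset size,
seeds, and tree parameters are made up for illustration; it assumes a `spark` session
is in scope, as in spark-shell. The label-duplicating feature sits at index 0 here to
match the [1.0, 0.0, ...] vector above.

```scala
import org.apache.spark.ml.classification.GBTClassifier
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions._

// f0 duplicates the label exactly; f1 and f2 are random noise.
val raw = spark.range(0, 1000).select(
  (col("id") % 2).cast("double").as("label"),
  (col("id") % 2).cast("double").as("f0"),
  rand(1).as("f1"),
  rand(2).as("f2"))

val df = new VectorAssembler()
  .setInputCols(Array("f0", "f1", "f2"))
  .setOutputCol("features")
  .transform(raw)

val gbt = new GBTClassifier().setMaxDepth(2).setMaxIter(5).setSeed(42)

// Case 1: every split may consider all features, so each tree's root picks
// the label copy and importance concentrates on one index, e.g. [1.0, 0.0, 0.0].
println(gbt.fit(df).featureImportances)

// Case 2: only one randomly chosen feature is considered per split, so the
// trees are forced to also use the noise features and the importance mass is
// spread across the indices.
println(gbt.setFeatureSubsetStrategy("1").fit(df).featureImportances)
```

The exact values depend on the seeds and data, but the qualitative difference
between the two importance vectors should match the behavior described above.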