GitHub user imatiach-msft commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16441#discussion_r95517520
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala ---
    @@ -275,18 +316,33 @@ class GBTClassificationModel private[ml](
       @Since("2.0.0")
       lazy val featureImportances: Vector = TreeEnsembleModel.featureImportances(trees, numFeatures)
     
    +  private def margin(features: Vector): Double = {
    +    val treePredictions = _trees.map(_.rootNode.predictImpl(features).prediction)
    +    blas.ddot(numTrees, treePredictions, 1, _treeWeights, 1)
    +  }
    +
       /** (private[ml]) Convert to a model in the old API */
       private[ml] def toOld: OldGBTModel = {
         new OldGBTModel(OldAlgo.Classification, _trees.map(_.toOld), _treeWeights)
       }
     
    +  /**
    +   * Note: this is currently an optimization that should be removed when we have more loss
    +   * functions available than only logistic.
    +   */
    +  private lazy val loss = getOldLossType
    --- End diff ---
    
    Removed the lazy modifier and removed the comment. I made it lazy so the lookup would only happen if it was actually needed, but since the lookup isn't expensive and the laziness only seemed to cause confusion, it's better to remove it.
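
    For context, a minimal sketch of how the two members read after this change (the surrounding GBTClassificationModel context is assumed from the diff above; this is a sketch of the intent, not the authoritative patch):

        // margin(features) is unchanged from the diff: the raw GBT margin
        // is the weighted sum of per-tree predictions, computed via BLAS ddot.
        private def margin(features: Vector): Double = {
          val treePredictions = _trees.map(_.rootNode.predictImpl(features).prediction)
          blas.ddot(numTrees, treePredictions, 1, _treeWeights, 1)
        }

        // loss becomes a plain private val: the param lookup is cheap, so
        // eager evaluation is fine and easier to follow than a lazy val.
        private val loss = getOldLossType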

