Github user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9008#discussion_r41648955
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
    @@ -1211,4 +1213,28 @@ private[ml] object RandomForest extends Logging {
         }
       }
     
    +  /**
    +   * Inject each point's sample weight into the sub-sample weights of its BaggedPoint.
    +   */
    +  private[impl] def reweightSubSampleWeights(
    +      baggedTreePoints: RDD[BaggedPoint[TreePoint]]): RDD[BaggedPoint[TreePoint]] = {
    +    baggedTreePoints.map { bagged =>
    +      val treePoint = bagged.datum
    +      val adjustedSubSampleWeights = bagged.subsampleWeights.map(w => w * treePoint.weight)
    +      new BaggedPoint[TreePoint](treePoint, adjustedSubSampleWeights)
    +    }
    +  }
    +
    +  /**
    +   * A thin adaptor to [[org.apache.spark.mllib.tree.impl.DecisionTreeMetadata.buildMetadata]]
    +   */
    +  private[impl] def buildWeightedMetadata(
    --- End diff --
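
    First, just to check that I'm reading the new helper correctly, here is a minimal sketch of how I'd expect it to be wired in. The names (`bagAndReweight`, `treeInput`, etc.) are mine, not from this diff, and I'm assuming the current `BaggedPoint.convertToBaggedRDD` entry point, the current MLlib locations of `BaggedPoint`/`TreePoint`, and a `TreePoint` that now carries a per-point weight:

    ```scala
    import org.apache.spark.mllib.tree.impl.{BaggedPoint, TreePoint}
    import org.apache.spark.rdd.RDD

    // Sketch only: assumed to live alongside RandomForest in the impl package,
    // so the private[impl] helper from this diff is visible.
    def bagAndReweight(
        treeInput: RDD[TreePoint],
        subsamplingRate: Double,
        numTrees: Int,
        withReplacement: Boolean,
        seed: Long): RDD[BaggedPoint[TreePoint]] = {
      // One subsample weight per tree; fold each point's sample weight into them.
      val bagged = BaggedPoint.convertToBaggedRDD(
        treeInput, subsamplingRate, numTrees, withReplacement, seed)
      RandomForest.reweightSubSampleWeights(bagged)
    }
    ```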
    
    I am working on another PR where the fact that the ML and MLlib implementations share the
    `DecisionTreeMetadata` class is a problem. I'm not sure what the usual protocol is for this
    sort of thing, but since the MLlib implementation will be phased out, could we just copy the
    `DecisionTreeMetadata` code into ML so that it is separated from the MLlib implementation?
    @jkbradley [mentioned](https://github.com/apache/spark/pull/7294) when the initial ML
    implementation was done that the shared classes could be ported to ML lazily. Doing that now
    would avoid having to build this thin wrapper around `buildMetadata`; a rough sketch of the
    call site I have in mind is below. Any feedback would be appreciated.
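
    For concreteness, assuming `DecisionTreeMetadata` were copied into the ML impl package with the same `buildMetadata` signature it has in MLlib, the metadata construction could stay a direct call (names such as `retaggedInput` are just the values already in scope where the metadata is built today):

    ```scala
    // Hypothetical: DecisionTreeMetadata living next to the ML RandomForest, so any
    // weight handling can be added to buildMetadata itself rather than through an
    // adaptor like buildWeightedMetadata.
    val metadata: DecisionTreeMetadata =
      DecisionTreeMetadata.buildMetadata(retaggedInput, strategy, numTrees, featureSubsetStrategy)
    ```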

