Github user MechCoder commented on a diff in the pull request:
https://github.com/apache/spark/pull/12370#discussion_r65994490
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/BaggedPoint.scala ---
@@ -33,13 +33,20 @@ import org.apache.spark.util.random.XORShiftRandom
 * this datum has 1 copy, 0 copies, and 4 copies in the 3 subsamples, respectively.
 *
 * @param datum Data instance
- * @param subsampleWeights Weight of this instance in each subsampled dataset.
- *
- * TODO: This does not currently support (Double) weighted instances. Once MLlib has weighted
- * dataset support, update. (We store subsampleWeights as Double for this future extension.)
+ * @param subsampleCounts Number of samples of this instance in each subsampled dataset.
+ * @param sampleWeight The weight of this instance.
 */
-private[spark] class BaggedPoint[Datum](val datum: Datum, val subsampleWeights: Array[Double])
-  extends Serializable
+private[spark] class BaggedPoint[Datum](
+    val datum: Datum,
+    val subsampleCounts: Array[Int],
+    val sampleWeight: Double) extends Serializable {
+
+  /**
+   * Subsample counts weighted by the sample weight.
+   */
+  def weightedCounts: Array[Double] = subsampleCounts.map(_ * sampleWeight)
--- End diff --
Should this be a `val`?
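For context, a small sketch of the `def` vs `val` tradeoff the comment raises: a `def` allocates a fresh `Array[Double]` on every call, while a `val` computes it once at construction and caches it (at the cost of storing the array for the object's lifetime). The class name `BaggedPointSketch` is hypothetical, not part of the patch:

```scala
// Hypothetical sketch (not the actual BaggedPoint class) showing how
// `def` and `val` members behave differently for a derived array.
class BaggedPointSketch(val subsampleCounts: Array[Int], val sampleWeight: Double) {
  // `def`: a new Array[Double] is allocated on every call.
  def weightedCountsDef: Array[Double] = subsampleCounts.map(_ * sampleWeight)
  // `val`: computed once at construction and stored; every access
  // returns the same cached array instance.
  val weightedCountsVal: Array[Double] = subsampleCounts.map(_ * sampleWeight)
}

val p = new BaggedPointSketch(Array(1, 0, 4), 2.0)
// `def` yields a distinct array each time; `val` yields the same reference.
assert(!(p.weightedCountsDef eq p.weightedCountsDef))
assert(p.weightedCountsVal eq p.weightedCountsVal)
assert(p.weightedCountsVal.sameElements(Array(2.0, 0.0, 8.0)))
```

Whether caching is worth it depends on how often `weightedCounts` is read per `BaggedPoint` versus the extra memory of keeping a second array alive on every point.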