Github user jkbradley commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2780#discussion_r19052553
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
    @@ -1011,4 +1014,99 @@ object DecisionTree extends Serializable with Logging {
         categories
       }
     
    +  /**
    +   * Find splits for a continuous feature
    +   * NOTE: The returned number of splits is based on `featureSamples` and
    +   *       may differ from `numSplits`.
    +   *       The metadata's number of splits will be set accordingly.
    +   * @param featureSamples feature values of each sample
    +   * @param metadata decision tree metadata
    +   * @param featureIndex feature index to find splits
    +   * @return array of splits
    +   */
    +  private[tree] def findSplitsForContinuousFeature(
    +      featureSamples: Array[Double],
    +      metadata: DecisionTreeMetadata,
    +      featureIndex: Int): Array[Double] = {
    +    require(metadata.isContinuous(featureIndex),
    +      s"findSplitsForContinuousFeature can only be used " +
    +        s"to find splits for a continuous feature.")
    +
    +    /**
    +     * Get the count for each distinct value (assumes `arr` is sorted).
    +     */
    +    def getValueCount(arr: Array[Double]): Array[(Double, Int)] = {
    +      val valueCount = new ArrayBuffer[(Double, Int)]
    +      var index = 1
    +      var currentValue = arr(0)
    +      var currentCount = 1
    +      while (index < arr.length) {
    +        if (currentValue != arr(index)) {
    +          valueCount.append((currentValue, currentCount))
    +          currentCount = 1
    +          currentValue = arr(index)
    +        } else {
    +          currentCount += 1
    +        }
    +        index += 1
    +      }
    +
    +      valueCount.append((currentValue, currentCount))
    +
    +      valueCount.toArray
    +    }
    +
    +
    +    val splits = {
    +      val numSplits = metadata.numSplits(featureIndex)
    +
    +      // sort feature samples first
    +      val sortedFeatureSamples = featureSamples.sorted
    +
    +      // get count for each distinct value
    +      val valueCount = getValueCount(sortedFeatureSamples)
    --- End diff ---
    
    If there are a lot of samples with the same value, then it could be faster
    to sort after counting:
    * Create a Map[Double, Int] of counts.
    * Then sort the map's keys.
    
    It might also be shorter to implement this using Scala's built-in
    aggregation methods (instead of a specialized getValueCount method); see
    the sketch below.
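    
    A minimal sketch of that suggestion (the helper name `valueCounts` is
    hypothetical, not code from this PR): build a map of counts first, then
    sort only the distinct keys rather than every sample.
    
        // Sketch only: count each distinct value, then sort the distinct keys.
        def valueCounts(featureSamples: Array[Double]): Array[(Double, Int)] = {
          featureSamples
            .foldLeft(Map.empty[Double, Int]) { (counts, v) =>
              // Increment the running count for this value.
              counts.updated(v, counts.getOrElse(v, 0) + 1)
            }
            .toArray        // Array of (distinct value, count) pairs
            .sortBy(_._1)   // sort the distinct values only
        }
    
        // Example:
        //   valueCounts(Array(2.0, 1.0, 2.0, 3.0, 2.0))
        //   == Array((1.0, 1), (2.0, 3), (3.0, 1))
    
    The potential win is that the sort runs over the number of distinct values
    rather than the number of samples, which matters when many samples share
    the same value.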

