Github user dbtsai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13796#discussion_r74829219
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala ---
    @@ -945,13 +955,139 @@ class BinaryLogisticRegressionSummary private[classification] (
     private class LogisticAggregator(
         private val numFeatures: Int,
         numClasses: Int,
    -    fitIntercept: Boolean) extends Serializable {
    +    fitIntercept: Boolean,
    +    multinomial: Boolean,
    +    standardize: Boolean) extends Serializable {
     
       private var weightSum = 0.0
       private var lossSum = 0.0
     
    -  private val gradientSumArray =
    -    Array.ofDim[Double](if (fitIntercept) numFeatures + 1 else numFeatures)
    +  private val totalCoefficientLength = {
    +    val cols = if (fitIntercept) numFeatures + 1 else numFeatures
    +    val rows = if (multinomial) numClasses else 1
    +    rows * cols
    +  }
    +
    +  private val gradientSumArray = Array.ofDim[Double](totalCoefficientLength)
    +
    +  /** Update gradient and loss using binary loss function. */
    +  private def binaryUpdateInPlace(
    +      features: Vector,
    +      weight: Double,
    +      label: Double,
    +      coefficients: Array[Double],
    +      gradient: Array[Double],
    +      featuresStd: Array[Double],
    +      numFeaturesPlusIntercept: Int,
    +      standardize: Boolean): Unit = {
    +    val margin = - {
    +      var sum = 0.0
    +      features.foreachActive { (index, value) =>
    +        if (featuresStd(index) != 0.0 && value != 0.0) {
    +          val x = if (standardize) value / featuresStd(index) else value
    --- End diff --
    
    I know the history of why different approaches were taken, since I initially wrote both versions :) In the old `mllib` implementation, I decided to keep a standardized copy of the entire dataset and cache it, for simplicity. After talking to a couple of people about their use cases, it turns out they often train models on the same cached dataset with several different regularizations, and then the old `mllib` caches the standardized copy again and again, which puts pressure on the GC and wastes memory.
    
    Then, when I started to implement the `ml` version, I benchmarked and found that standardizing on the fly is not expensive and adds little overhead if it's implemented carefully. We only need to standardize the active (non-zero) elements of the input features, and since computing the objective function and gradient has to touch each feature value once anyway, dividing by the standard deviation in the same pass is really cheap compared with the other operations. I believe this holds for MLOR as well.
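    A minimal sketch of the idea (not the actual Spark code; the method name and signature here are made up for illustration, and `featuresStd` is assumed to hold the precomputed per-feature standard deviations):

        import org.apache.spark.ml.linalg.Vector

        // Computes the margin while standardizing on the fly. Only the active
        // (non-zero) entries are visited, so the extra cost is one division per
        // active element on top of the multiply-add we pay for the dot product
        // anyway; no standardized copy of the dataset is ever materialized.
        def marginWithInlineStandardization(
            features: Vector,
            coefficients: Array[Double],
            featuresStd: Array[Double]): Double = {
          var sum = 0.0
          features.foreachActive { (index, value) =>
            // Skip constant features (std == 0.0) and inactive entries.
            if (featuresStd(index) != 0.0 && value != 0.0) {
              sum += coefficients(index) * (value / featuresStd(index))
            }
          }
          sum
        }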
    
    My thought is that we do the standardization inside the iteration for both `BLOR` and `MLOR` for now, and then benchmark for bottlenecks. If there is a performance issue, we can address it in a followup PR. That way, we don't change too much logic in a single PR. Thanks.

