Github user dbtsai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10702#discussion_r51077516
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
    @@ -219,33 +219,43 @@ class LinearRegression @Since("1.3.0") (@Since("1.3.0") override val uid: String
         }
     
         val yMean = ySummarizer.mean(0)
    -    val yStd = math.sqrt(ySummarizer.variance(0))
    -
    -    // If the yStd is zero, then the intercept is yMean with zero coefficient;
    -    // as a result, training is not needed.
    -    if (yStd == 0.0) {
    -      logWarning(s"The standard deviation of the label is zero, so the 
coefficients will be " +
    -        s"zeros and the intercept will be the mean of the label; as a 
result, " +
    -        s"training is not needed.")
    -      if (handlePersistence) instances.unpersist()
    -      val coefficients = Vectors.sparse(numFeatures, Seq())
    -      val intercept = yMean
    -
    -      val model = new LinearRegressionModel(uid, coefficients, intercept)
    -      // Handle possible missing or invalid prediction columns
    -      val (summaryModel, predictionColName) = model.findSummaryModelAndPredictionCol()
    -
    -      val trainingSummary = new LinearRegressionTrainingSummary(
    -        summaryModel.transform(dataset),
    -        predictionColName,
    -        $(labelCol),
    -        model,
    -        Array(0D),
    -        $(featuresCol),
    -        Array(0D))
    -      return copyValues(model.setSummary(trainingSummary))
    +    val rawYStd = math.sqrt(ySummarizer.variance(0))
    +    if (rawYStd == 0.0) {
    +      if ($(fitIntercept)) {
    +        // If the rawYStd is zero and fitIntercept=true, then the intercept is yMean with
    +        // zero coefficient; as a result, training is not needed.
    +        logWarning(s"The standard deviation of the label is zero, so the 
coefficients will be " +
    +          s"zeros and the intercept will be the mean of the label; as a 
result, " +
    +          s"training is not needed.")
    +        if (handlePersistence) instances.unpersist()
    +        val coefficients = Vectors.sparse(numFeatures, Seq())
    +        val intercept = yMean
    +
    +        val model = new LinearRegressionModel(uid, coefficients, intercept)
    +        // Handle possible missing or invalid prediction columns
    +        val (summaryModel, predictionColName) = model.findSummaryModelAndPredictionCol()
    +
    +        val trainingSummary = new LinearRegressionTrainingSummary(
    +          summaryModel.transform(dataset),
    +          predictionColName,
    +          $(labelCol),
    +          model,
    +          Array(0D),
    +          $(featuresCol),
    +          Array(0D))
    +        return copyValues(model.setSummary(trainingSummary))
    +      } else {
    +        require(!($(regParam) > 0.0 && $(standardization)),
    --- End diff --
    
    Your PR for `WeightedLeastSquares` implements the right behavior, and it is consistent with `require($(regParam) != 0.0)`. The problem only happens when `standardizationLabel = true`, and in your `WeightedLeastSquares`, you already throw an exception in this case.
    
    GLMNET will do `standardizationLabel = true` even when `standardization == false`, and that's why you see an inconsistent solution between GLMNET and your analytical solution.
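
    To make the failure mode concrete (a minimal sketch, not code from the PR; `labels` and `scaledLabels` are made-up names): when the label is constant its standard deviation is zero, so dividing the label by `yStd` under label standardization is a division by zero and the scaled labels become NaN, which is what the `require` above guards against when `fitIntercept == false`:

    ```scala
    // Hypothetical, self-contained illustration of why yStd == 0 breaks label standardization.
    val labels = Array(3.0, 3.0, 3.0)  // constant label => zero variance
    val yMean = labels.sum / labels.length
    val yStd = math.sqrt(labels.map(y => (y - yMean) * (y - yMean)).sum / labels.length)

    // With standardizationLabel = true, each label would be divided by yStd,
    // which here is 0.0, so every scaled label is 0.0 / 0.0 = NaN.
    val scaledLabels = labels.map(y => (y - yMean) / yStd)
    println(scaledLabels.mkString(", "))  // NaN, NaN, NaN
    ```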

