GitHub user actuaryzhang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16131#discussion_r90955908
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.scala ---
    @@ -505,7 +505,7 @@ object GeneralizedLinearRegression extends DefaultParamsReadable[GeneralizedLine
         override def initialize(y: Double, weight: Double): Double = {
           require(y >= 0.0, "The response variable of Poisson family " +
             s"should be non-negative, but got $y")
    -      y
    +      y + 0.1
    --- End diff ---
    
    @srowen Theoretically, we only need to add 0.1 in the y = 0 case, as a
    guess of the mean for those observations. But I think it is better to add
    this small constant in all cases. Imagine modeling rates of occurrence,
    i.e., frequency divided by exposure: for a very large exposure, the rate
    can be tiny and close to zero, and adding 0.1 helps avoid numerical issues
    in that case too. Does that make sense?
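
    For context, here is a minimal, self-contained Scala sketch of the failure
    mode (not Spark's actual IRLS code; `PoissonInitSketch` and `logLink` are
    illustrative names). The Poisson family's default link is the log link, so
    initializing mu = y gives eta = log(0) = -Infinity for zero counts, while
    mu = y + 0.1 keeps the first iteration finite:

        object PoissonInitSketch {
          // Poisson's canonical (and Spark's default) link: eta = log(mu)
          def logLink(mu: Double): Double = math.log(mu)

          def main(args: Array[String]): Unit = {
            // A zero count, a tiny rate (large exposure), and an ordinary count
            val ys = Seq(0.0, 0.002, 3.0)
            ys.foreach { y =>
              val naive   = logLink(y)       // -Infinity when y == 0
              val guarded = logLink(y + 0.1) // finite for all y >= 0
              println(f"y = $y%-6s eta(naive) = $naive%-12s eta(guarded) = $guarded%.4f")
            }
          }
        }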

