Github user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17078#discussion_r103236865
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala ---
    @@ -1447,7 +1447,7 @@ private class LogisticAggregator(
           label: Double): Unit = {
     
         val localFeaturesStd = bcFeaturesStd.value
    -    val localCoefficients = bcCoefficients.value
    +    val localCoefficients = bcCoefficients.value.toArray
    --- End diff ---
    
    The above check got us into trouble: if we don't add `@transient lazy val`, then we end up serializing the coefficients along with the closure. The call to `toArray` is really just a small bit of pointer indirection, and while I agree it is not _great_ to call it every time, the extra function call should pale in comparison to the `O(numClasses * numFeatures)` ops we do in the method.
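
    To make the trade-off concrete, here's a rough sketch of the two options. This is a simplified stand-in, not the real `LogisticAggregator`; the class and method names below are made up for illustration:

        import org.apache.spark.broadcast.Broadcast
        import org.apache.spark.ml.linalg.Vector

        // Hypothetical, simplified aggregator used only to illustrate the trade-off.
        class SketchAggregator(bcCoefficients: Broadcast[Vector]) extends Serializable {

          // Option A: unwrap the broadcast value once per deserialized copy.
          // @transient lazy keeps the plain array out of the serialized closure;
          // only the broadcast handle is shipped, and toArray runs once per copy.
          @transient private lazy val coefficientsArray: Array[Double] =
            bcCoefficients.value.toArray

          def addCached(features: Vector): Double = {
            var sum = 0.0
            var i = 0
            while (i < features.size) {
              sum += coefficientsArray(i) * features(i)
              i += 1
            }
            sum
          }

          // Option B: unwrap inside the hot method on every call. The extra call
          // is a small constant cost per row next to the loop over all features,
          // but it is repeated for every instance processed.
          def addUnwrapEachTime(features: Vector): Double = {
            val localCoefficients = bcCoefficients.value.toArray
            var sum = 0.0
            var i = 0
            while (i < features.size) {
              sum += localCoefficients(i) * features(i)
              i += 1
            }
            sum
          }
        }

    Either way only the broadcast handle gets serialized with the task; the difference is just how many times `toArray` runs on the executor side.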
    
    That said, I'm OK with either solution; I just wanted to point out the pros and cons of each. Let me know what you think, and thanks for reviewing!

