Github user MLnick commented on the issue:

    https://github.com/apache/spark/pull/17094
  
    Sure, makes sense. We can always consider it later. Or even an alternate 
version with an `L2` base and a `StandardizedL2` subclass, or whatever (that's 
more relevant if we start thinking about exposing the building blocks to 
external algorithm developers).
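
    A rough sketch of what that hierarchy could look like (the names 
`Regularizer`, `L2`, and `StandardizedL2` are illustrative here, not actual 
Spark API, and the exact standardization factor is just one plausible choice):

    ```scala
    // Hypothetical sketch: a base L2 penalty plus a subclass that rescales
    // the per-coefficient penalty by the feature's variance.
    trait Regularizer extends Serializable {
      /** Adds this penalty's gradient into cumGradient; returns the penalty value. */
      def compute(coefficients: Array[Double], cumGradient: Array[Double]): Double
    }

    class L2(regParam: Double) extends Regularizer {
      override def compute(coefficients: Array[Double], cumGradient: Array[Double]): Double = {
        var sum = 0.0
        var i = 0
        while (i < coefficients.length) {
          cumGradient(i) += regParam * coefficients(i)
          sum += coefficients(i) * coefficients(i)
          i += 1
        }
        0.5 * regParam * sum
      }
    }

    // Variant that divides each coefficient's penalty by std_j^2, i.e.
    // penalizes in the standardized feature space (skipping constant features).
    class StandardizedL2(regParam: Double, featuresStd: Array[Double]) extends L2(regParam) {
      override def compute(coefficients: Array[Double], cumGradient: Array[Double]): Double = {
        var sum = 0.0
        var i = 0
        while (i < coefficients.length) {
          val variance = featuresStd(i) * featuresStd(i)
          if (variance != 0.0) {
            cumGradient(i) += regParam * coefficients(i) / variance
            sum += coefficients(i) * coefficients(i) / variance
          }
          i += 1
        }
        0.5 * regParam * sum
      }
    }
    ```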
    
    For point (2), it's just that each loss function ("squared loss", 
"logistic", etc.) could implement a `Loss` trait, similar to the old 
`org.apache.spark.mllib.optimization.Gradient` approach. The `Loss` would then 
be an arg of the `Aggregator`, I suppose, and the `add` method could be further 
consolidated. I'm not sure it adds that much value here, though, because of the 
funky standardization we do in LiR (linear regression) and LoR (logistic 
regression)...
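
    Roughly what I have in mind (again a hypothetical sketch, not actual API — 
`Loss`, `LossAggregator`, and the object names are made up, and this ignores 
the standardization handling entirely):

    ```scala
    // Each loss implements a small trait, analogous to the old mllib Gradient:
    // add this example's gradient into cumGradient and return the loss value.
    trait Loss extends Serializable {
      def compute(features: Array[Double], label: Double,
          coefficients: Array[Double], cumGradient: Array[Double]): Double
    }

    object SquaredLoss extends Loss {
      override def compute(features: Array[Double], label: Double,
          coefficients: Array[Double], cumGradient: Array[Double]): Double = {
        var margin = 0.0
        var i = 0
        while (i < features.length) { margin += coefficients(i) * features(i); i += 1 }
        val diff = margin - label
        i = 0
        while (i < features.length) { cumGradient(i) += diff * features(i); i += 1 }
        0.5 * diff * diff
      }
    }

    object LogisticLoss extends Loss {
      override def compute(features: Array[Double], label: Double,
          coefficients: Array[Double], cumGradient: Array[Double]): Double = {
        var margin = 0.0
        var i = 0
        while (i < features.length) { margin += coefficients(i) * features(i); i += 1 }
        // multiplier = sigmoid(margin) - label; binary labels in {0, 1}
        val multiplier = 1.0 / (1.0 + math.exp(-margin)) - label
        i = 0
        while (i < features.length) { cumGradient(i) += multiplier * features(i); i += 1 }
        if (label > 0.5) math.log1p(math.exp(-margin)) else math.log1p(math.exp(margin))
      }
    }

    // The aggregator then takes the Loss as an arg, so `add` is shared:
    class LossAggregator(loss: Loss, coefficients: Array[Double]) extends Serializable {
      val gradientSum: Array[Double] = Array.ofDim[Double](coefficients.length)
      var lossSum = 0.0
      var count = 0L

      def add(features: Array[Double], label: Double): this.type = {
        lossSum += loss.compute(features, label, coefficients, gradientSum)
        count += 1
        this
      }
    }
    ```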

