Github user haying commented on a diff in the pull request:
https://github.com/apache/incubator-madlib/pull/75#discussion_r87922091
--- Diff: src/modules/convex/algo/igd.hpp ---
@@ -86,6 +101,32 @@ IGD<State, ConstState, Task>::merge(state_type &state,
template <class State, class ConstState, class Task>
void
+IGD<State, ConstState, Task>::mergeInPlace(state_type &state,
--- End diff ---
I see, that makes sense. Both the loss and the gradient are kept so that the upper layer (internal code or the end user) can decide on convergence and tune the step size. I'm not sure what you do for that here; I guess you still need to pass the gradient to the inter-iteration layer, or is that no longer necessary with this approach?
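For what it's worth, here is a minimal sketch of the kind of check I have in mind at the inter-iteration layer; the names (prevLoss, gradNorm, stepSize) and the tuning rule are my own placeholders, not taken from the MADlib code:

```cpp
#include <cmath>

// Hypothetical inter-iteration helper: uses the reported loss and gradient
// norm to decide convergence and to tune the step size. The names and the
// tuning rule are illustrative only, not the actual MADlib logic.
inline bool
checkConvergenceAndTuneStep(double prevLoss, double currLoss,
                            double gradNorm, double tolerance,
                            double &stepSize) {
    // Declare convergence when the loss has stabilized and the gradient
    // is small.
    if (std::fabs(prevLoss - currLoss) < tolerance && gradNorm < tolerance)
        return true;

    // Crude step-size tuning: back off if the loss increased, otherwise
    // grow the step slightly.
    stepSize *= (currLoss > prevLoss) ? 0.5 : 1.05;
    return false;
}
```

If mergeInPlace ends up not carrying the gradient forward, the upper layer would presumably have to fall back to a loss-only check like the first clause above.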
I think code redundancy is not a huge deal in such a short piece of code. The main issue is that people will wonder what the difference between the two functions is and which one to use in the future. I would suggest at least a brief comment explaining the difference until we have time to refactor it cleanly; a rough template is sketched below.
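To be concrete, something along these lines would already help. The description of mergeInPlace is deliberately left as a placeholder, since I don't want to guess the exact contract:

```cpp
// Sketch of a header comment for igd.hpp -- placeholder wording only.

/**
 * merge():        combines two partial per-segment states and keeps the
 *                 loss and gradient so the upper layer can check
 *                 convergence and tune the step size.
 * mergeInPlace(): <state here how it differs from merge(): what it
 *                 overwrites or skips, and when callers should prefer it>.
 */
```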
Great work!