Github user njayaram2 commented on a diff in the pull request:

    https://github.com/apache/madlib/pull/243#discussion_r176218740
  
    --- Diff: src/modules/convex/mlp_igd.cpp ---
    @@ -130,6 +145,90 @@ mlp_igd_transition::run(AnyType &args) {
     
         return state;
     }
    +/**
    + * @brief Perform the multilayer perceptron minibatch transition step
    + *
    + * Called for each tuple.
    + */
    +AnyType
    +mlp_minibatch_transition::run(AnyType &args) {
    +    // For the first tuple: args[0] is nothing more than a marker that
    +    // indicates that we should do some initial operations.
    +    // For other tuples: args[0] holds the computation state until the last tuple
    +    MLPMiniBatchState<MutableArrayHandle<double> > state = args[0];
    +
    +    // initialize the state if this is the first tuple
    +    if (state.algo.numRows == 0) {
    +        if (!args[3].isNull()) {
    +            MLPMiniBatchState<ArrayHandle<double> > previousState = 
args[3];
    --- End diff ---
    
    Tried it; it was cleaner this way.

