Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/243#discussion_r175948252
--- Diff: src/modules/convex/mlp_igd.cpp ---
@@ -130,6 +145,90 @@ mlp_igd_transition::run(AnyType &args) {
     return state;
 }
+/**
+ * @brief Perform the multilayer perceptron minibatch transition step
+ *
+ * Called for each tuple.
+ */
+AnyType
+mlp_minibatch_transition::run(AnyType &args) {
+    // For the first tuple: args[0] is nothing more than a marker that
+    // indicates that we should do some initial operations.
+    // For other tuples: args[0] holds the computation state until the last tuple
+    MLPMiniBatchState<MutableArrayHandle<double> > state = args[0];
+
+    // initialize the state if first tuple
+    if (state.algo.numRows == 0) {
+        if (!args[3].isNull()) {
+            MLPMiniBatchState<ArrayHandle<double> > previousState = args[3];
+            state.allocate(*this, previousState.task.numberOfStages,
+                           previousState.task.numbersOfUnits);
+            state = previousState;
+        } else {
+            // configuration parameters
+            ArrayHandle<double> numbersOfUnits = args[4].getAs<ArrayHandle<double> >();
--- End diff ---
We probably could, but minibatch takes a couple of extra arguments that
IGD does not (batch_size and n_epochs).
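
For what it's worth, here is a minimal standalone sketch (not MADlib
code; the argument layout, the names, and the parseCommon helper are
all hypothetical) of why a shared parser would only cover part of the
minibatch argument list:

    #include <iostream>
    #include <vector>

    // Arguments that both the IGD and minibatch transitions receive.
    struct CommonParams {
        double stepsize;
        double lambda;      // regularization
    };

    // Minibatch additionally receives batch_size and n_epochs.
    struct MinibatchParams : CommonParams {
        int batchSize;
        int nEpochs;
    };

    // Hypothetical shared parser for the common prefix of the argument list.
    CommonParams parseCommon(const std::vector<double> &args) {
        CommonParams p;
        p.stepsize = args[0];
        p.lambda   = args[1];
        return p;
    }

    int main() {
        // IGD call: the common arguments are the whole list.
        std::vector<double> igdArgs = {0.01, 1e-4};
        CommonParams igd = parseCommon(igdArgs);

        // Minibatch call: same common prefix, plus the two extras
        // that a shared parser would not know about.
        std::vector<double> mbArgs = {0.01, 1e-4, 32, 10};
        MinibatchParams mb;
        static_cast<CommonParams &>(mb) = parseCommon(mbArgs);
        mb.batchSize = static_cast<int>(mbArgs[2]);
        mb.nEpochs   = static_cast<int>(mbArgs[3]);

        std::cout << "igd: stepsize=" << igd.stepsize << "\n"
                  << "minibatch: batch_size=" << mb.batchSize
                  << " n_epochs=" << mb.nEpochs << "\n";
        return 0;
    }

So the common prefix could be factored out into one helper, but the
minibatch transition would still need its own tail for batch_size and
n_epochs.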
---