Github user asfgit closed the pull request at:
https://github.com/apache/madlib/pull/272
---
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r194151705
--- Diff: doc/design/modules/neural-network.tex ---
@@ -117,6 +122,26 @@ \subsubsection{Backpropagation}
\[\boxed{\delta_{k}^j = \sum_{t=1}^{n_{k+1}}
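The boxed identity above is cut off by the archive; for context, the standard backpropagation recurrence it begins is, under the usual notation (with $u_k^{jt}$ the weight from unit $j$ in layer $k$ to unit $t$ in layer $k+1$, $\varphi$ the activation, and $\mathit{net}_k^j$ the pre-activation input) — a sketch of the well-known form, not necessarily the exact statement in the design document:

\[
\delta_{k}^j = \left( \sum_{t=1}^{n_{k+1}} \delta_{k+1}^t \, u_k^{jt} \right) \varphi'\!\left(\mathit{net}_k^j\right)
\]

That is, each layer's error terms are a weighted sum of the next layer's error terms, scaled by the local activation derivative.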
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r194151826
--- Diff: doc/design/modules/neural-network.tex ---
@@ -196,17 +221,28 @@ \subsubsection{The $\mathit{Gradient}$ Function}
\end{algorithmic}
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r192246910
--- Diff: doc/design/modules/neural-network.tex ---
@@ -117,6 +117,24 @@ \subsubsection{Backpropagation}
\[\boxed{\delta_{k}^j = \sum_{t=1}^{n_{k+1}}
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r192246463
--- Diff: doc/design/modules/neural-network.tex ---
@@ -117,6 +117,24 @@ \subsubsection{Backpropagation}
\[\boxed{\delta_{k}^j = \sum_{t=1}^{n_{k+1}}
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r192251198
--- Diff: doc/design/modules/neural-network.tex ---
@@ -117,6 +117,24 @@ \subsubsection{Backpropagation}
\[\boxed{\delta_{k}^j = \sum_{t=1}^{n_{k+1}}
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r192245605
--- Diff: doc/design/modules/neural-network.tex ---
@@ -117,6 +117,24 @@ \subsubsection{Backpropagation}
\[\boxed{\delta_{k}^j = \sum_{t=1}^{n_{k+1}}
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r192248589
--- Diff: doc/design/modules/neural-network.tex ---
@@ -117,6 +117,24 @@ \subsubsection{Backpropagation}
\[\boxed{\delta_{k}^j = \sum_{t=1}^{n_{k+1}}
Github user kaknikhil commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191945714
--- Diff: src/modules/convex/task/mlp.hpp ---
@@ -197,6 +244,7 @@ MLP::loss(
const model_type,
const
Github user kaknikhil commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191942290
--- Diff: src/ports/postgres/modules/convex/mlp_igd.py_in ---
@@ -1781,7 +1799,7 @@ class MLPMinibatchPreProcessor:
summary_table_columns
Github user kaknikhil commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191942284
--- Diff: src/ports/postgres/modules/convex/mlp.sql_in ---
@@ -1474,13 +1480,15 @@ CREATE AGGREGATE MADLIB_SCHEMA.mlp_minibatch_step(
/*
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191575057
--- Diff: src/ports/postgres/modules/convex/mlp_igd.py_in ---
@@ -1781,7 +1799,7 @@ class MLPMinibatchPreProcessor:
summary_table_columns
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191533727
--- Diff: src/modules/convex/type/model.hpp ---
@@ -126,45 +129,96 @@ struct MLPModel {
for (k = 0; k < N; k ++) {
size +=
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191574947
--- Diff: src/ports/postgres/modules/convex/mlp.sql_in ---
@@ -1474,13 +1480,15 @@ CREATE AGGREGATE MADLIB_SCHEMA.mlp_minibatch_step(
/*
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191539168
--- Diff: src/modules/convex/task/mlp.hpp ---
@@ -197,6 +244,7 @@ MLP::loss(
const model_type,
const
Github user njayaram2 commented on a diff in the pull request:
https://github.com/apache/madlib/pull/272#discussion_r191537654
--- Diff: src/modules/convex/task/mlp.hpp ---
@@ -126,68 +157,84 @@ MLP::getLossAndUpdateModel(
const Matrix _true_batch,
GitHub user kaknikhil opened a pull request:
https://github.com/apache/madlib/pull/272
MLP: Add momentum and Nesterov to gradient updates.
JIRA: MADLIB-1210
We refactored the minibatch code to separate out the momentum and model
update functions. We were initially using
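The momentum and Nesterov updates the PR describes can be sketched as follows. This is a simplified NumPy illustration of the classical update rules, not the actual MADlib implementation (which lives in C++ in `src/modules/convex/task/mlp.hpp`); the function name and signature here are hypothetical.

```python
import numpy as np

def momentum_update(weights, velocity, gradient, lr=0.01, mu=0.9, nesterov=False):
    """One SGD step with momentum, optionally with a Nesterov correction.

    weights, velocity, gradient: NumPy arrays of the same shape.
    lr: learning rate; mu: momentum coefficient in [0, 1).
    """
    # Accumulate an exponentially decaying velocity of past gradients.
    velocity = mu * velocity - lr * gradient
    if nesterov:
        # Nesterov "look-ahead": step with the momentum-corrected velocity
        # applied on top of the plain gradient step.
        weights = weights + mu * velocity - lr * gradient
    else:
        # Classical momentum: step along the accumulated velocity.
        weights = weights + velocity
    return weights, velocity
```

With `mu=0` both variants reduce to plain SGD, which is a convenient sanity check; separating the velocity update from the model update (as the PR's refactor does) keeps the two variants sharing one code path.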