[
https://issues.apache.org/jira/browse/MADLIB-413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Frank McQuillan updated MADLIB-413:
-----------------------------------
Description:
Multilayer perceptron with backpropagation
Modules:
* mlp_classification
* mlp_regression
Interface
{code}
source_table VARCHAR,
output_table VARCHAR,
independent_varname VARCHAR,  -- Column name for input features; should be a real-valued array
dependent_varname VARCHAR,    -- Column name for target values; should be a real-valued array of size 1 or greater
hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (may be empty or NULL, in which case there are no hidden layers)
optimizer_params VARCHAR,     -- Specified below
weights VARCHAR,              -- Column name for weights; weights the loss for each input vector. Column values should be positive reals
activation_function VARCHAR,  -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
grouping_cols
)
{code}
where
{code}
optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
{
step_size DOUBLE PRECISION,  -- Learning rate
n_iterations INTEGER,        -- Number of iterations per try
n_tries INTEGER,             -- Total number of training cycles, with random initializations to avoid local minima
tolerance DOUBLE PRECISION,  -- Training stops once the distance between successive weight updates falls below this value (or when n_iterations is reached)
}
{code}
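The network shape implied by hidden_layer_sizes, and the activation-prefix matching ('t' for tanh, 's' for sigmoid), can be sketched as below. This is illustrative Python only, not the MADlib implementation; the random weight initialization and the linear output layer are assumptions made to keep the sketch self-contained.

```python
import math
import random

ACTIVATIONS = {
    "sigmoid": lambda v: 1.0 / (1.0 + math.exp(-v)),
    "tanh": math.tanh,
    "relu": lambda v: max(0.0, v),
}

def resolve_activation(name):
    # Prefix matching as in the interface: 't' -> tanh, 's' -> sigmoid.
    matches = [k for k in sorted(ACTIVATIONS) if k.startswith(name)]
    if len(matches) != 1:
        raise ValueError("ambiguous or unknown activation: %r" % name)
    return ACTIVATIONS[matches[0]]

def mlp_forward(x, hidden_layer_sizes, n_outputs, activation="sigmoid", seed=0):
    # Forward pass through fully connected layers with random weights.
    # hidden_layer_sizes may be empty, in which case the input maps
    # directly to the output layer (no hidden layers).
    rng = random.Random(seed)
    act = resolve_activation(activation)
    layer = list(x)
    sizes = list(hidden_layer_sizes) + [n_outputs]
    for i, n_units in enumerate(sizes):
        w = [[rng.uniform(-0.5, 0.5) for _ in layer] for _ in range(n_units)]
        b = [rng.uniform(-0.5, 0.5) for _ in range(n_units)]
        pre = [sum(wi * xi for wi, xi in zip(row, layer)) + bi
               for row, bi in zip(w, b)]
        # Hidden layers apply the chosen activation; the final layer is linear.
        layer = pre if i == len(sizes) - 1 else [act(v) for v in pre]
    return layer
```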
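The intended semantics of n_tries, tolerance, and n_iterations can be sketched as a restart loop: run gradient descent n_tries times from random initializations, stop each run once the weight update moves less than tolerance (or after n_iterations), and keep the best result. The sketch below uses a one-parameter linear model in place of an MLP purely to keep it short; it is not the MADlib implementation.

```python
import random

def train_once(data, step_size, n_iterations, tolerance):
    # One gradient-descent run on the model y = w * x, stopping when the
    # weight moves less than `tolerance` between iterations.
    w = random.uniform(-1.0, 1.0)
    for _ in range(n_iterations):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        new_w = w - step_size * grad
        converged = abs(new_w - w) < tolerance
        w = new_w
        if converged:
            break
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    return w, loss

def train(data, step_size=0.5, n_iterations=100, n_tries=5, tolerance=1e-6):
    # n_tries independent runs with random initializations, to avoid
    # local minima; keep the run with the lowest loss.
    return min((train_once(data, step_size, n_iterations, tolerance)
                for _ in range(n_tries)), key=lambda t: t[1])

data = [(x, 3.0 * x) for x in [0.1, 0.5, 1.0, 2.0]]
w, loss = train(data, step_size=0.1)
```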
was:
Multilayer perceptron with backpropagation
Modules:
* mlp_classification
* mlp_regression
Interface
{code}
source_table VARCHAR,
output_table VARCHAR,
independent_varname VARCHAR,  -- Column name for input features; should be a real-valued array
dependent_varname VARCHAR,    -- Column name for target values; should be a real-valued array of size 1 or greater
n_neurons_per_hidden_layer INTEGER[], -- Number of units per hidden layer (may be empty or NULL, in which case there are no hidden layers)
optimizer_params VARCHAR,     -- Specified below
weights VARCHAR,              -- Column name for weights; weights the loss for each input vector. Column values should be positive reals
activation_function VARCHAR,  -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
grouping_cols
)
{code}
where
{code}
optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
{
step_size DOUBLE PRECISION,  -- Learning rate
n_iterations INTEGER,        -- Number of iterations per try
n_tries INTEGER,             -- Total number of training cycles, with random initializations to avoid local minima
tolerance DOUBLE PRECISION,  -- Training stops once the distance between successive weight updates falls below this value (or when n_iterations is reached)
}
{code}
> Neural Networks - MLP
> ---------------------
>
> Key: MADLIB-413
> URL: https://issues.apache.org/jira/browse/MADLIB-413
> Project: Apache MADlib
> Issue Type: New Feature
> Components: Module: Neural Networks
> Reporter: Caleb Welton
> Assignee: Cooper Sloan
> Fix For: v1.12
>
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)